Dataset schema (from the dataset viewer header):

- id: string, length 4–10
- text: string, length 4–2.14M
- source: string, 2 classes
- created: timestamp[s], ranging 2001-05-16 21:05:09 to 2025-01-01 03:38:30
- added: timestamp, ranging 2025-04-01 04:05:38 to 2025-04-01 07:14:06
- metadata: dict
2333956695
chore(deps): pin dependencies

This PR contains the following updates:

- @commitlint/cli (source), devDependencies, pin: ^19.3.0 -> 19.3.0
- @commitlint/config-conventional (source), devDependencies, pin: ^19.2.2 -> 19.2.2
- @diba1013/eslint-config (source), devDependencies, pin: ^0.11.1 -> 0.11.1
- @diba1013/prettier-config (source), devDependencies, pin: ^0.11.1 -> 0.11.1
- actions/checkout, action, pinDigest: -> a5ac7e5
- actions/checkout, action, pinDigest: -> f43a0e5
- actions/setup-node, action, pinDigest: -> 7c12f80
- eslint (source), devDependencies, pin: 8 -> 8.57.0
- husky, devDependencies, pin: ^9.0.11 -> 9.0.11
- lint-staged, devDependencies, pin: ^15.2.5 -> 15.2.5
- prettier (source), devDependencies, pin: ^3.3.1 -> 3.3.1
- renovatebot/github-action, action, pinDigest: -> 21d88b0

Add the preset :preserveSemverRanges to your config if you don't want to pin your dependencies.

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.

[ ] If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

Edited/Blocked Notification: Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR. You can manually request a rebase by checking the rebase/retry box above. ⚠️ Warning: custom changes will be lost.

⚠️ Artifact update problem: Renovate failed to update artifacts related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens: any of the package files in this branch needs updating, or the branch becomes conflicted, or you tick the rebase/retry checkbox if found above, or you rename this PR's title to start with "rebase!" to trigger it manually.

The artifact failure details are included below. For each of the files package.json (reported eight times), .github/workflows/schedule.yaml (twice), and .github/workflows/build.yaml (twice), the same error was reported:

Post-upgrade command 'pnpm lint:fix || true' has not been added to the allowed list in allowedPostUpgradeCommands
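The repeated artifact failure above points at Renovate's self-hosted `allowedPostUpgradeCommands` setting: the post-upgrade command must match one of the allow-listed patterns. A hedged sketch of what the self-hosted admin config could look like (the exact regex is an assumption; it depends on how strictly you want to match the command):

```json
{
  "allowedPostUpgradeCommands": [
    "^pnpm lint:fix \\|\\| true$"
  ]
}
```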
gharchive/pull-request
2024-06-04T16:42:49
2025-04-01T04:33:59.851848
{ "authors": [ "diba1013" ], "repo": "diba1013/renovate-config", "url": "https://github.com/diba1013/renovate-config/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1640671001
Bug: Extra space in list view formats for URLs/folders

What happened? As can be seen in the screenshot below, when there is only one item in a list, extra space appears below that first item, adding a gap between the element and the border of the list. This appears to affect only the URLs/folders lists; the accounts list looks fine.

URLs/folders (broken):
Users (displaying properly with only 1 element):

Version: upstream (ghcr.io/diced/zipline:trunk)
What browser(s) are you seeing the problem on? Chromium-based (Chrome, Edge, Brave, Opera, mobile chrome/chromium based, etc)
Zipline Logs: No response
Browser Logs: No response
Additional Info: No response

Originally I removed it, but decided to add it back in by making the tables take the entire width of the page.
gharchive/issue
2023-03-25T21:33:04
2025-04-01T04:33:59.857054
{ "authors": [ "diced", "xSkeletor" ], "repo": "diced/zipline", "url": "https://github.com/diced/zipline/issues/345", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
108121165
Issue with placeholder

I tried replacing the database with MySQL following the instructions you mentioned, but I am hitting this error. Do I need to create a business-schema-mysql.sql file? If so, what should its contents be?

Exception in thread "main" org.springframework.beans.factory.BeanDefinitionStoreException: Invalid bean definition with name 'org.springframework.jdbc.datasource.init.DataSourceInitializer#0' defined in null: Could not resolve placeholder 'batch.business.schema.script' in string value "${batch.business.schema.script}"; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'batch.business.schema.script' in string value "${batch.business.schema.script}"
    at org.springframework.beans.factory.config.PlaceholderConfigurerSupport.doProcessProperties(PlaceholderConfigurerSupport.java:211)
    at org.springframework.context.support.PropertySourcesPlaceholderConfigurer.processProperties(PropertySourcesPlaceholderConfigurer.java:180)
    at org.springframework.context.support.PropertySourcesPlaceholderConfigurer.postProcessBeanFactory(PropertySourcesPlaceholderConfigurer.java:155)
    at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:265)
    at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:162)
    at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:609)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464)
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:120)
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:691)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:320)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:952)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:941)
    at de.codecentric.batch.SpringBatchAdmin.main(SpringBatchAdmin.java:51)
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'batch.business.schema.script' in string value "${batch.business.schema.script}"
    at org.springframework.util.PropertyPlaceholderHelper.parseStringValue(PropertyPlaceholderHelper.java:174)
    at org.springframework.util.PropertyPlaceholderHelper.replacePlaceholders(PropertyPlaceholderHelper.java:126)
    at org.springframework.core.env.AbstractPropertyResolver.doResolvePlaceholders(AbstractPropertyResolver.java:194)
    at org.springframework.core.env.AbstractPropertyResolver.resolveRequiredPlaceholders(AbstractPropertyResolver.java:158)
    at org.springframework.context.support.PropertySourcesPlaceholderConfigurer$2.resolveStringValue(PropertySourcesPlaceholderConfigurer.java:175)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.resolveStringValue(BeanDefinitionVisitor.java:282)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.resolveValue(BeanDefinitionVisitor.java:204)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.visitIndexedArgumentValues(BeanDefinitionVisitor.java:150)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.visitBeanDefinition(BeanDefinitionVisitor.java:84)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.resolveValue(BeanDefinitionVisitor.java:169)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.visitPropertyValues(BeanDefinitionVisitor.java:141)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.visitBeanDefinition(BeanDefinitionVisitor.java:82)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.resolveValue(BeanDefinitionVisitor.java:169)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.visitList(BeanDefinitionVisitor.java:228)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.resolveValue(BeanDefinitionVisitor.java:192)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.visitPropertyValues(BeanDefinitionVisitor.java:141)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.visitBeanDefinition(BeanDefinitionVisitor.java:82)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.resolveValue(BeanDefinitionVisitor.java:169)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.visitPropertyValues(BeanDefinitionVisitor.java:141)
    at org.springframework.beans.factory.config.BeanDefinitionVisitor.visitBeanDefinition(BeanDefinitionVisitor.java:82)
    at org.springframework.beans.factory.config.PlaceholderConfigurerSupport.doProcessProperties(PlaceholderConfigurerSupport.java:208)
    ... 12 more

I think the root cause is that, after going through the process, it is still trying to read batch-hsql.properties.

Consider moving to Spring Cloud Data Flow like Spring recommends for Spring Batch Admin users. See https://github.com/spring-projects/spring-batch-admin
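The failure mode in the trace above (an unresolvable `${...}` key aborting context startup) can be illustrated with a small stand-alone sketch. This is not Spring's actual implementation, just the same fail-fast contract expressed in Python:

```python
import re

def resolve_placeholders(value: str, properties: dict) -> str:
    """Replace ${key} placeholders from a property map, failing fast on
    unresolvable keys, the way Spring's placeholder configurer does."""
    def lookup(match):
        key = match.group(1)
        if key not in properties:
            raise ValueError(
                f"Could not resolve placeholder '{key}' "
                f"in string value \"{value}\"")
        return properties[key]
    return re.sub(r"\$\{([^}]+)\}", lookup, value)
```

Defining `batch.business.schema.script` in the active properties file (pointing at the MySQL schema script) is what makes the lookup succeed; with no such entry, startup fails exactly as in the trace.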
gharchive/issue
2015-09-24T12:32:12
2025-04-01T04:33:59.865290
{ "authors": [ "dickerpulli", "ggittu" ], "repo": "dickerpulli/spring-batch-admin-spring-boot", "url": "https://github.com/dickerpulli/spring-batch-admin-spring-boot/issues/3", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
738267692
Add unicode support

Summary: Currently, all string-based operations in Dictu work on the number of bytes a string contains rather than the number of characters, which means some non-ASCII characters are handled incorrectly. We should change this so that the default strings support unicode characters.

Example (Dictu Version: 0.11.0):

>>> "Ā".len();
2

A potentially promising library: https://github.com/sheredom/utf8.h

👀 👀 :OOOOO omg exciting stuff!!!
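The `"Ā".len() == 2` result above is byte counting. A sketch in Python (Dictu itself is implemented in C) of the difference between byte length and code-point length, using the UTF-8 continuation-byte rule that a library like utf8.h relies on:

```python
def utf8_length(data: bytes) -> int:
    # Count UTF-8 code points by skipping continuation bytes (0b10xxxxxx):
    # every code point starts with exactly one non-continuation byte.
    return sum(1 for b in data if b & 0xC0 != 0x80)

raw = "Ā".encode("utf-8")      # U+0100 encodes as two bytes: 0xC4 0x80
assert len(raw) == 2            # byte length: what Dictu currently reports
assert utf8_length(raw) == 1    # code-point length: what users expect
```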
gharchive/issue
2020-11-07T15:56:07
2025-04-01T04:33:59.884636
{ "authors": [ "Jason2605", "liz3" ], "repo": "dictu-lang/Dictu", "url": "https://github.com/dictu-lang/Dictu/issues/317", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2250129134
What happened? https://codesandbox.io/p/sandbox/logicflow-cdfxwc?file=%2Fsrc%2Fcomponents%2FLF.vue%3A6%2C45 After clicking save, which triggers setProperties, an extra Vue instance is created, even though there is only one node on the page.

logicflow/core version: 1.2.17
logicflow/extension version: 1.2.18
logicflow/engine version: No response
Browser & environment: No response

As you can see, the check at line 56 that decides whether to run the update function depends on the shouldUpdate function at line 48. Once shouldUpdate detects that the properties have changed, it runs the setHtml function, which causes the update. If you need to work around this, the suggestion is to override the shouldUpdate method.
gharchive/issue
2024-04-18T08:49:15
2025-04-01T04:33:59.914232
{ "authors": [ "admin1949", "yyp0716" ], "repo": "didi/LogicFlow", "url": "https://github.com/didi/LogicFlow/issues/1579", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
451447644
Can the `children` key in picker's data be renamed to something else?

picker has no `children`. If you mean CascadePicker, it also only accepts the agreed-upon data structure.

Yes, I mean CascadePicker. The original data structure is:

const cascadeData = [
  {
    value: 'Fruit',
    text: 'Fruit',
    children: [
      {
        value: 'Apple',
        text: 'Apple',
        children: [{ value: 1, text: 'One' }, { value: 2, text: 'Two' }]
      },
      {
        value: 'Orange',
        text: 'Orange',
        children: [{ value: 3, text: 'Three' }, { value: 4, text: 'Four' }]
      }
    ]
  }
]

My data structure is the same; only the field name differs (`children` is renamed):

const cascadeData = [
  {
    value: 'Fruit',
    text: 'Fruit',
    item: [
      {
        value: 'Apple',
        text: 'Apple',
        item: [{ value: 1, text: 'One' }, { value: 2, text: 'Two' }]
      },
      {
        value: 'Orange',
        text: 'Orange',
        item: [{ value: 3, text: 'Three' }, { value: 4, text: 'Four' }]
      }
    ]
  }
]
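Since CascadePicker only accepts the agreed-upon `children` key, the usual workaround is to transform custom data before passing it in. A sketch of that transform (in Python for illustration; the real adapter would be a few lines of JavaScript doing the same recursive walk):

```python
def rename_key(nodes, old="item", new="children"):
    """Recursively rename a nesting key so custom data matches the
    structure the picker component expects."""
    result = []
    for node in nodes:
        node = dict(node)  # shallow copy; don't mutate the caller's data
        if old in node:
            node[new] = rename_key(node.pop(old), old, new)
        result.append(node)
    return result
```

Mapping `item` back to `children` at every level before handing the array to CascadePicker keeps the component unmodified while letting the backend keep its own field name.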
gharchive/issue
2019-06-03T12:10:47
2025-04-01T04:33:59.920324
{ "authors": [ "dolymood", "lhongtao" ], "repo": "didi/cube-ui", "url": "https://github.com/didi/cube-ui/issues/497", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
872196691
BUG: a machine cannot be mounted under multiple nodes

The first mount, to t.a, succeeded. The second mount, to test.b, fails with {"err":"tx-ops-n9e01 already belongs to copxxx"}

Version info:
# ./n9e-server -v
Version: 4.0.0
# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
# uname -a
Linux tx3-ops-n9e01.bj 3.10.0-514.21.1.el7.x86_64 #1 SMP Thu May 25 17:04:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

A machine can only belong to one tenant, but it can be mounted under multiple different nodes of that tenant. Normally, the asset management page is only used to assign tenants; the actual mounting is done on the service tree page. You can search Bilibili for the DiDi Nightingale v3 videos, which explain how to use it.
gharchive/issue
2021-04-30T09:09:11
2025-04-01T04:33:59.944860
{ "authors": [ "UlricQin", "bbaobelief" ], "repo": "didi/nightingale", "url": "https://github.com/didi/nightingale/issues/679", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1941801415
Hotfix: Helm 3.13.1 Helm 3.13.3 doesn't exist... How did the CI/CD succeed earlier? It succeeded because this if was resolved to false 🤔 if: github.event.review.state == 'approved' || github.event_name == 'push' || github.event.inputs.run-update-deployments
gharchive/pull-request
2023-10-13T11:52:08
2025-04-01T04:33:59.950157
{ "authors": [ "rblaine95" ], "repo": "didx-xyz/aries-cloudapi-python", "url": "https://github.com/didx-xyz/aries-cloudapi-python/pull/500", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
290672745
Improve docs

Any help with docs would be very welcome. And, since I'm not a native English speaker, I could have made some mistakes. If you find an error, please do submit a PR with the correction. Thank you, guys.

@diegohaz thanks for the explanation ❤️ Have you thought about following an approach similar to recompact and squashing all the HOCs into a single one? Not sure how it's implemented there, but it looks like they solved a very similar problem to the one presented here 😉

With #36, the component stack was reduced a little. Now we can see it within snapshots: stack array; full component stack; generated html. With #55, the component stack will be hugely reduced, even compared to the changes in #36: Before: After: With #66 🎉

Group components in the sidebar navigation. I think a great first step to improve the docs would be to group the components in the sidebar navigation. I found myself wanting to see all the components that have State components; maybe this group would be called "Behavior Components". I'm extremely new to this project, so I'm still figuring out what good groupings would be. Here's what I have so far:
- Behavior (Hidden, Step, Popover, ...)
- Presentational (Backdrop, Block, Box, Shadow, ...)
- Text (Heading, Label, Link, ...)
- Layout (Grid, Inline, Fit, InlineBlock, ...)

Basic text explanations for all components. Another good thing to do would be to add basic text descriptions for each component (I have no idea what <Fit /> does just by the example). This is where the native English speakers can help. I'll try to help document as I learn the library better, or I can help review any PRs that you make.

Document component props, state components, and behaviors. Lastly, it would be good if every component had a description of the following:
- Props it can take (for example, I don't know what destroy does on the Hidden component)
- Any state component it provides, documenting the shape of the object that gets passed to the child function (downshift does a good job with this, I think)
- Any behaviors the component provides (e.g. Hidden.Show, Hidden.Hide)

Hi, @wordofchristian. Thank you so much for your feedback. I agree with all your points. I still don't know how we should group components in docs, but I agree that it should be revisited. I'm gonna send you an invite to be a collaborator on the project so you'll have more power on reviewing PRs and other things. @Thomazella and I are actively working (in our free time) on new docs on the feature/docs branch. If you'd like to join us, send me your email address and I'll invite you.

Nice. I agree too. Let's make these docs better! 🚀🚀⭐️🌈
gharchive/issue
2018-01-23T01:05:03
2025-04-01T04:34:00.016182
{ "authors": [ "Thomazella", "diegohaz", "lluia", "wordofchristian" ], "repo": "diegohaz/reas", "url": "https://github.com/diegohaz/reas/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1478466623
build(deps): bump alpine from 3.14.0 to 3.14.1 in /docker/ci/alpine - (32)

Motivation: Bumps alpine from 3.14.0 to 3.14.1 in the docker file.

Testplan: CI

/canary

:broken_heart: Test Failed - ci-test

This PR is a duplicate of PR 10545
gharchive/pull-request
2022-12-06T07:44:01
2025-04-01T04:34:00.036228
{ "authors": [ "ankitkacn", "bors-diem", "dhaneshacn" ], "repo": "diem/diem", "url": "https://github.com/diem/diem/pull/10527", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
543784152
debug_query + batch insert with SQLite results in a compilation error Setup Versions Rust: rustc 1.40.0 (73528e339 2019-12-16) Diesel: 1.4.3 Database: SQLite Operating System: Linux Feature Flags diesel: sqlite Problem Description debug_query does not work with batch inserts and SQLite. What are you trying to accomplish? let mappings: Vec<_> = ...; let q = diesel::insert_into(analysis_chats::table).values(mappings); println!("{}", diesel::debug_query::<Sqlite, _>(&q)); What is the expected output? INSERT being printed to the console. It does not seem that I do anything which requires the DEFAULT support from the database, and most definitely it should not affect debug printing - if I remove debug printing, my code works just fine. What is the actual output? Compilation error: error[E0277]: the trait bound `diesel::sqlite::Sqlite: diesel::backend::SupportsDefaultKeyword` is not satisfied --> src/db/analyses.rs:72:40 | 72 | debug!(""; "query" => %diesel::debug_query::<Sqlite, _>(&q)); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `diesel::backend::SupportsDefaultKeyword` is not implemented for `diesel::sqlite::Sqlite` | = note: required because of the requirements on the impl of `diesel::query_builder::QueryFragment<diesel::sqlite::Sqlite>` for `diesel::insertable::OwnedBatchInsert<diesel::query_builder::ValuesClause<(diesel::insertable::ColumnInsertValue<db::schema::analysis_chats::columns::analysis_id, diesel::expression::bound::Bound<diesel::sql_types::BigInt, i64>>, diesel::insertable::ColumnInsertValue<db::schema::analysis_chats::columns::chat_id, diesel::expression::bound::Bound<diesel::sql_types::BigInt, i64>>), db::schema::analysis_chats::table>, db::schema::analysis_chats::table>` = note: required because of the requirements on the impl of `diesel::query_builder::QueryFragment<diesel::sqlite::Sqlite>` for `diesel::query_builder::InsertStatement<db::schema::analysis_chats::table, 
diesel::insertable::OwnedBatchInsert<diesel::query_builder::ValuesClause<(diesel::insertable::ColumnInsertValue<db::schema::analysis_chats::columns::analysis_id, diesel::expression::bound::Bound<diesel::sql_types::BigInt, i64>>, diesel::insertable::ColumnInsertValue<db::schema::analysis_chats::columns::chat_id, diesel::expression::bound::Bound<diesel::sql_types::BigInt, i64>>), db::schema::analysis_chats::table>, db::schema::analysis_chats::table>>` = note: required because of the requirements on the impl of `std::fmt::Display` for `diesel::query_builder::DebugQuery<'_, diesel::query_builder::InsertStatement<db::schema::analysis_chats::table, diesel::insertable::OwnedBatchInsert<diesel::query_builder::ValuesClause<(diesel::insertable::ColumnInsertValue<db::schema::analysis_chats::columns::analysis_id, diesel::expression::bound::Bound<diesel::sql_types::BigInt, i64>>, diesel::insertable::ColumnInsertValue<db::schema::analysis_chats::columns::chat_id, diesel::expression::bound::Bound<diesel::sql_types::BigInt, i64>>), db::schema::analysis_chats::table>, db::schema::analysis_chats::table>>, diesel::sqlite::Sqlite>` = note: required by `std::fmt::Display::fmt` Are you seeing any additional errors? No Steps to reproduce Can be reproduced reliably with the code specified above - any kind of table + INSERT of a vector will result in this error. Checklist [x] I have already looked over the issue tracker for similar issues. [x] This issue can be reproduced on Rust's stable channel. (Your issue will be closed if this is not the case) I'm not sure if I would call that expected behaviour or if that's a bug that should be fixed. It's definitely not surprising for me. The underlying issue here is: Sqlite does not support batch inserts. Diesel does provide support for this feature, because we think it's something that is used quite often. We do this by just doing an insert per item inside of an transaction. That means we do not have one query but N + 2 queries here. 
debug_query currently only works with QueryFragment (so something that is one query at max). It is possible to fix that by adding some manual impls at the right place (See #2260), but I'm not sure if that's desired (cc @diesel-rs/contributors ) I see, totally makes sense. The important thing, I think, is to have an ability to debug a query in any possible way, and I think that #2260 solves this nicely.
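The "insert per item inside of a transaction" behaviour described above can be pictured with plain sqlite3 in Python: one INSERT per row, all wrapped in a single transaction. The table and column names come from the error message quoted earlier; this is a sketch of the strategy, not Diesel's code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE analysis_chats (analysis_id INTEGER, chat_id INTEGER)")

mappings = [(1, 10), (1, 11), (2, 12)]
with conn:  # one transaction wrapping all per-row inserts
    for analysis_id, chat_id in mappings:
        conn.execute(
            "INSERT INTO analysis_chats (analysis_id, chat_id) VALUES (?, ?)",
            (analysis_id, chat_id),
        )

count = conn.execute("SELECT COUNT(*) FROM analysis_chats").fetchone()[0]
```

Because N rows become N separate statements plus the transaction begin/commit, there is no single SQL string for debug_query to print, which is why the Display impl is missing for this case.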
gharchive/issue
2019-12-30T07:48:46
2025-04-01T04:34:00.045589
{ "authors": [ "netvl", "weiznich" ], "repo": "diesel-rs/diesel", "url": "https://github.com/diesel-rs/diesel/issues/2258", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
231728272
Fix sqlite appveyor

Recent changes to cargo mean that cargo test won't look for dlls in paths outside of the target dir even if a build script tries to add them - the path/dylib path must be manually updated outside of cargo. Added a commit to generate an import library to fix the msvc tests also.

I have no experience in dealing with appveyor or sqlite on windows, but this looks good… and green! So, 👍 from me. Anyone else want to have a look? @sgrif?

It should be there as long as the AppVeyor build machine doesn't change radically. It's the version from Visual Studio 2013, but any of the 10-odd tool chains on the image would be fine. I somewhat arbitrarily chose the version from VS2013 since mysql also requires VS2013 at this point (although I have an incoming PR that allows 2015 and 2017 to work with mysql also.) The other alternative I was able to think of was to put the VS tools in the path, but that is a global change that will indirectly affect rustc, and this way will not.
gharchive/pull-request
2017-05-26T20:32:33
2025-04-01T04:34:00.048309
{ "authors": [ "killercup", "mcgoo" ], "repo": "diesel-rs/diesel", "url": "https://github.com/diesel-rs/diesel/pull/924", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2334214152
Fetch Subject to generated link

Contact Details: philip.ornfeldt@outlook.com

What benefits does the suggestion solve? This suggestion fetches the subject row and uses it as the room name, displaying it on the tab in the browser.

Feature suggestion description: When a person types a meeting subject, the generated link should say something about what the meeting is about. So when connecting to that meeting, the tab header would read, for example, "stand-up meeting".

Alternative solutions: No response
Additional information: No response

I have a solution for this brewing in my fork of this repo. Will post an update for review later this week
gharchive/issue
2024-06-04T19:20:50
2025-04-01T04:34:00.056332
{ "authors": [ "Philldomd" ], "repo": "diggsweden/jitsi-outlook", "url": "https://github.com/diggsweden/jitsi-outlook/issues/45", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
807194740
Submitting the assignment together with the group member list (2 people). At first I submitted to the wrong main branch and, thinking the push hadn't gone through, pushed several times.

Submitting both the group work and the individual work. It can be reviewed either here or on the main of the head repo (mine). The work file is currently broken; it works up to commit 8cd8b90.
gharchive/pull-request
2021-02-12T12:24:43
2025-04-01T04:34:00.065925
{ "authors": [ "ElijaChinda" ], "repo": "digitake/pwa-course-2021", "url": "https://github.com/digitake/pwa-course-2021/pull/50", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
138757519
Support for custom pages

It would be nice to have support for custom pages, like a page for a disclaimer or an about-me. This page should not have comments enabled, and maybe not even the sidebars.

Hi @mmrath, I've added all three options to the theme. You can disable the comments, the profile and the widgets per page separately in the frontmatter. Take a look at the frontmatter for more information. The docs have been updated too. Let me know if you are missing something. Cheers, Digitalcraftsman
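Per-page front matter toggles of the kind described in the reply could look roughly like the following. The key names here are hypothetical placeholders, not the theme's documented parameters, so check the theme's exampleSite frontmatter for the real ones:

```yaml
---
title: "About me"
# Hypothetical per-page switches; consult the theme docs for actual names
comments: false   # no comment thread on this page
profile: false    # hide the author profile
widgets: false    # hide the sidebar widgets
---
```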
gharchive/issue
2016-03-06T05:42:50
2025-04-01T04:34:00.101385
{ "authors": [ "digitalcraftsman", "mmrath" ], "repo": "digitalcraftsman/hugo-icarus-theme", "url": "https://github.com/digitalcraftsman/hugo-icarus-theme/issues/29", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1440841591
QR code for Share Request Determine format of data in Share Request QR code. Deep link? Follow the DIF Presentation Exchange Spec? This has been implemented, for use by DiBiHo, closing.
gharchive/issue
2022-11-08T20:13:19
2025-04-01T04:34:00.110964
{ "authors": [ "bmuramatsu", "dmitrizagidulin" ], "repo": "digitalcredentials/learner-credential-wallet", "url": "https://github.com/digitalcredentials/learner-credential-wallet/issues/262", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1434992604
Import multiple Cards

As a user I want to import more than one card, e.g. for my children.

Acceptance Criteria:
[ ] More than one card can be activated via the app
[ ] The user can switch between different activated cards (e.g. using a drop-down with the name + expiration year)

@michael-markl we now have a more concrete issue for the sozialpass (#1084). I'm gonna close this
gharchive/issue
2022-11-03T17:01:08
2025-04-01T04:34:00.115809
{ "authors": [ "f1sh1918", "michael-markl" ], "repo": "digitalfabrik/entitlementcard", "url": "https://github.com/digitalfabrik/entitlementcard/issues/612", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1991082118
Remove rn-calendar-events patch Is your feature request related to a problem? Please describe. I added a patch for rn-calendar-events in https://github.com/digitalfabrik/integreat-app/pull/2561 . I also opened an issue in their repo https://github.com/wmcmahan/react-native-calendar-events/issues/445 . If they update the package, we should remove our patch. Blocked by https://github.com/wmcmahan/react-native-calendar-events/issues/445 Didn't need the patch in the end.
gharchive/issue
2023-11-13T17:06:33
2025-04-01T04:34:00.118057
{ "authors": [ "LeandraH" ], "repo": "digitalfabrik/integreat-app", "url": "https://github.com/digitalfabrik/integreat-app/issues/2562", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1403018942
Feedback can only be sent for translated Events

Describe the Bug: When creating an event which only has a German translation, I cannot send feedback to e.g. the Spanish event page, which duplicates the content in the German language. Instead the request fails. In the frontend we are not able to determine the default language of an event, therefore we should be able to send feedback for the non-default languages.

Steps to Reproduce:
curl -X "POST" "https://cms-test.integreat-app.de/testumgebung/es/wp-json/extensions/v3/feedback/event" -d 'slug=sprach-cafe-im-fußball-stadion&comment=asdf'

Expected Behavior: Should successfully create feedback, just like the request to the default language:
curl -X "POST" "https://cms-test.integreat-app.de/testumgebung/de/wp-json/extensions/v3/feedback/event" -d 'slug=sprach-cafe-im-fußball-stadion&comment=asdf'

Actual Behavior: The request to the default language is successful:
curl -X "POST" "https://cms-test.integreat-app.de/testumgebung/de/wp-json/extensions/v3/feedback/event" -d 'slug=sprach-cafe-im-fußball-stadion&comment=asdf'
404 NotFound for Spanish:
curl -X "POST" "https://cms-test.integreat-app.de/testumgebung/es/wp-json/extensions/v3/feedback/event" -d 'slug=sprach-cafe-im-fußball-stadion&comment=asdf'

The same also goes for POIs. Thanks a lot for the report! :+1: Is this essentially the same problem as #1718? Yep, didn't see this one. 👍🏼
gharchive/issue
2022-10-10T11:27:26
2025-04-01T04:34:00.121389
{ "authors": [ "sarahsporck", "timoludwig" ], "repo": "digitalfabrik/integreat-cms", "url": "https://github.com/digitalfabrik/integreat-cms/issues/1743", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2046107388
[DIG-89] Add Social Links Field to User Model

Description: Extend the user model to include a socials field, representing links to various social media profiles. This field will be utilized in the backend routes related to /user to provide users' social links in the API response.

Expected Changes:
- Modify the user model to include a socials field.
- Ensure proper validation of social links in the model.
- Update the existing database entries to include social links.
- Adapt existing /user routes to include the socials field in the response.

Additional Info:
- Choose an appropriate data type for storing social links.
- Update relevant documentation regarding the new field.

🔴 For reference, check how phoneNumber, bio, name etc. are implemented

DIG-89

@pranshugupta54 can I work on this issue?
Is this still required? If yes, can I work on it? @pranshugupta54
Can I take up this issue?
@Chakit22 I'm working on it; will open a PR tomorrow
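One way to picture the "proper validation of social links" requirement: restrict values to https URLs on a known set of hosts. This is a hypothetical sketch in Python; the host list and function name are illustrative assumptions, and the actual project would express the same rule in its user model schema, alongside fields like phoneNumber and bio:

```python
import re

# Hypothetical allow-list; the real set would come from product requirements.
ALLOWED_HOSTS = {
    "github.com", "linkedin.com", "twitter.com", "instagram.com",
}
URL_RE = re.compile(r"^https://(?:www\.)?([^/]+)/\S+$")

def validate_socials(socials: dict) -> bool:
    """Accept only https links pointing at a known social host."""
    for url in socials.values():
        m = URL_RE.match(url)
        if m is None or m.group(1) not in ALLOWED_HOSTS:
            return False
    return True
```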
gharchive/issue
2023-12-18T09:06:32
2025-04-01T04:34:00.148616
{ "authors": [ "Chakit22", "Tharuneshwarv", "mendacium-a11y", "pranshugupta54" ], "repo": "digitomize/digitomize", "url": "https://github.com/digitomize/digitomize/issues/342", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
436465711
Suggestion: add a nav bar for the dynamic menu / categories

Suggestion: a nav bar could be added for the dynamic menu, to attract users who have long been using the default theme (since the dynamic menu only appears in the default theme).

Thank you, it will be considered.
gharchive/issue
2019-04-24T02:17:58
2025-04-01T04:34:00.252839
{ "authors": [ "balongbesuk", "dikisiswanto" ], "repo": "dikisiswanto/OpenSID-Cosmos", "url": "https://github.com/dikisiswanto/OpenSID-Cosmos/issues/24", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1324070681
Reach top searches endpoint

Feature Request Form. Put an [x] if you meet the condition, else leave [ ].
[x] I've searched the Issues
[x] I've read the basic concepts
[ ] I'm using the latest version

Description: Hello, for a few months now, Instagram has allowed you to search through media directly and surface top matches (see first screen below). We currently can search through accounts and tags, but not media directly. I'd like to be able to hit the "Top" endpoint and get the best media matching my keywords. ⚠️ I'm happy to contribute with a PR if someone has some quick docs on the endpoint to use.

It already exists. Check fbsearch.search_flat

You're right, sorry. I mean the endpoint that lists media directly. That one lists only usernames and hashtags.

I think that API endpoint is available from v240 or so; with the current library it won't work (maybe). Here is the URL: https://i.instagram.com/api/v1/fbsearch/top_serp/?search_surface=top_serp&timezone_offset=1800&count=30&query=space Check my fork, I have added it.

oh that's great! testing right now

So I just tried it, it returns only users and tags unfortunately :(

Maybe a version problem.

oh nevermind, it's working perfectly ❤️ you should open a PR on this repo! 👏

yes please open a PR, this is very useful

Hello! @kingbotss I tried fbsearch.topSearch(), but it returns only 6 items. Is there a way I can do a sort of pagination? Thanks!
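The endpoint quoted in the thread can be parameterized as below. This sketch only builds the request URL; actually calling it requires an authenticated Instagram session, which is out of scope here (and, as the pagination question above shows, the `count` parameter does not guarantee that many results):

```python
from urllib.parse import urlencode

BASE = "https://i.instagram.com/api/v1/fbsearch/top_serp/"

def top_serp_url(query: str, count: int = 30, timezone_offset: int = 1800) -> str:
    """Build the top_serp search URL with the parameters seen in the thread."""
    params = {
        "search_surface": "top_serp",
        "timezone_offset": timezone_offset,
        "count": count,
        "query": query,
    }
    return BASE + "?" + urlencode(params)
```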
gharchive/issue
2022-08-01T08:59:08
2025-04-01T04:34:00.259980
{ "authors": [ "heltred", "kingbotss", "tkrugg", "yiojo" ], "repo": "dilame/instagram-private-api", "url": "https://github.com/dilame/instagram-private-api/issues/1638", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1933714008
🛑 android-13.0.0_r3 is down In 8041cab, android-13.0.0_r3 (http://aospxref.com/android-13.0.0_r3/) was down: HTTP code: 404 Response time: 226 ms Resolved: android-13.0.0_r3 is back up in 7487dfc after 9 hours, 47 minutes.
gharchive/issue
2023-10-09T20:08:54
2025-04-01T04:34:00.294308
{ "authors": [ "tiann" ], "repo": "dimenspace/aosp-uptime", "url": "https://github.com/dimenspace/aosp-uptime/issues/53", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
274184485
OCaml 4.06.0 compatibility fix Since OCaml 4.06.0, the -safe-string flag has been turned on by default, meaning that strings are immutable (by default). As a further consequence, it is no longer meaningful to use strings as mutable character buffers. This patch uses a bytes buffer rather than a string. Could you merge this, please? Sorry, I just saw this PR. I'm no longer maintaining this project. I'm happy to give the commit rights to whoever wants to maintain it. Any news about merging this, by the way? Still the same, I'm no longer maintaining this project. Moving forward, you can switch to one of the following projects that are still maintained: cppo: https://github.com/mjambon/cppo ppx_optcomp: https://github.com/janestreet/ppx_optcomp Alternatively, I'm happy to transfer the project to whoever would want to maintain it.
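The pattern the patch applies can be illustrated with a minimal sketch (a hypothetical helper, not code from the optcomp patch itself): under -safe-string, the mutable character buffer must be a Bytes.t, frozen into an immutable string only at the boundary.

```ocaml
(* Hypothetical example of the string -> bytes migration: build up
   characters in a mutable Bytes.t buffer, then freeze it to a string.
   Under -safe-string (the default since 4.06), mutating a string with
   String.set is rejected by the type checker. *)
let fill n c =
  let buf = Bytes.create n in
  for i = 0 to n - 1 do
    Bytes.set buf i c
  done;
  Bytes.to_string buf
```

Code written this way compiles both with and without -safe-string, which is why it is the usual fix for this class of compatibility break.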
gharchive/pull-request
2017-11-15T14:54:28
2025-04-01T04:34:00.306096
{ "authors": [ "XVilka", "dhil", "diml", "ghuysmans" ], "repo": "diml/optcomp", "url": "https://github.com/diml/optcomp/pull/4", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1429256327
fix locktime check according to komodo rules (s1 december hf) fix tx locktime check like in komodo: there is a change since season 1 hf (see IsFinalTx in komodo code) fixed https://github.com/dimxy/zebra/commit/49b0d1b3c9261eb57f16662027af908ebb0adaed see main repo
gharchive/issue
2022-10-31T04:57:58
2025-04-01T04:34:00.307967
{ "authors": [ "dimxy" ], "repo": "dimxy/zebra", "url": "https://github.com/dimxy/zebra/issues/10", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1944871061
[Bug] java.lang.Error: Unable to select the default theme DARK_MODERN #Describe the bug Updated, which auto enabled the theme #Environment Java: JetBrains s.r.o. 17.0.8.1 OS: Mac OS X aarch64 IDE: WebStorm 2023.2.3 Version: 1.10.4 #Stacktrace# java.lang.Error: Unable to select the default theme DARK_MODERN at com.github.dinbtechit.vscodetheme.VSCodeThemeManager.switchToVSCodeTheme(VSCodeThemeManager.kt:60) at com.github.dinbtechit.vscodetheme.VSCodeThemeManager.switchToVSCodeTheme$default(VSCodeThemeManager.kt:47) at com.github.dinbtechit.vscodetheme.startup.VSCodeStartupNotifyActivity.runActivity(VSCodeStartupNotifyActivity.kt:79) at com.intellij.ide.startup.impl.StartupManagerImpl.runActivityAndMeasureDuration(StartupManagerImpl.kt:327) at com.intellij.ide.startup.impl.StartupManagerImpl.access$runActivityAndMeasureDuration(StartupManagerImpl.kt:72) at com.intellij.ide.startup.impl.StartupManagerImpl$runPostStartupActivities$4$2.invoke$lambda$0(StartupManagerImpl.kt:280) at com.intellij.util.concurrency.ContextRunnable.run(ContextRunnable.java:24) at com.intellij.openapi.project.SmartModeScheduler$addLast$1.invoke(SmartModeScheduler.kt:89) at com.intellij.openapi.project.SmartModeScheduler$addLast$1.invoke(SmartModeScheduler.kt:89) at com.intellij.openapi.project.SmartModeScheduler.addLast$lambda$0(SmartModeScheduler.kt:89) at com.intellij.openapi.project.SmartModeScheduler$RunnableDelegate.run(SmartModeScheduler.kt:49) at com.intellij.openapi.project.SmartModeScheduler.doRun(SmartModeScheduler.kt:137) at com.intellij.openapi.project.SmartModeScheduler.runAllWhileSmart(SmartModeScheduler.kt:129) at com.intellij.openapi.application.TransactionGuardImpl.runWithWritingAllowed(TransactionGuardImpl.java:208) at com.intellij.openapi.application.TransactionGuardImpl.access$100(TransactionGuardImpl.java:21) at com.intellij.openapi.application.TransactionGuardImpl$1.run(TransactionGuardImpl.java:190) at 
com.intellij.openapi.application.impl.ApplicationImpl.runIntendedWriteActionOnCurrentThread(ApplicationImpl.java:861) at com.intellij.openapi.application.impl.ApplicationImpl$4.run(ApplicationImpl.java:478) at com.intellij.openapi.application.impl.FlushQueue.doRun(FlushQueue.java:79) at com.intellij.openapi.application.impl.FlushQueue.runNextEvent(FlushQueue.java:121) at com.intellij.openapi.application.impl.FlushQueue.flushNow(FlushQueue.java:41) at java.desktop/java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:318) at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:792) at java.desktop/java.awt.EventQueue$3.run(EventQueue.java:739) at java.desktop/java.awt.EventQueue$3.run(EventQueue.java:733) at java.base/java.security.AccessController.doPrivileged(AccessController.java:399) at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:86) at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:761) at com.intellij.ide.IdeEventQueue.defaultDispatchEvent(IdeEventQueue.kt:690) at com.intellij.ide.IdeEventQueue._dispatchEvent$lambda$10(IdeEventQueue.kt:593) at com.intellij.openapi.application.impl.ApplicationImpl.runWithoutImplicitRead(ApplicationImpl.java:1485) at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.kt:593) at com.intellij.ide.IdeEventQueue.access$_dispatchEvent(IdeEventQueue.kt:67) at com.intellij.ide.IdeEventQueue$dispatchEvent$processEventRunnable$1$1$1.compute(IdeEventQueue.kt:369) at com.intellij.ide.IdeEventQueue$dispatchEvent$processEventRunnable$1$1$1.compute(IdeEventQueue.kt:368) at com.intellij.openapi.progress.impl.CoreProgressManager.computePrioritized(CoreProgressManager.java:787) at com.intellij.ide.IdeEventQueue$dispatchEvent$processEventRunnable$1$1.invoke(IdeEventQueue.kt:368) at com.intellij.ide.IdeEventQueue$dispatchEvent$processEventRunnable$1$1.invoke(IdeEventQueue.kt:363) at 
com.intellij.ide.IdeEventQueueKt.performActivity$lambda$1(IdeEventQueue.kt:997) at com.intellij.openapi.application.TransactionGuardImpl.performActivity(TransactionGuardImpl.java:105) at com.intellij.ide.IdeEventQueueKt.performActivity(IdeEventQueue.kt:997) at com.intellij.ide.IdeEventQueue.dispatchEvent$lambda$7(IdeEventQueue.kt:363) at com.intellij.openapi.application.impl.ApplicationImpl.runIntendedWriteActionOnCurrentThread(ApplicationImpl.java:861) at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.kt:405) at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:207) at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:128) at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:117) at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:113) at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:105) at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:92) Caused by: java.util.NoSuchElementException: Array contains no element matching the predicate. at com.github.dinbtechit.vscodetheme.VSCodeThemeManager.switchToVSCodeTheme(VSCodeThemeManager.kt:68) ... 49 more closing as dupe of https://github.com/dinbtechit/vscode-theme/issues/150
gharchive/issue
2023-10-16T10:18:38
2025-04-01T04:34:00.311462
{ "authors": [ "jezmck" ], "repo": "dinbtechit/vscode-theme", "url": "https://github.com/dinbtechit/vscode-theme/issues/152", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
435013607
Microsatellite loci for hg38 Hello - I am trying to use MSISensor on WES paired tumor-normal data aligned to hg38. I downloaded the fasta file from UCSC and ran the scan command to generate the list of microsatellite loci for hg38. The program has been stalled for hours. Is this normal? Is there a way to thread/fork this so it does this faster? msisensor scan -d /mnt/hg38.fa.gz -o /mnt/msisensor_files/microsatellites_hg38.list Before using the scan command, you need to decompress the file hg38.fa.gz. You can use "gunzip hg38.fa.gz" to extract the *.gz file and then run the command "msisensor scan -d /mnt/hg38.fa -o /mnt/msisensor_files/microsatellites_hg38.list". Thank you! Worked like a charm and only took a few minutes to complete. Is the .bed file required to run msisensor? If so, what should the bed file be - tumor or normal or reference genome? Sincerely, Akshata The .bed file is not required. You can specify the area you want to analyze through the .bed file. The .bed file contains 4 columns: chromosome number, start position, end position, and gene name. Such as:
1	11867	12229	DDX11L1
1	12611	12723	DDX11L1
1	13219	14411	DDX11L1
There is an example.bed file in the test folder, you can refer to it.
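The expected layout of such a region file can be sketched with a small shell snippet (the file name and rows are just illustrative, matching the example above):

```shell
# Write a minimal 4-column region file (chrom, start, end, name),
# matching the layout of the example.bed shipped with msisensor.
printf '1\t11867\t12229\tDDX11L1\n1\t12611\t12723\tDDX11L1\n' > example.bed

# Sanity-check that every row has exactly 4 tab-separated fields
# before passing the file to msisensor.
awk -F'\t' '{print NF}' example.bed
```

Each printed value should be 4; a row with a different field count usually means spaces were used instead of tabs.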
gharchive/issue
2019-04-19T01:06:07
2025-04-01T04:34:00.316123
{ "authors": [ "ZhaoDanOnGitHub", "audyavar" ], "repo": "ding-lab/msisensor", "url": "https://github.com/ding-lab/msisensor/issues/40", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2292922379
The issue concerns the initialization of weights and biases in the SGP module. Hi, This work is so interesting. However, I have some questions about the initialization of weights and biases in the SGP module. Due to my lack of coding experience, I can not understand why the weights and biases of the convolutions in the SGP block are initialized to 0. When I was debugging, the weights and biases of these convolutions were 0 in both training and testing. What I understood was that they had no effect. So how does the SGP block play a role in the code? How to understand it? (Note: If init_conv_var is set to non-0, the results will drop a lot.), Thank you. Hi, for the THUMOS14 dataset, since the action lengths are relatively short, we found in our experiments that initializing from zero weights (i.e., retaining only the residual connections at the very beginning) can make the training more stable. For datasets like HACS and Activitynet, which contain many long actions, enabling the SGP layer during initialization can achieve better results. Hi, for the THUMOS14 dataset, since the action lengths are relatively short, we found in our experiments that initializing from zero weights (i.e., retaining only the residual connections at the very beginning) can make the training more stable. For datasets like HACS and Activitynet, which contain many long actions, enabling the SGP layer during initialization can achieve better results. Thank you for your reply. However, through experiments, I found that throughout the training and testing process, the weights and biases of the convolutions involving the SGP block are still 0, equivalent to the SGP convolution always being 0, and only the residual connections are retained. Hi, for the THUMOS14 dataset, since the action lengths are relatively short, we found in our experiments that initializing from zero weights (i.e., retaining only the residual connections at the very beginning) can make the training more stable. 
For datasets like HACS and Activitynet, which contain many long actions, enabling the SGP layer during initialization can achieve better results. Thank you for your reply. However, through experiments, I found that throughout the training and testing process, the weights and biases of the convolutions involving the SGP block are still 0, equivalent to the SGP convolution always being 0, and only the residual connections are retained. The characteristic of THUMOS14 dataset is that a video has dozens or hundreds of actions, and many actions have only a length of several features. For this kind of dataset using a large window to aggregate features is not necessary, but for large-scale and highly varying datasets such as HACS, enabling the multi-scale feature extraction will be more necessary. The characteristic of THUMOS14 dataset is that a video has dozens or hundreds of actions, and many actions have only a length of several features. For this kind of dataset using a large window to aggregate features is not necessary, but for large-scale and highly varying datasets such as HACS, enabling the multi-scale feature extraction will be more necessary. What you said makes sense, thank you for your reply.
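The effect of zero-initialization discussed above can be sketched numerically. This is a toy stand-in, not the actual SGP layer: a generic residual branch whose weight matrix and bias start at zero, so the block begins training as the identity.

```python
import numpy as np

def residual_branch(x, w, b):
    # Block output = residual + branch. When w and b are all zeros,
    # the branch contributes nothing and the block is the identity,
    # which is what zero-initializing the SGP convolutions achieves
    # at the start of training.
    return x + (x @ w + b)

x = np.random.randn(4, 8)
w = np.zeros((8, 8))
b = np.zeros(8)
out = residual_branch(x, w, b)
assert np.allclose(out, x)  # identity at initialization
```

Note that the gradient with respect to w is generally nonzero even at w = 0, so training can move the branch away from the identity unless its parameters are frozen.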
gharchive/issue
2024-05-13T14:02:31
2025-04-01T04:34:00.327262
{ "authors": [ "dingfengshi", "lixueli8" ], "repo": "dingfengshi/TriDet", "url": "https://github.com/dingfengshi/TriDet/issues/37", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
127483593
api:routes fails to resolve constructor containing Auth::user()->id Running php artisan api:routes: [ErrorException] Trying to get property of non-object Looking further into the logs, I see that it fails in the constructor, probably because of Auth, but I'm a rookie so I'm not able to confirm anything. This is my constructor:
protected $user;
public function __construct() {
    $this->user = Auth::user()->id; // Here it's complaining
}
Am I doing something wrong? @sagaio, it means that no user is logged in: Auth::user() is null, hence your error. Make sure the user is authenticated first, using a middleware for instance. You can try something like this:
protected $user;
public function __construct() {
    Auth::loginUsingId(1);
    $this->user = Auth::user()->id;
}
@lucasmichot Thank you! I should've tested this with the usual route:list command instead of going here directly.
gharchive/issue
2016-01-19T16:24:09
2025-04-01T04:34:00.330173
{ "authors": [ "lucasmichot", "sagaio" ], "repo": "dingo/api", "url": "https://github.com/dingo/api/issues/817", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
438041931
Update LICENSE year Description of the PR Just legalities Changes Proposed in this Pull Request (List new items in CHANGELOG.MD) Semver Classification [x] This PR only includes documentation or non-code changes. [ ] This PR fixes a bug and does not change the (intended) framework interface. [ ] This PR adds methods or properties to the framework interface. [ ] This PR removes or renames methods or properties in the framework interface. Copyrights generally last up to 70 years in most countries. Still good to keep updated, though. (It's only MIT, so it doesn't really matter, as you can basically do anything with it as long as credit is given.)
gharchive/pull-request
2019-04-28T10:37:49
2025-04-01T04:34:00.365550
{ "authors": [ "ImUrX", "MrJacz" ], "repo": "dirigeants/klasa", "url": "https://github.com/dirigeants/klasa/pull/677", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1640219084
startup time is very long The startup time for this utility is too long. Startup time for vcs: $ time vcs --version vcs 0.3.0 real 0m0.170s user 0m0.150s sys 0m0.020s Compared to bash: $ time bash --version GNU bash, version 5.1.16(1)-release (x86_64-pc-linux-gnu) Copyright (C) 2020 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software; you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. real 0m0.002s user 0m0.001s sys 0m0.001s Or even a python3 program: $ time python3 -c 'import sys; print(sys.version)' 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] real 0m0.016s user 0m0.015s sys 0m0.001s And this is on some fairly ridiculous hardware: $ time cat /proc/cpuinfo | grep 'model name' | head -n1 model name : 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz real 0m0.002s user 0m0.005s sys 0m0.000s It could be the python package structure. A PR to improve that would probably be welcome.
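One way to narrow down where the startup time goes (an illustrative diagnostic, not something from the vcstool codebase): Python 3.7+ can log per-module import times, which usually shows whether package imports dominate startup.

```shell
# -X importtime prints one "import time:" line per imported module,
# with self and cumulative microseconds; the slowest imports appear
# near the bottom of the output.
python3 -X importtime -c 'import argparse' 2>&1 | tail -n 5
```

Swapping 'import argparse' for the vcs entry module would profile vcstool's own import graph and point at the expensive submodules.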
gharchive/issue
2023-03-25T00:07:38
2025-04-01T04:34:00.367715
{ "authors": [ "alecGraves", "christophebedard" ], "repo": "dirk-thomas/vcstool", "url": "https://github.com/dirk-thomas/vcstool/issues/254", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1043752035
[api-docs] Separate two hyperlinks for better readability A new commit was made into the api-docs repo. This issue was created automatically. Need to review if we made any change to the library for this commit. https://github.com/discord/discord-api-docs/commit/6cd58500f2ea370cd2ae86ce11a52918b474526b ncn
gharchive/issue
2021-11-03T15:34:31
2025-04-01T04:34:00.386307
{ "authors": [ "Skillz4Killz", "itohatweb" ], "repo": "discordeno/discordeno", "url": "https://github.com/discordeno/discordeno/issues/1437", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
857860180
chore(memberType): Use GuildMemberManager#resolve instead of Guild#member Guild#member is getting removed in v13 so replace it with GuildMemberManager#resolve. @Gawdl3y ?
gharchive/pull-request
2021-04-14T12:42:13
2025-04-01T04:34:00.394173
{ "authors": [ "1chiSensei" ], "repo": "discordjs/Commando", "url": "https://github.com/discordjs/Commando/pull/395", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
915361762
v0 Oracle Changes As per our spec meeting discussions, this PR: Splits the oracle key into separate announcement and attestation keys Adds Schnorr signatures by the nonce keys to the announcement message Allows for range timestamps in oracle events I've added a new oracle_keys message but I'm not sure I'm the best person to write in Oracle.md about how it is used, and likewise to write rationales for why we need two public keys and all the proofs of knowledge. @LLFourn would you be willing to write at least these last two things? Meeting note: attestation schemes should be declared separately from the event information. Taking this as an example: $ curl 'https://h00.ooo/x/BitMEX/BXBT/2022-01-12T01:00:00.price?n=20' | jq .announcement.oracle_event.data -r |jq { "id": "/x/BitMEX/BXBT/2022-01-12T01:00:00.price?n=20", "expected-outcome-time": "2022-01-12T01:00:00", "descriptor": { "type": "digit-decomposition", "is_signed": false, "n_digits": 20, "unit": null }, "schemes": { "olivia-v1": { "nonces": [ "ea4c2c81a65e4df1c98515bf2af6a0e63ed086f803e4637130447056f2ff15c8", "e7a46ba921fab39fe9e953e430f72e7325c43aba4d181517b9078dfc09637054", "f9dce3712dfa25c3ab516bef99c5e828b349e53a134321938612f5b5af9dd4bf", "74ef83448b8fc533a1dc33f1f58d1c0bd6cfb086f9369d627d917884ccc21da7", "63c0e43218a9e31e0ff5b34ab492a0e9fa100eebfc888c7efc51791a55cd8283", "9e0c36a0607c16d6d6284c640a8ac00ab65a772c3fa0981be022520a9f44e47a", "e288c14d070f463a7d71bc330b4786c574aa8be02f7e7ad00e4fc9a6bdcbb32b", "e758b66c5df5ef161f00081fe87685230a0b2846863cf3da00828d112b71220f", "1ed08c216fddc19622553925ef9d2753715c7c01201a9a47ae4233a4878cc030", "8e5bc962f77346e721f25fe7871a03e95bf38a86e564c5729c86dc2fc5eb6871", "68843c83a7c9829f6ef5c4ae77067f9d7f72703ee9b8c6d5498a873180b15574", "aee0085e8c98a2afbc7d43d5774e145cfa0168d9e22506dc79fa1b74718e0043", "e4c038f95dac570ffe276c16a3518983abcc8216e9d318de39ca42c70d54ea3e", "cb4f5cc032ca1e343229256bb0d911b0e04ec2f409d0fa837cedd973ac45b742", 
"1e63f2116013988df505b26ddebec1adaeb41c7a2b55baf375f599431b5fa242", "1792e2fed1c346900702dcc208e286ea8a6865975cd961c834a275c505072247", "a5672e32a331cc360317a7329fc7f70d7843586ab4a24b961336183cc764bef9", "52272eca966fe72012abffcc481276dce404c27e4db783374a5a7c001f9477d2", "98649a7620abd662e2ff699659f58b7cf19d9ad4e5257180cc36ba79fff97c68", "92dddc74b7491067fab985962c60b8e0ce43e820e35405f6ac2a577e4b6eca6e" ] }, "ecdsa-v1": {} } } so in the oracle_event you have the descriptor and other metadata like times on one level and then you have a list of attestation schemes and their parameters. Note that nonces and public keys are always parameters to a certain attestation scheme (my oracle is still missing the public key which should be in there with the nonces). @Tibo-lg @nkohen I believe this needs to be updated with the format in #163. We need to figure out which messages we want to keep as tlvs and what data structures we need to have keep the format in #163 Unless there is strong arguments, I think it'd be best to move everything to the same format as #163. I can update this PR if we agree on that (or probably make a new one as I'm not sure I can push to @nkohen branch). @Tibo-lg @nkohen I believe this needs to be updated with the format in #163. We need to figure out which messages we want to keep as tlvs and what data structures we need to have keep the format in #163 Unless there is strong arguments, I think it'd be best to move everything to the same format as #163. I can update this PR if we agree on that (or probably make a new one as I'm not sure I can push to @nkohen branch). I think it comes down to this: Do we expect announcements/attestations to be sent over the network directly to a peer, or do we always expected them to bet fetched from a 3rd party service like oracle.suredbits.com ? 
If we think the former, we should keep TLVs; if we think the latter, I think it might be ok to remove them (although I do think this makes parsing harder for the 3rd party service: there is no easy identifier for what the thing being submitted to the service is. A TLV is unique, whereas a subtype is not). I would think that we should at least plan for enabling peers to exchange announcements/attestations directly. But in that case I think they'd need to be in wire message format, not TLV? I'm not sure how you have implemented the wire message format, but in our code base the wire message format uses the nested TLV.
case class LnMessage[+T <: TLV](tlv: T) extends NetworkElement {
  require(tlv.tpe.toLong <= 65535L, s"LN Message format requires UInt16 types")
  val tpe: UInt16 = UInt16(tlv.tpe.toInt)
  val payload: ByteVector = tlv.value
  override lazy val bytes: ByteVector = tpe.bytes ++ payload
  val typeName: String = tlv.typeName
}
The key line is val tpe: UInt16 = UInt16(tlv.tpe.toInt). Since the announcement is no longer a TLV, it cannot have a tpe, which is only defined for TLVs. How are you proposing we send non-TLVs with the wire message format? I'm not sure what you mean by "nested TLV"? The wire message doesn't have any length prefix and uses a u16 instead of a bigsize, as written in
I can detail our implementation in Scala if you'd like but I think we have a misunderstanding of what the specification says. You adhere to this with all of the test vectors on #163 (which is why we are compatible), but want to break this convention -- IIUC -- on this PR. If we are intending to send announcements/attestations over the wire independently they will need to be TLVs Sorry I should have linked to the spec after #163 : https://github.com/Tibo-lg/dlcspecs-1/blob/serialization-update-proposal/Messaging.md#message-format You adhere to this with all of the test vectors on https://github.com/discreetlogcontracts/dlcspecs/pull/163 (which is why we are compatible), but want to break this convention -- IIUC -- on this PR. This is because you wanted to keep the oracle message format intact so as not to have to update your oracle infrastructure before this PR. If we are intending to send announcements/attestations over the wire independently they will need to be TLVs I don't see why. But maybe it will be easier to discuss it during the spec meeting? It feels indeed like we are not understanding each others. Sorry I should have linked to the spec after #163 : https://github.com/Tibo-lg/dlcspecs-1/blob/serialization-update-proposal/Messaging.md#message-format You adhere to this with all of the test vectors on #163 (which is why we are compatible), but want to break this convention -- IIUC -- on this PR. This is because you wanted to keep the oracle message format intact so as not to have to update your oracle infrastructure before this PR. If we are intending to send announcements/attestations over the wire independently they will need to be TLVs I don't see why. But maybe it will be easier to discuss it during the spec meeting? It feels indeed like we are not understanding each others. A TLV stream is defined for each wire message How are you suggesting to parse an announcement if it isn't a TLV? 
IIUC, we will have the length of the announcement, but no unique identifier? What is the unique identifier for an announcement so that I can identify it as an announcement before parsing the payload? Perhaps it would be useful to make this more concrete, can you give me an example announcement you are envisioning? with first a u16 type prefix IMO, you are using a TLV but just not calling it a TLV :-). Or perhaps my mental model of TLVs is incorrect. But functionally speaking, I think we are on the same page. An announcement needs a type identifier similar to offer_dlc in 163 right? https://github.com/discreetlogcontracts/dlcspecs/pull/163/files#diff-e0f5b925f91a1c09c6daf26f9a7d28816cb9ff9f08863faca719b7ee0a1cc065R67 If that is the case, I don't really care about definitions, we are referring to the same thing with different definitions. I'll do some research later to correct my definitions -- if they are wrong -- so we can use a common language to describe the protocol. An announcement needs a type identifier similar to offer_dlc in 163 right? Yes! The main difference between TLV and wire message format is that wire message don't have the length part (because you get that from the network layer already) and use u16 instead of bigsize for the type prefix. Also note that wire messages include TLV extensions which might make things confusing (I'm not sure we want oracle messages to have that though, but it's a detail). If you think something could be made clearer in the specs let me know! I think the next question is how do announcements get serialized inside of other TLVs? I.e. offer_dlc has announcements inside of it, do you think it should be serialized in wire message format inside of the offer? Yes! The main difference between TLV and wire message format is that wire message don't have the length part (because you get that from the network layer already) and use u16 instead of bigsize for the type prefix. What do you mean by "network layer"? 
Our length is serialized as part of the TLV, i.e. type length value. I mean that the buffer you receive from the network socket should have a certain length, so you don't need a field to tell you the same thing. You should already be doing that for other messages like offer I think? (Also if the message is wellformed you don't even need that information, you'll just be parsing the content and be able to retrieve all the information you expect) Doesn't this fall apart when your messages are larger than a tcp frame? My understanding is we need things like #192 because our messages sizes may be too large, specifically with adaptor signatures. With TLVs, you know what the payload length should be as its given to you in the very first tcp frame. You should already be doing that for other messages like offer I think? This might be the source of some misunderstanding on my part, we do infer the length as you say here: https://github.com/bitcoin-s/bitcoin-s/blob/8c5288d75833d56d388247cdc928cf6694c46f46/core/src/main/scala/org/bitcoins/core/protocol/tlv/LnMessage.scala#L30 My comment above stands, but this does make more sense now * it would be useless in many cases This really isn't a compelling argument in my opinion, the point is that it is needed in some cases to have a coherent idea of the message that is being sent to you across the wire. Perhaps you have a deeper understanding of networking than I do, but I don't understand how the networking layer can have knowledge about our protocol? * It would be more difficult to handle disconnects/reconnects (if I disconnect while sending a segmented message, and restart sending the same message upon reconnecting, the receiving peer will not notice) I don't understand this. Why would I keep my peers bytes in my buffer if they disconnected from me? It could be days before they reconnect? Roughly what #192 does is to have a length field but only for messages that need them. 
There might be better ways to do it (and actually just re-looking at it now I feel it can be even simpler) but I think that's a discussion that we should have in #192 (and I'd be really happy to have it :) !). Ok, at least we are in agreement that a length field is needed for some messages. Rebased on #163 and updated to match the new serialization @Christewart @nkohen please check. @LLFourn I liked your idea of being able to support multiple attestation schemes, can you check if what I've done matches with what you had in mind? @Tibo-lg I think that looks ok. The one thing I'd consider doing is putting the attestation key with the attestation scheme data itself. Probably "oracle_keys" should just have the announcement key and each attestation scheme inside each announcement should declare its attestation key. Each scheme should use a different attestation key. Clients could then by convention require that the same attestation key is used for every announcement (of the same scheme). @Tibo-lg I think that looks ok. The one thing I'd consider doing is putting the attestation key with the attestation scheme data itself. Probably "oracle_keys" should just have the announcement key and each attestation scheme inside each announcement should declare its attestation key. Each scheme should use a different attestation key. Clients could then by convention require that the same attestation key is used for every announcement (of the same scheme). Makes sense, updated. I pushed updated test vectors including these oracle changes. The questions that I have left: Do we really need a timestamp for the metadata? (https://github.com/discreetlogcontracts/dlcspecs/pull/167#discussion_r896482864) Do we need the oracle metadata in each announcement? (https://github.com/discreetlogcontracts/dlcspecs/pull/167#discussion_r896286653) Are we sure we don't want to give proof of knowledge for attestation key and nonces? 
(more cosmetic) Should we remove all the Oracle prefixes from the field names? It feels redundant to me (and I think we don't add them consistently). Closes #183 Added proof of knowledge in the Schnorr scheme (@LLFourn when you have time to check) Updated specs and test vectors to use regular Schnorr signatures for proof of knowledge as agreed during spec meeting (using a type prefix to enable using something different in the future).
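The TLV-versus-wire-message framing debated above can be sketched with a toy encoder (an illustrative Python sketch assuming the BigSize and u16 conventions from the messaging spec; it is not code from either implementation):

```python
import struct

def bigsize(n: int) -> bytes:
    # BigSize varint, as used for TLV type and length fields.
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b"\xfd" + struct.pack(">H", n)
    if n <= 0xffffffff:
        return b"\xfe" + struct.pack(">I", n)
    return b"\xff" + struct.pack(">Q", n)

def encode_tlv(tpe: int, value: bytes) -> bytes:
    # TLV record: bigsize(type) || bigsize(length) || value.
    # Self-delimiting, so it can be embedded inside other messages.
    return bigsize(tpe) + bigsize(len(value)) + value

def encode_wire(tpe: int, payload: bytes) -> bytes:
    # Wire message: u16 type || payload, with no length field --
    # the transport framing is expected to delimit the message.
    return struct.pack(">H", tpe) + payload
```

The sketch makes the trade-off concrete: the TLV form carries its own length and so nests cleanly (e.g. an announcement inside offer_dlc), while the wire form is smaller but relies on the framing layer to know where the message ends.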
gharchive/pull-request
2021-06-08T18:59:55
2025-04-01T04:34:00.433085
{ "authors": [ "Christewart", "LLFourn", "Tibo-lg", "nkohen" ], "repo": "discreetlogcontracts/dlcspecs", "url": "https://github.com/discreetlogcontracts/dlcspecs/pull/167", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
174534883
Impossible to disk-cleanup the registry (2.5.0) due to broken garbage collection? Hi, I am trying to clean up a 2.5.0 registry which is getting its main disk full. I deleted some tags, ran the garbage collection cron job, disk space was freed, but when I try to repush an image that was deleted, I hit the following error: "Layer already exists". I have also originally posted here: https://github.com/docker/docker-registry/issues/988 although I think this is the appropriate place to mention the issue. Can someone from Docker confirm the issue? This is a really annoying bug, because it prevents real "Continuous Integration" from happening. Can't really confirm anything without more information. It is possible that there was still a tag or manifest which was referencing the layer, so seeing "Layer already exists" itself would not be indicative of an error. Are you having any other symptoms? You mentioned that disk space was freed, but perhaps not as much as anticipated and it is filling up again too quickly? In addition to deleting the tags, are you also deleting manifests which may be untagged? Steps to reproduce: Delete an image tag Run the garbage collector Repush the same image+tag Will try to give you my exact steps with curl next Monday, but I think it is well explained in the 988 ticket. As seen in docker/docker-registry#988, restarting the registry is needed and solves the problem. I think the recommended way to do GC is to stop the registry, run GC, and then start it (to be sure that nothing is getting uploaded during GC). It might be complicated when running the registry in a docker container though, as stopping the server would stop the container and not allow you to exec the GC in it... The good way is probably to restart the server in read-only mode, run GC, then restart in read-write mode.
@eesprit by restarting the server in read-only mode, you mean creating a second config2.yml with this section as the difference:
maintenance:
  readonly:
    enabled: true
And then restarting the server with that read-only option, running GC, and restarting the server with the old config? Or is there an API way to tell the running server to put itself in read-only mode? I don't think there is an API call for that (the Docker Private Registry exposes one, but this does not seem to exist in the distribution registry). So yes, I think the only way is to switch config files. By the way, as far as I understand, maintenance/readonly goes under storage, not at the "root" of the config file. I am also able to reproduce this (exactly as zoobab describes it). The issue still exists. This issue is caused by the registry cache. I disabled the registry cache to solve this problem. And it has worked. Closing as wildly outdated. Please open a new issue after testing on the latest available release.
gharchive/issue
2016-09-01T14:14:32
2025-04-01T04:34:00.461888
{ "authors": [ "dmcgowan", "dynek", "eesprit", "milosgajdos", "wutongjie23hao", "zoobab" ], "repo": "distribution/distribution", "url": "https://github.com/distribution/distribution/issues/1939", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2086775229
UI Updated Opening a draft PR so we can stay on the same page about the current progress in development. [ ] Updating UI of dashboard/users to include searching on the basis of segments and userProperty [ ] Populating the list of segments and properties [ ] Updating the user api to handle 2 more parameters - propertyType and propertyValue [ ] Indexing the table for optimal performance Any suggestions would be appreciated welcome @sy425191 ! I'm pulling myself off the issue; anyone from the community can take it. Reason: when I run the project on localhost, my machine can't handle it because it's bulky, and on GitHub Codespaces it fails to build, with some error in installing features on the Devcontainer. I really wish I could continue the development, but right now I can't see any way. @sy425191 dev containers have been fixed btw! @maxgurewitz That's great, dev containers really help
gharchive/pull-request
2024-01-17T18:45:30
2025-04-01T04:34:00.471631
{ "authors": [ "maxgurewitz", "sy425191" ], "repo": "dittofeed/dittofeed", "url": "https://github.com/dittofeed/dittofeed/pull/588", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2478560703
🛑 WS Port is down In 0e8f635, WS Port ($COBI_MQTT) was down: HTTP code: 0 Response time: 0 ms Resolved: WS Port is back up in 97eb163 after 8 hours, 46 minutes.
gharchive/issue
2024-08-21T17:12:42
2025-04-01T04:34:00.474992
{ "authors": [ "diveliastudio" ], "repo": "diveliastudio/cobi-upptime", "url": "https://github.com/diveliastudio/cobi-upptime/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1565138378
🛑 HIAS is down In 96abf5b, HIAS ($HIAS) was down: HTTP code: 0 Response time: 0 ms Resolved: HIAS is back up in dee9907.
gharchive/issue
2023-01-31T23:14:23
2025-04-01T04:34:00.477038
{ "authors": [ "diveliastudio" ], "repo": "diveliastudio/upptime", "url": "https://github.com/diveliastudio/upptime/issues/232", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1580866531
"return new throw" ?!!! What the heck?! What is going on? Replace return throw new Exception with return new Exception!!! wth, maybe you mean throw new Exception();?
gharchive/issue
2023-02-11T14:33:38
2025-04-01T04:34:00.478350
{ "authors": [ "SashaTalk", "diveloper53" ], "repo": "diveloper53/SimpleBytes-PHP", "url": "https://github.com/diveloper53/SimpleBytes-PHP/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
123103259
jsdoc documentation This creates a better page structure for the API to be seen. closed because jsdoc has too many problems at the moment
gharchive/issue
2015-12-19T18:37:32
2025-04-01T04:34:00.485199
{ "authors": [ "aetheon" ], "repo": "divhide/node-divhide", "url": "https://github.com/divhide/node-divhide/issues/35", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
66955305
django 1.8 Is there an expected timeframe for support of Django 1.8? Django 1.8 support will be provided in a 3.1 patch release. No commitment to a specific ETA, but should land in a few weeks. A more detailed timeframe after final 3.1 release. I'm working on this enhancement... ;)
gharchive/issue
2015-04-07T18:05:16
2025-04-01T04:34:00.486244
{ "authors": [ "nostalgiaz", "rando305", "yakky" ], "repo": "divio/django-cms", "url": "https://github.com/divio/django-cms/issues/3993", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
151075065
Fixed a typo. a now poll => a new poll thanks again @Eraldo LGTM LGTM, merging.
gharchive/pull-request
2016-04-26T08:35:36
2025-04-01T04:34:00.487353
{ "authors": [ "Eraldo", "FinalAngel", "mkoistinen" ], "repo": "divio/django-cms", "url": "https://github.com/divio/django-cms/pull/5217", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
418269444
Fixed #6457 -- Fixed rendering of permission icons Backport of #6624 Coverage remained the same at 78.189% when pulling 689993fda4be2cb10425f0751bc781ab5da1f6b1 on vThaian:backport/3.5.x/6624 into 3cf82a023cf5657ecc1e25e8c76b59bd3539bba1 on divio:release/3.5.x.
gharchive/pull-request
2019-03-07T11:48:48
2025-04-01T04:34:00.489437
{ "authors": [ "coveralls", "vThaian" ], "repo": "divio/django-cms", "url": "https://github.com/divio/django-cms/pull/6626", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
265163903
Untracked files cause rebase not to work The error "Unable to manipulate commits while repo is in unclean state" is shown even when the repo only contains untracked files, which should not happen.
Sublime Text version number: 3148
Git version number: 1.9.1
OS type and version: Ubuntu 14.04
touch foo.bar in the repo dir
enter GitSavvy rebase mode
try to do any action
The command will fail with the said error. The reason for this is that some actions do not allow untracked files. Well, using git rebase from the command line works correctly, so untracked files should not stop Sublime from, for example, deleting or reordering a commit. @stoivo Maybe untracked files do not cause any issues? @randy3k, should we close this issue? On your preference. It will get closed when #826 gets merged to master anyway. This is merged into dev and will be released in the next release. Feel free to reopen if it isn't solved.
gharchive/issue
2017-10-13T04:04:54
2025-04-01T04:34:00.496521
{ "authors": [ "3v1n0", "randy3k", "stoivo" ], "repo": "divmain/GitSavvy", "url": "https://github.com/divmain/GitSavvy/issues/789", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2459329493
🛑 Arcadja is down In 434060d, Arcadja (https://www.arcadja.com/auctions/en/) was down: HTTP code: 0 Response time: 0 ms Resolved: Arcadja is back up in 83164fc after 8 minutes.
gharchive/issue
2024-08-10T23:18:41
2025-04-01T04:34:00.499860
{ "authors": [ "divtiply" ], "repo": "divtiply/artupptime", "url": "https://github.com/divtiply/artupptime/issues/1529", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2404959661
🛑 Heritage Auctions is down In a6febf9, Heritage Auctions (https://www.ha.com/) was down: HTTP code: 403 Response time: 159 ms Resolved: Heritage Auctions is back up in 17a6174 after 2 hours, 57 minutes.
gharchive/issue
2024-07-12T07:40:26
2025-04-01T04:34:00.502345
{ "authors": [ "divtiply" ], "repo": "divtiply/artupptime", "url": "https://github.com/divtiply/artupptime/issues/735", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
535730521
allow remote login to db server from zabbix server Description of PR Type of change Feature Pull Request Bugfix Pull Request Docs Pull Request Fixes an issue Thanks! 👍
gharchive/pull-request
2019-12-10T13:29:16
2025-04-01T04:34:00.537369
{ "authors": [ "Vinclame", "dj-wasabi" ], "repo": "dj-wasabi/ansible-zabbix-server", "url": "https://github.com/dj-wasabi/ansible-zabbix-server/pull/166", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
51205341
zabbixapi gem not loaded when using ruby1.8 If I enable manage_resources in zabbix::server on a machine running ruby 1.8.7, I get the following error: err: Could not run Puppet configuration client: no such file to load -- zabbixapi (I'm using debian, and puppet seems to have a hardcoded dependency on ruby 1.8, despite ruby 1.9.3 also being installed on the system.) Do you intend this module to work with ruby 1.8.7, or only newer versions? I've downloaded the Debian 7.7 iso images and installed a fresh debian into vbox. I installed ruby1.8 first and then the apt repository from puppetlabs and installed puppet (it installed ruby1.9 too). I can't reproduce it this way, so I'll have to wait on your comment about how to reproduce this. p.s. Happy new year ;) @dj-wasabi Thank you for the attempted fix. I pulled this down and attempted to run again but I'm still seeing the same error. I purged all existing instances of zabbix-api using gems before running puppet.
gharchive/issue
2014-12-07T03:01:12
2025-04-01T04:34:00.540625
{ "authors": [ "dj-wasabi", "elricsfate", "lucas42" ], "repo": "dj-wasabi/puppet-zabbix", "url": "https://github.com/dj-wasabi/puppet-zabbix/issues/22", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
226201541
export to doc or docx hey, I tried using CKEditor in my django project, and I need to export after editing some text, but I can't find any plugin or button to export to doc or docx! What should I do? Does CKEditor have any feature for this? There's probably no ready-made plugin, certainly not in django-ckeditor (it's also out of scope) You should be able to put something together using for example html5lib/lxml/Beautifulsoup to parse the HTML, and https://python-docx.readthedocs.io/ or a comparable package to reassemble the parsed document structure.
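The parse-then-reassemble approach suggested above can be sketched in two halves. The stdlib-only fragment below does the parsing half; the collected paragraphs would then be fed to python-docx (for example its Document.add_paragraph method) to write the .docx file. That writing half is omitted here, and everything below is an illustrative sketch rather than a ready-made plugin:

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collect the plain text of each <p> element so it can later be
    handed to a .docx writer such as python-docx."""
    def __init__(self):
        super().__init__()
        self._in_p = False
        self._buf = []
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p, self._buf = True, []

    def handle_endtag(self, tag):
        if tag == "p" and self._in_p:
            self.paragraphs.append("".join(self._buf).strip())
            self._in_p = False

    def handle_data(self, data):
        if self._in_p:
            self._buf.append(data)

def html_paragraphs(html):
    """Return the text content of each top-level <p> in the HTML."""
    parser = ParagraphExtractor()
    parser.feed(html)
    return parser.paragraphs
```

Inline formatting (bold, italic) is flattened here; carrying it over to docx runs would need per-tag handling on top of this skeleton.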
gharchive/issue
2017-05-04T07:49:45
2025-04-01T04:34:00.544168
{ "authors": [ "matthiask", "mrillusion" ], "repo": "django-ckeditor/django-ckeditor", "url": "https://github.com/django-ckeditor/django-ckeditor/issues/390", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
316561134
how can i set up an inline editor? As the title says, how do I do it? I have tried many times but, short of ignoring django-ckeditor, I cannot achieve it. Thanks for any answer. inline editor? In what context and where? Hi, I'm also trying to create an inline editor. I'm using it in a Django Model Form. A sample of the code I'm using to create the editor: title = forms.CharField(widget=CKEditorWidget(config_name='title')) It appears as a normal editor field. I tried using the previous workaround from CKEditor 4.4: CKEDITOR.disableAutoInline = true; CKEDITOR.inline( 'id_name', {customConfig: "{% static 'js/title.js' %}"} ); but I get an error saying that 'id_name' is already in use: Uncaught The editor instance "id_name" is already attached to the provided element. I'm not sure if the inline editor is still available in Django CKEditor 5, but if it is, could you please detail how it is used? Thanks, Spoon look at the CKEditor website: the inline editor works conveniently and is great, so I want to set it up on my blog, really! Please show us an example of how to do it with django-ckeditor. Thanks very much, and I found some settings do not work properly, e.g.: can all settings like config.name = 'value'; be written into django settings.py like 'name': 'value'? I have tried to change it in config.js (in collected statics) but with no effect at all; very confused.
best wishes, thanks I see there are pieces of information in the update logs on the docs pages, but nothing concrete except the workaround mentioned above (which doesn't work, at least it doesn't for my configuration; might be worth trying if you're just reading this) I can see one possibly relevant mention in the changelog, for v4.4.4: "Fixes for inline editor" I'm guessing that this commit is what that refers to: https://github.com/django-ckeditor/django-ckeditor/commit/440cda35d2921dd6356a8e732a8e2a4bd188e601 But I'm not clear what that's doing, and it was so long ago (we're now at v6.5.1), that I'm not sure it's helpful for figuring this out. I've tried editing a copy of ckeditor/ckeditor-init.js and just changing CKEDITOR.replace(...) with CKEDITOR.inline(...) but it doesn't get run, because at this point t.getAttribute("data-processed") is "1" instead of "0" for me. I guess it would need a more substantial rewrite to work. (I'm using the CKEditorWidget() for a TextField in a ModelForm.) To enable an inline editor using django-ckeditor instead of the default CKEDITOR.replace, you can use the CKEDITOR_CONFIGS setting in your settings.py file to configure the CKEditor instance. Here's an example of how to enable inline editing: Add the ckeditor app to your INSTALLED_APPS setting in settings.py:
INSTALLED_APPS = [
    'ckeditor',
]
In your settings.py file, add the following CKEDITOR_CONFIGS setting:
CKEDITOR_CONFIGS = {
    'default': {
        'toolbar': 'full',
        'height': 300,
        'width': '100%',
        'startupMode': 'inline',
    }
}
This sets the startupMode to 'inline', which enables inline editing.
In your template, load the CKEditor JavaScript and CSS files: {% load static %}
<!DOCTYPE html>
<html>
<head>
<title>My Page</title>
<script src="{% static 'ckeditor/ckeditor.js' %}"></script>
<link rel="stylesheet" href="{% static 'ckeditor/ckeditor.css' %}">
</head>
<body>
<div id="editable" contenteditable="true">My editable content</div>
<script>
// CKEDITOR.inline takes an element id (or a DOM element), so the div
// needs an id for this to work
CKEDITOR.inline('editable');
</script>
</body>
</html>
To enable an inline editor with django-ckeditor, you can simply provide the CKEDITOR.inline configuration option in your template. Here's an example: {% load static %}
<script src="{% static 'ckeditor/ckeditor-init.js' %}"></script>
<script src="{% static 'ckeditor/ckeditor/ckeditor.js' %}"></script>
<textarea id="my-textarea">Hello, World!</textarea>
<script>
// Initialize CKEditor
CKEDITOR.inline('my-textarea', {
toolbar: 'Basic',
removeButtons: 'Underline,Strike,Subscript,Superscript'
});
</script>
In this example, we're using the CKEDITOR.inline function to initialize an inline editor on an element with an ID of "my-textarea". We're also providing a custom configuration object that specifies a basic toolbar and removes some unnecessary buttons. Note that in order to use CKEDITOR.inline, you need to include both ckeditor-init.js and ckeditor.js in your template. These files are typically included in your base.html or header.html template, and loaded using Django's static template tag. Override the ckeditor/ckeditor-init.js file in your project with your own version. Create the ckeditor folder in your static folder and copy the ckeditor folder from the django-ckeditor package into it. Create a new file called ckeditor-init.js in the same static/ckeditor folder. Edit the ckeditor-init.js file and replace the CKEDITOR.replace call with CKEDITOR.inline.
Here's an example of what it might look like: function generateInlineEditor(id) { CKEDITOR.inline(id, { // Here you can customize editor settings removePlugins: 'elementspath', height: 200, toolbar: [ {name: 'basicstyles', groups: ['basicstyles', 'cleanup'], items: ['Bold', 'Italic', 'Underline', 'Subscript', 'Superscript']}, {name: 'paragraph', groups: ['list', 'indent', 'blocks', 'align'], items: ['-', 'NumberedList','BulletedList', '-', 'Outdent', 'Indent', '-', 'Blockquote', 'AlignLeft', 'AlignCenter', 'AlignRight']}, {name: 'links', items: ['Link', 'Unlink']}, {name: 'others', items: ['Styles', 'Format', 'Font', 'FontSize', 'TextColor', 'BGColor', 'Source']} ] }); } // Initialize all inline editors with class 'inline-editor' var editors = document.getElementsByClassName('inline-editor'); for (var i = 0; i < editors.length; i++) { generateInlineEditor(editors[i].getAttribute('id')); } In your Django template, include your ckeditor-init.js file, and add a div with a unique id and the "inline-editor" class to define your editable area <head> <script src="{% static 'ckeditor/ckeditor.js' %}"></script> <script src="{% static 'ckeditor/ckeditor-init.js' %}"></script> </head> <div id="my-editor" class="inline-editor">{{ my_content }}</div>
gharchive/issue
2018-04-22T08:45:40
2025-04-01T04:34:00.558625
{ "authors": [ "deanmcginndm", "lifeyf", "philgyford", "riklaunim", "some1ataplace", "spoonlyorange" ], "repo": "django-ckeditor/django-ckeditor", "url": "https://github.com/django-ckeditor/django-ckeditor/issues/483", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
99506264
Make a proper filesystem log and email that instead of the failure emails (From TODO.md) Make a proper filesystem log and email that instead of the failure emails Dupe of #67
gharchive/issue
2015-08-06T19:19:59
2025-04-01T04:34:00.564325
{ "authors": [ "ZuluPro", "benjaoming" ], "repo": "django-dbbackup/django-dbbackup", "url": "https://github.com/django-dbbackup/django-dbbackup/issues/83", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
108198838
IntegerField has invalid lookup: in I'm starting to work with LDAP and using this app (thanks @jlaine!) I'm developing a django site to help me manage my LDAP db. Following the examples, I've created an LdapGroup model but instead of making the cn the pk, I chose the gidNumber. All works just fine except when deleting a group. I get a "TypeError: IntegerField has invalid lookup: in". I've found this IntegerField definition (/ldapdb/models/fields.py): def get_prep_lookup(self, lookup_type, value): "Perform preliminary non-db specific lookup checks and conversions" if lookup_type in ('exact', 'gte', 'lte'): return value raise TypeError("IntegerField has invalid lookup: %s" % lookup_type) I've fixed the error simply by adding 'in' to that tuple. The question is: shouldn't it be there by default? Or does it have something to do with how LDAP works? Looks like a bug ;) This might be fixed as a side-effect of the current work on support for Django 1.10.
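The fix described above is literally one more entry in the accepted-lookups tuple; deletes go through a pk__in filter, which is why the 'in' lookup is hit. A standalone mock of the patched method (the real class lives in ldapdb/models/fields.py and subclasses Django's field; this stripped-down version just shows the check):

```python
# Mock of the patched ldapdb IntegerField lookup check.
# Deleting objects issues a pk__in lookup, so 'in' must be accepted.
class IntegerField:
    def get_prep_lookup(self, lookup_type, value):
        # Perform preliminary non-db specific lookup checks and conversions.
        if lookup_type in ('exact', 'gte', 'lte', 'in'):
            return value
        raise TypeError("IntegerField has invalid lookup: %s" % lookup_type)
```

Without 'in' in the tuple, a delete such as LdapGroup.objects.filter(pk__in=[...]).delete() raises exactly the TypeError quoted in the report.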
gharchive/issue
2015-09-24T19:35:56
2025-04-01T04:34:00.566541
{ "authors": [ "mdemicheli", "rbarrois" ], "repo": "django-ldapdb/django-ldapdb", "url": "https://github.com/django-ldapdb/django-ldapdb/issues/81", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
233398939
Unable to use layer inside docker container. This is a really weird situation. I hit this thing while trying to fix asgi_rabbitmq build. This is what I've got when I try to use IPC layer inside Docker container. $ docker-compose run --rm py36dj111 /code/.tox3.6.1/py36-django111/bin/python -i Starting asgirabbitmq_rabbitmq_1 ... done Python 3.6.1 (default, Mar 23 2017, 02:34:11) [GCC 4.9.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from asgi_ipc import IPCChannelLayer >>> layer = IPCChannelLayer() >>> layer.send('foo', {'baz': 'bar'}) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/code/.tox3.6.1/py36-django111/lib/python3.6/site-packages/asgi_ipc/core.py", line 61, in send channel_size = self.message_store.length(channel) File "/code/.tox3.6.1/py36-django111/lib/python3.6/site-packages/asgi_ipc/store.py", line 134, in length with self.mutate_value() as value: File "/usr/local/lib/python3.6/contextlib.py", line 82, in __enter__ return next(self.gen) File "/code/.tox3.6.1/py36-django111/lib/python3.6/site-packages/asgi_ipc/store.py", line 50, in mutate_value value = pickle.load(self.mmap) _pickle.UnpicklingError: invalid load key, '\x00'. >>> layer = IPCChannelLayer() >>> layer.flush() >>> layer.send('foo', {'baz': 'bar'}) As you can see, send will be successful only after first use of flush. I can't reproduce this behavior outside of Docker container. It works fine on my machine. But Travis builds use Docker for the build, so exactly same error happens in the CI. https://travis-ci.org/proofit404/asgi_rabbitmq/jobs/236879296#L4355-L4356 It is curious because asgi_ipc and asgi_redis builds passed without any trouble. Any suggestions for future research? I had another report of it working "very slowly" inside Docker this week as well, to the point where I suspect it was not actually working - it's possible something about the IPC communication does not agree with Docker? 
Based on your error, it looks like the shared memory is not working correctly, but I don't really know how to proceed. I can confirm that this error happens only on python3. Hm. Maybe try changing the pickle format and see if that affects it? But in the traceback above, load is called before dump. So if I understand correctly, pickle tries to read from uninitialized memory and fails. We could set the error handling to a different mode, or try to flush the layer memory on the first UnpicklingError. Ah, it tries to unpickle it and then fails out at EOFError if it's empty - it seems in this case the memory is zeroed but has a length, so it probably just needs to check if it's empty as well. Sorry, I can't find a way to check if the memory map or shared memory is empty. Well, all pickles start with an 0x80 opcode, so checking if the first byte of the mmap is 0x00 should be enough. If the build is successful with this fix, I'll open a PR. Yep, looks like the problem is solved. Great! Closing this then.
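Since every pickle written with protocol 2 or later begins with the PROTO opcode 0x80, the emptiness check agreed on above can be sketched as follows. Here io.BytesIO stands in for the real shared-memory mmap, and the helper name is made up for illustration:

```python
import io
import pickle

PROTO_OPCODE = b"\x80"  # first byte of any protocol >= 2 pickle

def load_or_default(buf, default):
    """Unpickle from buf, or return `default` when the buffer holds
    zeroed (uninitialized) memory instead of a pickle."""
    first = buf.read(1)
    buf.seek(0)
    if first != PROTO_OPCODE:
        return default
    return pickle.load(buf)

# Zeroed "shared memory" no longer raises UnpicklingError:
empty = io.BytesIO(b"\x00" * 64)
fresh = load_or_default(empty, {})
```

With this guard, a freshly created (zeroed) segment behaves like an empty store instead of blowing up on the first send.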
gharchive/issue
2017-06-03T22:27:37
2025-04-01T04:34:00.593149
{ "authors": [ "andrewgodwin", "proofit404" ], "repo": "django/asgi_ipc", "url": "https://github.com/django/asgi_ipc/issues/26", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
353468614
Fixed #29706 -- Missing using=db call in RenameContentType._rename Ticket #29706 @charettes Here is a pull request for the issue I spoke with you about. Can you add a test? tests/contenttypes_tests/test_operations.py might be the place.
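The title's one-line fix — passing using=db so the content type is saved on the schema editor's database rather than the default one — can be shown with a standalone mock (illustrative stand-ins, not Django's actual migration code):

```python
# Mock illustrating the routing bug: _rename() must save on the schema
# editor's database alias, not the default one.
class FakeContentType:
    def __init__(self):
        self.saved_on = None

    def save(self, using=None, update_fields=None):
        # record which database alias the save was routed to
        self.saved_on = using or "default"

def rename(content_type, db):
    # the fix: pass using=db (previously omitted, so saves hit "default")
    content_type.save(using=db, update_fields=["model"])

ct = FakeContentType()
rename(ct, "other")
```

Before the fix the equivalent call would have landed on "default", which is exactly the multi-database breakage the ticket describes.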
gharchive/pull-request
2018-08-23T16:55:49
2025-04-01T04:34:00.595164
{ "authors": [ "digismack", "timgraham" ], "repo": "django/django", "url": "https://github.com/django/django/pull/10332", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
728983123
Translated content becomes ugly due to line-down on new admin side-bar [https://code.djangoproject.com/ticket/32141](Ticket #32141) The first line is the result of the change. The second line is the original version. The third line is the original translated version Closing per ticket-32141.
gharchive/pull-request
2020-10-25T09:49:37
2025-04-01T04:34:00.596947
{ "authors": [ "felixxm", "xncbf" ], "repo": "django/django", "url": "https://github.com/django/django/pull/13602", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
744450241
Fixed #32201 -- Remove obsolete isort:skip's Changes Remove unneeded isort:skip Ticket https://code.djangoproject.com/ticket/32201#no3 Questions django/docs/internals/contributing/writing-code/coding-style.txt contains reference to isort:skip. I was unsure if this needed to be updated to reflect that it is no longer being used. Can you tell me how do I open my issue for django I am new to django and github I have some questions about django @seamus-quinn Thanks :+1: Welcome aboard :boat:
gharchive/pull-request
2020-11-17T06:24:49
2025-04-01T04:34:00.599293
{ "authors": [ "Saad-py", "felixxm", "seamus-quinn" ], "repo": "django/django", "url": "https://github.com/django/django/pull/13687", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1011482895
Fixed #33155 -- Make ModelChoiceIteratorValue hashable See: https://code.djangoproject.com/ticket/33155 Please include a regression test and target your patch to the main branch rather than stable/3.1.x. I imagine the committer may backport this to the stable/4.0.x branch but it probably doesn't qualify for older versions per our supported versions policy, 3.1 especially is only receiving fixes for security and data loss issues.
gharchive/pull-request
2021-09-29T22:00:15
2025-04-01T04:34:00.601392
{ "authors": [ "aljazkosir", "timgraham" ], "repo": "django/django", "url": "https://github.com/django/django/pull/14915", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1710117259
Fixed #34539 -- Restored get_prep_value() call when adapting JSONFields. Regression in 5c23d9f0c32f166c81ecb6f3f01d5077a6084318, JSONField was not following the Field API anymore by not calling get_prep_value from get_db_prep_value Ticket 34539 Hi @Chadys 👋 Thanks for the patch 🏆. Please update the commit message with the format Fixed #xxxxx -- <description in past tense> (see commit log). Just pushed some changes, I think I addressed all comments @Chadys Thanks :+1: Welcome aboard :boat:
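The contract being restored can be shown with a standalone mock (not Django's actual implementation): get_db_prep_value() funnels through get_prep_value(), so a subclass that customizes the latter keeps working:

```python
import json

# Mock field illustrating the Field API contract: adapting a value for
# the database must first run the generic get_prep_value() hook.
class JSONFieldSketch:
    def get_prep_value(self, value):
        return value  # subclasses may coerce/validate here

    def get_db_prep_value(self, value, connection=None, prepared=False):
        if not prepared:
            value = self.get_prep_value(value)  # the call the regression dropped
        return json.dumps(value)
```

A subclass overriding get_prep_value() — the pattern broken by the regression — now sees its hook applied before serialization.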
gharchive/pull-request
2023-05-15T13:33:27
2025-04-01T04:34:00.603590
{ "authors": [ "Chadys", "felixxm", "shangxiao" ], "repo": "django/django", "url": "https://github.com/django/django/pull/16858", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
61974367
Fixed #24486 -- Documented method to provide output_field to mixed F expressions This should probably be backported to avoid redundant tickets being created, since the solution is non-obvious to someone without internal knowledge. When F(datetime) + F(delta) are combined in a query, the expression is unable to determine the output type without special-casing the code. Since F expressions do not accept an output_field argument, another type is required to provide an output_field at a higher layer. The other option would be: finish = F('start') + F('duration') finish.output_field = DateTimeField() Race.objects.annotate(finish=finish) But I find that less palatable. merged in 820381d38bc02ea8b92837ce869e7332a7db9913, thanks.
gharchive/pull-request
2015-03-16T05:15:35
2025-04-01T04:34:00.605422
{ "authors": [ "jarshwah", "timgraham" ], "repo": "django/django", "url": "https://github.com/django/django/pull/4329", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
170042104
Fixed 27036 - locmem email backend should accept generators https://code.djangoproject.com/ticket/27036 buildbot, add to whitelist. merged in 004ba05bcaab9133bc2b7f943f6c3198da38dbc0, thanks!
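The gist of the change can be sketched with a minimal stand-in for the locmem backend (illustrative only, not Django's actual code): materializing the iterable up front lets callers pass generators as well as lists:

```python
# Minimal stand-in for the locmem email backend's send_messages().
class LocMemBackend:
    def __init__(self):
        self.outbox = []

    def send_messages(self, messages):
        messages = list(messages)  # accept generators as well as lists
        self.outbox.extend(messages)
        return len(messages)
```

Without the list() call, a generator would be exhausted (or un-len()-able) by the time the backend tried to count and store the messages.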
gharchive/pull-request
2016-08-08T22:35:53
2025-04-01T04:34:00.607065
{ "authors": [ "MDziwny", "timgraham" ], "repo": "django/django", "url": "https://github.com/django/django/pull/7047", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
196572522
BugFix: --database option does not work because of system checks running before the module is loaded. So, we need to disable system checks, as I committed. Hi, please create a Trac ticket with steps to reproduce the issue. I couldn't reproduce a problem with manage.py dumpdata --database=other polls. A test is also required for all bug fixes.
gharchive/pull-request
2016-12-20T02:38:55
2025-04-01T04:34:00.608349
{ "authors": [ "salehi", "timgraham" ], "repo": "django/django", "url": "https://github.com/django/django/pull/7720", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2230890911
Debug mode fix This is a fix for issue #51, which in production the DEBUG is set to True, Thing that are updated Separated the compose file into two: compose.yml for local development and compose-prod.yml for production. The same approach has been applied to the Dockerfile, with separate Dockerfiles for the production and local environments. Production container's start script now includes collectstatic, and the server has been changed to gunicorn. Production Backend staticfile changed from whitenoise.storage.CompressedManifestStaticFilesStorage to django.contrib.staticfiles.storage.StaticFilesStorage Finally, in production.py, check if REDIS_URL is set before configuring the CACHES settings. Production .django env USE_DOCKER=yes IPYTHONDIR=/app/.ipython DJANGO_SETTINGS_MODULE=config.settings.production DJANGO_SECRET_KEY=[required] DJANGO_ADMIN_URL=[required] MAILGUN_API_KEY=[not-required-if-not-implemented] MAILGUN_DOMAIN=[not-required-if-not-implemented] SENTRY_DSN= DJANGO_DEBUG=False Just fixed it, it was on server side.
gharchive/pull-request
2024-04-08T10:58:48
2025-04-01T04:34:00.615935
{ "authors": [ "davidmgvaz", "theShinigami" ], "repo": "djangocon/2024.djangocon.eu", "url": "https://github.com/djangocon/2024.djangocon.eu/pull/52", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
429196637
Panic on connection timeout
I've been testing changes at https://github.com/djc/quinn/pull/291 and witnessing EndpointInner::drive_recv() panics:

thread 'Crust-Event-Loop' panicked at 'called `Option::unwrap()` on a `None` value', src/libcore/option.rs:345:21
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
   1: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:70
   2: std::panicking::default_hook::{{closure}}
             at src/libstd/sys_common/backtrace.rs:58
             at src/libstd/panicking.rs:200
   3: std::panicking::default_hook
             at src/libstd/panicking.rs:215
   4: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:478
   5: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:385
   6: rust_begin_unwind
             at src/libstd/panicking.rs:312
   7: core::panicking::panic_fmt
             at src/libcore/panicking.rs:85
   8: core::panicking::panic
             at src/libcore/panicking.rs:49
   9: <core::option::Option<T>>::unwrap
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858/src/libcore/macros.rs:10
  10: quinn::endpoint::EndpointInner::drive_recv
             at /home/povilas/maidsafe/quinn/quinn/src/endpoint.rs:201
  11: <quinn::endpoint::EndpointDriver as futures::future::Future>::poll
             at /home/povilas/maidsafe/quinn/quinn/src/endpoint.rs:135
  12: <futures::future::map_err::MapErr<A, F> as futures::future::Future>::poll
             at /home/povilas/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.25/src/future/map_err.rs:30
  13: <alloc::boxed::Box<F> as futures::future::Future>::poll
             at /home/povilas/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.25/src/future/mod.rs:113
  14: <futures::task_impl::Spawn<T>>::poll_future_notify::{{closure}}
             at /home/povilas/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.25/src/task_impl/mod.rs:326
  15: <futures::task_impl::Spawn<T>>::enter::{{closure}}
             at /home/povilas/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.25/src/task_impl/mod.rs:396
  16: futures::task_impl::std::set
             at /home/povilas/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.25/src/task_impl/std/mod.rs:78
  17: <futures::task_impl::Spawn<T>>::enter
             at /home/povilas/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.25/src/task_impl/mod.rs:396
  18: <futures::task_impl::Spawn<T>>::poll_fn_notify
             at /home/povilas/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.25/src/task_impl/mod.rs:288
  19: <futures::task_impl::Spawn<T>>::poll_future_notify
             at /home/povilas/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.25/src/task_impl/mod.rs:326
  20: <tokio_current_thread::scheduler::Scheduled<'a, U>>::tick

I did some debugging and it turns out this happens when one of the connections times out. Here's what happens in that case:

proto::Connection sends EndpointEvent::Drained: https://github.com/djc/quinn/blob/ab9e08ed359c8e356e907af8c557448c1548824a/quinn-proto/src/connection.rs#L502
This event is picked up by EndpointInner::handle_event(), which then removes the connection from the hashmap: https://github.com/djc/quinn/blob/ab9e08ed359c8e356e907af8c557448c1548824a/quinn/src/endpoint.rs#L274
Then a UDP packet arrives which turns out to belong to the removed connection, and this panics: https://github.com/djc/quinn/blob/ab9e08ed359c8e356e907af8c557448c1548824a/quinn/src/endpoint.rs#L201

Seems like inner.handle() shouldn't parse such UDP packets? Maybe Endpoint and proto::Endpoint states are out of sync?

After the EndpointEvent::Drained event has been processed by the proto::Endpoint, it should not be possible for the UDP packet to "belong" to that connection, since the proto::Endpoint can no longer link it to that now-unknown connection. So what's surprising here is that the UDP packet arrives (by your timeline, after the Drained event is processed) and still "belongs" to the removed connection.
I'm not sure if some future UDP packet with a proper CID triggers that or not, but I've been doing some debugging, and it seems that when Endpoint removes a connection, proto::Endpoint removes this connection as well. What is weird, though, is that subsequent UDP datagrams make proto::Endpoint::handle() yield a ConnectionHandle that was already removed, apparently looked up via some new connection ID. I did some logging and here's what I see just before the panic:

[endpoint_inner] new conn: ConnectionHandle(51)
[connection] send EndpointEvent::Drained: Idle timeout
[endpoint_inner] remove conn: ConnectionHandle(51)
[proto::endpoint] received EndpointEvent::Drained, removing connection: ConnectionHandle(51)
rm from connection_ids: ff8dc18e24163438
rm from connection_ids: d45292e33575f476
rm from connection_ids: b98864c06eadef8b
rm from connection_ids: 02dfa60773e056f8
rm from connection_ids: e5693af35cfc9aae
rm from connection_ids: b7013e4c8b593b97
rm from connection_ids: 7e0a793bcefb2fa1
[proto::endpoint] handle(): got packet from removed conn: 51 - CID(359ac321af059c95) was still in connection_ids
[endpoint_inner] event for non-existant conn: ConnectionHandle(51)

Now I'm not sure how connection ID 359ac321af059c95 got associated with connection handle 51. To make sense of these log messages you can take a look at https://github.com/povilasb/quinn/commit/620c5dedcbf3610abbc9d4344b71d51940bb85de

Does a new connection arrive soon after the removal of ConnectionHandle(51), which might trigger the recycling of that particular handle? I wouldn't be surprised if we have race conditions there.

I did some logging and here's what I see just before the panic:

Nice work! I count 7 removed CIDs there; we attempt to maintain 8 CIDs for every connection. While the new message-passing shape of things means this won't always be the case, this might indicate that ConnectionMeta.loc_cids is missing a CID that's nonetheless stored in Endpoint.connection_ids, leading to our failure to remove it.
Ralith, seems like you're right:

[proto::endpoint] received EndpointEvent::Drained, removing connection: ConnectionHandle(70)
IDs (from self.connection_ids) associated with this conn handle:
5bd7b4df68342439
08679b5d28683c23
e6faadaba4336b83
dff615b62f0d1ff1
6eacaf6ffb704343
852b7ab65d600bbc
f69b8a7045855aa4
45f90b7be83d8430
rm from connection_ids: 5bd7b4df68342439
rm from connection_ids: dff615b62f0d1ff1
rm from connection_ids: e6faadaba4336b83
rm from connection_ids: 45f90b7be83d8430
rm from connection_ids: f69b8a7045855aa4
rm from connection_ids: 6eacaf6ffb704343
rm from connection_ids: 852b7ab65d600bbc
[proto::endpoint] handle(): got packet from removed conn: 70 - CID(08679b5d28683c23) was still in connection_ids

So ConnectionMeta.loc_cids is missing one of the IDs, will check why.
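The failure mode diagnosed above — two registries that are supposed to stay in sync, where the per-connection CID list misses an entry that the endpoint-wide map still holds — can be reproduced in a few lines. The following Python sketch is purely illustrative (the names mirror quinn's `connection_ids` / `loc_cids`, but this is not quinn's actual implementation):

```python
# Minimal sketch of the desync: removal iterates the per-connection CID
# list, so any CID missing from that list leaks in the global map.

class Endpoint:
    def __init__(self):
        self.connection_ids = {}   # CID -> connection handle (global map)
        self.loc_cids = {}         # handle -> list of CIDs (per-connection)

    def issue_cid(self, handle, cid, track=True):
        self.connection_ids[cid] = handle
        if track:                  # a buggy code path might skip this step
            self.loc_cids.setdefault(handle, []).append(cid)

    def drain(self, handle):
        # Mirrors handling of EndpointEvent::Drained: remove only the
        # CIDs the per-connection list knows about.
        for cid in self.loc_cids.pop(handle, []):
            del self.connection_ids[cid]

    def handle_packet(self, cid):
        # A later datagram can still resolve to the removed handle.
        return self.connection_ids.get(cid)

ep = Endpoint()
ep.issue_cid(51, "ff8dc18e", track=True)
ep.issue_cid(51, "359ac321", track=False)  # CID never recorded in loc_cids
ep.drain(51)
print(ep.handle_packet("359ac321"))  # -> 51, a "removed" handle
```

Once one CID slips past `loc_cids`, `drain` can never clean it up, and any later packet carrying that CID resolves to a handle the rest of the system believes is gone — exactly the `got packet from removed conn` log line above.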
gharchive/issue
2019-04-04T10:17:29
2025-04-01T04:34:00.647620
{ "authors": [ "Ralith", "djc", "povilasb" ], "repo": "djc/quinn", "url": "https://github.com/djc/quinn/issues/292", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
548708772
How would one handle SNI with multiple domains?
Hello, I am expressing big interest in your crate and have vaguely gone through the documentation and examples, but one thing is not very clear to me. Usually, one might want to create a web server which handles requests for multiple domains, each with its own certificate, but I cannot see a way of doing this with quinn — or am I wrong?

Based on your server example, you first create the ServerConfig:

    ...
    server_config.certificate(quinn::CertificateChain::from_certs(vec![cert]), key)?;
}

And then you create the Endpoint using that config:

let mut endpoint = quinn::Endpoint::builder();
endpoint.listen(server_config.build());

And finally you instantiate the listener and start accepting new connections:

let mut runtime = Builder::new().threaded_scheduler().enable_all().build()?;
let (endpoint_driver, mut incoming) = {
    let (driver, endpoint, incoming) = runtime.enter(|| endpoint.bind(&options.listen))?;
    info!("listening on {}", endpoint.local_addr()?);
    (driver, incoming)
};
runtime.spawn(async move {
    while let Some(conn) = incoming.next().await {
        ...
        handle_connection(root.clone(), conn).unwrap_or_else(move |e| {
            error!("connection failed: {reason}", reason = e.to_string())
        }),...
    Ok(())
}

And then you handle the streams with the handle_connection fn:

async fn handle_connection(root: Arc<Path>, conn: quinn::Connecting) -> Result<()> {
    let quinn::NewConnection {
        driver,
        connection,
        mut bi_streams,
        ..
    } = conn.await?;
    ...

All of the above has been taken from your server.rs example. So, my question is: with multiple domains you can only know the SNI hostname once the connection has been created and the socket established; with your crate, how would one go about doing the TLS handshake with the appropriate SSL certificate for that SNI hostname (domain) once the connection is established, and not beforehand like it is now?
With Rustls and Tokio you first spawn the TcpListener, then start accepting connections over TcpStream, and then you create a TlsSession from that stream; from then on you can read the encrypted data in plaintext through that TlsSession and write back through it to the encrypted stream. All of this happens once the TcpSocket is created, and if one would like, they could attach a different SSL certificate & key pair per TcpStream. Maybe I've got something wrong or missed something, but how would one accomplish the same thing with your crate? Any assistance/guidelines/examples would be highly appreciated!

Hi, thanks for your interest! You can supply the correct certificate using the same mechanism you would in TCP, i.e. by configuring the rustls cert_resolver used by the server. The rustls ServerConfig is exposed in the crypto field of quinn's own ServerConfig.

What I think we are missing is a way to actually look up what was negotiated, similar to the existing protocol accessor. I'll see about drafting a fix for that.

You are actually right! In my current implementation I've been creating a new rustls::ServerConfig for each new TcpStream instantiated, and I've been pretty much doing set_single_cert all the time, haha, so rid. It was only after you mentioned the cert_resolver that I actually sat down and read through the Rustls documentation. So now it is time to sit down and experiment with Quinn; hopefully that goes fine without any issues. Thank you!

If you do run into further issues, please reach out and let us know!
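The approach recommended here — resolving the certificate during the handshake based on SNI rather than fixing one certificate up front — is the same pattern TLS stacks expose generally, not just rustls. As an illustration in Python using the standard library's `ssl.SSLContext.sni_callback` (this is an analogy, not quinn/rustls code; the domain names are placeholders):

```python
import ssl

# One SSLContext per domain; in a real server each would load its own
# cert/key via load_cert_chain (paths omitted here).
def make_contexts(domains):
    return {d: ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) for d in domains}

def pick_context(contexts, default, server_name):
    # Selection logic used by the SNI callback: fall back to a default
    # context when the client sent no SNI, or sent an unknown name.
    if server_name is None:
        return default
    return contexts.get(server_name, default)

contexts = make_contexts(["example.com", "example.org"])
default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

def sni_callback(ssl_socket, server_name, initial_context):
    # Called by the TLS stack mid-handshake, before cert selection,
    # which is exactly the "after the socket, before the cert" window
    # the question asks about.
    ssl_socket.context = pick_context(contexts, default_ctx, server_name)

default_ctx.sni_callback = sni_callback
```

In rustls the equivalent hook is the `cert_resolver` the reply mentions: it is consulted per handshake with the client's SNI, so a single `ServerConfig` can serve many domains.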
gharchive/issue
2020-01-13T05:13:03
2025-04-01T04:34:00.655312
{ "authors": [ "Ralith", "djc", "tuboythers" ], "repo": "djc/quinn", "url": "https://github.com/djc/quinn/issues/595", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
148941811
Trying to push state parameter
I want to push a state parameter so in my callback URL I have the user_id from my own backend inside the URL. Trying to do this but not getting back the parameter in the callback...

$provider = new djchen\OAuth2\Client\Provider\Fitbit([
    'clientId' => '227xxx',
    'clientSecret' => '228c6c2f47eb4612fbd5203f8e203xxx',
    'redirectUri' => 'http://xxx-env.us-east-1.elasticbeanstalk.com/apiget/fitbitapi/userid/',
    'state ' => '4'
]);

Dupe of #15
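In the general OAuth2 flow, `state` is not a provider-constructor option at all: it is a query parameter of the authorization URL that the provider echoes back on the callback. A minimal, library-agnostic sketch of that round trip in Python (the endpoint URL and client id are placeholders for illustration):

```python
from urllib.parse import urlencode, urlparse, parse_qs

AUTH_ENDPOINT = "https://www.fitbit.com/oauth2/authorize"  # example endpoint

def build_auth_url(client_id, redirect_uri, state):
    # state is round-tripped by the provider and returned on the callback
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,  # the key must be exactly "state" (no trailing space)
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

def state_from_callback(callback_url):
    # Read the echoed state back out of the redirect the user lands on.
    return parse_qs(urlparse(callback_url).query).get("state", [None])[0]

url = build_auth_url("227xxx", "https://example.com/cb", state="4")
print(state_from_callback("https://example.com/cb?code=abc&state=4"))  # -> 4
```

Note also that the snippet in the issue passes `'state '` with a trailing space, which no provider would recognize even if the constructor accepted the option.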
gharchive/issue
2016-04-17T11:05:16
2025-04-01T04:34:00.669651
{ "authors": [ "acondurache", "djchen" ], "repo": "djchen/oauth2-fitbit", "url": "https://github.com/djchen/oauth2-fitbit/issues/18", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
373897619
iconv.h: No such file or directory
github.com/djimenez/iconv-go

go\src\github.com\djimenez\iconv-go\converter.go:8:19: fatal error: iconv.h: No such file or directory
compilation terminated.

The package is just using cgo to call an iconv implementation on your system. An appropriate iconv has to be available and usable by the C compiler go is calling out to. I would suggest using https://godoc.org/golang.org/x/text/encoding (which is pure Go) if you don't have a legacy reason to need a specific iconv implementation.
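The trade-off the reply points at — linking a system C library versus a pure, dependency-free implementation — applies in any language. For illustration, here is charset transcoding done entirely in a language runtime, with no system iconv involved (Python here, playing the role that golang.org/x/text/encoding plays for Go):

```python
# Transcode latin-1 bytes to UTF-8 purely in the runtime's codec
# machinery -- no iconv.h, no C toolchain required at build time.
def transcode(data: bytes, src: str, dst: str) -> bytes:
    return data.decode(src).encode(dst)

latin1 = "café".encode("latin-1")        # b'caf\xe9'
utf8 = transcode(latin1, "latin-1", "utf-8")
print(utf8)                              # -> b'caf\xc3\xa9'
```

The pure-runtime route removes the whole class of "header not found" build failures, at the cost of being limited to the encodings the runtime's codec tables ship with.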
gharchive/issue
2018-10-25T11:19:06
2025-04-01T04:34:00.677872
{ "authors": [ "djimenez", "kalsolio" ], "repo": "djimenez/iconv-go", "url": "https://github.com/djimenez/iconv-go/issues/38", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
1320179425
🛑 Writeguard is down In 8f75566, Writeguard (https://www.writeguard.com) was down: HTTP code: 0 Response time: 0 ms Resolved: Writeguard is back up in 9ef2fb8.
gharchive/issue
2022-07-27T22:21:12
2025-04-01T04:34:00.724019
{ "authors": [ "djsnipa1" ], "repo": "djsnipa1/upptime", "url": "https://github.com/djsnipa1/upptime/issues/1020", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
424188948
HashMap initialization at compile time
First, THANK YOU for this library, which has sped my code up by a tremendous amount over dynamic arrays and other built-in structures. It is overall nicely written and very easy to use; next on my agenda is to learn how to use it with stdx.allocator.

I made the mistake of declaring a HashMap in a struct thusly:

struct A {
    auto myvar = HashMap!(string, int)(256);
    this(...) { ... }
}

which caused the compiler to emit the cryptic message

Error: static variable instance cannot be read at compile time

Removing the bucket count param to the constructor changed the error to

Error: cannot interpret HashMap!(string, int, Mallocator, generateHash, true, true) at compile time

which finally clued me in that maybe it cannot initialize ANYTHING at compile time. Changing the struct member declaration simply to

HashMap!(string, int) myvar;

and moving the constructor call to the struct constructor allowed compilation.

Questions:
Why can I not initialize HashMap as a struct member at compile time? (other data structures in this library can)
Is there any other way to handle this besides what I've done?

Wow, those error messages are not helpful for the consumer (that's more of a comment, and more of a frontend problem).

1. Why can I not initialize HashMap as a struct member at compile time?

You can. This works:

struct A {
    auto myvar = HashMap!(string, int)();
}

Is there any other way to handle this besides what I've done?

What you've done is the best solution.

Wow, those error messages are not helpful for the consumer (that's more of a comment, and more of a frontend problem).

The first error message points at a limitation of std.experimental.allocator. Maybe it could be avoided by making instance a static property function instead of a static variable; then, at least, GCAllocator maybe could be usable at compile-time. I can't reproduce the second error message, maybe a compiler or library update fixed it.
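The workaround described — declare the member uninitialized and construct it in the struct's constructor — has a rough analogue in many languages: initializers that run at type-definition time obey different constraints from those that run at instance-construction time. A hypothetical Python illustration of the same distinction (not D, and not this library's API):

```python
class A:
    # Evaluated once, at class-definition time, and shared by every
    # instance -- the analogue of a compile-time struct initializer.
    shared_map = {}

    def __init__(self):
        # Evaluated per instance, at construction time -- the analogue
        # of moving HashMap construction into this(...).
        self.my_map = {}

a, b = A(), A()
a.shared_map["k"] = 1   # visible through b as well
a.my_map["k"] = 1       # private to a
print(b.shared_map)     # -> {'k': 1}  (shared, often surprising)
print(b.my_map)         # -> {}        (per-instance)
```

In D the definition-time initializer additionally has to be evaluable by CTFE, which is why a constructor call that allocates through Mallocator fails there but works fine at runtime in `this(...)`.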
gharchive/issue
2019-03-22T12:36:12
2025-04-01T04:34:00.758381
{ "authors": [ "CyberShadow", "jblachly" ], "repo": "dlang-community/containers", "url": "https://github.com/dlang-community/containers/issues/142", "license": "BSL-1.0", "license_type": "permissive", "license_source": "github-api" }
2507756331
Fix Websocket Bugs
Sometimes, the WebSocket connection between the delivery service and the web client does not function properly. As a result, messages can only be fetched from the delivery service after a page reload, instead of being delivered instantly via WebSocket. The cause of this issue is still unclear, but we need to monitor it closely.

Steps to (occasionally) reproduce the issue:
Open DM3 in two separate sessions.
Send messages back and forth.
One client can receive messages, while the other cannot.

https://github.com/user-attachments/assets/4480d6bf-d01e-4653-9fda-d6175272fa49
demo of what it looks like
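A common mitigation for this class of bug — messages silently missed while the socket is in a bad state — is to backfill from the delivery service whenever the connection (re)establishes, keyed on the last message the client actually saw. A hypothetical sketch of that client-side logic (the `fetch_since` API is invented for illustration, not DM3's actual delivery-service interface):

```python
class MessageClient:
    def __init__(self, fetch_since):
        # fetch_since(last_id) -> list of messages newer than last_id;
        # stands in for a REST call to the delivery service.
        self.fetch_since = fetch_since
        self.last_id = 0
        self.inbox = []

    def on_socket_message(self, msg):
        self.inbox.append(msg)
        self.last_id = msg["id"]

    def on_socket_open(self):
        # On every (re)connect, pull anything missed while the
        # websocket was broken, instead of waiting for a page reload.
        for msg in self.fetch_since(self.last_id):
            self.on_socket_message(msg)

store = [{"id": 1, "text": "hi"}, {"id": 2, "text": "missed"}]
client = MessageClient(lambda last: [m for m in store if m["id"] > last])
client.on_socket_message(store[0])      # delivered live over the socket
client.on_socket_open()                 # reconnect: backfills id 2 only
print([m["id"] for m in client.inbox])  # -> [1, 2]
```

This doesn't explain why the socket breaks, but it turns "reload to see messages" into an automatic recovery on reconnect.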
gharchive/issue
2024-09-05T13:04:38
2025-04-01T04:34:00.863402
{ "authors": [ "AlexNi245", "malteish" ], "repo": "dm3-org/dm3", "url": "https://github.com/dm3-org/dm3/issues/1170", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
98736146
NullPointerException when vessel is opening reporting view
Problem was detected by inspection of the production system log, but for other reasons. A NullPointerException occurred when a vessel opened the Greenpos view without having sent its first Greenpos Report. This has been verified by inspecting the database content, which confirms that the vessel had not reported at the time of the error.

2015-06-02 02:00:17,670 ERROR [io.undertow.request] (default task-82) UT005023: Exception handling request to /rest/greenpos/latest/211753000: j$
    at org.apache.shiro.web.servlet.AdviceFilter.cleanup(AdviceFilter.java:196) [shiro-web-1.2.2.jar:1.2.2]
    at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:148) [shiro-web-1.2.2.jar:1.2.2]
    at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125) [shiro-web-1.2.2.jar:1.2.2]
    at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66) [shiro-web-1.2.2.jar:1.2.2]
    at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449) [shiro-web-1.2.2.jar:1.2.2]
    at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365) [shiro-web-1.2.2.jar:1.2.2]
    at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90) [shiro-core-1.2.2.jar:1.2.2]
    at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83) [shiro-core-1.2.2.jar:1.2.2]
    at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383) [shiro-core-1.2.2.jar:1.2.2]
    at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362) [shiro-web-1.2.2.jar:1.2.2]
    at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125) [shiro-web-1.2.2.jar:1.2.2]
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.0.Final.jar:1.1.0.Final]
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.0.Final.jar:1.1.0.F$
    at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:85) [undertow-servlet-1.1.0.Final.jar:1.1.0.Final]
    at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:61) [undertow-servlet-$
    at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) [undertow-servlet-1.1.0.Final$
    at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Final.jar:1.1.0.Final]
    at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131) [unde$
    at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:56) [under$
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Final.jar:1.1.0.Final]
    at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:45) [undertow-core-1.1$
    at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.j$
    at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58) [undertow-core-1$
    at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:70) [und$
    at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76) [undertow-core-1.1.0.Final.jar:1.1$
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Final.jar:1.1.0.Final]
    at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Final.jar:1.1.0.Final]
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Final.jar:1.1.0.Final]
    at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:261) [undertow-servlet-1.1.0.Final.j$
    at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:247) [undertow-servlet-1.1.0.Final.jar:$
    at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:76) [undertow-servlet-1.1.0.Final.jar:1.1.0.$
    at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:166) [undertow-servlet-1.1.0.Final.jar:$
    at io.undertow.server.Connectors.executeRootHandler(Connectors.java:197) [undertow-core-1.1.0.Final.jar:1.1.0.Final]
    at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:759) [undertow-core-1.1.0.Final.jar:1.1.0.Final]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_31]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_31]
    at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_31]
Caused by: org.jboss.resteasy.spi.UnhandledException: java.lang.NullPointerException
    at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:76) [resteasy-jaxrs-3.0.10.Final.jar:]
    at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:212) [resteasy-jaxrs-3.0.10.Final.jar:]
    at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:149) [resteasy-jaxrs-3.0.10.Final.jar:]
    at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:372) [resteasy-jaxrs-3.0.10.Final.jar:]
    at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179) [resteasy-jaxrs-3.0.10.Final.jar:]
    at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220) [resteasy-jaxrs-3.0$
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56) [resteasy-jaxrs-3.0.10.Final.j$
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51) [resteasy-jaxrs-3.0.10.Final.j$
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
    at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85) [undertow-servlet-1.1.0.Final.jar:1.1.0.Final]
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:130) [undertow-servlet-1.1.0.Final.jar:1.1.0.F$
    at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61) [shiro-web-1.2.2.jar:1.2.2]
    at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108) [shiro-web-1.2.2.jar:1.2.2]
    at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137) [shiro-web-1.2.2.jar:1.2.2]
    ... 37 more
Caused by: java.lang.NullPointerException
    at dk.dma.embryo.common.json.AbstractRestService.getResponse(AbstractRestService.java:88) [embryo-common-2.4.jar:]
    at dk.dma.arcticweb.reporting.json.GreenPosRestService.latest(GreenPosRestService.java:100) [embryo-reporting-2.4.jar:]
    at dk.dma.arcticweb.reporting.json.GreenPosRestService$Proxy$_$$_WeldClientProxy.latest(Unknown Source) [embryo-reporting-2.4.jar:]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_31]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_31]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_31]
    at java.lang.reflect.Method.invoke(Method.java:483) [rt.jar:1.8.0_31]
    at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:137) [resteasy-jaxrs-3.0.10.Final.jar:]
    at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296) [resteasy-jaxrs-3.0.10.Final.jar:]
    at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250) [resteasy-jaxrs-3.0.10.Final.jar:]
    at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:237) [resteasy-jaxrs-3.0.10.Final.jar:]
    at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356) [resteasy-jaxrs-3.0.10.Final.jar:]
    ... 47 more

Introduced a massive amount of debugging output, e.g. when the vessel list is debugged.
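The root-cause pattern here — a REST handler dereferencing a "latest report" object that may legitimately not exist yet — is usually fixed by handling the empty case explicitly before building the response. A hypothetical sketch of that null-safe shape (Python pseudocode for illustration; the actual Embryo code is JAX-RS Java):

```python
def latest_report_response(find_latest, mmsi):
    # find_latest(mmsi) returns the newest Greenpos report, or None when
    # the vessel has never reported (the case that triggered the NPE).
    report = find_latest(mmsi)
    if report is None:
        return {"status": 204, "body": None}   # explicit "no content"
    return {"status": 200, "body": report}

reports = {"211753000": None}  # vessel known, but it has never reported
resp = latest_report_response(lambda m: reports.get(m), "211753000")
print(resp["status"])  # -> 204 instead of a NullPointerException
```

The client side (the Greenpos view) then treats the empty response as "no previous report" rather than the server throwing halfway through serialization.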
gharchive/issue
2015-08-03T12:49:16
2025-04-01T04:34:00.905250
{ "authors": [ "tejl" ], "repo": "dma-dk/Embryo", "url": "https://github.com/dma-dk/Embryo/issues/5", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1554095658
AttributeError: module 'whisper.utils' has no attribute 'write_vtt'
Hi. Excited, but can't get past here and can't find documentation for the error (using Colab):

Transcribing audio with whisper-large
elapsed: 15.524232387542725
Transcribing audio with whisper-tiny
elapsed: 2.897756814956665

AttributeError                            Traceback (most recent call last)
in
    215 with open(outpath,'w') as f:
    216     # to do: upstream PR to control verbosity
--> 217     whisper.utils.write_vtt(
    218         whispers[k]["segments"], # ...really?
    219         file=f

AttributeError: module 'whisper.utils' has no attribute 'write_vtt'

https://github.com/dmarx/video-killed-the-radio-star/issues/101#issuecomment-1401322230
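The error itself means the installed openai-whisper release no longer exposes the old `write_vtt` helper from `whisper.utils` (newer versions restructured the output writers). If pinning an older whisper isn't an option, the VTT format is simple enough to write by hand from the `segments` list. A minimal sketch — the `{"start", "end", "text"}` segment shape matches what whisper returns, but treat this as an approximation, not the library's own writer:

```python
import io

def fmt_ts(seconds: float) -> str:
    # WebVTT timestamp: HH:MM:SS.mmm
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def write_vtt(segments, file):
    # Drop-in stand-in for the removed whisper.utils.write_vtt helper.
    print("WEBVTT\n", file=file)
    for seg in segments:
        print(f"{fmt_ts(seg['start'])} --> {fmt_ts(seg['end'])}", file=file)
        print(seg["text"].strip() + "\n", file=file)

buf = io.StringIO()
write_vtt([{"start": 0.0, "end": 1.5, "text": " hello"}], buf)
print(buf.getvalue().splitlines()[2])  # -> 00:00:00.000 --> 00:00:01.500
```

This keeps the calling code at line 217 unchanged apart from swapping `whisper.utils.write_vtt` for the local helper.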
gharchive/issue
2023-01-24T01:07:29
2025-04-01T04:34:00.913383
{ "authors": [ "HarlanBrothers", "Meysmerized" ], "repo": "dmarx/video-killed-the-radio-star", "url": "https://github.com/dmarx/video-killed-the-radio-star/issues/102", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
205691899
Fix --resume and --begin_epoch off by one issue with rcnn train_end2end.py
When resuming training, we should load from resume_epoch - 1 so we can begin at resume_epoch. Per the docs at python/mxnet/module/base_module.py:

begin_epoch : int
    Default `0`. Indicate the starting epoch. Usually, if we are resuming from a
    checkpoint saved at a previous training phase at epoch N, then we should
    specify this value as N+1.

Yet, for example, if after 3 epochs we attempt to run with --resume --begin_epoch 4, we'll receive the error message:

Traceback (most recent call last):
  File "train_end2end.py", line 184, in <module>
    main()
  File "train_end2end.py", line 181, in main
    lr=args.lr, lr_step=args.lr_step)
  File "train_end2end.py", line 71, in train_net
    arg_params, aux_params = load_param(prefix, begin_epoch, convert=True)
  File "/root/mxnet/example/rcnn/rcnn/utils/load_model.py", line 49, in load_param
    arg_params, aux_params = load_checkpoint(prefix, epoch)
  File "/root/mxnet/example/rcnn/rcnn/utils/load_model.py", line 15, in load_checkpoint
    save_dict = mx.nd.load('%s-%04d.params' % (prefix, epoch))
  File "/usr/local/lib/python2.7/dist-packages/mxnet-0.9.2-py2.7.egg/mxnet/ndarray.py", line 1247, in load
    ctypes.byref(names)))
  File "/usr/local/lib/python2.7/dist-packages/mxnet-0.9.2-py2.7.egg/mxnet/base.py", line 75, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [19:36:28] src/io/local_filesys.cc:154: Check failed: allow_null LocalFileSystem: fail to open "/media/ngv/output/e2e-0004.params"

So I think in the case of resuming, we should be loading from begin_epoch - 1. I've verified that resuming works as I'd expect with this fix in place.

@precedenceguo Suppose we train 1 epoch: the first run would be --begin_epoch 0, and the checkpoint from epoch 0 is saved as prefix-0001.params. Now we want to resume from the checkpoint from epoch 0; as stated in the doc, we should specify begin_epoch as 0+1, which is 1. So we load prefix-0001.params. Everything works.
In other words, don't forget epoch 0, and in your case, after 3 epochs you should resume from 3. Therefore I find this change unfit for the module convention.

@precedenceguo thanks for the clarification, apologies for attempting to submit an incorrect patch!
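The confusion boils down to the checkpoint naming convention: the checkpoint saved after completing epoch N is numbered N+1, and on resume `begin_epoch` should equal that same number so `load_param` opens the file that actually exists. A tiny sketch of the arithmetic (filenames follow the `prefix-%04d.params` pattern visible in the traceback above):

```python
def checkpoint_name(prefix, completed_epoch):
    # The checkpoint written after finishing epoch N is numbered N + 1,
    # so the very first run (--begin_epoch 0) saves prefix-0001.params.
    return "%s-%04d.params" % (prefix, completed_epoch + 1)

def resume_begin_epoch(last_completed_epoch):
    # begin_epoch for the resumed run: N + 1, matching the file number
    # that load_param will try to open.
    return last_completed_epoch + 1

print(checkpoint_name("e2e", 0))   # -> e2e-0001.params
print(resume_begin_epoch(2))       # -> 3 ("after 3 epochs" = epochs 0, 1, 2)
```

With this convention, the reporter's run after 3 epochs (0, 1, 2) has e2e-0003.params on disk, so `--begin_epoch 3` loads it directly — no `- 1` adjustment needed, which is the maintainer's point.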
gharchive/pull-request
2017-02-06T20:00:53
2025-04-01T04:34:01.124530
{ "authors": [ "krosaen", "piiswrong", "precedenceguo" ], "repo": "dmlc/mxnet", "url": "https://github.com/dmlc/mxnet/pull/4908", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
394185415
[TOPI, CUDA] Update cuda softmax schedule for spatial inputs
As pointed out in https://discuss.tvm.ai/t/softmax-is-really-slow/1365/4, the current cuda softmax schedule is hard-coded for 2D inputs (after a fully connected layer), so it is super slow on spatial inputs (image segmentation, object detection, etc). For a (1, 16, 256, 256) input, it generates the following schedule:

// attr [compute] storage_scope = "global"
allocate compute[float32 * 1 * 256 * 256]
// attr [compute] storage_scope = "global"
allocate compute[float32 * 1 * 256 * 256]
produce compute {
  // attr [iter_var(blockIdx.x, , blockIdx.x)] thread_extent = 1
  for (i1, 0, 256) {
    for (i2, 0, 256) {
      compute[((i1*256) + i2)] = -340282346638528859811704183484516925440.000000f
      for (k, 0, 16) {
        compute[((i1*256) + i2)] = max(compute[((i1*256) + i2)], A[(((i1*256) + i2) + (k*65536))])
      }
    }
  }
}
produce compute {
  // attr [iter_var(blockIdx.x, , blockIdx.x)] thread_extent = 1
  // attr [compute.rf] storage_scope = "local"
  allocate compute.rf[float32 * 1 * 1 * 1 * 1]
  // attr [reduce_temp0] storage_scope = "local"
  allocate reduce_temp0[float32 * 1]
  for (ax1, 0, 256) {
    for (ax2, 0, 256) {
      // attr [iter_var(threadIdx.x, Range(min=0, extent=64), threadIdx.x)] thread_extent = 64
      produce compute.rf {
        compute.rf[0] = 0.000000f
        if ((threadIdx.x < 16)) {
          compute.rf[0] = (compute.rf[0] + exp((A[(((ax1*256) + ax2) + (threadIdx.x*65536))] - compute[((ax1*256) + ax2)])))
        }
      }
      // attr [comm_reducer(result=[(x + y)], lhs=[x], rhs=[y], identity_element=[0.000000f])] reduce_scope = reinterpret((uint64)0)
      tvm_thread_allreduce((uint32)1, compute.rf[0], (uint1)1, reduce_temp0, threadIdx.x)
      if ((threadIdx.x == 0)) {
        compute[((ax1*256) + ax2)] = reduce_temp0[0]
      }
    }
  }
}
produce compute {
  // attr [iter_var(blockIdx.x, , blockIdx.x)] thread_extent = 1
  // attr [iter_var(threadIdx.x, Range(min=0, extent=64), threadIdx.x)] thread_extent = 64
  for (i2, 0, 256) {
    for (i3, 0, 256) {
      if (likely((threadIdx.x < 16))) {
        compute[(((i2 + (threadIdx.x*256))*256) + i3)] = (exp((A[(((i2 + (threadIdx.x*256))*256) + i3)] - compute[((i2*256) + i3)]))/compute[((i2*256) + i3)])
      }
    }
  }
}

I updated the softmax schedule so that it works well for 4D and 5D inputs. Since the number of classes is typically small for these inputs, I simply use the injective schedule (i.e., without using a shared-memory reduce). Below is the updated schedule with this PR. @vinx13 @merrymercy @eqy please review.

// attr [compute] storage_scope = "global"
allocate compute[float32 * 1 * 256 * 256]
// attr [compute] storage_scope = "global"
allocate compute[float32 * 1 * 256 * 256]
produce compute {
  // attr [iter_var(blockIdx.x, , blockIdx.x)] thread_extent = 128
  // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 512
  compute[((((blockIdx.x*2) + (threadIdx.x/256))*256) + (threadIdx.x % 256))] = -340282346638528859811704183484516925440.000000f
  for (k, 0, 16) {
    compute[((((blockIdx.x*2) + (threadIdx.x/256))*256) + (threadIdx.x % 256))] = max(compute[((((blockIdx.x*2) + (threadIdx.x/256))*256) + (threadIdx.x % 256))], A[(((((blockIdx.x*2) + (threadIdx.x/256))*256) + (threadIdx.x % 256)) + (k*65536))])
  }
}
produce compute {
  // attr [iter_var(blockIdx.x, , blockIdx.x)] thread_extent = 128
  // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 512
  compute[((((blockIdx.x*2) + (threadIdx.x/256))*256) + (threadIdx.x % 256))] = 0.000000f
  for (k, 0, 16) {
    compute[((((blockIdx.x*2) + (threadIdx.x/256))*256) + (threadIdx.x % 256))] = (compute[((((blockIdx.x*2) + (threadIdx.x/256))*256) + (threadIdx.x % 256))] + exp((A[(((((blockIdx.x*2) + (threadIdx.x/256))*256) + (threadIdx.x % 256)) + (k*65536))] - compute[((((blockIdx.x*2) + (threadIdx.x/256))*256) + (threadIdx.x % 256))])))
  }
}
produce compute {
  // attr [iter_var(blockIdx.x, , blockIdx.x)] thread_extent = 256
  // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 512
  for (i0.i1.fused.i2.fused.i3.fused.outer, 0, 8) {
    compute[((((((blockIdx.x*2) + (threadIdx.x/256))/256)*65536) + (((((blockIdx.x*2) + (threadIdx.x/256)) % 256)*256) + (threadIdx.x % 256))) + (i0.i1.fused.i2.fused.i3.fused.outer*131072))] = (exp((A[((((((blockIdx.x*2) + (threadIdx.x/256))/256)*65536) + (((((blockIdx.x*2) + (threadIdx.x/256)) % 256)*256) + (threadIdx.x % 256))) + (i0.i1.fused.i2.fused.i3.fused.outer*131072))] - compute[(((((blockIdx.x*2) + (threadIdx.x/256)) % 256)*256) + (threadIdx.x % 256))]))/compute[(((((blockIdx.x*2) + (threadIdx.x/256)) % 256)*256) + (threadIdx.x % 256))])
  }
}

LGTM, just wondering if it is possible to combine the max_elem and expsum kernels, since they have the same thread extent.

@vinx13 please https://docs.tvm.ai/contribute/code_review.html#approve-and-request-changes-explicitly

@vinx13 ok, I'll update this pr to fuse max_elem and expsum.

@vinx13 I'm getting an error at compute_at below. max_elem should obviously be the producer of expsum. Do you have any idea why?

softmax = outs[0]
expsum = softmax.op.input_tensors[2]
max_elem = expsum.op.input_tensors[1]
axes = expsum.op.axis
fused = s[expsum].fuse(*axes)
bx, tx = s[expsum].split(fused, factor=num_thread)
s[max_elem].compute_at(s[expsum], tx)

tvm/src/schedule/bound.cc:168: Check failed: found_attach || stage_attach.size() == 0 Invalid Schedule, cannot find the producer compute(compute, 000001AF5B4B8070) along the loop nest specified by compute_at of consumer compute(compute, 000001AF5B4B6970)

hmm, it's weird that we are seeing compute(softmax, 0x31e14f0) instead of expsum. Besides, max_elem IS one of the producers to softmax.

@masahi I think it is currently impossible to produce two tensors in one kernel. Let's keep using schedule_injective for now.

Yeah, I just realized that: since max_elem is fed into both expsum and normalize, we can't fuse max_elem into expsum.

Did you check the performance regression on resnet-18 or mobilenet? Our model previously had one large softmax input: maxelem is [1, 204800, 1], and expsum is [1, 204800, 1] too.
Could this schedule handle this situation or not? My previous method is to fuse all axes and do a split like this (for max_elem / expsum / softmax):

max_threads = tvm.target.current_target(allow_none=False).max_num_threads
n, c, hw = s[max_elem].op.axis
fused = s[max_elem].fuse(n, c, hw)
fused, vec = s[max_elem].split(fused, 16)
bb, tt = s[max_elem].split(fused, max_threads)
s[max_elem].bind(bb, tvm.thread_axis("blockIdx.x"))
s[max_elem].bind(tt, tvm.thread_axis("threadIdx.x"))
s[max_elem].vectorize(vec)

My test environment is Mali-G71. The execution time went from 900ms to 5ms. Maybe this PR is a better way, but I haven't tested it. But this workload should be a good test case for you. @merrymercy
@FrozenGene This PR applies to 4D or 5D inputs only. For 2D inputs, there is no change and the same existing schedule will apply.
Thanks, @masahi @merrymercy @FrozenGene @vinx13 , this is now merged
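For readers skimming the thread, the numerics that the three stages above implement — softmax over the class axis of a 4D input, with the usual max-subtraction for stability — can be sketched in plain Python. This is an illustration only; the stage names mirror the max_elem / expsum / normalize stages from the discussion, not TVM's generated code.

```python
import math

def softmax_4d(data):
    # data[n][c][h][w]; softmax is taken over the channel axis c
    n_, c_ = len(data), len(data[0])
    h_, w_ = len(data[0][0]), len(data[0][0][0])
    out = [[[[0.0] * w_ for _ in range(h_)] for _ in range(c_)] for _ in range(n_)]
    for n in range(n_):
        for h in range(h_):
            for w in range(w_):
                # stage 1: max over classes (max_elem)
                max_elem = max(data[n][c][h][w] for c in range(c_))
                # stage 2: sum of shifted exponentials (expsum)
                expsum = sum(math.exp(data[n][c][h][w] - max_elem) for c in range(c_))
                # stage 3: normalize
                for c in range(c_):
                    out[n][c][h][w] = math.exp(data[n][c][h][w] - max_elem) / expsum
    return out
```

Each output slice over c sums to 1, which is a quick way to sanity-check any schedule variant against this reference.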
gharchive/pull-request
2018-12-26T15:08:12
2025-04-01T04:34:01.135221
{ "authors": [ "FrozenGene", "masahi", "merrymercy", "tqchen", "vinx13" ], "repo": "dmlc/tvm", "url": "https://github.com/dmlc/tvm/pull/2338", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
254181905
[BACKEND] initial llvm codegen for amdgpu
The test results:

$ python test_gemm.py
skip because nvptx -mcpu=sm_20 is not enabled..
skip because rocm is not enabled..
skip because metal is not enabled..
skip because opencl is not enabled..
skip because cuda is not enabled..
$
$ python test_codegen_device.py
$

check your runtime as it reports rocm not enabled. Need to change src/runtime/module.cc to add the rocm enable check
Update code here https://github.com/dmlc/tvm/blob/master/src/runtime/module.cc#L103
@tqchen I am getting the following error:

$ python test_codegen_device.py
[22:30:06] /home/aditya/tvm/dmlc-core/include/dmlc/./logging.h:308: [22:30:06] src/runtime/module.cc:74: Module[hip] does not support GetSource
Stack trace returned 10 entries:
[bt] (0) /home/aditya/tvm/lib/libtvm.so(_ZN3tvm7runtime10ModuleNode9GetSourceERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x30f) [0x7f3ecfc6d16f]
[bt] (1) /home/aditya/tvm/lib/libtvm.so(+0x955fd2) [0x7f3ecfc6ffd2]
[bt] (2) /home/aditya/tvm/lib/libtvm.so(TVMFuncCall+0x5e) [0x7f3ecfc7961e]
[bt] (3) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call_unix64+0x4c) [0x7f3ed4d10e40]
[bt] (4) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call+0x2eb) [0x7f3ed4d108ab]
[bt] (5) /usr/lib/python2.7/lib-dynload/_ctypes.x86_64-linux-gnu.so(_ctypes_callproc+0x48f) [0x7f3ed4f203df]
[bt] (6) /usr/lib/python2.7/lib-dynload/_ctypes.x86_64-linux-gnu.so(+0x11d82) [0x7f3ed4f24d82]
[bt] (7) python(PyObject_Call+0x43) [0x4b0cb3]
[bt] (8) python(PyEval_EvalFrameEx+0x5faf) [0x4c9faf]
[bt] (9) python(PyEval_EvalCodeEx+0x255) [0x4c2765]

Traceback (most recent call last):
  File "test_codegen_device.py", line 88, in <module>
    test_add_pipeline()
  File "test_codegen_device.py", line 85, in test_add_pipeline
    check_target("rocm", host="llvm")
  File "test_codegen_device.py", line 46, in check_target
    code = mdev.get_source()
  File "/home/aditya/tvm/python/tvm/module.py", line 34, in get_source
    return _GetSource(self, fmt)
  File "/home/aditya/tvm/python/tvm/_ffi/function.py", line 255, in my_api_func
    return flocal(*args)
  File "/home/aditya/tvm/python/tvm/_ffi/_ctypes/function.py", line 183, in __call__
    ctypes.byref(ret_val), ctypes.byref(ret_tcode)))
  File "/home/aditya/tvm/python/tvm/_ffi/base.py", line 62, in check_call
    raise TVMError(py_str(_LIB.TVMGetLastError()))
tvm._ffi.base.TVMError: [22:30:06] src/runtime/module.cc:74: Module[hip] does not support GetSource

implement get source here https://github.com/dmlc/tvm/blob/master/src/runtime/rocm/rocm_module.cc#L46
reference https://github.com/dmlc/tvm/blob/master/src/runtime/cuda/cuda_module.cc#L78
Summarizing, it would be great to know a way to dump IR to see whether all the intrinsics, device code and metadata are generated correctly (even data layout and target triple).
hipModuleLaunchKernel doesn't support the primary method from cudaLaunchKernel. It uses extra_args. I am seeing a segfault when running

$ python test_codegen_device.py
[23:11:18] src/runtime/rocm/rocm_module.cc:64: HSACO
Bus error (core dumped)

I have to debug it more.
@tqchen I don't expect code to run as we have a new feature coming in from HIP (https://github.com/adityaatluri/HIP/commit/8a7328fd9de7f1d174e5f4b75de734fb4032f5b6). I'll write a CPP test to check whether the IR generated is valid or not. But, the purpose of this PR is to get good IR generated.
Can you elaborate a bit on what is expected? For example, does the problem lie in the additional argument packing, or other parts? We might be able to change the runtime accordingly to solve this issue
It would be nice to get runnable code. Specifically, we can pre-pack the arguments into a single buffer, if necessary, without going through the HIP CUDA compatible API.
For example, in the Metal runtime everything is packed into an array of ArgUnion, and the device code will receive packed arguments instead https://github.com/dmlc/tvm/blob/master/src/runtime/metal/metal_module.mm#L202
@tqchen I am able to see good IR now. https://gist.github.com/adityaatluri/1ac1ff72b927e42fdd8a61f98176039a
Can you confirm the gap between this and an actual kernel test that runs? Thanks
Can you explain a bit more about what you meant?
I mean directly run the test via the Rocm module and verify the correctness of the kernel
Gotcha. Turns out the IR is not valid. These lines are causing bad output results. https://gist.github.com/adityaatluri/1ac1ff72b927e42fdd8a61f98176039a#file-tvm-amdgcn-ll-L10 Do you know where they are coming from?
This is shift left, used for address calculation, in condition if (blockIdx.x * 256 + threadIdx.x < n) { ... } blockIdx.x * 256 becomes blockIdx << 8 which corresponds to that line
Do you know which code block generates this?
Should be due to LLVM's constant folder, which automatically folds the Mul into a left shift, is the shift not supported by AMD ISA?
It does support shl/shr but I don't think we need to mul workitem id with 8. Also, I didn't see the last arg i32. Let me retest.
After retest, the data output got validated.
nice, can we directly use RocmModule to run the test instead of the current test that is de-coupled from the compiler?
We need new HIP which the team is working on. Once it lands, it'll make it easier to launch kernels.
I see what you mean by looking at the test code you provided. Actually the metadata is already available in TVMRuntime, and we are using this to pack the data. So one possible way is to simply implement parameter packing in TVM. For example, https://github.com/dmlc/tvm/blob/master/src/runtime/pack_args.h#L150 packs non-pointer arguments into a contiguous memory region of one buffer (ArgUnion). If we know the parameter packing requirement (e.g. alignment of each value)
What we need is to implement a PackFuncArg, similar to https://github.com/dmlc/tvm/blob/master/src/runtime/pack_args.h#L216 and gives us the packed pointer back to the callback
@adityaatluri can you follow up on the comments about argument packing and metadata in the runtime? Thanks!
Yes. Can you review the codegen changes?
I have added two additional comments, the code now lgtm. The only thing we need to do is add parameter packing and do unittests.
The CI machine was down due to a power outage and is back online again. The build should be triggered when you push the next commit
@tqchen The bus error (core dumped) is due to the following line: https://github.com/adityaatluri/tvm/blob/rocm-codegen-v1/src/runtime/rocm/rocm_module.cc#L145 Any thoughts?
Last two comments, and we can merge this in. Thanks for the work to make this happen
@masahi can you try LLVM 5.0? There are a few issues with the rocm runtime which will be fixed soon.
@adityaatluri Thanks for the quick response. I'll try llvm 5.0 after I am back from work. By the way, the opencl backend with rocm's opencl stack works fine on my Nano. I can pass all tests in https://github.com/dmlc/tvm/tree/master/topi/tests/python .
Thank you for trying it out.
@adityaatluri I built tvm with llvm 5.0 from the official Ubuntu package, but test_gemm.py still hangs my entire system. It gives a familiar error message, saying 'Memory access fault by GPU node -1 on address ...' . I think something is wrong with codegen. With the opencl backend, when I do this:

f_opencl = tvm.build(s, [A, B, C], "opencl")
dev_module_opencl = f_opencl.imported_modules[0]
print(dev_module_opencl.get_source())

I get a valid opencl kernel string. But for the rocm backend,

f_rocm = tvm.build(s, [A, B, C], "rocm")
dev_module_rocm = f_rocm.imported_modules[0]
print(repr(dev_module_rocm.get_source()))

just prints out '\x7fELF\x02\x01\x01@' Any ideas?
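As an aside, the parameter-packing idea referenced above (pack_args.h / ArgUnion) can be sketched in Python: non-pointer scalar arguments are copied into one contiguous buffer, and the device receives a single pointer to it. The 4-byte-slot layout here is an illustrative assumption, not TVM's actual ABI.

```python
import struct

def pack_args(args):
    """Pack int32/float32 scalars into one contiguous little-endian buffer."""
    buf = bytearray()
    for a in args:
        if isinstance(a, bool):
            raise TypeError("unsupported scalar type")
        if isinstance(a, int):
            buf += struct.pack("<i", a)   # one 4-byte slot per int32
        elif isinstance(a, float):
            buf += struct.pack("<f", a)   # one 4-byte slot per float32
        else:
            raise TypeError("pointer args are passed separately")
    return bytes(buf)
```

The real implementation must also honor per-value alignment, which is exactly the "if we know the parameter packing requirement" caveat in the thread.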
Can you do LOG(WARNING)<<ll; after https://github.com/dmlc/tvm/blob/master/src/codegen/llvm/codegen_amdgpu.cc#L175
Ok I get this https://gist.github.com/masahi/2c81b07aaf2f2e58cd0a053fe3d1fb02#file-myadd_kernel0-ll
Does it look good? I'm correctly linking against llvm 5.0.

$ ldd libtvm.so
linux-vdso.so.1 => (0x00007ffd97336000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fb7d954c000)
libLLVM-5.0.so.1 => /usr/lib/x86_64-linux-gnu/libLLVM-5.0.so.1 (0x00007fb7d5f92000)
libhip_hcc.so => /opt/rocm/lib/libhip_hcc.so (0x00007fb7d5ced000)
libOpenCL.so.1 => /usr/lib/x86_64-linux-gnu/libOpenCL.so.1 (0x00007fb7d5ae2000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fb7d575f000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fb7d5456000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fb7d5240000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fb7d5022000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb7d4c58000)
/lib64/ld-linux-x86-64.so.2 (0x000055f5d1d3a000)
libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6 (0x00007fb7d4a50000)
libedit.so.2 => /usr/lib/x86_64-linux-gnu/libedit.so.2 (0x00007fb7d4817000)
libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007fb7d45ee000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fb7d43d4000)
libunwind.so.8 => /usr/lib/x86_64-linux-gnu/libunwind.so.8 (0x00007fb7d41b8000)
libhc_am.so => /opt/rocm/lib/libhc_am.so (0x00007fb7d3f96000)
libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0 (0x00007fb7d3d80000)
liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007fb7d3b5e000)
libhsa-runtime64.so.1 => /opt/rocm/hsa/lib/libhsa-runtime64.so.1 (0x00007fb7d38c7000)
libhsakmt.so.1 => /opt/rocm/libhsakmt/lib/libhsakmt.so.1 (0x00007fb7d36a8000)
libelf.so.1 => /usr/lib/x86_64-linux-gnu/libelf.so.1 (0x00007fb7d3490000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fb7d3288000)
libpci.so.3 => /lib/x86_64-linux-gnu/libpci.so.3 (0x00007fb7d307a000)
libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007fb7d2e5f000)
libudev.so.1 => /lib/x86_64-linux-gnu/libudev.so.1 (0x00007fb7d2e3e000)

Can you compile the ir to asm using llc -march=amdgcn -mcpu=gfx900 <file.ll>?
Ok, the output of llc-5.0 -march=amdgcn -mcpu=gfx803 myadd_kernel.ll (not gfx900, my card is R9 Nano) https://gist.github.com/masahi/6c7f270240891bc7e1e82dd221e05903
I can also disassemble 'rocm_kernel.co', generated in here
The output of $ /opt/rocm/hcc/compiler/bin/llvm-objdump -disassemble -mcpu=gfx803 rocm_kernel.co https://gist.github.com/masahi/ba5b376d2ecd15c72f6ee599a84287a7
The asm looks good to me. Are you still getting a runtime error?
Yes, I either get 'Memory access fault error', or no error but the output array is [0., 0., 0., ....] .
I noticed that the way you call ROCMWrappedFunc is different from the opencl and cuda modules. For rocm, there are four arguments to operator(), but for opencl and cuda, only three. Why is that? When I print the value of packed_nbytes, it says 28 or 20 (the operator() is called twice, don't know why)
@masahi Great! That is the bug we are seeing that I mentioned.
[12:04:46] src/runtime/rocm/rocm_device_api.cc:126: Doing GPUCopy
[12:04:46] src/runtime/rocm/rocm_device_api.cc:128: HtoD: 0.400863
[12:04:46] src/runtime/rocm/rocm_device_api.cc:126: Doing GPUCopy
[12:04:46] src/runtime/rocm/rocm_device_api.cc:128: HtoD: 0.0277189
[12:04:46] src/runtime/rocm/rocm_device_api.cc:126: Doing GPUCopy
[12:04:46] src/runtime/rocm/rocm_device_api.cc:128: HtoD: 0
[12:04:46] src/runtime/rocm/rocm_device_api.cc:126: Doing GPUCopy
[12:04:46] src/runtime/rocm/rocm_device_api.cc:137: DtoH: 0
[12:04:46] src/runtime/rocm/rocm_device_api.cc:126: Doing GPUCopy
[12:04:46] src/runtime/rocm/rocm_device_api.cc:137: DtoH: 0.400863

Traceback (most recent call last):
  File "test_codegen_device.py", line 88, in <module>
    test_add_pipeline()
  File "test_codegen_device.py", line 85, in test_add_pipeline
    check_target("rocm", host="llvm")
  File "test_codegen_device.py", line 55, in check_target
    c.asnumpy(), a.asnumpy())
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1391, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 733, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0
(mismatch 100.0%)
 x: array([ 0., 0., 0., ..., 0., 0., 0.], dtype=float32)
 y: array([ 0.400863, 0.975084, 0.134123, ..., 0.170033, 0.325066, 0.891966], dtype=float32)

Do you have any interesting observations?
Nothing so far. So the value of packed_nbytes being 28 or 20 is definitely wrong?
Can you join the dlpack slack channel? We can discuss more there.
Sure, but how can I join? Haven't used slack before.
It is dlpack.slack.com
Ok, I'll ping @tqchen to send me an invite.
@masahi You can send an email to my uw email address
gharchive/pull-request
2017-08-31T01:33:38
2025-04-01T04:34:01.168622
{ "authors": [ "adityaatluri", "masahi", "tqchen" ], "repo": "dmlc/tvm", "url": "https://github.com/dmlc/tvm/pull/402", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
117511243
How to extract "bestError" from xgb.train
How to extract error terms from xgb.train output. Right now, it gives "handle" and "raw" only.
There are two methods you can try currently:
1. Use the sink function in R to save the output information.
2. Add parameter early.stop.round to xgboost/xgb.train, then there will be more slots in the returned model object.
gharchive/issue
2015-11-18T04:28:09
2025-04-01T04:34:01.171176
{ "authors": [ "hetong007", "shivonkar" ], "repo": "dmlc/xgboost", "url": "https://github.com/dmlc/xgboost/issues/631", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
636655319
Add explicit cast to pass 32-bit CRAN check
Addresses warning from CRAN submission: https://github.com/dmlc/xgboost/pull/5701#issuecomment-640438882
Relevant warning: https://ci.appveyor.com/project/tqchen/xgboost/builds/33364594/job/hfjrlq89aanhx63m#L1381
It's odd that this warning didn't trigger an alarm.
Probably because AppVeyor doesn't run R CMD check.
gharchive/pull-request
2020-06-11T01:43:23
2025-04-01T04:34:01.172914
{ "authors": [ "hcho3" ], "repo": "dmlc/xgboost", "url": "https://github.com/dmlc/xgboost/pull/5777", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
347612304
Material ui babel module not found
Started having this issue today. Yesterday everything worked fine.
Environment
- material-ui: 1.3.1
- React: 16.4.1
- Browser: Chrome 67.0.3396.99
- Platform: Windows 10
Steps to reproduce
npm start
Expected behavior
Application started properly
Actual behavior
Getting this error on startup:

Failed to compile
./node_modules/@material-ui/core/ButtonBase/ButtonBase.js
Module not found: Can't resolve '@babel/runtime/helpers/builtin/assertThisInitialized' in '...\node_modules\@material-ui\core\ButtonBase'

Could you upgrade please? https://github.com/mui-org/material-ui/releases/tag/v1.4.3
Worked! Thank you!
gharchive/issue
2018-08-04T11:34:21
2025-04-01T04:34:01.188094
{ "authors": [ "TrySound", "jovankricka" ], "repo": "dmtrKovalenko/material-ui-pickers", "url": "https://github.com/dmtrKovalenko/material-ui-pickers/issues/556", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2588045473
Warning and error with
I am getting this warning on the start of my project:

WARNING: The following packages have a svelte field in their package.json but no exports condition for svelte.
svelte-scrollactive@0.0.9
Please see https://github.com/sveltejs/vite-plugin-svelte/blob/main/docs/faq.md#missing-exports-condition for details.

And then this error:

Unknown file extension ".svelte" for ../node_modules/svelte-scrollactive/scrollactive.svelte
    at Object.getFileProtocolModuleFormat [as file:] (node:internal/modules/esm/get_format:217:9)
    at defaultGetFormat (node:internal/modules/esm/get_format:243:36)
    at defaultLoad (node:internal/modules/esm/load:123:22)
    at async ModuleLoader.load (node:internal/modules/esm/loader:567:7)
    at async ModuleLoader.moduleProvider (node:internal/modules/esm/loader:442:45) {
  code: 'ERR_UNKNOWN_FILE_EXTENSION'

I am using Svelte 4.2.19. Maybe it isn't updated?
Yeah, we might need to update this package. Care to send a PR?
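For reference, the FAQ linked in the warning asks packages to add an exports condition for svelte. A minimal package.json sketch of that shape, using the file name from the error above — treat the exact layout as an assumption to verify against the linked vite-plugin-svelte FAQ:

```json
{
  "name": "svelte-scrollactive",
  "svelte": "scrollactive.svelte",
  "exports": {
    ".": {
      "svelte": "./scrollactive.svelte"
    }
  }
}
```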
gharchive/issue
2024-10-15T08:36:59
2025-04-01T04:34:01.197068
{ "authors": [ "dmvvilela", "vtashkov" ], "repo": "dmvvilela/svelte-scrollactive", "url": "https://github.com/dmvvilela/svelte-scrollactive/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
181464987
Handle workflows without inbox elements in JobArchiver
I noticed there was a workflow in testbed not moving from aborted to aborted-completed, and the reason is that JobArchiver was failing to check the WMBS injection status (there was no inbox element for that request(?)). In case the inbox element is gone, then we set this workflow as injected in the agent and get it out of the system.
@ticoann please review
@ticoann please review and merge before cutting the 1.0.21 tag
gharchive/pull-request
2016-10-06T16:33:09
2025-04-01T04:34:01.219793
{ "authors": [ "amaltaro" ], "repo": "dmwm/WMCore", "url": "https://github.com/dmwm/WMCore/pull/7270", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
274351462
change to debug statement for parent file
This generates a lot of log messages. @amaltaro, if you agree I will create an agent branch patch as well.
Yes, I agree!
test this please
duplicate, closing it.
gharchive/pull-request
2017-11-16T00:16:48
2025-04-01T04:34:01.221081
{ "authors": [ "amaltaro", "ericvaandering", "ticoann" ], "repo": "dmwm/WMCore", "url": "https://github.com/dmwm/WMCore/pull/8338", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
528836599
Possible memory leak in MVC modules
Description of bug
Hello. I have a project running in production, which I upgraded to the 9.4.1 version of DNN. After the upgrade I noticed that there was an unusual memory usage, closing in on max, resulting in website restarts. Searching in the reported issues, I found one issue related to a memory leak in the API, which was fixed in the DNN 9.4.2 release. I went on to upgrade to 9.4.2 on my dev environment to test things out. Turns out there was still high memory usage. For my next test, I installed a fresh DNN site with no modules and ran a load test (20 concurrent users for 5 minutes) and the results were good, with normal memory usage. After that I installed a simple module, created with a DNN template based on Chris Hammond's template. This next test was done by adding this module to an empty page and running the same load test, which turned out to be spot on: memory usage increased rapidly and was close to max. I am attaching the module I used to test this behavior.
Steps to reproduce
1. Install the attached TestModule
2. Add TestModule to any page
3. Run a load test on the page containing the module
4. Observe memory usage for the IIS process
Current result
Memory usage increases rapidly until it's maxed out.
Expected result
Memory usage should remain stable.
Screenshots
Memory usage for the IIS process running the clean test installation with a test module
Additional context
I took a process dump from my production website and after analysis I theorized that the leak was related to the new DI. Digging into the dump file showed that the ServiceProvider is not disposing of module controllers, as you can see in the image below.
Affected version
9.4.0, 9.4.1, 9.4.2
TestModule_00.00.01.00_Install.zip
TestModule.zip
Hello I have a project running 9.4.1 and also had this problem. My memory is always maxed out until my AppPool recycles. This is causing several crashes on my production environment.
Can you help out, based on what @jsbsantos described? Thanks!
It depends if your memory leak is from mvc or web-api. The major fix in 9.4.2 is with webapi, I would upgrade to 9.4.2 to begin with.
Hello
Already done that. I upgraded my project to 9.4.2, hoping that the major fix could resolve my problem. But, doing a simple load test, the results were the same as 9.4.1. I did the same tests listed by @jsbsantos, and obtained the same results he had. My memory is always maxed out when I have an MVC module, consuming all my resources.
@valadas I ran the same tests in the 9.4.2 version and got the same result. Upgrading won't fix it.
@ahoefling do you have any clue on this, wasn't there already an issue for this?
MVC and Web API use different pipelines in DNN. The Web API Pipeline runs independent of DNN inside of IIS, where MVC uses a reverse-engineered MVC pipeline built when ASP.NET was closed source. I was able to reproduce this error in the provided sample MVC Module, which will be a good testing case as we try and come up with a fix for this. ✅
If anyone is trying to solve this, here are some things I have tried:
Saving the Instance of IController

public class DnnMvcControllerFactory : DefaultControllerFactory, IControllerFactory
{
    private readonly Dictionary<Type, IController> _controllers = new Dictionary<Type, IController>();

    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        if (!_controllers.ContainsKey(controllerType))
            _controllers.Add(controllerType, (IController)Globals.DependencyProvider.GetService(controllerType));

        return _controllers[controllerType] ?? base.GetControllerInstance(requestContext, controllerType);
    }
}

This doesn't work because ASP.NET throws an exception if we try re-using the same controller for multiple requests.
Using the ActivatorUtilities and Reverting to Old Techniques

public class DnnMvcControllerFactory : DefaultControllerFactory, IControllerFactory
{
    private readonly Dictionary<Type, IController> _controllers = new Dictionary<Type, IController>();

    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        var controller = (IController)ActivatorUtilities.GetServiceOrCreateInstance(Globals.DependencyProvider, controllerType);
        return controller ?? base.GetControllerInstance(requestContext, controllerType);
    }
}

This is functional but the memory leak still occurs.
Forcing The Release Controller
Using the same code from the last technique, I tried forcing the Dispose and then nulling the controller.

public class DnnMvcControllerFactory : DefaultControllerFactory, IControllerFactory
{
    private readonly Dictionary<Type, IController> _controllers = new Dictionary<Type, IController>();

    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        var controller = (IController)ActivatorUtilities.GetServiceOrCreateInstance(Globals.DependencyProvider, controllerType);
        return controller ?? base.GetControllerInstance(requestContext, controllerType);
    }

    public override void ReleaseController(IController controller)
    {
        if (controller is IDisposable disposable)
            disposable.Dispose();

        controller = null;
    }
}

This is functional but the memory leak still occurs.
My Thoughts
It is still presenting as a Memory Leak even with all the techniques I have tried. I wonder if this memory leak has been a problem for a while now, but has gone undetected. I would hope that is not the case. The .NET Garbage Collector will hold onto anything that has a reference to the object. In ModuleApplication.cs, where the bulk of the MVC module is processed, the IController is resolved to a variable called controller, and this object is then released at the end of the pipeline.
However, this object is then passed to another variable called moduleController, which is then used in the return object. In my experience with the .NET Garbage Collector, that would mean the object is still being held onto and it won't properly be cleaned up and destroyed.
I am not 100% convinced the problem is in the IControllerFactory; that is just where I started, because it controls the lifecycle of the resolved or instantiated IController. I am now leaning more towards poor memory management, and the objects are still being referenced after the fact. We may be able to learn something from how Dependency Injection libraries have supported this mechanism in the past with AspNetWebStack and see if we can implement a similar solution.
No Quick Fix
I don't have any temporary fixes for this, so we are going to have to keep looking at the problem. I have limited time to look into this; if anyone wants to try solving it I am more than happy to discuss solutions in this thread.
I have noticed my production sites randomly restarting as well, not really knowing to test like this to find the memory leak. I will take a deep look into this myself to see if I can understand anything given what was provided to help troubleshoot. Look forward to a solution. Thank you.
Merged, will publish 9.4.4 RC-1 very shortly for testing with only this bugfix. If everything goes smooth we would release that next week.
@valadas @ahoefling Thank you. I'll wait for the 9.4.4 RC and I'll test it again. I'll update if I find any problems.
@jsbsantos it is published now if you want to test and report back. https://github.com/dnnsoftware/Dnn.Platform/releases/tag/v9.4.4-rc1
@valadas I saw it after submitting my comment. I tested it on the same conditions as before and it seems to be working. No noticeable memory increase, which is a good sign. I'll try to test this in a bigger and more complex website, with more customization, and update with results.
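The reference-holding behavior described in the thread — the controller staying reachable because a second variable still points at it, so "releasing" the first reference does nothing — is the same in any garbage-collected runtime. A small Python illustration, standing in for the CLR:

```python
import gc
import weakref

class Controller:
    pass

controller = Controller()
module_controller = controller          # second reference, as in ModuleApplication
probe = weakref.ref(controller)         # lets us observe collection

controller = None                       # "release" the first reference
gc.collect()
assert probe() is not None              # still alive: module_controller holds it

module_controller = None                # drop the last reference
gc.collect()
assert probe() is None                  # now the object is collectable
```

This is why Dispose-and-null inside ReleaseController alone cannot fix the leak: the collector only frees the object once every path that can still reach it is gone.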
gharchive/issue
2019-11-26T16:27:51
2025-04-01T04:34:01.278876
{ "authors": [ "Bladixx", "ahoefling", "jsbsantos", "thabaum", "valadas" ], "repo": "dnnsoftware/Dnn.Platform", "url": "https://github.com/dnnsoftware/Dnn.Platform/issues/3344", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
645852628
'/var/lib/ghost/content': Permission denied
When trying to run ghost with docker compose I get the following error:

2020-06-25T20:59:41.481429000Z find: '/var/lib/ghost/content': Permission denied
2020-06-25T20:59:41.482828000Z chown: changing ownership of '/var/lib/ghost/content': Permission denied

I have set up my docker compose file to point /var/lib/ghost/content to another dir. This is the docker compose file:

version: "3.7"
services:
  blog:
    image: ghost:latest
    container_name: blog_com
    restart: always
    depends_on:
      - blog_db
    ports:
      - "2370:2368"
    environment:
      url: https://blog.com
      preloadHeaders: 100
      database__client: mysql
      database__connection__host: blog_db
      database__connection__user: blog
      database__connection__password: password
      database__connection__database: blog
      database__connection__port: 3308
    volumes:
      - /opt/blog_content:/var/lib/ghost/content
    networks:
      - proxy-net
  blog_db:
    image: mysql:8.0
    restart: always
    container_name: blog_mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_USER: blog
      MYSQL_PASSWORD: password
      MYSQL_DATABASE: blog
      MYSQL_TCP_PORT: 3308
    volumes:
      - /opt/blog_mysql:/var/lib/mysql
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - 3308:3308
    networks:
      - proxy-net
networks:
  proxy-net:
    external:
      name: blog_network

I'm doing similarly for the mysql dir, which works fine and sets itself up perfectly before starting ghost. Doing ls -l in /opt/ looks like this:

drwxr-xr-x. 2 root root 6 Jun 25 20:42 blog_content
drwxr-xr-x. 7 polkitd root 4096 Jun 25 20:53 blog_mysql

I'm running the docker-compose file on the latest version of Fedora CoreOS. Starting it with the usual sudo docker-compose up -d. What could be causing this?
blog content should be owned by: node:node (1000:1000)
Ok. Will try. Strangely, about a month ago on another server but with the same docker compose file I did not have to set the permission. Do I need to create the node user and node group and apply them to the blog_content folder?
I don't have Node installed; I thought, since it did work before, that the Docker container takes care of Node and everything else needed to run Ghost?
You don't need to install Node. It's simply Linux group/user permission stuff :-p
So after a lot of searching, the correct way seems to be to add :Z at the end of any volume. This is the answer I found: https://stackoverflow.com/a/31334443/964887
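For anyone hitting the same SELinux denial (Fedora CoreOS enforces SELinux by default), a sketch of the fix in compose form — paths taken from the compose file above; :Z relabels the mount with a private label, while lowercase :z uses a label shared between containers:

```yaml
services:
  blog:
    volumes:
      - /opt/blog_content:/var/lib/ghost/content:Z
```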
gharchive/issue
2020-06-25T21:09:27
2025-04-01T04:34:01.300595
{ "authors": [ "andreborud", "pascalandy" ], "repo": "docker-library/ghost", "url": "https://github.com/docker-library/ghost/issues/225", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1409002670
15.0 release
Hello. I would like to see the release build of the Docker image :)
Built image ewout/postgres:15.0 for the impatient. Official CI should kick in within 24h.
I was trying to get it updated all day yesterday but the APT repos had a slight delay in getting the update. :sweat_smile:
https://github.com/docker-library/official-images/pull/13339 is open now :+1:
gharchive/issue
2022-10-14T08:49:34
2025-04-01T04:34:01.339690
{ "authors": [ "emansom", "pomazanbohdan", "tianon" ], "repo": "docker-library/postgres", "url": "https://github.com/docker-library/postgres/issues/1005", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
369201864
bower deprecated
When I run npm install -g bower and afterwards run make test, these messages appear among others:

npm WARN deprecated bower@1.8.4: We don't recommend using Bower for new projects. Please consider Yarn and Webpack or Parcel. You can read how to migrate legacy project here: https://bower.io/blog/2017/how-to-migrate-away-from-bower/

Given this is course material, won't fix unless bower gets removed from NPM
gharchive/issue
2018-10-11T16:24:15
2025-04-01T04:34:01.348436
{ "authors": [ "mixja", "wilsonmar" ], "repo": "docker-production-aws/microtrader", "url": "https://github.com/docker-production-aws/microtrader/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2738496658
Dockerfile: bump ghcr.io/google/addlicense:v1.1.1
Was hoping this would be a multi-platform image, but it's still single-arch;

1 warning found (use docker --debug to expand):
- InvalidBaseImagePlatform: Base image ghcr.io/google/addlicense:v1.0.0 was pulled with platform "linux/amd64", expected "linux/arm64" for current build (line 25)

I guess alternatively, we could just do a go install instead
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 67.73%. Comparing base (a33364e) to head (a362e87).
Additional details and impacted files

@@            Coverage Diff            @@
##             main      #61   +/-   ##
=======================================
  Coverage   67.73%   67.73%
=======================================
  Files           5        5
  Lines         623      623
=======================================
  Hits          422      422
  Misses        139      139
  Partials       62       62

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
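A sketch of the "go install" alternative mentioned above — building addlicense from source in a build stage so it is not tied to a single-arch prebuilt image. The Go base image tag is an assumption, not taken from the PR:

```dockerfile
# Build addlicense from source for the current platform
FROM golang:1.23-alpine AS addlicense
RUN go install github.com/google/addlicense@v1.1.1
# the binary then ends up at /go/bin/addlicense and can be
# copied into a later stage with COPY --from=addlicense
```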
gharchive/pull-request
2024-12-13T14:13:41
2025-04-01T04:34:01.359749
{ "authors": [ "codecov-commenter", "thaJeztah" ], "repo": "docker/cli-docs-tool", "url": "https://github.com/docker/cli-docs-tool/pull/61", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
96880046
Update authentication.md Fix superfluous backslashes Please sign your commits following these rules: https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work The easiest way to do this is to amend the last commit: $ git clone -b "patch-1" git@github.com:dv/distribution.git somewhere $ cd somewhere $ git commit --amend -s --no-edit $ git push -f Amending updates the existing PR. You DO NOT need to open a new one. Hi, and thanks for this! About nginx: this doc instructs you to use nginx 1.9 through compose - though that's implicit, so, maybe the explicit mention of the nginx version would best fit in a paragraph at the end for people who want to do it "their own way" without following the compose instructions are you sure the backslashes are not needed? the nginx conf is not a standalone example, it's part of a shell script you need to run to generate everything needed Hey dmp42, Yes you're right! If you run it as a script the backslashes are probably necessary, I totally overlooked that. I like your idea of adding a paragraph with the exact requirements, so people can configure their own systems as well. Like it! LGTM and merged. Thanks for this @dv
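The amend-and-sign flow above can be demonstrated end to end in a throwaway repository; the identity and commit message below are made up for illustration, not taken from the original PR:

```shell
# Demonstration of `git commit -s` sign-off in a scratch repo; the demo
# identity and message are illustrative only.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -s -m "docs: fix superfluous backslashes"
git log -1 --format=%B   # the last line is the Signed-off-by trailer
```

The DCO check simply looks for the `Signed-off-by:` line that `-s` appends, which is why amending with `-s --no-edit` is enough to fix an unsigned commit.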
gharchive/pull-request
2015-07-23T18:43:33
2025-04-01T04:34:01.405946
{ "authors": [ "GordonTheTurtle", "dmp42", "dv" ], "repo": "docker/distribution", "url": "https://github.com/docker/distribution/pull/733", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
229000733
Define HA as it relates to Docker File: glossary.md, CC @johndmulhausen For the glossary, how would we define High Availability as it relates to Docker? High-availability is the ability of Docker to run my services and containers without any downtime, even when one of the swarm nodes stops working. This is an industry term and Docker uses it in the industry way.
gharchive/issue
2017-05-16T11:34:41
2025-04-01T04:34:01.419823
{ "authors": [ "joaofnfernandes", "mstanleyjones", "westonkdavis" ], "repo": "docker/docker.github.io", "url": "https://github.com/docker/docker.github.io/issues/3280", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
257044069
network "host" is missing File: compose/compose-file/index.md, CC @gbarr01 With >= 17.06 it is possible to use the Host Network within Containers. This is also true in swarm mode. See: https://github.com/docker/compose/issues/5039 https://github.com/moby/moby/issues/25873#issuecomment-319109840 On this page, this information is missing. The sections network-mode: "This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file." and https://docs.docker.com/compose/compose-file/#network-configuration-reference keep silent about that. In swarm mode, when using docker service command, host networking can be used by passing this parameter like in->( docker service create --name $NAME --network=host $IMAGE) Can you confirm, if this fix will add support for network: host, in swarm mode, in Docker Compose file version 3. thanks !
gharchive/issue
2017-09-12T13:31:21
2025-04-01T04:34:01.423217
{ "authors": [ "rdxmb", "sudharkrish" ], "repo": "docker/docker.github.io", "url": "https://github.com/docker/docker.github.io/issues/4593", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
726273822
Publish updates from master Publish the latest changes from master @thaJeztah @StefanScherer PTAL
gharchive/pull-request
2020-10-21T08:48:40
2025-04-01T04:34:01.424084
{ "authors": [ "usha-mandya" ], "repo": "docker/docker.github.io", "url": "https://github.com/docker/docker.github.io/pull/11591", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1177288521
Add more valid placeholders Proposed changes Adding more valid placeholders. I was not able to find the complete list of these placeholders mentioned in the documentation site. @thaJeztah @usha-mandya any update on the status of this PR? Do let me know of any requested changes.
gharchive/pull-request
2022-03-22T20:47:11
2025-04-01T04:34:01.425172
{ "authors": [ "prashant-shahi" ], "repo": "docker/docker.github.io", "url": "https://github.com/docker/docker.github.io/pull/14429", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
423891246
Update install.md add note: 8 character password minimum length Proposed changes Unreleased project version (optional) Related issues (optional) Deploy preview for docsdocker ready! Built with commit 14a43c4dad9d152d9297e886b22925c1c0c282db https://deploy-preview-8513--docsdocker.netlify.com
gharchive/pull-request
2019-03-21T18:56:29
2025-04-01T04:34:01.427624
{ "authors": [ "GordonTheTurtle", "adamancini" ], "repo": "docker/docker.github.io", "url": "https://github.com/docker/docker.github.io/pull/8513", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
132065936
[1.10] "runtime/cgo: pthread_create failed: Resource temporarily unavailable" on CentOS 7 Hi, I just upgraded to docker 1.10 and got struct with an issue where I'm not able to create large number of containers. I believe docker is either hanging or crashing as soon as number of containers are reaching more than 500. When I debug /var/log/messages I found that its giving resource unavailability issue on the same machine where I used to create around 1200 containers successfully. When I studied I found that there has been an introduction of TasksMax flag which sets number of threads to 512 by default but this flag is not supported by CentOS 7 or any OS versions running 3.10.xxx and giving following error: [/etc/systemd/system.conf:58] Unknown lvalue 'TasksMax' in section 'Manager' Kindly suggest a way forward because it completed stopped our operation and we are not able to proceed with a high number of containers. I tried to remove TasksMax from docker.service file still there is no success. 
Here is the detail of docker: [root@p4029667 log]# docker info Containers: 442 Running: 401 Paused: 0 Stopped: 41 Images: 30 Server Version: 1.10.0-rc3 Storage Driver: devicemapper Pool Name: docker-253:1-538163109-pool Pool Blocksize: 65.54 kB Base Device Size: 10.74 GB Backing Filesystem: xfs Data file: /dev/vg-docker/data Metadata file: /dev/vg-docker/metadata Data Space Used: 34.29 GB Data Space Total: 536.9 GB Data Space Available: 502.6 GB Metadata Space Used: 299.7 MB Metadata Space Total: 4.295 GB Metadata Space Available: 3.995 GB Udev Sync Supported: true Deferred Removal Enabled: true Deferred Deletion Enabled: true Deferred Deleted Device Count: 0 Library Version: 1.02.107-RHEL7 (2015-10-14) Execution Driver: native-0.2 Logging Driver: json-file Plugins: Volume: local Network: bridge null host Kernel Version: 3.10.0-123.20.1.el7.x86_64 Operating System: CentOS Linux 7 (Core) OSType: linux Architecture: x86_64 CPUs: 24 Total Memory: 188.7 GiB Name: p4029667.pubip.serverbeach.com ID: GYAC:IFA4:2ZBZ:FYMM:GT5G:CIIF:WSMY:3FVS:FZBU:B7LN:4WSQ:ZB6I WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled Following is the info related to version: root@p4029667 log]# docker version Client: Version: 1.10.0-rc3 API version: 1.22 Go version: go1.5.3 Git commit: 08c24cc Built: Tue Feb 2 22:54:00 2016 OS/Arch: linux/amd64 Server: Version: 1.10.0-rc3 API version: 1.22 Go version: go1.5.3 Git commit: 08c24cc Built: Tue Feb 2 22:54:00 2016 OS/Arch: linux/amd64 If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. 
We will, however, reopen it if you later provide the information. For more information about reporting issues, see CONTRIBUTING.md. You don't have to include this information if this is a feature request (This is an automated, informational response) BUG REPORT INFORMATION Use the commands below to provide key information from your environment: docker version: docker info: Provide additional environment details (AWS, VirtualBox, physical, etc.): List the steps to reproduce the issue: 1. 2. 3. Describe the results you received: Describe the results you expected: Provide additional info you think is important: ----------END REPORT --------- #ENEEDMOREINFO The TasksMax warning looks like a duplicate of https://github.com/docker/docker/issues/20096. Afaict, the warning is only a warning, and doesn't affect the way docker runs; for older versions of systemd (and kernel versions below 4.3) this should not make a difference. b.t.w., I see you're still running a release-candidate (1.10.0 has been released, but a 1.10.1 patch-release will be issued which resolves an issue with firewalld). Would you be able to provide the logs you found in /var/log/messages? Also, could you see if running the daemon with -D (debug) gives anything useful? Yes thaJeztah I will provide you the log next time. But I think docker ps -a has been fixed in release 1.10, so any idea why it's getting stuck when the number of instances goes beyond 500 or 550??
Here is the log from messages file: Feb 7 23:47:21 p4029667 node: docker create --memory=100m --env-file=/var/www/html/docker.env -u 37842:37842 --ulimit nproc=300 -p 37842:37842 newbase jx /home/cg/src/index.jx 37842 cpp11 1454734359-3964 Feb 7 23:47:21 p4029667 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth54cff0d: link becomes ready Feb 7 23:47:21 p4029667 kernel: docker0: port 145(veth54cff0d) entered forwarding state Feb 7 23:47:21 p4029667 kernel: docker0: port 145(veth54cff0d) entered forwarding state Feb 7 23:47:21 p4029667 node: Retunring with error : Error: Command failed: runtime/cgo: pthread_create failed: Resource temporarily unavailable Feb 7 23:47:21 p4029667 node: SIGABRT: abort Feb 7 23:47:21 p4029667 node: PC=0x7f4f2fab85f7 m=5 Feb 7 23:47:21 p4029667 node: goroutine 0 [idle]: Feb 7 23:47:21 p4029667 node: goroutine 1 [runnable, locked to thread]: Feb 7 23:47:21 p4029667 node: runtime.Gosched() Feb 7 23:47:21 p4029667 node: /usr/local/go/src/runtime/proc.go:166 +0x14 Feb 7 23:47:21 p4029667 node: github.com/docker/libnetwork/ipamutils.initGranularPredefinedNetworks(0x0, 0x0, 0x0) Feb 7 23:47:21 p4029667 node: /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/docker/libnetwork/ipamutils/utils.go:38 +0x111 Feb 7 23:47:21 p4029667 node: github.com/docker/libnetwork/ipamutils.init.1() Feb 7 23:47:21 p4029667 node: /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/docker/libnetwork/ipamutils/utils.go:17 +0x4d Feb 7 23:47:21 p4029667 node: github.com/docker/libnetwork/ipamutils.init() Feb 7 23:47:21 p4029667 node: /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/docker/libnetwork/ipamutils/utils_linux.go:74 +0x59 Feb 7 23:47:21 p4029667 node: github.com/docker/libnetwork/ipam.init() Feb 7 23:47:21 p4029667 node: /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/docker/libnetwork/ipam/utils.go:81 +0x5e Feb 7 23:47:21 p4029667 node: github.com/docker/libnetwork/ipams/builtin.init() Feb 7 23:47:21 p4029667 node: 
/root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/docker/libnetwork/ipams/builtin/builtin.go:35 +0x45 Feb 7 23:47:21 p4029667 node: github.com/docker/libnetwork.init() Feb 7 23:47:21 p4029667 node: /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/docker/libnetwork/store.go:422 +0xa6 Feb 7 23:47:21 p4029667 node: github.com/docker/docker/container.init() Feb 7 23:47:21 p4029667 node: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/container/store.go:28 +0xe9 Feb 7 23:47:21 p4029667 node: github.com/docker/docker/daemon.init() Feb 7 23:47:21 p4029667 node: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/wait.go:17 +0x5b Feb 7 23:47:21 p4029667 node: github.com/docker/docker/api/server/router/local.init() Feb 7 23:47:21 p4029667 node: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/api/server/router/local/local.go:107 +0xa3 Feb 7 23:47:21 p4029667 node: github.com/docker/docker/api/server/router/build.init() Feb 7 23:47:21 p4029667 node: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/api/server/router/build/build_routes.go:274 +0x44 Feb 7 23:47:21 p4029667 node: github.com/docker/docker/api/server.init() Feb 7 23:47:21 p4029667 node: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/api/server/server_unix.go:132 +0xad Feb 7 23:47:21 p4029667 node: main.init() Feb 7 23:47:21 p4029667 node: /root/rpmbuild/BUILD/docker-engine/docker/flags.go:30 +0x92 Feb 7 23:47:21 p4029667 node: goroutine 17 [syscall, locked to thread]: Feb 7 23:47:21 p4029667 node: runtime.goexit() Feb 7 23:47:21 p4029667 node: /usr/local/go/src/runtime/asm_amd64.s:1721 +0x1 Feb 7 23:47:21 p4029667 node: goroutine 35 [syscall]: Feb 7 23:47:21 p4029667 node: os/signal.loop() Feb 7 23:47:21 p4029667 node: /usr/local/go/src/os/signal/signal_unix.go:22 +0x18 Feb 7 23:47:21 p4029667 node: created by os/signal.init.1 Feb 7 23:47:21 p4029667 node: 
/usr/local/go/src/os/signal/signal_unix.go:28 +0x37 Feb 7 23:47:21 p4029667 node: rax 0x0 Feb 7 23:47:21 p4029667 node: rbx 0x7f4f2fe3e868 Feb 7 23:47:21 p4029667 node: rcx 0xffffffffffffffff Feb 7 23:47:21 p4029667 node: rdx 0x6 Feb 7 23:47:21 p4029667 node: rdi 0x2838 Feb 7 23:47:21 p4029667 node: rsi 0x384d Feb 7 23:47:21 p4029667 node: rbp 0x1990bcf Feb 7 23:47:21 p4029667 node: rsp 0x7f4f2bde1838 Feb 7 23:47:21 p4029667 node: r8 0xa Feb 7 23:47:21 p4029667 node: r9 0x7f4f2bde2700 Feb 7 23:47:21 p4029667 node: r10 0x8 Feb 7 23:47:21 p4029667 node: r11 0x202 Feb 7 23:47:21 p4029667 node: r12 0x7f4f1c0008c0 Feb 7 23:47:21 p4029667 node: r13 0x193f074 Feb 7 23:47:21 p4029667 node: r14 0x0 Feb 7 23:47:21 p4029667 node: r15 0x8 Feb 7 23:47:21 p4029667 node: rip 0x7f4f2fab85f7 Feb 7 23:47:21 p4029667 node: rflags 0x202 Feb 7 23:47:21 p4029667 node: cs 0x33 Feb 7 23:47:21 p4029667 node: fs 0x0 Feb 7 23:47:21 p4029667 node: gs 0x0 Feb 7 23:47:21 p4029667 node: Retunring with error : Error: Command failed: runtime/cgo: pthread_create failed: Resource temporarily unavailable Feb 7 23:47:21 p4029667 node: SIGABRT: abort Feb 7 23:47:21 p4029667 node: PC=0x7f84d21075f7 m=6 Feb 7 23:47:21 p4029667 node: goroutine 0 [idle]: Feb 7 23:47:21 p4029667 node: goroutine 20 [running]: Feb 7 23:47:21 p4029667 node: runtime.systemstack_switch() Feb 7 23:47:21 p4029667 node: /usr/local/go/src/runtime/asm_amd64.s:216 fp=0xc82003dc98 sp=0xc82003dc90 Feb 7 23:47:21 p4029667 node: runtime.gc(0x0) Feb 7 23:47:21 p4029667 node: /usr/local/go/src/runtime/mgc.go:1006 +0x1db fp=0xc82003df90 sp=0xc82003dc98 Feb 7 23:47:21 p4029667 node: runtime.backgroundgc() Feb 7 23:47:21 p4029667 node: /usr/local/go/src/runtime/mgc.go:897 +0x3d fp=0xc82003dfc0 sp=0xc82003df90 Can you please check my syntax to launch a container? 
Here I'm trying to run every container with a different user ID and a --ulimit nproc limit of 300, which I believe will limit the number of processes for the given user and not system-wide. Thanks for that output, @mcmohd. Syntax looks ok to me at a glance, so wondering if there's something else that causes this. I renamed the issue, because (as discussed above) I don't think this is related to the TasksMax option @mcmohd can you try setting TasksMax=infinity just out of curiosity. @tiborvass wondering what changed though in 1.10; does it use that many more processes? @mcmohd Can you provide details on how your containers are set up? What logging driver are you using? Also, is this the full trace? I also encountered the crash, on a different distro (manjaro). Here is my full crash trace. It happens after I start several containers with plenty of processes in them. docker-crash.txt I have a custom systemd unit for docker, without TaskMax. I will change it now to default with TaskMax set.
docker create --memory=100m --env-file=/var/www/html/docker.env -u 37842:37842 --ulimit nproc=250 --ulimit nofile=1024 -p 37842:37842 newbase jx /home/cg/src/index.jx 37842 cpp11 1454734359-3964 (2) Increased the open-files limit at OS level inside /etc/security/limits.conf, which was earlier set very low, I think 65K: * - nofile 1048576 (3) Increased the number of threads at OS level in /proc/sys/kernel/threads-max; earlier it was set to 1545841 and now I set it to 3091639 (4) Increased the maximum number of processes at kernel level in /proc/sys/kernel/pid_max; earlier it was set to 32768 and now I increased it to 4194304 You can check this thread for further help with tweaking virtual memory and stack size, though I did not touch them. But I'm happy that so far it's going very smoothly; fingers crossed for the next few days. Thank you very much for stepping up and providing the required support as usual. Kind regards mohtashim tutorialspoint.com
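For anyone checking their own host against the limits described in steps (3) and (4), the current values can be read back as below; this is read-only, and raising them requires root (the values in the comment are the ones from the report, not recommendations):

```shell
# Read the kernel limits discussed above. Writing new values needs root,
# e.g. (values from the report, illustrative only):
#   sysctl -w kernel.threads-max=3091639 kernel.pid_max=4194304
threads_max=$(cat /proc/sys/kernel/threads-max)
pid_max=$(cat /proc/sys/kernel/pid_max)
nofile=$(ulimit -n)
echo "threads-max=${threads_max} pid_max=${pid_max} nofile=${nofile}"
```

Note that echo-based sysctl writes (as in `echo 4194304 > /proc/sys/kernel/pid_max`) are lost on reboot; persisting them is typically done in /etc/sysctl.conf or /etc/sysctl.d/.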
However, for some reason, I tried to remove an image with docker rmi ... and it keeps exiting with this error. runtime/cgo: pthread_create failed: Resource temporarily unavailable SIGABRT: abort PC=0x7f34b919ecc9 m=3 goroutine 0 [idle]: goroutine 6 [syscall]: runtime.notetsleepg(0x20add20, 0xffffffffffffffff, 0x1) /usr/local/go/src/runtime/lock_futex.go:202 +0x4e fp=0xc820023f40 sp=0xc820023f18 runtime.signal_recv(0x6) /usr/local/go/src/runtime/sigqueue.go:111 +0x132 fp=0xc820023f78 sp=0xc820023f40 os/signal.loop() /usr/local/go/src/os/signal/signal_unix.go:22 +0x18 fp=0xc820023fc0 sp=0xc820023f78 runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:1721 +0x1 fp=0xc820023fc8 sp=0xc820023fc0 created by os/signal.init.1 /usr/local/go/src/os/signal/signal_unix.go:28 +0x37 goroutine 1 [runnable, locked to thread]: github.com/docker/docker/pkg/tarsum.NewTHash(0x164bdd0, 0x6, 0x1948820, 0x0, 0x0) /usr/src/docker/.gopath/src/github.com/docker/docker/pkg/tarsum/tarsum.go:133 +0x8d github.com/docker/docker/pkg/tarsum.init() /usr/src/docker/.gopath/src/github.com/docker/docker/pkg/tarsum/tarsum.go:150 +0x1ca github.com/docker/docker/builder.init() /usr/src/docker/.gopath/src/github.com/docker/docker/builder/tarsum.go:158 +0xa1 github.com/docker/docker/builder/dockerfile.init() /usr/src/docker/.gopath/src/github.com/docker/docker/builder/dockerfile/support.go:16 +0x6f github.com/docker/docker/api/server/router/local.init() /usr/src/docker/.gopath/src/github.com/docker/docker/api/server/router/local/local.go:107 +0x71 github.com/docker/docker/api/server/router/build.init() /usr/src/docker/.gopath/src/github.com/docker/docker/api/server/router/build/build_routes.go:274 +0x44 github.com/docker/docker/api/server.init() /usr/src/docker/.gopath/src/github.com/docker/docker/api/server/server_unix.go:132 +0xad main.init() /usr/src/docker/docker/flags.go:30 +0x92 goroutine 17 [syscall, locked to thread]: runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:1721 +0x1 rax 0x0 rbx
0x7f34b9527868 rcx 0xffffffffffffffff rdx 0x6 rdi 0x48e9 rsi 0x48eb rbp 0x1996ddf rsp 0x7f34b6f7e8a8 r8 0xa r9 0x7f34b6f7f700 r10 0x8 r11 0x202 r12 0x7f34b00008c0 r13 0x1944f70 r14 0x0 r15 0x8 rip 0x7f34b919ecc9 rflags 0x202 cs 0x33 fs 0x0 gs 0x0 It turned out to be that there isn't enough memory available. $ free -m total used free shared buffers cached Mem: 489 484 5 18 0 22 -/+ buffers/cache: 462 27 Swap: 0 0 0 I used top to find which process is taking up my memory and found Java taking about 60%, and pkilled it to see about 280 free memory from the free -m command. Once there is enough memory, the command runs normally. @mcmohd , You have mentioned that you specified an nproc of 250, which is the overall process limit for the user. That means only 250 containers can be created. How in your case does that work? Hi, I'm also seeing this issue when running unlimited nproc docker images. It appears to only happen with my Go web application. Somehow it uses all resources and the docker host crashes. Limiting nproc fixed it, but I think it is pretty bad that code running in containers can crash the host. @PepijnK it's important to always set constraints on a container (e.g., limit its memory, cpu). Even though processes in a container don't have file access to the host, and cannot access processes outside the container, that doesn't mean they cannot consume resources. By default, no limits are set on the amount of memory and cpu a container is allowed to use, so if your host is running out of memory, the kernel starts to randomly kill processes. @thaJeztah I assumed the daemon would protect itself against that, but containers are not fully isolated (like in the case of virtual machines), which is why they are lightweight. A security/performance tradeoff I guess. So, ok, I will put constraints on my containers.. @PepijnK containers and virtual machines suit different goals, and generally complement each other.
The daemon is configured with a negative OOM score; --oom-score-adjust=-500, which means it's very unlikely to be killed before containers are killed (but not "unkillable"). The daemon is not in control there, that's a task for the kernel; docker tells the daemon how to "provision" a container and what constraints to put on it, after that the daemon only monitors (since docker 1.12, you can even stop the daemon, and the containers keep running). So, ok, I will put constraints on my containers.. That's no different from VMs; when deploying VMs, you'll also specify the amount of memory, cpu (and disk) a VM uses.
gharchive/issue
2016-02-08T06:41:19
2025-04-01T04:34:01.452781
{ "authors": [ "GordonTheTurtle", "PepijnK", "cpuguy83", "mcmohd", "stelund", "thaJeztah", "tiborvass", "tsrivishnu", "vibgyar" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/issues/20096", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
66635981
Add "builder-deb" base images for building ".deb" packages properly Here's how I've been testing this (without DinD): $ ./contrib/builder/deb/build.sh $ AUTO_GOPATH=1 DOCKER_TEST_HOST=unix:///var/run/docker.sock ./hack/make.sh build-deb This can also be tested "properly" with something like: $ make shell # ./hack/make.sh binary build-deb We can also use "frozen images" later to make this build much faster (and work without internet access, assuming an up-to-date Dockerfile build, as per our usual flow). Here's an example of what this produces: $ find bundles -name '*.deb' | sort bundles/1.5.0-dev/build-deb/debian-jessie/docker-core_1.5.0~dev~git20150403.013846.0.22cb318-0~jessie_amd64.deb bundles/1.5.0-dev/build-deb/debian-wheezy/docker-core_1.5.0~dev~git20150403.013846.0.22cb318-0~wheezy_amd64.deb bundles/1.5.0-dev/build-deb/ubuntu-debootstrap-trusty/docker-core_1.5.0~dev~git20150403.013846.0.22cb318-0~trusty_amd64.deb bundles/1.5.0-dev/build-deb/ubuntu-debootstrap-utopic/docker-core_1.5.0~dev~git20150403.013846.0.22cb318-0~utopic_amd64.deb bundles/1.5.0-dev/build-deb/ubuntu-debootstrap-vivid/docker-core_1.5.0~dev~git20150403.013846.0.22cb318-0~vivid_amd64.deb I tried to include precise here too (12.04), but it's way too ancient. :cry: The end goal of this change is to deprecate hack/make/ubuntu and https://get.docker.com/ubuntu in favor of having a separate repository for each suite we officially target and support, which allows us to have a dynamic binary and allows us to let debhelper do the things it's good at, instead of trying to replicate them ourselves (like properly managing service startup). awesome!!!!! me like!!! can we move to code review, ping @crosbymichael LGTM yup yup looks like shell to me LGTM
gharchive/pull-request
2015-04-06T15:46:52
2025-04-01T04:34:01.458731
{ "authors": [ "crosbymichael", "jfrazelle", "tianon" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/12111", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
141793164
Update the document error Signed-off-by: Wen Cheng Ma wenchma@cn.ibm.com @thaJeztah :smile: thanks, nice catch, LGTM thanks! LGTM
gharchive/pull-request
2016-03-18T07:20:13
2025-04-01T04:34:01.460339
{ "authors": [ "coolljt0725", "thaJeztah", "wenchma" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/21318", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
155853537
fixed spelling error in cli pull local test - What I did Fixed misspell in cli_pull_local_test - Description for the changelog fixed spelling error in cli pull local test Signed-off-by: Nirmal Mehta nirmalkmehta@gmail.com LGTM Hey, @normalfaults! Good to see you contributing! LGTM LGTM 🐮 Green !
gharchive/pull-request
2016-05-19T22:55:47
2025-04-01T04:34:01.462805
{ "authors": [ "aaronlehmann", "normalfaults", "thaJeztah", "vdemeester" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/22844", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
177614842
Fix typo in docs. s/methodoligies/methodologies/ ** Make sure all your commits include a signature generated with git commit -s ** ✔️ (sorry about prior noise. i had done git commit -S {capital S} and not git commit -s {lowercase s}) - What I did typo fix - How I did it typing - How to verify it spell checker - Description for the changelog {none} - A picture of a cute animal (not mandatory but encouraged) LGTM 🐸 /cc @thaJeztah LGTM
gharchive/pull-request
2016-09-17T22:39:46
2025-04-01T04:34:01.466419
{ "authors": [ "cpuguy83", "pestophagous", "vdemeester" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/26675", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
199768166
284 activation This PR implements some of the features required in #284. Specifically New packages pkg/launch and pkg/launch/os are added to implement launching a normal OS binary via os exec, which is specified in a configuration JSON (the command and args required to start the process) A new verb for infrakit plugin to start a list of named plugins. A new config JSON that maps the plugin names to the 'how' in starting up the process Added a new version of the tutorial_test to showcase how to start the plugins in one CLI invocation. TODO Incorporate plugin activation in the manager so that the manager will be able to launch plugins dynamically based on the config input. Implement other executors -- such as plugins implemented as Docker containers or engine plugins. Please sign your commits following these rules: https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work The easiest way to do this is to amend the last commit: $ git clone -b "284-activation" git@github.com:chungers/infrakit.git somewhere $ cd somewhere $ git rebase -i HEAD~842353934280 editor opens change each 'pick' to 'edit' save the file and quit $ git commit --amend -s --no-edit $ git rebase --continue # and repeat the amend for each commit $ git push -f Amending updates the existing PR. You DO NOT need to open a new one. Current coverage is 51.36% (diff: 66.19%) Merging #356 into master will decrease coverage by 13.35% @@ master #356 diff @@ ========================================== Files 39 11 -28 Lines 1993 512 -1481 Methods 0 0 Messages 0 0 Branches 0 0 ========================================== - Hits 1290 263 -1027 + Misses 570 219 -351 + Partials 133 30 -103 Powered by Codecov. Last update fd5894e...5b02c04
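The name-to-launch-rule mapping described above might look something like the following; this is a purely hypothetical shape, since the PR text does not show the actual schema — the field names and plugin binaries are illustrative only:

```shell
# Hypothetical launch config: plugin name -> how to exec it as an OS
# process. Field names and binaries are illustrative, NOT the actual
# schema defined in pkg/launch.
cat > /tmp/launch-plugins.json <<'EOF'
{
  "group":    { "cmd": "infrakit-group-default", "args": ["--name", "group"] },
  "instance": { "cmd": "infrakit-instance-file", "args": ["--dir", "/tmp"] }
}
EOF
python3 -m json.tool /tmp/launch-plugins.json > /dev/null && echo "valid JSON"
```

A launcher built on os/exec would read such a file, look up the named plugin, and spawn `cmd` with `args` — which is what the new `infrakit plugin` start verb automates for a whole list of plugins.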
gharchive/pull-request
2017-01-10T09:14:45
2025-04-01T04:34:01.596320
{ "authors": [ "GordonTheTurtle", "chungers", "codecov-io" ], "repo": "docker/infrakit", "url": "https://github.com/docker/infrakit/pull/356", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
390494078
Is Kitematic End of Life? As a fairly new user to the Docker Ecosystem, kitematic is really, really confusing. Not the app, but how Docker is positioning it. A fresh clean install of Docker for windows (and I assume mac) provides a menu item for kitematic. If kitematic is deprecated, can this menu item be removed? It wasted an evening of productivity for me, and I'm sure countless others. There is a ton of disinformation about Kitematic out there: The kitematic.com page advises "kitematic is now part of docker toolbox" and advises us to "Download Docker toolbox" Clicking that link takes us to docs.docker.com/toolbox which tells us toolbox is deprecated, and to use docker for windows Installing Docker for windows does not include kitematic, but it is on the menu. Clicking the link on the menu brings up a download for a zip file with no instructions. There isn't so much as a readme in the zip file telling us what to do with it. Nowhere in the documentation is GitHub.com/docker/kitematic mentioned as a source for future updates. The install instructions on GitHub.com/docker/kitematic don't say how to install for windows. Who can clean all this mess up? If you want to kill kitematic, fine, if you want to keep it, fine, but it would take less time to fix the various bad sources of information on your own website than it took me to write this post. The whole Kitematic thing is extremely irritating - I looked at it 6 months ago and again in the last few days. My request to Docker (who appear to own it and include it in the docker desktop menu) is to take a policy stand and either support Kitematic properly or declare it end of life. Docker Desktop takes you to a Kitematic version which is broken: It rediscovers a 4 year old bug which just keeps coming back where it hangs if you try to search for an image...
The documentation says: Every container created through Kitematic automatically has it’s volumes exposed on your Mac; **this is not true** of most of the images I have created, and was contradicted 3 years ago here: https://forums.docker.com/t/modifying-or-adding-volumes/1287/4 ...with no update other than complaints about no updates! The same issue generated some discussions on reddit: https://www.reddit.com/r/docker/comments/82ws1c/kitematic_how_to_add_volumes_to_containers/ so it's off to portainer.io for now. For those coming here who have their correct shared drives set up in the image, and still get stuck at updating: My problem was that I forgot to enable the C drive in docker: Had a little brainfart at that moment. So if it helps anyone else with brainfarts, glad to help :)
gharchive/issue
2018-12-13T02:41:18
2025-04-01T04:34:01.606135
{ "authors": [ "bryan-lunt", "goowikns", "izuio", "jackfruhecolab", "jdogan-castle" ], "repo": "docker/kitematic", "url": "https://github.com/docker/kitematic/issues/4385", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
76281875
Changing name on CLI not replicating to Kitematic

When using the CLI `docker rename` to change the name of a container, the change does not reflect on the Kitematic UI. Any subsequent UI setting changes for the container will 404.

@casalot Thanks for this bug report. We will look into it.

There is currently no event triggered when a container is renamed. Until the core daemon does this, we will not be able to make this work. Closing.
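For context on the maintainer's reply: later Docker daemons do emit a `rename` event on the `/events` endpoint, which is the hook a UI like Kitematic would need. A hedged sketch, assuming the modern event payload shape (the struct and helper below are illustrative, not Kitematic or Docker SDK code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Message mirrors the subset of a Docker /events payload a UI would need
// in order to react to container renames.
type Message struct {
	Type   string `json:"Type"`
	Action string `json:"Action"`
	Actor  struct {
		ID         string            `json:"ID"`
		Attributes map[string]string `json:"Attributes"`
	} `json:"Actor"`
}

// newNameFromRename returns the container's new name if the event is a
// container rename, and false for any other event or malformed input.
func newNameFromRename(raw []byte) (string, bool) {
	var m Message
	if err := json.Unmarshal(raw, &m); err != nil {
		return "", false
	}
	if m.Type != "container" || m.Action != "rename" {
		return "", false
	}
	return m.Actor.Attributes["name"], true
}

func main() {
	// A hypothetical rename event, shaped like the daemon's /events output.
	raw := []byte(`{"Type":"container","Action":"rename","Actor":{"ID":"abc123","Attributes":{"name":"web-new","oldName":"/web-old"}}}`)
	if name, ok := newNameFromRename(raw); ok {
		fmt.Println("container renamed to", name) // prints: container renamed to web-new
	}
}
```

At the time of this issue (Docker 1.x, 2015), no such event existed, which is why the issue was closed as blocked on the daemon.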
gharchive/issue
2015-05-14T08:24:47
2025-04-01T04:34:01.608038
{ "authors": [ "FrenchBen", "casalot", "mchiang0610" ], "repo": "docker/kitematic", "url": "https://github.com/docker/kitematic/issues/498", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
51081675
docker daemon panic using libcontainer 1.2.0

docker version: 1.2.0
libcontainer version: 1.2.0

Is the panic stack below fixed after libcontainer 1.2.0? Please give me a link if it is fixed.

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x6b7b83]

goroutine 122512 [running]:
runtime.panic(0xb44200, 0x116c553)
    /usr/local/go/src/pkg/runtime/panic.c:279 +0xf5
github.com/docker/libcontainer/namespaces.Exec(0xc209aa48c0, 0x0, 0x0, 0x7fa6fc7f21c8, 0xc2092a8dc0, 0x7fa6fc7f21c8, 0xc2092a8da0, 0x0, 0x0, 0xc209b13ea0, ...)
    /go/src/github.com/docker/docker/vendor/src/github.com/docker/libcontainer/namespaces/exec.go:85 +0x563
github.com/docker/docker/daemon/execdriver/native.(*driver).Run(0xc2081ebfb0, 0xc208ec7180, 0xc208b65dd0, 0xc2093ac990, 0x0, 0x0, 0x0)
    /go/src/github.com/docker/docker/daemon/execdriver/native/driver.go:127 +0x794
github.com/docker/docker/daemon.(*Daemon).Run(0xc208240090, 0xc2084b5380, 0xc208b65dd0, 0xc2093ac990, 0x490173, 0x0, 0x0)
    /go/src/github.com/docker/docker/daemon/daemon.go:970 +0x82
github.com/docker/docker/daemon.(*containerMonitor).Start(0xc20a4c36c0, 0x0, 0x0)
    /go/src/github.com/docker/docker/daemon/monitor.go:136 +0x361
github.com/docker/docker/daemon.*containerMonitor.Start·fm(0x0, 0x0)
    /go/src/github.com/docker/docker/daemon/container.go:1077 +0x38
github.com/docker/docker/utils.func·001()
    /go/src/github.com/docker/docker/utils/utils.go:36 +0x2e
created by github.com/docker/docker/utils.Go
    /go/src/github.com/docker/docker/utils/utils.go:37 +0xa7

Tried:

yum install docker
service docker stop
unshare -m /usr/bin/docker -- -d -D -g /home/docker -H unix:///var/run/docker.sock -H tcp://0.0.0.0:61234

On:

cat /etc/centos-release
CentOS Linux release 7.0.1406 (Core)

# unshare -m /usr/bin/docker -- -d -D -g /home/docker -H unix:///var/run/docker.sock -H tcp://0.0.0.0:61234
2015/01/06 01:21:17 docker daemon: 1.3.2 39fa2fa/1.3.2; execdriver: native; graphdriver:
[f65e4c57] +job
serveapi(unix:///var/run/docker.sock, tcp://0.0.0.0:61234)
[info] Listening for HTTP on unix (/var/run/docker.sock)
[debug] server.go:1300 Registering GET, /images/{name:.*}/history
[debug] server.go:1300 Registering GET, /containers/{name:.*}/export

Can you either provide a standalone test case for libcontainer outside Docker, or test against current Docker?

Looking at:

github.com/docker/libcontainer/namespaces.Exec(0xc209aa48c0, 0x0, 0x0, 0x7fa6fc7f21c8, 0xc2092a8dc0, 0x7fa6fc7f21c8, 0xc2092a8da0, 0x0, 0x0, 0xc209b13ea0, ...)

and https://github.com/docker/libcontainer/blob/db65c35051d05f3fb218a0e84a11267e0894fe0a/namespaces/exec.go#L24:

func Exec(container *libcontainer.Config, stdin io.Reader, stdout, stderr io.Writer, console string, rootfs, dataPath string, args []string, createCommand CreateCommand, startCallback func()) (int, error)

We have 0x0 for stdin, stdout, createCommand and args.

Actually, despite the tag there, I don't think that's the line of code. Regardless, this can probably be closed.

Thanks. I'll close this for now. Let us know if you are still experiencing this with the latest updates.
gharchive/issue
2014-12-05T10:04:06
2025-04-01T04:34:01.616598
{ "authors": [ "crosbymichael", "pnasrat", "wangzhipeng1984" ], "repo": "docker/libcontainer", "url": "https://github.com/docker/libcontainer/issues/284", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }