Dataset columns:
id: string (length 4 to 10)
text: string (length 4 to 2.14M)
source: string (2 classes)
created: timestamp[s] (from 2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (from 2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
384479644
Build script and file layout changes Updated the file layout for clarity and to more closely mirror Bootstrap's layout. This is why it looks like so many files were changed. Mostly, this consists of moving from a "src" directory to a "scss" directory, from "styles" to "css", and from "scripts" to "js". Big changes to gulpfile.js to break out more source and destination variables and split more of the tasks into sub-tasks. Also fixed the cause of the "async termination" gulp errors. Broke apart application.js into demo.js (script used only for the demo site) and navbar.js (script required by the Fluid navbar, which will be part of the package). Can we please merge this?
gharchive/pull-request
2018-11-26T19:37:22
2025-04-01T04:34:31.508122
{ "authors": [ "jgolieb", "mihai-mihalache" ], "repo": "hortonworks/fluid-bootstrap-theme", "url": "https://github.com/hortonworks/fluid-bootstrap-theme/pull/22", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
249013392
Migration script for the change of ISSUE-785 There's a change to the DB schema, and one of the tables is broken down into two tables, so we need to provide a migration tool along with the schema update script. @raju-saravanan resolved this issue. Closing.
gharchive/issue
2017-08-09T12:25:06
2025-04-01T04:34:31.509055
{ "authors": [ "HeartSaVioR" ], "repo": "hortonworks/streamline", "url": "https://github.com/hortonworks/streamline/issues/876", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
64263737
General ledger number March 25, 2015 at 02:05PM created via Do Button. No data.
gharchive/issue
2015-03-25T13:05:12
2025-04-01T04:34:31.510349
{ "authors": [ "amarcinkowski" ], "repo": "hospitalhub/punction", "url": "https://github.com/hospitalhub/punction/issues/3", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
162136823
ZNet 1.0.9 update notes The ZNet.dll library for the C# client has been updated to 1.0.9. Several optimization and stabilization changes were made; in particular, a bug where the client in Unity would intermittently hang when moving between servers has been fixed. The server side had already been stabilized through various unit tests and stress tests, but the C# client side had not been tested as much; with this update the library can now be used reliably in Unity as well. In addition, if you want to build a more practical server with a simpler set of features rather than the full feature set, the sample below should be helpful: https://github.com/hothoty/SimpleServer If you have any further questions or anything to ask, please post them here and I will do my best to answer.
gharchive/issue
2016-06-24T12:29:45
2025-04-01T04:34:31.512199
{ "authors": [ "hothoty" ], "repo": "hothoty/Zero", "url": "https://github.com/hothoty/Zero/issues/2", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
260843992
Handling large amount of users on user engagement graphs @MariaSolovyeva @dimasciput How do we want to handle the user engagement graphs when we have a lot of users? Looks like this breaks when it's above 10 or 11 but that's dependent on the username length. This is from the chart.js library, so I will try to customize the legend. Maybe change the location of the legend.
gharchive/issue
2017-09-27T04:33:38
2025-04-01T04:34:31.519089
{ "authors": [ "dimasciput", "smit1678" ], "repo": "hotosm/field-campaigner", "url": "https://github.com/hotosm/field-campaigner/issues/365", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
777600142
Serialize/deserialize negative Decimal number is wrong Hi, I extracted code from DataTypeDecimal. The following is the test code:

@Test
public void testNegativeDecimal() {
    int scale = 4;
    BigDecimal negative = new BigDecimal("-18.2000");
    BigDecimal scaleFactor = BigDecimal.valueOf(Math.pow(10, scale));
    BigDecimal targetValue = negative.multiply(scaleFactor);
    BigInteger res = targetValue.toBigInteger();
    long l1 = res.longValue();
    long l2 = res.shiftRight(64).longValue();
    BigInteger v1 = new BigInteger(1, longToBytes(l1));
    BigInteger v2 = new BigInteger(1, longToBytes(l2));
    BigDecimal value = new BigDecimal(v1.add(v2.shiftLeft(64)));
    value = value.divide(scaleFactor, scale, RoundingMode.HALF_UP);
    Assert.assertEquals(negative, value);
}

and the result is:

java.lang.AssertionError:
Expected :-18.2000
Actual   :34028236692093846346337460743176802.9456

Here is my solution; we also need to expand longToBytes to support Decimal256:

public static byte[] longToBytes(long high, long low) {
    byte[] result = new byte[Long.BYTES * 2];
    extracted(high, result, 0, Long.BYTES);
    extracted(low, result, Long.BYTES, Long.BYTES * 2);
    return result;
}

Hello, Happy new year. Because l2 is already -1. If you want to use shiftRight(64), why not use Decimal128 or Decimal256? There is some sample code for that: https://github.com/housepower/ClickHouse-Native-JDBC/blob/179e1412baeb345cb3ca03f4629f1f4d7eea858a/clickhouse-native-jdbc/src/main/java/com/github/housepower/jdbc/data/type/complex/DataTypeDecimal.java#L131-L144

@sundy-li you can insert a negative decimal into ClickHouse and get it back to see what happens.

CREATE TABLE tmp2 (ch Int8, i64 Int64, f64 Float64, str String, date Date, dec Decimal(19, 10), uuid UUID) ENGINE = MergeTree() ORDER BY tuple();
insert into tmp2 (ch, i64, f64, str, date, dec, uuid) values (44, 534324234, 0.32423423, 'hello', '2019-01-23', -1.333333, '61f0c404-5cb3-11e7-907b-a6006ad3dba0');
select dec from tmp2;

The serialization part is right, but the deserialization is wrong. To correctly handle the various cases, we should use BigInteger(byte[] val) to construct the BigInteger.

public void testNegativeDecimal() {
    int scale = 4;
    BigDecimal negative = new BigDecimal("-18.2000", MathContext.DECIMAL128);
    BigDecimal scaleFactor = BigDecimal.valueOf(Math.pow(10, scale));
    BigDecimal targetValue = negative.multiply(scaleFactor);
    BigInteger res = targetValue.toBigInteger();
    long l1 = res.longValue();
    long l2 = res.shiftRight(64).longValue();
    // v = -1820000
    BigInteger v = new BigInteger(longToBytes(l2, l1));
    BigDecimal value = new BigDecimal(v);
    value = value.divide(scaleFactor, scale, RoundingMode.HALF_UP);
    Assert.assertEquals(negative, value);
}

@baibaichen Thanks for your advice, I realized this is a bug. Could you help review the PR in #279
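The root cause above is sign handling when the 128-bit value is rebuilt from its two 64-bit halves. A minimal Python sketch (my own illustration, not from the issue) of the two's-complement reconstruction that the corrected Java code performs:

```python
# Illustrative only: rebuild a signed 128-bit value from two 64-bit limbs.
# Treating each limb as unsigned (as the buggy code does) breaks for negatives;
# interpreting the combined bit pattern as two's complement fixes it.
MASK64 = 0xFFFFFFFFFFFFFFFF

def from_limbs_signed(high: int, low: int, bits: int = 128) -> int:
    raw = ((high & MASK64) << 64) | (low & MASK64)
    return raw - (1 << bits) if raw >= (1 << (bits - 1)) else raw

target = -1820000  # -18.2000 scaled by 10**4
low = target & MASK64            # same bit pattern as res.longValue()
high = (target >> 64) & MASK64   # same as res.shiftRight(64).longValue(): all ones for -1
assert from_limbs_signed(high, low) == target
```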
gharchive/issue
2021-01-03T09:03:15
2025-04-01T04:34:31.531386
{ "authors": [ "baibaichen", "sundy-li" ], "repo": "housepower/ClickHouse-Native-JDBC", "url": "https://github.com/housepower/ClickHouse-Native-JDBC/issues/277", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
537837590
Modify onChange() example func arrow missing this change should actually be made in the javascript file itself, as these reference docs are generated from code: https://github.com/howdyai/botkit/blob/master/packages/botkit/src/conversation.ts#L486 fix source comment instead
gharchive/pull-request
2019-12-14T00:37:38
2025-04-01T04:34:31.533831
{ "authors": [ "benbrown", "mattcobb" ], "repo": "howdyai/botkit", "url": "https://github.com/howdyai/botkit/pull/1876", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
599238432
Error when installing the project I downloaded https://github.com/howl-anderson/hanzi_char_featurizer with git, cd'd into the directory, and ran python setup.py install. Then, when importing the package in the console, I got an error:

import hanzi_char_featurizer
Traceback (most recent call last):
  File "F:\ANACONDA\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-6-717af367fe07>", line 1, in <module>
    import hanzi_char_featurizer
  File "F:\python\PyCharm Community Edition 2017.2.4\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "F:\ANACONDA\lib\site-packages\hanzi_char_featurizer-0.1-py3.6.egg\hanzi_char_featurizer\__init__.py", line 5, in <module>
    from hanzi_char_featurizer.featurizers.four_corner import FourCorner
  File "F:\python\PyCharm Community Edition 2017.2.4\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
ModuleNotFoundError: No module named 'hanzi_char_featurizer.featurizers'

It looks like the project structure is a bit off. Windows 10 + Python 3.6.7. Is there anything else I should provide? The code has been updated to fix the missing-dependency problem; please try again. OK.
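A likely cause of this kind of ModuleNotFoundError (the missing-dependency fix mentioned above) is a setup.py whose package list omits sub-packages. A minimal sketch, purely hypothetical and not the project's actual setup.py, of the usual fix with find_packages:

```python
# Hypothetical setup.py sketch: find_packages() picks up nested packages such
# as hanzi_char_featurizer.featurizers, which a hand-written packages list can miss.
from setuptools import setup, find_packages

setup(
    name="hanzi_char_featurizer",
    version="0.1",
    packages=find_packages(),  # instead of packages=["hanzi_char_featurizer"] only
)
```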
gharchive/issue
2020-04-14T01:47:24
2025-04-01T04:34:31.536848
{ "authors": [ "howl-anderson", "luoqishuai" ], "repo": "howl-anderson/hanzi_char_featurizer", "url": "https://github.com/howl-anderson/hanzi_char_featurizer/issues/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2587193089
Extra header in schematic? Thank you for the amazing work here! This is exactly what I need for a project I'm working on. I'm fairly new to reading schematics, and I'm getting tripped up as I read the example here: https://github.com/hpwit/I2SClocklessVirtualLedDriver/blob/main/extra/Schematic_4pins-virtual-8-1-20191115225252.png I can see 3 headers to the left (H1, H2, H3). H2 and H3 look the same and both provide "Clock" and "Latch". There's only one 74HC245 in the diagram, so it seems like one of these headers might be extra (i.e. if there were instead two 74HC245s). Of course more than likely I'm reading this wrong :) Once again, thank you for this amazing library. 🙏 Hello thank you for your comment! I had two headers in case you wanted to daisy-chain the board. How many strips do you need? Can you tell me more about your project? I have other drawings. Ah! That makes sense. I appreciate the help. I'm building an LED board made of many panels. Pretty much a scaled down version of the LED panel I saw in one of your posts. Until now, I have been daisy chaining the panels together, at the cost of very low FPS. I've learned a ton from reading this lib and the various posts that led me to it. I've been able to successfully get a demo working with nothing but a shift register at massive FPS (I ordered some level shifters too, based on what I read about their effect on signal timing, but haven't needed to use one yet). Many thanks! It's 5 44x11 WS12x strips, btw. My goal is to use a single data GPIO for simplified wiring. So far it has been working (I'm setting NBIS2SERIALPINS to 1, so not using this lib as intended, but leaning on the timing for the shift register that it provides)
gharchive/issue
2024-10-14T22:33:58
2025-04-01T04:34:31.617545
{ "authors": [ "hpwit", "nuttingd" ], "repo": "hpwit/I2SClocklessVirtualLedDriver", "url": "https://github.com/hpwit/I2SClocklessVirtualLedDriver/issues/25", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
540701861
fix(vx-brush): Fix BaseBrush, iterate on types + gallery :rocket: Enhancements Adds a bit of polish to the gallery brush example (changes the big:small chart ratio, changes colors from green to purple, uses our LinearGradient component instead of a manual defs version, hides bottom axes in smaller tile gallery vs page demo) Updates the brush demo to include the react code (it's not super obvious that you have to do this) :bug: Bug Fix Previously the shouldComponentUpdate method in BaseBrush was written to almost always return false. This prevented the BrushSelection region from actually rendering and changing dimensions as you drag, making the Brush component unusable. While debugging I iterated on several of the types within the vx-brush package; there were lots of Function and any types, which I tried to get rid of. For the most part I tried to leverage the @vx/drag types where possible Testing [x] functional brush demo [x] CI [x] I need to verify that the Drag types actually work @hshoff @geekplux cc @schillerk @kristw @Rudeg gonna land this since it fixes so much. can continue to iterate.
gharchive/pull-request
2019-12-20T02:13:51
2025-04-01T04:34:31.669748
{ "authors": [ "williaster" ], "repo": "hshoff/vx", "url": "https://github.com/hshoff/vx/pull/567", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
546715697
The eleTree component's check for out-of-bounds drags is buggy Scenario: there are two trees on the page, and both allow node dragging. Action: drag a node from one tree onto the other tree. Expected result: the dragged node is not added to the other tree. Actual result: the dragged node is added to the other tree. Conclusion: the code in the eleTree component that checks whether a dragged node has gone out of bounds is wrong. jQuery's parents() method expects a selector (i.e. a string) as its argument, but options.elem is already an object at this point, so the condition target.parents(options.elem).length===0 never holds and the function cannot return. @muyunzhongtian Thanks for the reminder; I hadn't paid attention to this before. After the change, the lookup builds a selector string from the node's attributes. I'd suggest using target.parents(options.elem.selector).length===0 for the check. Your change forcibly concatenates all of the node's attributes into a selector, but options.elem already provides a selector, so there's no need to build one by hand. Screenshot as follows; this is enough:

// check whether the drag is out of bounds
if(target.parents(options.elem.selector).length===0 && !isTargetOuterMost){
    return;
}

Sorry, I forgot to paste the solution yesterday. @muyunzhongtian Ah, thanks. I figured there should be an API for this, but I searched for a while and couldn't find it, so I concatenated the selector myself... Let me offer a few suggestions; please don't take offense. Right now dragging a node can only change parent-child relationships; would you consider adding the ability to reorder sibling nodes later? Also, there is currently only checkbox support; would you consider adding radio support? I used zTree in a previous project and it has all of these features; now that we've switched to the layui framework, eleTree would be perfect if it had them too. @muyunzhongtian Oh, you flatter me; being called an expert makes me a little nervous... Reordering siblings can already be achieved to some extent by dragging, because each drag moves the node to the end of its siblings: for example, if node 1 has four children A, B, C and D and you drag A onto node 1, the child order becomes B, C, D, A. As for single select (radio), multi-select still has some unresolved issues; for example, manually selected nodes get cleared after certain methods are executed. Also, a lot of features have been added since the last big rewrite, and with the code growing I'm starting to feel I can barely keep it under control...
gharchive/issue
2020-01-08T08:33:10
2025-04-01T04:34:31.677755
{ "authors": [ "hsiangleev", "muyunzhongtian" ], "repo": "hsiangleev/layuiExtend", "url": "https://github.com/hsiangleev/layuiExtend/issues/92", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
728640950
Any plan to introduce linting tools to the project? We could have something like auto-formatting code and removing whitespace... or is there any VS Code plugin you would recommend? Normally, TS has tslint.json for linting configuration, and it can be executed every time we commit source code. Currently, I haven't configured it yet. :( Let me work on this, as tslint has been deprecated. I would suggest that we do not add the lint because create-react-app recommends using prettier instead of eslint (https://create-react-app.dev/docs/setting-up-your-editor/#formatting-code-automatically). I have implemented husky and prettier in this PR (#18), please check it.
gharchive/issue
2020-10-24T02:53:50
2025-04-01T04:34:31.701916
{ "authors": [ "heyfirst", "hspotlight", "thanakritju" ], "repo": "hspotlight/metro-fare", "url": "https://github.com/hspotlight/metro-fare/issues/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
297990470
Fixed build issues

roles/kubernetes-apps/registry/tasks/main.yml
  33:1  error  trailing spaces  (trailing-spaces)
  41:8  error  trailing spaces  (trailing-spaces)
  55:8  error  trailing spaces  (trailing-spaces)

@k1-hedayati thanks for reporting. I also discovered something strange when rebasing all my patch branches onto my master; good catch, and it's already merged into a single commit ;-) @k1-hedayati please also give a hand with https://github.com/kubernetes-incubator/kubespray/pull/2332 so it can be contributed back to upstream~~~ You're welcome, I've already given a hand in there :)
gharchive/pull-request
2018-02-17T08:50:27
2025-04-01T04:34:31.719293
{ "authors": [ "hswong3i", "k1-hedayati" ], "repo": "hswong3i/kubespray", "url": "https://github.com/hswong3i/kubespray/pull/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
303139289
Plugin is saying that I'm tracking ignored files but it's wrong Prerequisites [X] Plugin is in the latest version [X] Issue was not reported yet [ ] Stack trace (if provided) contains mobi.hsz.idea.gitignore package name Description After I open a project the plugin presents a popup in the lower right corner that tells me "Ignored files found that are tracked". I click the "Show details" button to see the files, and I notice that it's listing files that, while they are listed in the .gitignore file, are excluded from being ignored, so they should not be counted as files that are being ignored in .gitignore. Stack trace if available. Steps to Reproduce Just logging in produces the popup. I've provided a screenshot so you can see the problem. Expected behavior: What you expect to happen To not display this popup message Actual behavior: What actually happens Displays this popup message Reproduces how often: What percentage of the time does it reproduce? 100% Versions Plugin: Version: 2.4.0 Can be found in Settings > Plugins > .ignore IDE: 2017.3.4 Can be found in Help > About. Click on the copy icon on the left. OS: Windows 10 Pro Information about operating system - type, version. Additional Information Any additional information, configuration or data that might be necessary to reproduce the issue. I'm having the same issue on macOS. Plugin: 2.6.1 IDE: 2018.1.2 OS: macOS 10.12.6 Maybe related: #513 I tried deleting .idea/ and launching IntelliJ again (as in #478), but that did not help. I think the line you have in your gitignore file "experiments/bin/gen/" ignores the whole folder, so the line after it "!experiments/bin/gen/__init__.py" doesn't do anything, and that's why it's still being ignored. I would suggest changing "experiments/bin/gen/" to "experiments/bin/gen/*" and that might fix the problem. Same issue. Easy to repro:

mkdir out
touch out/ignored
touch out/.keep

create a .gitignore with the following content:

out/
!.keep

Git (correctly) shows that out/ignored is ignored and out/.keep is not ignored. The plugin appears to not process the "exclusion" operation ("!"). There's been a change in how Git interprets .gitignore files. gitignore Pattern Format An optional prefix "!" which negates the pattern; any matching file excluded by a previous pattern will become included again. It is not possible to re-include a file if a parent directory of that file is excluded. Git doesn't list excluded directories for performance reasons, so any patterns on contained files have no effect, no matter where they are defined. Put a backslash ("\") in front of the first "!" for patterns that begin with a literal "!", for example, "\!important!.txt". So the following no longer works:

out/        # doesn't work
!.keep      # also doesn't work
!out/.keep

Running git status won't show out/ under "Untracked files:". If you try git add out/.keep you'll get this message: The following paths are ignored by one of your .gitignore files: log/.keep Use -f if you really want to add them. To correct this you have to stop excluding the directory, and only exclude the contents.

out/*       # now this works
!.keep      # and this also works
!out/.keep

Running git status will now show the out/ directory under "Untracked files:". Running git add out/.keep will succeed. This feature will be disabled with v3.1.0
gharchive/issue
2018-03-07T15:19:21
2025-04-01T04:34:31.732129
{ "authors": [ "chaimleib", "hsz", "jpickwell", "kevincam3", "mlsquires" ], "repo": "hsz/idea-gitignore", "url": "https://github.com/hsz/idea-gitignore/issues/523", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2751185813
🛑 Manhuagui is down In db99c26, Manhuagui (https://www.manhuagui.com) was down: HTTP code: 0 Response time: 0 ms Resolved: Manhuagui is back up in 1294b45 after 6 minutes.
gharchive/issue
2024-12-19T18:48:08
2025-04-01T04:34:31.856724
{ "authors": [ "http403" ], "repo": "http403/uptime_monitor", "url": "https://github.com/http403/uptime_monitor/issues/410", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2591086872
No Decoding of Http2PingFrame / Http2GoAwayFrame We are seeing logs such as Discarded inbound message DefaultHttp2GoAwayFrame(errorCode=0, content=UnpooledSlicedByteBuf(ridx: 8, widx: 15, cap: 15/15, unwrapped: PooledUnsafeDirectByteBuf(ridx: 24, widx: 41, cap: 63)), extraStreamIds=0, lastStreamId=771) that reached at the tail of the pipeline. Please check your pipeline configuration. Discarded inbound message DefaultHttp2SettingsFrame[...] Discarded inbound message DefaultHttp2PingFrame[...] Using the client like this NettyClientBuilder[F].withHttp2.resource @sam0jones0 try this branch and see if this works better for you: https://github.com/http4s/http4s-netty/pull/721
gharchive/issue
2024-10-16T08:32:32
2025-04-01T04:34:31.858329
{ "authors": [ "hamnis", "sam0jones0" ], "repo": "http4s/http4s-netty", "url": "https://github.com/http4s/http4s-netty/issues/714", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
945070610
Fix two small issues 1. Fix the problem of a global variable overriding a local variable. (For example, a global variable A is defined, and a test case also defines variable A in config.variables; within that case the local variable should be used. The reason for proposing this change is that variables output by extract are global and can easily collide with the custom variables of other test cases.) 2. Fix the problem that assertions cannot take values from the end of a list (i.e. -1 is not supported, such as list.-1). Nobody has replied to me. Oh, so it has already been fixed... I've run into both of these. A few questions for you: 1. How can an extracted value be made available to all cases? Is reading and writing an external file (a global variable) the only way? Quite a few scenarios need this. 2. When integrating with CI, can values be passed into debugtalk.py? The .env file cannot cover every scenario when switching environments. 3. I get the error: 'latin-1' codec can't encode characters in position 12-14: Body. The captured request uses username&password, but when I pass parameters in the data request style I get this error, and I don't know how to change it in the framework. Hope someone sees this; I just happened to stumble across it. httprunner 2.2.2, Python 3.6. 1. Variables obtained with the "extract" keyword are global within the current batch run (3.x only). You can also write a teardown method to grab the value; making it global works just like defining a global variable in Python, and there are many ways to do it. Worst case, you can store it in a config file. 2. I don't quite understand the specific problem. I've used two ways to switch environments: a. have debugtalk read a config file at run time and fetch base_url and the login account info dynamically; b. directly generate test cases for multiple environments and run whichever environment you want (I write cases in Excel and then convert them into httprunner cases). 3. I haven't run into that one; maybe 3.x doesn't have the problem. It's probably a character-encoding issue; debug and track it down, I don't know exactly where it goes wrong either.
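One way to share an extracted value across test cases, as suggested in point 1 above, is a pair of helper functions in debugtalk.py that persist the value to a file. This is a hypothetical sketch with made-up helper names, not part of httprunner itself:

```python
# Hypothetical debugtalk.py helpers: persist a value extracted in one testcase
# (e.g. via a teardown hook) so that later testcases can read it back, for
# example ${save_shared(token, $token)} and ${load_shared(token)} in YAML cases.
import json
from pathlib import Path

_SHARED = Path("shared_vars.json")

def save_shared(name, value):
    data = json.loads(_SHARED.read_text()) if _SHARED.exists() else {}
    data[name] = value
    _SHARED.write_text(json.dumps(data))
    return value

def load_shared(name, default=None):
    data = json.loads(_SHARED.read_text()) if _SHARED.exists() else {}
    return data.get(name, default)
```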
gharchive/pull-request
2021-07-15T07:07:08
2025-04-01T04:34:31.863290
{ "authors": [ "wangtesting", "xiongjs" ], "repo": "httprunner/httprunner", "url": "https://github.com/httprunner/httprunner/pull/1099", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
300379280
Add delegate for Web Socket session close event This change allows client code to register a delegate to receive notification when a WebSocketSession will close. This should have no effect on sockets which aren't involved in a WebSocketSession. Because of the way that the WebSocketSession is retained by a closure, it seemed like deinit would be the most reliable place to "notice" that the session is going away. However, I can see arguments for calling the delegate method from another location instead, such as from inside or outside the main do/catch in the session closure. Let me know if you'd like me to move things around, or if you'd prefer some other solution for this notification. Thanks! I found that my old implementation was not reliably catching cases where the WebSocket was closed unexpectedly, so I've reimplemented this in a different way. It's working reliably for me now. @cwillisf I need to handle both connected and disconnected events for the WebSocket per client, so I added two closures to the WebSocket method... which is a different approach from yours. #319
gharchive/pull-request
2018-02-26T20:11:59
2025-04-01T04:34:31.866251
{ "authors": [ "cwillisf", "marcc-orange" ], "repo": "httpswift/swifter", "url": "https://github.com/httpswift/swifter/pull/295", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2197163439
About generating the quantized weight files Hello, I read the Python code of the original project and made the following calls:

img_path='gao_mask.jpg'
img = cv2.imread(img_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # convert BGR to RGB
from torchvision import transforms
# preprocess the image
transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((640, 640)),  # assuming the model input size is 640x640
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # normalize
])
img_tensor = transform(img).unsqueeze(0)  # add a batch dimension
float_model, dataloader_iter = load_model_data('face_mask.yaml', 'yolov3tiny_facemask.pt', 416, False)
generate_quant_model(float_model, dataloader_iter, 'yolov3tiny_facemask_quant.pth')
quant_model = load_quant_model('yolov3tiny_facemask.pt', 'yolov3tiny_facemask_quant.pth')
quant_model_evaluate_show(img_tensor, quant_model)
yolov3tiny_infer_para_gen(quant_model, 32, 'F')

The directory layout is as follows:

python_prj
├── images
├── labels (the code doesn't seem to use this)
└── yolov3tiny_quant.py

In the end quant_model_evaluate_show reports an error; this function is presumably meant to check whether the split weights can still detect correctly.

Input shape must be (N, C, H, W)!
  File "D:\yolo_work\fpga_accelerator_yolov3tiny\python_prj\models\common.py", line 45, in fuseforward
    return self.act(self.conv(x))
  File "D:\yolo_work\fpga_accelerator_yolov3tiny\python_prj\yolov3tiny_quant.py", line 80, in quant_model_detect
    x=quant_model[1].modelii
  File "D:\yolo_work\fpga_accelerator_yolov3tiny\python_prj\yolov3tiny_quant.py", line 97, in quant_model_evaluate_show
    res=quant_model_detect(x,quant_model)
  File "D:\yolo_work\fpga_accelerator_yolov3tiny\python_prj\yolov3tiny_quant.py", line 320, in
    quant_model_evaluate_show(img_tensor,quant_model)
ValueError: Input shape must be (N, C, H, W)!

Have you run into this problem? I haven't called quant_model_evaluate_show, so I haven't hit this problem so far. Then may I ask, are you using quantized weights generated by this Python project? When I call load_model_data(), I get this: val: Scanning 'images.cache' images and labels... 0 found, 853 missing, 0 empty, 0 corrupted: 100%| Does that affect loading the weights? It has no effect; the first load always shows this. Two more questions for you: 1. What is the difference between the generate_quant_model and load_quant_model functions? 2. I trained my own weight file; to get the split files to put on the SD card, is it enough to do the following?

float_model, dataloader_iter = load_model_data('face_mask.yaml', 'yolov3tiny_facemask.pt', 416, False)
quant_model = load_quant_model('yolov3tiny_facemask.pt', 'yolov3tiny_facemask_quant.pth')
yolov3tiny_infer_para_gen(quant_model, 32, 'F')

Question 1: one generates the quantized model, the other loads it. The quantized model's parameters still need to be extracted to fit the accelerator. Question 2: follow those steps to get the split bin files, copy them to the SD card, and set the correct file names to read on the Zynq software side.
gharchive/issue
2024-03-20T10:18:12
2025-04-01T04:34:31.888802
{ "authors": [ "foooal", "huanggeli" ], "repo": "huanggeli/yolov3tiny-ZYNQ7000", "url": "https://github.com/huanggeli/yolov3tiny-ZYNQ7000/issues/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1662921283
RL example: Platoon Added baseline example, consisting of training, inference, and zoo agent registration, for the platooning task in Driving SMARTS 2023.3 benchmark. Documented the challenge objective, desired inference code structure, and use of baseline example, for Driving SMARTS 2023.3 benchmark, i.e., platooning task. Really only suggestions rather than requirements.
gharchive/pull-request
2023-04-11T17:15:31
2025-04-01T04:34:31.898850
{ "authors": [ "Adaickalavan", "Gamenot" ], "repo": "huawei-noah/SMARTS", "url": "https://github.com/huawei-noah/SMARTS/pull/1955", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1379092837
codecov actions add sc and zk env Which issue(s) this PR fixes: Fixes #770 Use case description: None Does this PR introduce a user-facing change? No Additional documentation e.g., usage docs, etc.: No /lgtm
gharchive/pull-request
2022-09-20T09:21:00
2025-04-01T04:34:31.904995
{ "authors": [ "robotLJW", "xuezechao1" ], "repo": "huaweicloud/Sermant", "url": "https://github.com/huaweicloud/Sermant/pull/771", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
243534447
Some events returned to Splunk have a semicolon and random string appended to the file name From @basepi on March 7, 2017 22:42 From @mew1033 on March 2, 2017 23:12 There are some events getting returned to Splunk that have a semicolon and a random string appended to the file name. The object_path parameter comes from the pathname field in pulsar. Copied from original issue: hubblestack/pulsar#62 Copied from original issue: hubblestack/hubble-salt#23 So have we decided these are not real filenames? From @mew1033 on March 2, 2017 23:43 I don't know. I would suspect not, but it's certainly possible. Since @mew1033 hasn't mentioned this issue in years, I'm going to close it.
gharchive/issue
2017-07-17T21:32:40
2025-04-01T04:34:31.922567
{ "authors": [ "basepi" ], "repo": "hubblestack/hubble", "url": "https://github.com/hubblestack/hubble/issues/71", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
660821088
Adding FDG Command line parser module The module can be used to fetch the value of keys provided on command lines. Example: "dockerd --config-file=/etc/docker/daemon.json --log-level=debug" In the above command line, the module can be used to fetch the value of config-file and log-level. Perhaps we should cover this in one of our meetings, but it seems that this is a somewhat complex solution to the problem. Since for a given check or a query we already know which argument we're after, isn't it a bit simpler to rex out the right thing in the query itself?
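For illustration, a minimal Python sketch of the kind of lookup described above; the helper name and behaviour are my own, not the actual FDG module's API:

```python
# Hypothetical helper: pull the value of a given --key out of a command line,
# handling both "--key=value" and "--key value" forms.
import shlex

def get_cmdline_value(cmdline: str, key: str):
    tokens = shlex.split(cmdline)
    for i, tok in enumerate(tokens):
        if tok.startswith(f"--{key}="):
            return tok.split("=", 1)[1]
        if tok == f"--{key}" and i + 1 < len(tokens):
            return tokens[i + 1]
    return None

cmd = "dockerd --config-file=/etc/docker/daemon.json --log-level=debug"
print(get_cmdline_value(cmd, "config-file"))  # /etc/docker/daemon.json
print(get_cmdline_value(cmd, "log-level"))    # debug
```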
gharchive/pull-request
2020-07-19T12:55:09
2025-04-01T04:34:31.924556
{ "authors": [ "MoodyMudit", "fossam" ], "repo": "hubblestack/hubble", "url": "https://github.com/hubblestack/hubble/pull/891", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
271595649
Deploy Hubot 3.x to NPM? Hello, I was excited to see that version 3 had come out but I didn't see it available in NPM. If I go to https://www.npmjs.com/package/hubot it reports the latest version is still 2.19.0. Any thoughts on getting the newer versions deployed out there? Thanks! idg why this is so hard for some packages but in the meantime you can just point your package.json at the tag: { // ... "devDependencies": { // ... "hubot": "hubotio/hubot#v3.0.1" } } @jwbennet @atomanyih it already is on npm. It's under the next dist-tag, so you can get it with either npm install --save hubot@3 or npm install --save hubot@next Since there is a 3.0.1 stable release, is there a reason not to deploy that npm as the latest hubot to npmjs? The 3.x series has been promoted to latest 🎉
gharchive/issue
2017-11-06T19:41:38
2025-04-01T04:34:31.935849
{ "authors": [ "MattSFT", "atomanyih", "jwbennet", "strugee", "technicalpickles" ], "repo": "hubotio/hubot", "url": "https://github.com/hubotio/hubot/issues/1406", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2632803635
🛑 VGMdb API is down In 7f30c37, VGMdb API (https://vgmdb.info) was down: HTTP code: 502 Response time: 350 ms Resolved: VGMdb API is back up in 9ddb080 after 18 minutes.
gharchive/issue
2024-11-04T13:20:28
2025-04-01T04:34:31.947247
{ "authors": [ "hufman" ], "repo": "hufman/upptime", "url": "https://github.com/hufman/upptime/issues/1335", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2679864155
🛑 VGMdb API is down In aff5a31, VGMdb API (https://vgmdb.info) was down: HTTP code: 502 Response time: 349 ms Resolved: VGMdb API is back up in b121e37 after 11 minutes.
gharchive/issue
2024-11-21T15:15:01
2025-04-01T04:34:31.949573
{ "authors": [ "hufman" ], "repo": "hufman/upptime", "url": "https://github.com/hufman/upptime/issues/1617", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1344983795
Accelerator starting multiple wandb runs Hi there! I may be doing something wrong, but when I call accelerator.init_trackers in my main function, I get N different runs for each process. Is this the expected behavior? @ezhang7423 a pr was just merged to adjust this. But if you peek at the examples currently you'll see it's done under an if accelerator.is_main_process first. (Next release will do this automatically) Hi @ezhang7423, you should do a source install of accelerate and that issue should be fixed. pip install git+https://github.com/huggingface/accelerate This works for me now!
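The workaround described in the thread boils down to letting only the main process initialize the trackers. A minimal sketch (project name and config are placeholders; newer accelerate releases handle this inside init_trackers):

```python
# Only the main process sets up the tracker, so a multi-process launch
# produces a single wandb run instead of one per process.
from accelerate import Accelerator

accelerator = Accelerator(log_with="wandb")

if accelerator.is_main_process:
    accelerator.init_trackers("my-project", config={"lr": 3e-4})

# ... training loop, accelerator.log({...}) calls ...
accelerator.end_training()
```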
gharchive/issue
2022-08-19T23:48:03
2025-04-01T04:34:31.952456
{ "authors": [ "Gladiator07", "ezhang7423", "muellerzr" ], "repo": "huggingface/accelerate", "url": "https://github.com/huggingface/accelerate/issues/644", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1845294951
I keep getting "requests.exceptions.HTTPError: 400 Client Error: Bad Request" I am trying to fine-tune llama-2-7b-hf with my own dataset. I have tried both on huggingface and by running autotrain-advanced locally and I get this error: requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api.autotrain.huggingface.co/projects/81364/start_training. I cannot fine-tune it with autotrain llm since I get a GPU error. I have looked online. Some people have had this issue since April and still could not find a fix. Anyone has an idea? Thanks. After adding a payment method, this error suddenly disappeared which is kind of bad considering the "free" option. Now, when using autotrain directly on huggingface or locally, my runs just stop without any error in the logs. This tool has a lot of potential for someone with limited hardware resources who wants to experiment with LLMs at a low cost, but the amount of issues I have encountered makes this completely unusable. I still get GPU error with autotrain llm, I'm on a Macbook pro M1, 16 GB RAM.
gharchive/issue
2023-08-10T14:22:21
2025-04-01T04:34:31.954727
{ "authors": [ "jaslatendresse" ], "repo": "huggingface/autotrain-advanced", "url": "https://github.com/huggingface/autotrain-advanced/issues/197", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1850304398
feat: 🎸 be more specific in OpenAPI type the failed configs format is a CustomError The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
gharchive/pull-request
2023-08-14T18:16:33
2025-04-01T04:34:31.955873
{ "authors": [ "HuggingFaceDocBuilderDev", "severo" ], "repo": "huggingface/datasets-server", "url": "https://github.com/huggingface/datasets-server/pull/1682", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1946010912
Cannot load dataset with 2.14.5: FileNotFound error Describe the bug I'm trying to load [piuba-bigdata/articles_and_comments] and I'm stumbling with this error on 2.14.5. However, this works on 2.10.0. Steps to reproduce the bug Colab link Downloading readme: 100% 1.19k/1.19k [00:00<00:00, 30.9kB/s] --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) [<ipython-input-2-807c3583d297>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 load_dataset("piuba-bigdata/articles_and_comments", split="train") 2 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2127 2128 # Create a dataset builder -> 2129 builder_instance = load_dataset_builder( 2130 path=path, 2131 name=name, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1813 download_config = download_config.copy() if download_config else DownloadConfig() 1814 download_config.storage_options.update(storage_options) -> 1815 dataset_module = dataset_module_factory( 1816 path, 1817 revision=revision, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1506 raise e1 from None 1507 if isinstance(e1, FileNotFoundError): -> 1508 raise FileNotFoundError( 1509 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1510 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" FileNotFoundError: Couldn't find a dataset script at /content/piuba-bigdata/articles_and_comments/articles_and_comments.py or any data file in the same directory. Couldn't find 'piuba-bigdata/articles_and_comments' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in piuba-bigdata/articles_and_comments. Expected behavior It should load normally. Environment info - `datasets` version: 2.14.5 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.18.0 - PyArrow version: 9.0.0 - Pandas version: 1.5.3 Thanks for reporting, @finiteautomata. We are investigating it. There is a bug in datasets. You can see our proposed fix: #6309
gharchive/issue
2023-10-16T20:11:27
2025-04-01T04:34:31.959875
{ "authors": [ "albertvillanova", "finiteautomata" ], "repo": "huggingface/datasets", "url": "https://github.com/huggingface/datasets/issues/6305", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2057377630
ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) Describe the bug While importing from packages getting the error Code: import os import torch from datasets import load_dataset, Dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, HfArgumentParser, TrainingArguments, pipeline, logging ) from peft import LoraConfig, PeftModel from trl import SFTTrainer from huggingface_hub import login import pandas as pd Error: `--------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[5], line 14 4 from transformers import ( 5 AutoModelForCausalLM, 6 AutoTokenizer, (...) 11 logging 12 ) 13 from peft import LoraConfig, PeftModel ---> 14 from trl import SFTTrainer 15 from huggingface_hub import login 16 import pandas as pd File /opt/conda/lib/python3.10/site-packages/trl/init.py:21 8 from .import_utils import ( 9 is_diffusers_available, 10 is_npu_available, (...) 13 is_xpu_available, 14 ) 15 from .models import ( 16 AutoModelForCausalLMWithValueHead, 17 AutoModelForSeq2SeqLMWithValueHead, 18 PreTrainedModelWrapper, 19 create_reference_model, 20 ) ---> 21 from .trainer import ( 22 DataCollatorForCompletionOnlyLM, 23 DPOTrainer, 24 IterativeSFTTrainer, 25 PPOConfig, 26 PPOTrainer, 27 RewardConfig, 28 RewardTrainer, 29 SFTTrainer, 30 ) 33 if is_diffusers_available(): 34 from .models import ( 35 DDPOPipelineOutput, 36 DDPOSchedulerOutput, 37 DDPOStableDiffusionPipeline, 38 DefaultDDPOStableDiffusionPipeline, 39 ) File /opt/conda/lib/python3.10/site-packages/trl/trainer/init.py:44 42 from .ppo_trainer import PPOTrainer 43 from .reward_trainer import RewardTrainer, compute_accuracy ---> 44 from .sft_trainer import SFTTrainer 45 from .training_configs import RewardConfig File /opt/conda/lib/python3.10/site-packages/trl/trainer/sft_trainer.py:23 21 import torch.nn as nn 22 from datasets import Dataset ---> 23 from datasets.arrow_writer import SchemaInferenceError 24 from datasets.builder import DatasetGenerationError 25 from transformers import ( 26 AutoModelForCausalLM, 27 AutoTokenizer, (...) 33 TrainingArguments, 34 ) ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py ` transformers version: 4.36.2 python version: 3.10.12 datasets version: 2.16.0 Steps to reproduce the bug Install packages !pip install -U datasets trl accelerate peft bitsandbytes transformers trl huggingface_hub import packages import os import torch from datasets import load_dataset, Dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, HfArgumentParser, TrainingArguments, pipeline, logging ) from peft import LoraConfig, PeftModel from trl import SFTTrainer from huggingface_hub import login import pandas as pd Expected behavior No error while importing Environment info datasets version: 2.16.0 Platform: Linux-5.15.133+-x86_64-with-glibc2.35 Python version: 3.10.12 huggingface_hub version: 0.20.1 PyArrow version: 11.0.0 Pandas version: 2.1.4 fsspec version: 2023.10.0 Hi ! Are you sure you have datasets 2.16 ? I just checked and on 2.16 I can run from datasets.arrow_writer import SchemaInferenceError without error I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on kaggle? 
I have the same issue now and didn't have this problem around 2 weeks ago. Yes, I am sure !pip show datasets Name: datasets Version: 2.16.1 Summary: HuggingFace community-driven open-source library of datasets Home-page: https://github.com/huggingface/datasets Author: HuggingFace Inc. Author-email: thomas@huggingface.co License: Apache 2.0 Location: /opt/conda/lib/python3.10/site-packages Requires: aiohttp, dill, filelock, fsspec, huggingface-hub, multiprocess, numpy, packaging, pandas, pyarrow, pyarrow-hotfix, pyyaml, requests, tqdm, xxhash Required-by: trl Don't know about other people. But I am having this issue whose solution I can't find anywhere. And this issue still persists. Same here I was having the same issue but the datasets version was 2.6.1, after I updated it to latest(2.16), error is gone while importing. I also have datasets version 2.16, but the error is still there. Can you try re-installing datasets ? I tried re-installing. Still getting the same error. In kaggle I used: %pip install -U datasets and then restarted runtime and then everything works fine. This isn't working for me. Yes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages? https://stackoverflow.com/questions/57831187/need-to-restart-runtime-before-import-an-installed-package-in-colab For some packages it is required. Thank you for your assistance. I dedicated the past 2-3 weeks to resolving this issue. Interestingly, it runs flawlessly in Colab without requiring a runtime restart. However, the problem persisted exclusively in Kaggle. I appreciate your help once again. Thank you. Closing this issue as it is not related to the datasets library; rather, it's linked to platform-related issues.
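A small illustration (my own, not from the thread) of why the runtime restart matters: a package that is already imported stays cached in sys.modules, so upgrading it on disk does not change what the running kernel sees until you restart it (or explicitly reload the module):

```python
# Upgrading a package in a live kernel does not affect modules that are
# already imported, because Python caches them in sys.modules.
import importlib
import datasets  # the old version is already loaded in this kernel

print(datasets.__version__)   # still reports the pre-upgrade version

# After running `%pip install -U datasets` in the same kernel:
importlib.reload(datasets)    # may help in simple cases...
# ...but a full runtime restart is the reliable way to pick up the new install,
# since submodules and compiled extensions can stay stale after reload().
```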
gharchive/issue
2023-12-27T13:31:16
2025-04-01T04:34:31.985125
{ "authors": [ "Sonali-Behera-TRT", "YZx0pa", "iwo9", "lhoestq", "lucken99" ], "repo": "huggingface/datasets", "url": "https://github.com/huggingface/datasets/issues/6538", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1898391913
Fix latex example The latex example string was incorrectly changed here: https://github.com/huggingface/hub-docs/pull/905/files cc @mishig25 @julien-c The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
gharchive/pull-request
2023-09-15T12:59:57
2025-04-01T04:34:32.073000
{ "authors": [ "HuggingFaceDocBuilderDev", "patrickvonplaten" ], "repo": "huggingface/hub-docs", "url": "https://github.com/huggingface/hub-docs/pull/955", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2668916671
Tasks: Add image-text-to-text pipeline and inference API to task page ..and remove the long inference ah need to lint @pcuenca I changed it since it looked counterintuitive as an example, merging. thanks for the review https://youtu.be/tKCQcM5vfac?feature=shared
gharchive/pull-request
2024-11-18T15:23:14
2025-04-01T04:34:32.077089
{ "authors": [ "code30x58", "merveenoyan" ], "repo": "huggingface/huggingface.js", "url": "https://github.com/huggingface/huggingface.js/pull/1039", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2327856648
ModelHubMixn: Fix attributes lost in inheritance Fix https://github.com/huggingface/huggingface_hub/issues/2300. Instead of recreating the MixinInfo object, we should complete its information as mentioned by @qubvel. Thanks for reporting this! The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. @Wauplin thanks for a quick response and fix! EDIT: Ah, to be consistent with ModelCardData Yes, and moreover to be consistent with the existing repos on the Hub (and typically to filter by language). Thanks for the review :)
gharchive/pull-request
2024-05-31T13:35:41
2025-04-01T04:34:32.079824
{ "authors": [ "HuggingFaceDocBuilderDev", "Wauplin", "qubvel" ], "repo": "huggingface/huggingface_hub", "url": "https://github.com/huggingface/huggingface_hub/pull/2305", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2256571233
Remove TGI models Needs #158 to be merged first, then to be rebased on main. TGI models might finally be needed, keeping this on hold Works for me. FYI, I'll probably try to see if we can have a single launcher for inference endpoints and TGI (which, following discussions with Philip, seems like it should be true) (and this code is obsolete anyway) Closing, replacing by #184
gharchive/pull-request
2024-04-22T13:39:55
2025-04-01T04:34:32.081212
{ "authors": [ "NathanHB", "clefourrier" ], "repo": "huggingface/lighteval", "url": "https://github.com/huggingface/lighteval/pull/167", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2240979250
Fix checking output limits for #114 Fixes issue from #114 @zaycev I'll merge as this makes the tests in https://github.com/huggingface/optimum-nvidia/pull/117 pass. note: we should use public models and avoid requiring OPTIMUM_NVIDIA_HUB_READ_TOKEN secret that is not passed to PRs from forks on pull_request event.
gharchive/pull-request
2024-04-12T21:42:26
2025-04-01T04:34:32.082684
{ "authors": [ "fxmarty", "zaycev" ], "repo": "huggingface/optimum-nvidia", "url": "https://github.com/huggingface/optimum-nvidia/pull/115", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1212170251
[Patch] Add loss for ORT inference What does this PR do? Wrap OnnxConfig with wrap_onnx_config_for_loss to obtain the loss while using ORTTrainer under the mode inference_with_ort=True. Enable deepspeed for ONNX Runtime training. (Tested with ZeRO stage 2, full availability in progress.) Clean up unused dependencies in ORTTrainer. Update CI of onnxruntime training. Update associated tests. The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. This line is why the doc build is currently failing: https://github.com/huggingface/optimum/pull/152/files#diff-3e928ce0b52f617b86cd0df9399c6bbb5804d6269c6a04b486613eb929449256R25 Until now we never actually imported OnnxConfigWithLoss etc. and doing so triggers an import error because transformers is pinned to <4.17 in the doc build (to have both intel and onnxruntime packages in the same env): from optimum.onnxruntime.trainer import ORTTrainer ImportError: cannot import name 'TensorType' from 'transformers.utils' (/home/lewis/miniconda3/envs/optimum/lib/python3.8/site-packages/transformers/utils/__init__.py) The solution is to refactor the doc build so that we build intel and onnxruntime in separate envs. Doing so will also allow us to build the Graphcore & other hardware partner docs as well. I don't have bandwidth for this right now, but happy to review a PR if someone else has time to tackle this! FYI @JingyaHuang if you want to test that the docs build locally you can run:

pip install ".[dev,intel.onnxruntime]"
pip install git+https://github.com/huggingface/doc-builder.git
doc-builder build optimum docs/source --build_dir test-docs --version v1.0.0 --clean

You'll need a Linux machine for this since one cannot install intel on macOS. This PR looks great!
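For context, a rough sketch of the wrapping step described above. The import paths below are assumptions on my part (they have moved between library versions), so treat this as illustrative rather than the exact API:

```python
# Illustrative sketch only: the module path of wrap_onnx_config_for_loss and the
# availability of BertOnnxConfig are assumed and may differ between versions.
from transformers import AutoConfig, BertOnnxConfig

from optimum.onnx import wrap_onnx_config_for_loss  # assumed location

config = AutoConfig.from_pretrained("bert-base-uncased")
onnx_config = BertOnnxConfig(config, task="sequence-classification")

# Wrap the config so the exported graph also outputs the loss, which is what
# ORTTrainer needs when evaluating with inference_with_ort=True.
onnx_config_with_loss = wrap_onnx_config_for_loss(onnx_config)
```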
gharchive/pull-request
2022-04-22T10:30:04
2025-04-01T04:34:32.088139
{ "authors": [ "HuggingFaceDocBuilderDev", "JingyaHuang", "echarlaix", "lewtun" ], "repo": "huggingface/optimum", "url": "https://github.com/huggingface/optimum/pull/152", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2315504401
Fix ORT CI What does this PR do? Fixes ORT CI failures due to 1.18 Before submitting [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). [ ] Did you make sure to update the documentation with your changes? [ ] Did you write any new necessary tests? Who can review? One test still fails during generation with the old model optimum/gpt2 https://github.com/huggingface/optimum/blob/main/tests/onnxruntime/test_modeling.py#L2277 The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. ORT tests are passing locally; the issue with broadcasting was solved, but for some reason output ids match locally and not on the runner (two tests with the old ONNX model). Do you know where this could come from?
gharchive/pull-request
2024-05-24T14:07:16
2025-04-01T04:34:32.092002
{ "authors": [ "HuggingFaceDocBuilderDev", "IlyasMoutawwakil", "echarlaix" ], "repo": "huggingface/optimum", "url": "https://github.com/huggingface/optimum/pull/1875", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1921037226
QaLora Feature request This came out: https://github.com/yuhuixu1993/qa-lora/blob/main/qalora.py I would like to use it with transformers. Btw, I also got MeZO integrated with transformers. Maybe we can swap notes to get that integrated. Motivation more efficient Your contribution I'll test, and contribute code if necessary. Yes, this is something we surely want to integrate into PEFT. The linked repo already uses PEFT, so hopefully this should not be too difficult to add. Let us know if you want to work on adding the feature. Also, feel free to share your notes on MeZO, though I assume this is for the transformers Trainer, so please share them with the transformers folks. mezo: https://github.com/thistleknot/TrainLLMv3/blob/main/functions.py#L1284 transformers suggested I submit mezo here https://github.com/huggingface/transformers/issues/24264 I'd definitely be interested in this. It seems just as promising as LoftQ if not better.
gharchive/issue
2023-10-01T23:48:27
2025-04-01T04:34:32.095810
{ "authors": [ "BenjaminBossan", "datavistics", "thistleknot" ], "repo": "huggingface/peft", "url": "https://github.com/huggingface/peft/issues/986", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
469605157
GPT sentence log loss: average or summed loss?

config = GPT2Config.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel(config)
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=input_ids)
loss, logits = outputs[:2]

For the loss value computed for the sentence, is it an average log loss or summed log loss? I had a look at CrossEntropyLoss in torch.nn and it seems to be an average loss, but thought I'd double check. If there are multiple sentences in the input instead (so batch size > 1), what does it return? The average log loss over all tokens in the two sentences? Yes, it's the average. Thanks for the prompt reply. Much appreciated.
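A quick check (my own illustration, loading the pretrained weights rather than a randomly initialized model) that the returned loss is the per-token average: recompute the mean cross-entropy from the logits with the usual one-token shift and compare.

```python
# Verify that the loss returned by GPT2LMHeadModel is the mean cross-entropy
# over the predicted tokens (CrossEntropyLoss with reduction="mean"), not the sum.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)
outputs = model(input_ids, labels=input_ids)
loss, logits = outputs[:2]

# Labels are shifted by one position: token i is predicted from the tokens before it.
shift_logits = logits[:, :-1, :]
shift_labels = input_ids[:, 1:]
manual_mean = F.cross_entropy(
    shift_logits.reshape(-1, shift_logits.size(-1)), shift_labels.reshape(-1)
)
print(torch.allclose(loss, manual_mean, atol=1e-5))  # True
# The summed log loss would simply be loss * (number of predicted tokens).
```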
gharchive/issue
2019-07-18T07:09:05
2025-04-01T04:34:32.098402
{ "authors": [ "jhlau", "thomwolf" ], "repo": "huggingface/pytorch-transformers", "url": "https://github.com/huggingface/pytorch-transformers/issues/818", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2232736465
quantize() returns None with VGG19 Hi there, I was hoping to use quanto to make my Neural Style Transfer application use less VRAM, which depends on the VGG19 classification model. However, when I run the following code, quanto.quantize() seems to return None, as the last line prints "None":

import torchvision.models as models
import quanto

model = models.vgg19(pretrained=True)
model = quanto.quantize(model, weights=quanto.qint8, activations=quanto.qint8)
print(model)

I've tested and this happens on Mac, Windows, and WSL. I'm using torch 2.2.2, torchvision 0.17.2, and quanto 0.1.0. I could be missing something, thanks in advance! @jonahclarsen the model is quantized 'in-place' (i.e. the original model is directly modified): this is why quantize returns None. quantize() does not have a return value, and the model you passed in is the quantized model. Oh of course! Silly me. Thanks!
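In other words, keep your own reference to the model and do not reassign it from the return value. A minimal sketch of the intended usage, with the same settings as in the question:

```python
# quanto.quantize modifies the model in place and returns None, so keep using
# the original `model` object afterwards instead of reassigning it.
import torchvision.models as models
import quanto

model = models.vgg19(pretrained=True)
quanto.quantize(model, weights=quanto.qint8, activations=quanto.qint8)  # no assignment

print(model)          # VGG19 layers are now quanto quantized modules
quanto.freeze(model)  # optional: materialize the quantized weights
```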
gharchive/issue
2024-04-09T06:56:53
2025-04-01T04:34:32.102477
{ "authors": [ "IceWind233", "dacorvo", "jonahclarsen" ], "repo": "huggingface/quanto", "url": "https://github.com/huggingface/quanto/issues/157", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1735310237
Converting models with shared tensors: roberta-base conversion seemingly(?) done incorrectly @julien-c Converted with commit https://huggingface.co/roberta-base/commit/ff46155979338ff8063cdad90908b498ab91b181 However, the conversion script doesn't support models with shared tensors so I'm unsure how it was done. For that matter, I'm unsure about the best way to do it. The error message says to use load_model and save_model but that assumes one knows what model to instantiate. Should it use the architecture listed in config.json like infer_framework_load_model? The included script outputs the following. Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at models/roberta-base_sf and are newly initialized: ['lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Different infos when reloading the model: missing_keys : SF warnings contain {'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight'} which are not present in PT warnings from transformers.pipelines.base import infer_framework_load_model from transformers import AutoConfig from huggingface_hub import hf_hub_download from os import path import config def create_diff(pt_infos, sf_infos): errors = [] for key in ["missing_keys", "mismatched_keys", "unexpected_keys"]: pt_set = set(pt_infos[key]) sf_set = set(sf_infos[key]) pt_only = pt_set - sf_set sf_only = sf_set - pt_set if pt_only: errors.append(f"{key} : PT warnings contain {pt_only} which are not present in SF warnings") if sf_only: errors.append(f"{key} : SF warnings contain {sf_only} which are not present in PT warnings") return "\n".join(errors) repo_id = 'roberta-base' cache_dir = config.MODELS_CACHE_DIR pt_dir = path.join(config.MODELS_DIR, f"{repo_id}_pt") sf_dir = path.join(config.MODELS_DIR, f"{repo_id}_sf") pt_config_path = hf_hub_download(repo_id=repo_id, filename="config.json", local_dir=pt_dir, cache_dir=cache_dir) sf_config_path = hf_hub_download(repo_id=repo_id, filename="config.json", local_dir=sf_dir, cache_dir=cache_dir) pt_model_path = hf_hub_download(repo_id=repo_id, filename="pytorch_model.bin", local_dir=pt_dir, cache_dir=cache_dir) sf_model_path = hf_hub_download(repo_id=repo_id, filename="model.safetensors", local_dir=sf_dir, cache_dir=cache_dir) pt_config = AutoConfig.from_pretrained(pt_config_path) sf_config = AutoConfig.from_pretrained(sf_config_path) _, (pt_model, pt_infos) = infer_framework_load_model(pt_dir, pt_config, output_loading_info=True) _, (sf_model, sf_infos) = infer_framework_load_model(sf_dir, sf_config, output_loading_info=True) if pt_infos != sf_infos: error_string = create_diff(pt_infos, sf_infos) print(f"Different infos when reloading the model:\n{error_string}") Which transformers version are you using ? I cannot reproduce with your script on latest, as there are mismatched types deep in function (output_loading_info messes things up). Maybe it's an older version that indeed had bugs. doesn't support models with shared tensors so I'm unsure how it was done. 
Safetensors doesn't support shared tensors, but we still manage because tensor sharing is usually simple, therefore conversion scripts (from pt, from safetensors or anything else) are simply modified to be sharing aware, properly ignore ignorable warnings from torch (since shared tensors are properly loaded as soon as 1 of the shared is updated) and continue raising if the checkpoint was actually missing something. I modified it to show everything should work properly: import torch from transformers.pipelines.base import infer_framework_load_model from transformers import AutoConfig, AutoModelForMaskedLM from huggingface_hub import hf_hub_download from os import path def create_diff(pt_infos, sf_infos): errors = [] for key in ["missing_keys", "mismatched_keys", "unexpected_keys"]: pt_set = set(pt_infos[key]) sf_set = set(sf_infos[key]) pt_only = pt_set - sf_set sf_only = sf_set - pt_set if pt_only: errors.append(f"{key} : PT warnings contain {pt_only} which are not present in SF warnings") if sf_only: errors.append(f"{key} : SF warnings contain {sf_only} which are not present in PT warnings") return "\n".join(errors) repo_id = "roberta-base" MODELS_DIR = "tmp/" pt_dir = path.join(MODELS_DIR, f"{repo_id}_pt") sf_dir = path.join(MODELS_DIR, f"{repo_id}_sf") pt_config_path = hf_hub_download(repo_id=repo_id, filename="config.json", local_dir=pt_dir) sf_config_path = hf_hub_download(repo_id=repo_id, filename="config.json", local_dir=sf_dir) pt_model_path = hf_hub_download(repo_id=repo_id, filename="pytorch_model.bin", local_dir=pt_dir) sf_model_path = hf_hub_download(repo_id=repo_id, filename="model.safetensors", local_dir=sf_dir) pt_config = AutoConfig.from_pretrained(pt_config_path) sf_config = AutoConfig.from_pretrained(sf_config_path) pt_model = AutoModelForMaskedLM.from_pretrained(pt_dir, use_safetensors=False) sf_model = AutoModelForMaskedLM.from_pretrained(pt_dir, use_safetensors=True) sf_dict = sf_model.state_dict() for k, v in pt_model.state_dict().items(): print(torch.allclose(v, sf_dict[k])) Thank you for looking into this and my apologies for not responding sooner. After burning a day on this, I was able to reproduce the error in a more convincing way and uncovered a few bugs along the way. My motivation I should've been more explicit about what I was trying to accomplish. I was trying to modify the script convert.py to work with models that use shared tensors. I've added this capability to convert.py and can share it if you'd like. If I'm correct in my assertions, there maybe many models on the hub that aren't correctly converted to safetensors. To answer your question, I used transformers 4.29.2, which accounts for the issues you encountered with infer_framework_load_model when you ran my previous code (why it doesn't work is actually a bug- see Transformers 4.29.2 and 4.30.2 bugs). BTW I also noticed a bug in your testing code pt_model = AutoModelForMaskedLM.from_pretrained(pt_dir, use_safetensors=False) sf_model = AutoModelForMaskedLM.from_pretrained(pt_dir, use_safetensors=True) which explains why you didn't encounter the issue I was trying to raise. It turns out the bug (pt_dir instead of sf_dir) demonstrates yet another bug (see use_safetensors-bug.py). Current situation with convert.py If you run python convert.py --force roberta-base it spits out the error: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: .... A potential way to correctly save your model is to use save_model. 
More information at https://huggingface.co/docs/safetensors/torch_shared_tensors When I first saw this I took it as, "you are on your own figuring out how to convert models with shared tensors." However, I was confused since roberta-base has already been converted. However, as you'll see below it was improperly converted. This left me wondering how did @julien-c convert it? was it a one off or is there some script I'm unaware of for converting models with shared tensors? most importantly, did @julien-c's method include the checks that convert.py has, i.e., check_final_model? I still don't know the answers to 1. and 2. but pretty sure 3. didn't happen. Safetensors models with shared tensors You wrote Safetensors doesn't support shared tensors, but we still manage because tensor sharing is usually simple, therefore conversion scripts (from pt, from safetensors or anything else) are simply modified to be sharing aware, properly ignore ignorable warnings from torch (since shared tensors are properly loaded as soon as 1 of the shared is updated) and continue raising if the checkpoint was actually missing something. which seemed correct at the time but didn't seem quite right the more I thought about it. I believe my test script (roberta-base-conversion-bug_take2.py) shows that shared tensors work with safetensors and furthermore don't produce initialization warnings. Transformers 4.29.2 and 4.30.2 bugs I see that the latest version 4.30.2 introduced a breaking change in infer_framework_load_model when passing the kwargoutput_loading_info=True to from_pretrained. I've included the relevant pieces of code below: model = model_class.from_pretrained(model, **kwargs) ... - framework = "tf" if "keras.engine.training.Model" in str(inspect.getmro(model.__class__)) else "pt" + framework = infer_framework(model.__class__) With kwargs[output_loading_info]=True the model variable is actually a tuple of the instantiated model and a dict of missing_keys, unexpected_keys, mismatched_keys, and error_msgs. The 4.29.2 code only worked because of the else "pt" statement. To that end, the code shouldn't have ever work for tensorflow b/c model.__class__ returns <class 'tuple'> and str(inspect.getmro(model.__class__) returns (<class 'tuple'>, <class 'object'>) The code should be changed to account for a tuple when output_loading_info=True is passed. I suppose the dict it creates should also be passed back in the return statement as well. 
Code roberta-base-conversion-bug_take2.py import os import shutil import torch from transformers.pipelines.base import infer_framework_load_model from transformers import AutoConfig, AutoModelForMaskedLM, RobertaForMaskedLM, AutoTokenizer from huggingface_hub import hf_hub_download from os import path def create_diff(pt_infs, sf_infs): errors = [] for key in ["missing_keys", "mismatched_keys", "unexpected_keys"]: pt_set = set(pt_infs[key]) sf_set = set(sf_infs[key]) pt_only = pt_set - sf_set sf_only = sf_set - pt_set if pt_only: errors.append(f"{key} : PT warnings contain {pt_only} which are not present in SF warnings") if sf_only: errors.append(f"{key} : SF warnings contain {sf_only} which are not present in PT warnings") return "\n".join(errors) def compare_models(pt_mdl, sf_mdl): # A blend of convert.py's generalized check_final_model with concrete usage example to demonstrate sf_dict = sf_mdl.state_dict() print("Tensors the same for pt and sf hub models:", all([torch.allclose(v, sf_dict[k]) for k, v in pt_mdl.state_dict().items()])) kwargs = dict() kwargs["input_ids"] = torch.arange(10).unsqueeze(0) pt_logits = pt_mdl(**kwargs)[0] sf_logits = sf_mdl(**kwargs)[0] try: torch.testing.assert_close(sf_logits, pt_logits) print("Model outputs match!") except AssertionError as e: print(e) sequence = f"To be, or not to be, that is the {tokenizer.mask_token}" input_seq = tokenizer.encode(sequence, return_tensors='pt') mask_token_index = torch.where(input_seq == tokenizer.mask_token_id)[1] # we only want the 2nd dimension pt_token_logits = pt_mdl(input_seq).logits sf_token_logits = sf_mdl(input_seq).logits pt_masked_token_logits = pt_token_logits[0, mask_token_index, :] sf_masked_token_logits = sf_token_logits[0, mask_token_index, :] pt_top_tokens = torch.topk(pt_masked_token_logits, 4, dim=1).indices[0].tolist() sf_top_tokens = torch.topk(sf_masked_token_logits, 4, dim=1).indices[0].tolist() print(f"Pytorch masked language model output for '{sequence}' with top predicted <mask> tokens: " f"{', '.join([tokenizer.decode([token]) for token in pt_top_tokens])}") print(f"Safetensors masked language model output for '{sequence}' with top predicted <mask> tokens: " f"{', '.join([tokenizer.decode([token]) for token in sf_top_tokens])}") MODELS_DIR = "tmp/" shutil.rmtree(MODELS_DIR, ignore_errors=True) os.makedirs(MODELS_DIR, exist_ok=True) assert len(os.listdir(MODELS_DIR)) == 0, f"Need to start with directory empty: {os.path.abspath(MODELS_DIR)}" repo_id = "roberta-base" pt_dir = path.join(MODELS_DIR, f"{repo_id}_hub_pt") sf_dir = path.join(MODELS_DIR, f"{repo_id}_hub_sf") sf_dir_local = path.join(MODELS_DIR, f"{repo_id}_local_sf") pt_config_path = hf_hub_download(repo_id=repo_id, filename="config.json", local_dir=pt_dir) sf_config_path = hf_hub_download(repo_id=repo_id, filename="config.json", local_dir=sf_dir) pt_model_path = hf_hub_download(repo_id=repo_id, filename="pytorch_model.bin", local_dir=pt_dir) sf_model_path = hf_hub_download(repo_id=repo_id, filename="model.safetensors", local_dir=sf_dir) print("Comparing pytorch and safetensors models loaded with infer_framework to demonstrate differences in network " "layers reported by using the kwarg output_loading_info=True (requires transformers 4.29.2)\n\n") pt_config = AutoConfig.from_pretrained(pt_config_path) pt_framework, (pt_model_infer, pt_infos) = infer_framework_load_model(pt_dir, pt_config, output_loading_info=True) sf_config = AutoConfig.from_pretrained(sf_config_path) sf_framework, (sf_model_infer, sf_infos) = 
infer_framework_load_model(sf_dir, sf_config, output_loading_info=True) # These differences are real and show the converted model didn't convert the LM head (the part that has shared tensors) if pt_infos != sf_infos: error_string = create_diff(pt_infos, sf_infos) print(f"Different infos when reloading the model:\n{error_string}") tokenizer = AutoTokenizer.from_pretrained(repo_id, cache_dir=CACHE_DIR) print("\npytorch models loaded with infer_framework and RobertaForMaskedLM produce the same result (no surprise but " "just making sure)") # What infer_framework_load_model would use, i.e., model.__class__ pt_model = RobertaForMaskedLM.from_pretrained(pt_dir, use_safetensors=False) compare_models(pt_model_infer, pt_model) print("\n-->Same tensors and output") print("\nThe pytorch and safetensors models on the hub for roberta-base produce different results") # sf_model = RobertaForMaskedLM.from_pretrained(sf_dir, use_safetensors=True) sf_model = sf_model_infer compare_models(pt_model, sf_model) print("\n-->The safetensors model is clearly missing its pretrained LM head.") print("\nLet's see if converting the pytorch model to safetensors format fixes that. Let's just save the model with " "safe_serialization=True") # Save model with *trained* LM head pt_model.save_pretrained(sf_dir_local, safe_serialization=True) print("Now let's load it up the converted model") sf_model = RobertaForMaskedLM.from_pretrained(sf_dir_local, use_safetensors=True) compare_models(pt_model, sf_model) print("\n-->pytorch model and newly converted safetensors model (not the hub one) produce the same result") print("\nFinis...QED") %python roberta-base-conversion-bug_take2.py produces Comparing pytorch and safetensors models loaded with infer_framework to demonstrate differences in network layers reported by using the kwarg output_loading_info=True (requires transformers 4.29.2) Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at tmp/roberta-base_hub_sf and are newly initialized: ['lm_head.dense.bias', 'lm_head.dense.weight', 'lm_head.bias', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Different infos when reloading the model: missing_keys : SF warnings contain {'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight', 'lm_head.dense.weight', 'lm_head.bias'} which are not present in PT warnings pytorch models loaded with infer_framework and RobertaForMaskedLM produce the same result (no surprise but just making sure) Tensors the same for pt and sf hub models: True Model outputs match! Pytorch masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference Safetensors masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference -->Same tensors and output The pytorch and safetensors models on the hub for roberta-base produce different results Tensors the same for pt and sf hub models: False Tensor-likes are not close! 
Mismatched elements: 502649 / 502650 (100.0%) Greatest absolute difference: 30.437580347061157 at index (0, 0, 0) (up to 1e-05 allowed) Greatest relative difference: 1503215.0 at index (0, 7, 24147) (up to 1.3e-06 allowed) Pytorch masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference Safetensors masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: inaccessible, Penny, Kob, Moe -->The safetensors model is clearly missing its pretrained LM head. Let's see if converting the pytorch model to safetensors format fixes that. Let's just save the model with safe_serialization=True Now let's load it up the converted model Tensors the same for pt and sf hub models: True Model outputs match! Pytorch masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference Safetensors masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference -->pytorch model and newly converted safetensors model (not the hub one) produce the same result Finis...QED use_safetensors-bug.py import os import shutil import torch from transformers import RobertaForMaskedLM, AutoTokenizer from huggingface_hub import hf_hub_download from os import path def compare_models(pt_mdl, sf_mdl): # A blend of convert.py's generalized check_final_model with concrete usage example to demonstrate sf_dict = sf_mdl.state_dict() print("Tensors the same for pt and sf hub models:", all([torch.allclose(v, sf_dict[k]) for k, v in pt_mdl.state_dict().items()])) kwargs = dict() kwargs["input_ids"] = torch.arange(10).unsqueeze(0) pt_logits = pt_mdl(**kwargs)[0] sf_logits = sf_mdl(**kwargs)[0] try: torch.testing.assert_close(sf_logits, pt_logits) print("Model outputs match!") except AssertionError as e: print(e) sequence = f"To be, or not to be, that is the {tokenizer.mask_token}" input_seq = tokenizer.encode(sequence, return_tensors='pt') mask_token_index = torch.where(input_seq == tokenizer.mask_token_id)[1] # we only want the 2nd dimension pt_token_logits = pt_mdl(input_seq).logits sf_token_logits = sf_mdl(input_seq).logits pt_masked_token_logits = pt_token_logits[0, mask_token_index, :] sf_masked_token_logits = sf_token_logits[0, mask_token_index, :] pt_top_tokens = torch.topk(pt_masked_token_logits, 4, dim=1).indices[0].tolist() sf_top_tokens = torch.topk(sf_masked_token_logits, 4, dim=1).indices[0].tolist() print(f"Pytorch masked language model output for '{sequence}' with top predicted <mask> tokens: " f"{', '.join([tokenizer.decode([token]) for token in pt_top_tokens])}") print(f"Safetensors masked language model output for '{sequence}' with top predicted <mask> tokens: " f"{', '.join([tokenizer.decode([token]) for token in sf_top_tokens])}") MODELS_DIR = "tmp/" shutil.rmtree(MODELS_DIR, ignore_errors=True) os.makedirs(MODELS_DIR, exist_ok=True) assert len(os.listdir(MODELS_DIR)) == 0, f"Need to start with directory empty: {os.path.abspath(MODELS_DIR)}" repo_id = "roberta-base" pt_dir = path.join(MODELS_DIR, f"{repo_id}_hub_pt") sf_dir = path.join(MODELS_DIR, f"{repo_id}_hub_sf") same_dir = path.join(MODELS_DIR, f"{repo_id}_hub") _ = hf_hub_download(repo_id=repo_id, filename="config.json", local_dir=pt_dir) _ = hf_hub_download(repo_id=repo_id, filename="pytorch_model.bin", local_dir=pt_dir) _ = hf_hub_download(repo_id=repo_id, 
filename="config.json", local_dir=sf_dir) _ = hf_hub_download(repo_id=repo_id, filename="model.safetensors", local_dir=sf_dir) _ = hf_hub_download(repo_id=repo_id, filename="config.json", local_dir=same_dir) _ = hf_hub_download(repo_id=repo_id, filename="pytorch_model.bin", local_dir=same_dir) _ = hf_hub_download(repo_id=repo_id, filename="model.safetensors", local_dir=same_dir) # This fails as it should (doesn't reach assert False) since no .bin model available in sf_dir print("Try to load safetensors model when only pytorch model present") try: _ = RobertaForMaskedLM.from_pretrained(sf_dir, use_safetensors=False) assert False, "Able to load model.safetensors with use_safetensors=False. Should have failed since " \ "pytorch_model.bin isn't available." except OSError as e: print(e) print("\n-->It rightly blows up because it can't find the pytorch model.") print("\n\nTry to load pytorch model when only safetensors model present") # This doesn't fail as it should (reaches assert False) since no safetensors model available in pt_dir try: _ = RobertaForMaskedLM.from_pretrained(pt_dir, use_safetensors=True) assert False, "Able to load pytorch_model.bin with use_safetensors=True. Should have failed since " \ "model.safetensors isn't available." except OSError as e: print(e) except AssertionError as e: print(e) print("\n-->If explicitly asked to use_safetensors and a safetensors model isn't present, it will fall back to " "a pytorch model if it's present. Intended behavior? It confused the hell out of me when trying to debug all " "this.\n") tokenizer = AutoTokenizer.from_pretrained(repo_id, cache_dir=CACHE_DIR) print("Note that when loading the safetensors (and not pytorch) model it throws the uninitialized warning\n") pt_model = RobertaForMaskedLM.from_pretrained(same_dir, use_safetensors=False) print("This will throw a warning since model.safetensors doesn't have the LM head") sf_model = RobertaForMaskedLM.from_pretrained(same_dir, use_safetensors=True) print("\nWhich model will it load with use_safetensors kwarg not defined? Perhaps the absense or occurrence of the " "warning will tell?") mystery_model = RobertaForMaskedLM.from_pretrained(same_dir) print("\n\nWhen both models are present which does it load when use_safetensors is False vs True?") compare_models(pt_model, sf_model) print("\n-->It seems to obey use_safetensors when both models are present") print("\n\nWhen both models present which does it load when use_safetensors kwarg isn't defined?") compare_models(pt_model, mystery_model) print("\n-->Not surprisingly it prefers the safetensors model as evidenced by the incorrect predictions") %python use_safetensors-bug.py produces Try to load safetensors model when only pytorch model present Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory tmp/roberta-base_hub_sf. -->It rightly blows up because it can't find the pytorch model. Try to load pytorch model when only safetensors model present Able to load pytorch_model.bin with use_safetensors=True. Should have failed since model.safetensors isn't available. -->If explicitly asked to use_safetensors and a safetensors model isn't present, it will fall back to a pytorch model if it's present. Intended behavior? It confused the hell out of me when trying to debug all this. 
Note that when loading the safetensors (and not pytorch) model it throws the uninitialized warning This will throw a warning since model.safetensors doesn't have the LM head Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at tmp/roberta-base_hub and are newly initialized: ['lm_head.layer_norm.bias', 'lm_head.layer_norm.weight', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Which model will it load with use_safetensors kwarg not defined? Perhaps the absense or occurrence of the warning will tell? Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at tmp/roberta-base_hub and are newly initialized: ['lm_head.layer_norm.bias', 'lm_head.layer_norm.weight', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. When both models are present which does it load when use_safetensors is False vs True? Tensors the same for pt and sf hub models: False Tensor-likes are not close! Mismatched elements: 502648 / 502650 (100.0%) Greatest absolute difference: 32.8263783454895 at index (0, 0, 0) (up to 1e-05 allowed) Greatest relative difference: 214167.1 at index (0, 7, 24147) (up to 1.3e-06 allowed) Pytorch masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference Safetensors masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: ody, lyak, oj, 685 -->It seems to obey use_safetensors when both models are present When both models present which does it load when use_safetensors kwarg isn't defined? Tensors the same for pt and sf hub models: False Tensor-likes are not close! Mismatched elements: 502650 / 502650 (100.0%) Greatest absolute difference: 30.133498668670654 at index (0, 0, 0) (up to 1e-05 allowed) Greatest relative difference: 727437.3 at index (0, 7, 24147) (up to 1.3e-06 allowed) Pytorch masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference Safetensors masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: Alfred, efficiency, alid, Pret -->Not surprisingly it prefers the safetensors model as evidenced by the incorrect predictions I don't see any issue at all on my end. I used your script, and there were 0 differences for either. Which torch version are you using ? 4.29.2 has the bug: framework = "tf" if "keras.engine.training.Model" in str(inspect.getmro(model.__class__)) else "pt" is wrong logic. Since you're using model has now become a tuple, it will happily say "pt" even if the model was actually TF. Crashing is better here. Using output_loading_info changes the return type of from_pretrained and should bug out for sure. I'm putting the script in a gist it's becoming unreadable: https://gist.github.com/Narsil/7fd524bd6d59c9827563e9e0c99e7952.2 Tensors the same for pt and sf hub models: True Model outputs match! 
Pytorch masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference Safetensors masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference Tensors the same for pt and sf hub models: True Model outputs match! Pytorch masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference Safetensors masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference Tensors the same for pt and sf hub models: True Model outputs match! Pytorch masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference Safetensors masked language model output for 'To be, or not to be, that is the <mask>' with top predicted <mask> tokens: question, choice, point, difference lm_head is not missing. It's not there and correctly so. For roberta lm_head is shared with embeddings (not sure about the correct name for that arch right now). That means it's the same tensor, and only 1 is saved on disk.
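A small sketch illustrating the tied-weight point above, assuming the standard RobertaForMaskedLM module layout (the attribute paths are assumptions); it also shows the sharing-aware safetensors helpers that the conversion error message points to, with a hypothetical output path.

import torch
from transformers import AutoModelForMaskedLM
from safetensors.torch import save_model, load_model

model = AutoModelForMaskedLM.from_pretrained("roberta-base")

emb = model.roberta.embeddings.word_embeddings.weight
dec = model.lm_head.decoder.weight
# Tied weights share storage, so a checkpoint only needs one copy of the tensor.
print(dec.data_ptr() == emb.data_ptr())

save_model(model, "roberta-base-retied.safetensors")  # hypothetical path
load_model(model, "roberta-base-retied.safetensors")  # re-ties shared tensors on load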
gharchive/issue
2023-06-01T02:10:25
2025-04-01T04:34:32.130633
{ "authors": [ "Narsil", "christian-storm" ], "repo": "huggingface/safetensors", "url": "https://github.com/huggingface/safetensors/issues/265", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2149778869
8-bit precision error with fine tuning of gemma I am trying to fine tune gemma7-b with 4 A100 80 GB gpus using 4-bit qunatization model_id = "google/gemma-7b" BitsAndBytesConfig int-4 config bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) print("initiating model download") model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, use_cache=False, attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, device_map="auto", token=access_token) peft_config = LoraConfig( lora_alpha=16, lora_dropout=0.1, target_modules=["q_proj", "v_proj"], r=64, bias="none", task_type="CAUSAL_LM", ) prepare model for training model = prepare_model_for_kbit_training(model) model = get_peft_model(model, peft_config) from transformers import TrainingArguments args = TrainingArguments( output_dir=output_dir, num_train_epochs=15, per_device_train_batch_size=8, gradient_accumulation_steps=2, gradient_checkpointing=True, optim="paged_adamw_32bit", logging_steps=100, save_strategy="epoch", learning_rate=2e-4, bf16=True, tf32=True, max_grad_norm=0.3, warmup_ratio=0.03, seed=42, eval_steps=100, lr_scheduler_type="cosine", evaluation_strategy='epoch', disable_tqdm=False, load_best_model_at_end=True, metric_for_best_model="eval_loss", greater_is_better=False, report_to="wandb", run_name=run_name # disable tqdm since with packing values are in correct ) from trl import SFTTrainer max_seq_length = 2048 # max sequence length for model and packing of the dataset trainer = SFTTrainer( model=model, peft_config=peft_config, max_seq_length=max_seq_length, tokenizer=tokenizer, packing=True, formatting_func=generate_prompt, # this will aplly the create_prompt mapping to all training and test dataset args=args, train_dataset=dataset["train"], eval_dataset=dataset["test"] ) trainer.train() This is throwing ""ValueError: You can't train a model that has been loaded in 8-bit precision on a different device than the one you're training on. Make sure you loaded the model on the correct device using for example `device_map={'':torch.cuda.current_device() or device_map={'':torch.xpu.current_device()}"" the same script works for other models like llama2 versions used : transformers:4.38.1 trl:0.7.11 Hi @smreddy05 Thanks for the issue ! Can you try out the solution proposed here: https://github.com/huggingface/trl/issues/1348#issuecomment-1959028364 @younesbelkada thanks for your suggestion and i am hitting new issue ""torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 5.86 GiB. GPU 0 has a total capacity of 79.15 GiB of which 5.08 GiB is free. Process 73494 has 74.06 GiB memory in use. Of the allocated memory 69.76 GiB is allocated by PyTorch, and 2.78 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF" @smreddy05 now you're facing a cuda OOM issue, can you try to use Flash Attention 2 or decrease the max_seq_len / batch_size ? Hey @younesbelkada , i was using flashattention from the moment I have faced 8-bit precision error and I tried reduing batch_size, still I am hitting same issue and the same code works for llama2. Not sure whats wrong with this. Will give it a try with previous versions of trl and accelerate. Also, I am using 4-bit quantization but error talks about 8-bit precision. am I missing something here ? 
can you please share your thoughts on this? really appreciate your help on this I suspect the reason why it worked for llama-2 is that llama has 6.74B parameters Whereas gemma-7b has in reality ~8.5B parameters You can also use gradient accumulation with very small batch size. For the error you are getting you need to update accelerate pip install -U accelerate @younesbelkada sorry for not being clear, i was referring to llama2-70B model and as of now I am on accelerate 0.27.2, trl=0.7.10 Hi @smreddy05! Were you able to find a solution to fix the OutOfMemoryError error? I have encountered a similar error where I am able to fine-tune llama2 13B but not gemma 7B (although I was using trainer from Transformers=4.41 library). This error occurs only when the evaluation is enabled (do_eval=True), setting it to False makes everything work like a charm. @VIS-WA, sorry, i haven't spent time on this. But, if we set do_eval=False then we cannot run any evaluation on validation set and due to this it might be tricky to judge how good fine tuned model is
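Putting the two suggestions together, a hedged sketch (not the exact setup from this thread) that loads the quantized model onto the current device, as the error message asks, and trades per-device batch size for gradient accumulation:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the quantized model on the current device instead of sharding it with device_map="auto",
# which is what the "loaded in 8-bit precision on a different device" error asks for.
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
    device_map={"": torch.cuda.current_device()},
)

# Keep per-device batches tiny and rely on gradient accumulation for the effective batch size.
args = TrainingArguments(
    output_dir="gemma-7b-qlora",  # hypothetical path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    gradient_checkpointing=True,
    bf16=True,
)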
gharchive/issue
2024-02-22T19:18:59
2025-04-01T04:34:32.215032
{ "authors": [ "VIS-WA", "smreddy05", "younesbelkada" ], "repo": "huggingface/trl", "url": "https://github.com/huggingface/trl/issues/1355", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2114008281
Fix typos in docs for Multi Adapter RL (MARL). Just fixed up a few typos, nothing really significant The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
gharchive/pull-request
2024-02-02T04:06:00
2025-04-01T04:34:32.216990
{ "authors": [ "HuggingFaceDocBuilderDev", "elhusseiniali" ], "repo": "huggingface/trl", "url": "https://github.com/huggingface/trl/pull/1312", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
445728938
Run boids in electron old package.json moved to *_og new package.json runs in electron with npm start Meant to target my own fork for my work-in-progress. Sorry about that!
gharchive/pull-request
2019-05-18T15:57:09
2025-04-01T04:34:32.218174
{ "authors": [ "dandye" ], "repo": "hughsk/boids", "url": "https://github.com/hughsk/boids/pull/6", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
241913390
RSS Agent - include_feed_info - feed with no "description" example feed: https://www.youtube.com/feeds/videos.xml?channel_id=UCoTLdfNePDQzvdEgIToLIUg Error from Dry Run [00:00:00] INFO -- : Dry Run started [00:00:00] ERROR -- : Failed to fetch https://www.youtube.com/feeds/videos.xml?channel_id=UCoTLdfNePDQzvdEgIToLIUg with message 'undefined method `description' for #<Feedjira::Parser::AtomYoutube:0x005611d3d0a620>': ["/home/huginn/huginn/app/models/agents/rss_agent.rb:233:in `feed_data'", [SNIP] Agent Config { "expected_update_period_in_days": "5", "clean": "false", "url": "https://www.youtube.com/feeds/videos.xml?channel_id=UCoTLdfNePDQzvdEgIToLIUg", "include_feed_info": "true" } Troubleshooting Looking at the RSS feed provided by youtube for a channel, it does not provide a "description" item in the feed data. I'm guessing this is causing issues on line 233 where it attempts to set a variable using that value if include_feed_info is true. Thanks for the report @zoomequipd, I think you are right. We can probably fix it by wrapping the description call with .try(:description), it could be worth doing the same for the more 'exotic' attributes that might not be present on all feeds. I'll see what I can do about this one as well. Feel free to assign it to me.
gharchive/issue
2017-07-11T03:22:49
2025-04-01T04:34:32.221363
{ "authors": [ "dsander", "zoomequipd" ], "repo": "huginn/huginn", "url": "https://github.com/huginn/huginn/issues/2054", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1629980484
Push docker images to docker hub and ghcr As discussed in #3234 we want to make the transition from hosting the docker images on docker hub to GitHub container registry. This change will push the images to both registries, after we have verified everything works as expected we can embed a warning in images pushed to docker hub which tells the users to switch. This is the github action run on my fork that successfully created the images. I didn't bother with integrating the two registries into a build matrix because it is probably easier to just remove the docker hub jobs once we are done with the transition. @dsander Looks like docker push failed for both repositories. Maybe migration from build_docker_image.sh to docker/login-action@v2 and docker/build-push-action@v3 makes it all simpler, I guess?
gharchive/pull-request
2023-03-17T22:16:53
2025-04-01T04:34:32.223836
{ "authors": [ "dsander", "knu" ], "repo": "huginn/huginn", "url": "https://github.com/huginn/huginn/pull/3235", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1425543194
Minor changes regarding dependency versions Security update 'pytest: py<=1.1.0' :warning: Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov. Codecov Report All modified and coverable lines are covered by tests :white_check_mark: Project coverage is 100.00%. Comparing base (6ddc17d) to head (7091e66). Report is 14 commits behind head on main. :exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality. Additional details and impacted files

@@ Coverage Diff @@
##              main       #16   +/-  ##
========================================
  Coverage   100.00%   100.00%
========================================
  Files           30        30
  Lines         2271      2270     -1
========================================
- Hits          2271      2270     -1

:umbrella: View full report in Codecov by Sentry.
gharchive/pull-request
2022-10-27T12:19:02
2025-04-01T04:34:32.231199
{ "authors": [ "codecov-commenter", "hugowschneider" ], "repo": "hugowschneider/fastgraphql", "url": "https://github.com/hugowschneider/fastgraphql/pull/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1172084418
Avellaneda - Hanging order amount's orig and adj do not match on status
Describe the bug
While testing the Avellaneda strategy on the staging branch, we noticed that for the hanging order amount the orig and adj do not match on status. Initially the orders are created with an order amount of 20 XRP, but after one side gets filled and the other becomes a hanging order, you would notice that the orig changed. This seems to be affecting whole numbers ending in 0.
Steps To Reproduce
Create Avellaneda strategy with hanging orders enabled
Select a market with an order amount that is a whole number ending with 0 (Tested on XRP and DOGE-USDT pairs)
Start the bot and wait for trades, best to run with status --live
Check hanging orders after a few trades
Release version
Staging 1.2 and dev-1.3
Attachments
ave-binance.zip
Severity: P3 Bounty: 10,000 (https://hummingbot.org/maintenance/bugs/) Rationale: Bug related to component on Avellaneda strategy
Is this still relevant? Based on the latest source, it does not seem possible:

amount_orig = self._config_map.order_amount
if is_hanging_order:
    amount_orig = float(order.quantity)
    level = "hang"
data.append([
    level,
    "buy" if order.is_buy else "sell",
    float(order.price),
    f"{spread:.2%}",
    amount_orig,
    float(order.quantity),
    age
])

Hi @MementoRC good day and thanks for your response. I set up a test bot on paper to reproduce this quickly and can confirm it's still an ongoing issue. Also I can confirm that when the hanging order was created, the log file shows the client stores the correct amount. The issue is that if the order amount is a whole number that ends in 0 and a hanging order is created, we noticed that it removes the zeros from the status command, resulting in a cosmetic bug. Sample:
Hi @rapcmia , got it, I confirmed the issue with a testcase, it's a cython module, so a tidbit annoying to debug - Any chances the bounty scales with the debugging difficulty ;P (just kidding!)
@rapcmia PR: https://github.com/hummingbot/hummingbot/pull/5862
Assigning this issue to @MementoRC. Also, next time please comment first so we can see your comment before creating a PR. Thanks!
@MementoRC, please fill up & submit the AML Policy Form via this link: https://forms.gle/MULD7qfm2g2Nhqbr5 This is a one-time process for each ETH Address used to receive HBOT bounty.
This bounty has been paid. Thanks!
gharchive/issue
2022-03-17T08:39:33
2025-04-01T04:34:32.326645
{ "authors": [ "JeremyKono", "MementoRC", "carlitogetaladajr", "rapcmia" ], "repo": "hummingbot/hummingbot", "url": "https://github.com/hummingbot/hummingbot/issues/5190", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1452060443
(feat) Update of Huobi spot connector to the latest spot connector st… Before submitting this PR, please make sure: [x] Your code builds clean without any errors or warnings [x] You are using approved title ("feat/", "fix/", "docs/", "refactor/") A description of the changes proposed in the pull request: Updated version of Huobi spot connector to make it compliant with the latest spot connectors code standards. The unit tests have also been updated, and the exchange class unit tests are now implemented based on ExchangeConnectorTests Tests performed by the developer: All unit tests passing in green Tips for QA testing: Please run all validations for a spot connector Solves #5793 HGP: https://snapshot.org/#/hbot.eth/proposal/0x12714ef2fe805472b119ef37b274dccb6ef1541fcc91b70517651a5080b9275b Tests performed: Cloned and installed feature branch Started the client, created password Connected API key successfully Checked Balance is OK Created/Imported/Started a strategy using Huobi Checked status and status --live Checked order cancellation and Manually placed order on stop Checked data integrity (order_book, ticker, status --live) - OK Broker ID - OK Handling Order filled events - OK Handling/Interrupt hanging orders - OK Handling partial fills - OK Compatibility check - OK Data aggregate confirmed Handling multiple orders - OK Handling long refresh rate - OK Handling fast refresh rate - OK Test different token pairs Verify fee calculation - OK @aarmoa Could you please say if that's expected? Currently bot constantly showed WARNING - The websocket connection was closed. Close code = 1003 Steps to reproduce: Create a pureMM (or other) strategy using Huobi (observed better on long refresh time) Start the bot Actual: bot constantly showed WARNING - The websocket connection was closed. Close code = 1003 `` then INFO - Subscribed to public orderbook and trade channels... Expected: No errors during the bot run 5892huobi.zip logs_huobi-pmm.log Hello @nikspz. The code 1003 is the code that is triggered by the websocket framework when the connection is lost because it has been closed in the other end. It seems like Huobi is closing the connections. Hello @nikspz, I have pushed a fix for the issue
gharchive/pull-request
2022-11-16T18:11:52
2025-04-01T04:34:32.336474
{ "authors": [ "aarmoa", "nikspz" ], "repo": "hummingbot/hummingbot", "url": "https://github.com/hummingbot/hummingbot/pull/5892", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2665699561
[Bug]: add video as link
Trying to add a video link as in this picture: press apply and the video is shown as expected, but submit the form, save the content to local storage, and try to reload the page (so the initial content for the editor is the data saved in local storage), and I get this weird form inside the editor. You can find the code here and see the issue; check the code on StackBlitz.
@waleedsalah4 can you record a video for this bug?
Here's a recorded video for the bug https://github.com/user-attachments/assets/5f0252ea-5fa8-4259-ba79-f10830590b81
Here's a recorded video for the bug VID_20241117_191242.mp4
OK, this is a bug, we need to validate the video on submit. We only accept video link formats mp4, mov, ..
Still can't add a YouTube video or other links: it prevents me from pasting the URL into the field and gives "invalid url". But when I paste a link that ends with .mp4 or another format supported by the editor, it adds it. So, I think this is an issue that needs to be fixed, to allow users to add their own video links like YouTube ones https://github.com/user-attachments/assets/d653dc05-df16-4330-9966-07213f5e737b
you should use an iframe to add a YouTube link
gharchive/issue
2024-11-17T10:43:11
2025-04-01T04:34:32.341766
{ "authors": [ "hunghg255", "waleedsalah4" ], "repo": "hunghg255/reactjs-tiptap-editor", "url": "https://github.com/hunghg255/reactjs-tiptap-editor/issues/112", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1699011454
🛑 chirpy is down In 62f3c95, chirpy (https://chirpy.craftstudios.shop/) was down: HTTP code: 0 Response time: 0 ms Resolved: chirpy is back up in 6f6a5f0.
gharchive/issue
2023-05-07T11:02:14
2025-04-01T04:34:32.354886
{ "authors": [ "hupratt" ], "repo": "hupratt/upptime", "url": "https://github.com/hupratt/upptime/issues/239", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2619464767
🛑 chirpy is down In 8d4c5f6, chirpy (https://chirpy.thekor.eu/) was down: HTTP code: 502 Response time: 609 ms Resolved: chirpy is back up in 56a9fb9 after 24 minutes.
gharchive/issue
2024-10-28T20:25:09
2025-04-01T04:34:32.357388
{ "authors": [ "hupratt" ], "repo": "hupratt/upptime", "url": "https://github.com/hupratt/upptime/issues/3551", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1868597401
🛑 Blog is down In d33c2ae, Blog (https://blog.craftstudios.shop/) was down: HTTP code: 0 Response time: 0 ms Resolved: Blog is back up in 7fcc4e8 after 834 days, 21 hours, 12 minutes.
gharchive/issue
2023-08-27T19:02:49
2025-04-01T04:34:32.359656
{ "authors": [ "hupratt" ], "repo": "hupratt/upptime", "url": "https://github.com/hupratt/upptime/issues/529", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
374638794
Background dimmed view is NOT animating Describe the bug In the previous versions, dimmed background view (screenBackground) was dismissed animatedly (fade out). But in this version, after dismissing our UIView/UIViewController, dimmed view is dismissing WITHOUT animation. To Reproduce I use this code: var attributes = EKAttributes.centerFloat attributes.entryBackground = .visualEffect(style: .extraLight) attributes.entranceAnimation = .init( translate: nil, scale: .init(from: 1.2, to: 1, duration: 0.2), fade: .init(from: 0.0, to: 1, duration: 0.2) ) attributes.displayDuration = .infinity attributes.shadow = .none attributes.scroll = .disabled attributes.entryInteraction = .absorbTouches attributes.screenInteraction = .absorbTouches attributes.positionConstraints.maxSize = .init(width: .constant(value: 300), height: .constant(value: 500)) attributes.positionConstraints.verticalOffset = 12 attributes.positionConstraints.size = .init(width: .offset(value: 12), height: .intrinsic) attributes.roundCorners = .all(radius: 8) attributes.screenBackground = .color(color: UIColor.black.withAlphaComponent(0.4)) attributes.exitAnimation = .init( translate: nil, scale: nil, fade: .init(from: 1.0, to: 0.0, duration: 0.2) ) // setting up elements... let simpleMessage = EKSimpleMessage(image: imageContent, title: titleContent, description: descriptionContent) let alertMessage = EKAlertMessage(simpleMessage: simpleMessage, imagePosition: .top, buttonBarContent: buttonBarContent) let contentView = EKAlertMessageView(with: alertMessage) SwiftEntryKit.display(entry: contentView, using: attributes) iPhone (please complete the following information): Device: iPhone 8 iOS Version: iOS 12 Xcode Version: 10 Dependency Manager Version: CocoaPods 1.4.0 SwiftEntryKit Release # 0.8.2 Screenshots / Video Links @omidgolparvar - Thank you for opening this issue. I'll investigate and keep you posted. Fixed in 0.8.3 - Please let me know if that works for you. :-) Unfortunately, I still have the issue. :( Oooooh... It seems that the mistake is shared between us!! 😂 I installed 0.8.3, and you did not release 0.8.4 on Cocoapods. 😁 O.K :-) 0.8.3 already contains the fix that is exemplified by the gif in my previous comment. So, if you installed version 0.8.3 you must have that fix and it should work well. Let me know if I misunderstand something. I installed 0.8.3 and the problem is still here. 😕 The problem was cocoapod! When ever I tried to install 0.8.3, it uses cached version of SwiftEntryKit. I added skip_download_cache: true in ~/.cocoapods/config.yaml in order to force cocoapod to redownload SwiftEntryKit, and the problem is solved. Thanks. 😋👍
gharchive/issue
2018-10-27T12:12:23
2025-04-01T04:34:32.366962
{ "authors": [ "huri000", "omidgolparvar" ], "repo": "huri000/SwiftEntryKit", "url": "https://github.com/huri000/SwiftEntryKit/issues/132", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
602731658
Can't choose prediction AI
Hello, I am trying to build an AI model, or use any type of AI, to predict numbers that are not truly random. I have an algorithm that generates what seem to be random numbers from 100 (the lowest and the worst) to 1000 (the best); the closer you can predict the number, the better. I know how this algorithm works, but I wanted to know if it's possible to use an AI to learn the patterns of this algorithm and predict the next outcome based on all previous ones. My current data is just 1 column with 500 rows of these numbers in a CSV. Like the simple table: Number 343 784 100 185 167 164 912 102 194 250 340 197 176 108 124 ... and so on. Can you please help me with choosing the right model, library and approach to it? I tried TensorFlow, but it seemed to me it was just predicting the mean of these values. I tried to use a Support Vector Machine, but couldn't make it work; it wasn't guessing accurately. Thanks in advance. If you want to build this project together (for educational purposes), I would be glad to do so. (I am new to all of this)
Well, you probably have to normalize your data so that you don't have gradient explosions on certain near "out of bounds" data. And any basic vanilla recurrent NN will work (stacked LSTM), or even, for your case, a single LSTM. BUT make sure you are normalizing your data first before feeding it into the neural network. And this also means that you will have to "un-normalize" your data for each prediction your NN makes. If this is univariate time-series forecasting, then the order in which you feed your data into the NN will matter. So what I would also suggest is to stagger your inputs so that the series is represented as overlapping windows {[a_0 ... a_n], [a_1 ... a_{n+1}], ..., [a_{N-n-1} ... a_{N-1}]} over all elements a of your dataset. Look into the "TensorFlow univariate timeseries prediction" example on Google; it should give you a good stride on how to approach your problem.
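A minimal sketch of the normalization and window-staggering suggested above, assuming the 500 numbers sit in a one-column CSV (the file name numbers.csv and the window length are assumptions); it only illustrates the preprocessing, not a guarantee that the sequence is learnable.

import numpy as np

def make_windows(series, window):
    # Stagger the series into overlapping input/target pairs:
    # [a_0..a_{w-1}] -> a_w, [a_1..a_w] -> a_{w+1}, ...
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

values = np.loadtxt("numbers.csv", skiprows=1)   # hypothetical single-column file with a "Number" header
lo, hi = values.min(), values.max()
scaled = (values - lo) / (hi - lo)               # normalize before feeding the network

X, y = make_windows(scaled, window=10)
X = X.reshape(-1, 10, 1)                         # (samples, timesteps, features) for an LSTM

# ...train e.g. a single-layer LSTM on (X, y), then "un-normalize" each prediction:
# pred_original = pred_scaled * (hi - lo) + lo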
gharchive/issue
2020-04-19T14:39:58
2025-04-01T04:34:32.379921
{ "authors": [ "IISuperluminaLII", "yeah-me" ], "repo": "huseinzol05/Stock-Prediction-Models", "url": "https://github.com/huseinzol05/Stock-Prediction-Models/issues/78", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2726444330
The speed of processing of hu_core_news_trf is slow. Is your feature request related to a problem? Please describe. I want to use hu_core_news_trf to do part-of-speech statistics for each sample sentence, but it runs at a much slower pace than NLTK. I want to know what the problem is. Describe the solution you'd like Maybe some improvement in the code. Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. Additional context My function is the following:

import random
import numpy as np
import spacy

nlp = spacy.load("hu_core_news_trf")  # the model is loaded once, outside the per-sentence function

def pos_tagging(input_str: str) -> list:
    doc = nlp(input_str)
    total_tokens = len(doc)
    if total_tokens < 20:
        return
    Blank_num_ratio = random.choice([0.05, 0.08, 0.1, 0.12])
    Blank_num = int(total_tokens * Blank_num_ratio) if int(total_tokens * Blank_num_ratio) < 20 else 20
    # Ratio: N:A:V:D:X = 5 : 3 : 1 : 0.5 : 0.5
    Token_Nouns = []
    Token_Adj = []
    Token_Verb = []
    Token_Det = []
    Token_X = []
    T_POS = []
    seen = set()  # skip duplicate words
    for token in doc:
        T_POS.append(token.pos_)
        if token.pos_ in ['NOUN', 'PROPN'] and token.text not in seen:
            Token_Nouns.append(token.text)
            seen.add(token.text)
        elif token.pos_ in ['ADJ', 'ADV'] and token.text not in seen:
            Token_Adj.append(token.text)
            seen.add(token.text)
        elif token.pos_ in ['VERB'] and token.text not in seen:
            Token_Verb.append(token.text)
            seen.add(token.text)
        elif token.pos_ in ['DET'] and token.text not in seen:
            Token_Det.append(token.text)
            seen.add(token.text)
        elif token.pos_ in ['X'] and token.text not in seen:
            Token_X.append(token.text)
            seen.add(token.text)
    np.random.shuffle(Token_Nouns)
    np.random.shuffle(Token_Adj)
    np.random.shuffle(Token_Verb)
    np.random.shuffle(Token_Det)
    np.random.shuffle(Token_X)
    Token_Nouns = Token_Nouns[:int(Blank_num*0.5)+1]
    Token_Adj = Token_Adj[:int(Blank_num*0.3)+1]
    Token_Verb = Token_Verb[:int(Blank_num*0.1)+1]
    Token_Det = Token_Det[:int(Blank_num*0.05)+1]
    Token_X = Token_X[:int(Blank_num*0.05)+1]
    Token = Token_Nouns + Token_Adj + Token_Verb + Token_Det + Token_X
    return Token

Sorry, but this looks like spam to me. Sorry, maybe I should recheck it. When I use hu_core_news_trf to get token.pos_ on the GPU, it is very slow (taking about 2 hours), even though my text corpus isn't that large. But when I use NLTK, it only takes 10 minutes.
gharchive/issue
2024-12-09T09:12:57
2025-04-01T04:34:32.383933
{ "authors": [ "Dbgsaoge", "oroszgy" ], "repo": "huspacy/huspacy", "url": "https://github.com/huspacy/huspacy/issues/73", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1669375190
visualize I need to visualize how the segmentation results are processed same question I need to visualize how the segmentation results are processed
gharchive/issue
2023-04-15T12:55:06
2025-04-01T04:34:32.385102
{ "authors": [ "iamwenyizhang", "lks0825" ], "repo": "hustvl/BMaskR-CNN", "url": "https://github.com/hustvl/BMaskR-CNN/issues/26", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
330969978
Ocean tiles sometimes missing a row of polys There appears to be an edgecase based on camera location that can cause some tiles to be missing their border: Setting 'uniform tiles' fixes it. Thanks for reporting! I'm pretty sure this bug has been lurking some time but was rare enough to evade my focus until now. This fix should sort it. Let me know/reopen if not!
gharchive/issue
2018-06-10T13:47:18
2025-04-01T04:34:32.386929
{ "authors": [ "huwb", "strich" ], "repo": "huwb/crest-oceanrender", "url": "https://github.com/huwb/crest-oceanrender/issues/26", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
496672155
Extra properties of items and item nesting are lost Version used: 4.1.2 How to reproduce: Install https://github.com/tom-englert/ProjectMigrationHelper (optional) Open https://github.com/tom-englert/ResXResourceManager sources in Visual Studio Create a fingerprint "before" with ProjectMigrationHelper (optional) Convert ResXResourceManager projects using dotnet migrate-2019 wizard Create a fingerprint "after" with ProjectMigrationHelper (optional) Compare fingerprints (optional, or manually find the changes) => File nesting is lost, item properties like CustomTool are lost. Root causes: 1. <None Include="Properties\Settings.settings"> <Generator>PublicSettingsSingleFileGenerator</Generator> <LastGenOutput>Settings.Designer.cs</LastGenOutput> </None> <Compile Include="Properties\Settings.Designer.cs"> <AutoGen>True</AutoGen> <DependentUpon>Settings.settings</DependentUpon> <DesignTimeSharedInput>True</DesignTimeSharedInput> </Compile> has been removed at all, but should have been replaced with <None Update="Properties\Settings.settings"> <Generator>PublicSettingsSingleFileGenerator</Generator> <LastGenOutput>Settings.Designer.cs</LastGenOutput> </None> <Compile Update="Properties\Settings.Designer.cs"> <AutoGen>True</AutoGen> <DependentUpon>Settings.settings</DependentUpon> <DesignTimeSharedInput>True</DesignTimeSharedInput> </Compile> <None Include="Properties\Resources.Designer.tt"> <Generator>TextTemplatingFileGenerator</Generator> <DependentUpon>Resources.resx</DependentUpon> <LastGenOutput>Resources.Designer.cs</LastGenOutput> </None> has been left unchanged, but should have been replaced with <None Update="Properties\Resources.Designer.tt"> <Generator>TextTemplatingFileGenerator</Generator> <DependentUpon>Resources.resx</DependentUpon> <LastGenOutput>Resources.Designer.cs</LastGenOutput> </None> Generally any item with properties should not be removed, but changed from Include to Update @hvanbakel any chance to get this fixed? I had a quick look at the sources, but did'nt find the correct entry point where to start fixing this. This tool has been superseded by https://github.com/dotnet/try-convert
gharchive/issue
2019-09-21T16:07:35
2025-04-01T04:34:32.401664
{ "authors": [ "tom-englert" ], "repo": "hvanbakel/CsprojToVs2017", "url": "https://github.com/hvanbakel/CsprojToVs2017/issues/266", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2651997369
Syhol: mise-vscode I started working on a VSCode extension for mise a while back but it didn't go anywhere, feel free to take any bits from it: https://github.com/syhol/mise-vscode I doubt I'll develop it any further and it's totally undocumented. I just wanted to share it in case you might get some use out of it. I love the look of your project and I look forward to seeing it develop further.

Thank you! I looked at the VSCode marketplace and the existing ones looked like POCs, so I created this one. I feel like having an easy way to find where a tool/task is defined might make the adoption of mise easier. I will integrate some features that are present in your plugin, like automatic configuration for bun/ruff/go/deno etc. I am also adding automatic change detection so that one does not have to manually reload the mise panels. Adding a mise install prompt if some tools are missing is also on my list. Please let me know if there are any features that you would like to see.
gharchive/issue
2024-11-12T12:05:13
2025-04-01T04:34:32.405875
{ "authors": [ "hverlin", "syhol" ], "repo": "hverlin/mise-vscode", "url": "https://github.com/hverlin/mise-vscode/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
890643302
Thanks a lot Thank you for this middleware, it saved me a lot of work. I'm opening this issue specifically to say thanks, and I hope I can take part in this open-source project in the future.

It's just the most basic wrapper; let's all keep improving together.
gharchive/issue
2021-05-13T02:31:39
2025-04-01T04:34:32.410774
{ "authors": [ "hwpchn", "madpudding" ], "repo": "hwpchn/AroayCloudScraper", "url": "https://github.com/hwpchn/AroayCloudScraper/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
198290900
Get built-in emoji id How do you find the ID or object of a built-in emoji? I'm trying to add a reaction to one of the bot's messages and finding that no matter what I try, it throws an "Unknown Emoji" error. If by "built-in" you mean a Unicode emoji, just use the raw emoji. message.react('🍰');
gharchive/issue
2017-01-02T02:10:02
2025-04-01T04:34:32.412652
{ "authors": [ "GusCaplan", "tripl3dogdare" ], "repo": "hydrabolt/discord.js", "url": "https://github.com/hydrabolt/discord.js/issues/1056", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
123833771
Add voiceChannel property to User. I added a voiceChannel property to user which contains the voice channel that the user is currently in, or null if they're not in a voice channel, as per #106. I also added a userVoiceUpdate event which is emitted when a user joins or leaves a voice channel.

Voice channels are typically tied to guild (memberMap + detailsOf)

I'm not quite sure what you're saying, sorry. Are you saying that I should instead make a function to get the voice channel of a user on the server object?

https://github.com/hydrabolt/discord.js/blob/master/src/Structures/Server.js#L34
https://github.com/hydrabolt/discord.js/blob/master/src/Structures/Server.js#L74
IMO something like memberMap would be a better place to store the user's voice channel.

How's this? I also fixed a few issues with file volume. The order of the arguments meant that the filter was ignored, and the short circuiting meant that a volume of "0" would result in a volume of "1".

Might also want to supply the server as part of the event. :) Also, next time you make a PR you shouldn't include things that aren't related (in this case, the volume change) and make another PR for any additional changes you want to make.

The server can be accessed through channel.server, so I'm not sure if I should add it as a parameter. And I included the volume change because I don't think (might be wrong) that there's a way to make two PRs at the same time.

True, but since it's included in the data of the packet, it's nice to pass along to the event. You can, you just need to make a new branch. :)

This is already being worked on in the indev branch so unfortunately I see no reason to accept the PR, however thanks anyway
gharchive/pull-request
2015-12-24T20:52:15
2025-04-01T04:34:32.417206
{ "authors": [ "TehSeph", "abalabahaha", "cpancake", "hydrabolt" ], "repo": "hydrabolt/discord.js", "url": "https://github.com/hydrabolt/discord.js/pull/107", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
200521210
add internal sharding

todo

[x] fix ready events
[x] fix up voice
[x] fix up sending presence updates
[x] TEEEEST

this was tested on:

a user account with 100 guilds
a user account with 1 guild
a bot with 8 guilds
a bot with 16k guilds

gus this is god-like, marry me

closing in favor of another thing because so many internal changes have been made in 11.1
gharchive/pull-request
2017-01-13T00:51:06
2025-04-01T04:34:32.420460
{ "authors": [ "GusCaplan", "itslukej" ], "repo": "hydrabolt/discord.js", "url": "https://github.com/hydrabolt/discord.js/pull/1088", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1648142548
BDM NXP (Freescale) bkgd

Hello, could the BDM protocol be added in the near future? Thanks. NXP (Freescale)

Such a feature is not planned and will require more details to better understand what it is exactly and on which MCU/CPU/platform it can be used/tested. PR is welcome as always.

Hello, I will gather all the necessary information and post it. THANKS
gharchive/issue
2023-03-30T18:50:20
2025-04-01T04:34:32.421987
{ "authors": [ "Alfa16bravo", "bvernoux" ], "repo": "hydrabus/hydrafw", "url": "https://github.com/hydrabus/hydrafw/issues/146", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2360050404
Added Catalan translation.

You need to also export your language in the src/locales/index.ts file

Sorry, totally forgot. Done now 👍

Thanks! I just added two comments with things that I could remember now. Other than that LGTM. If you are interested in keeping this language updated, you can join our Telegram. We ask for help in the Translators channel when there is new text to translate.

I just applied all your suggestions. I will join the Telegram channel, thanks!

Before merging I removed strings that are not used anymore. If you use VS Code, I recommend using the i18n Ally extension; you can see the missing strings for each translation. Helps a lot.
gharchive/pull-request
2024-06-18T14:46:34
2025-04-01T04:34:32.424438
{ "authors": [ "Ecron", "zamitto" ], "repo": "hydralauncher/hydra", "url": "https://github.com/hydralauncher/hydra/pull/605", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
392672635
Resource Landing Page: fix layout of publication agreement This iframe brings in unnecessary content (like an arbitrarily cropped version of the footer), does not wrap the text correctly, and does not set appropriate padding. This is especially obvious on mobile devices.

Possible fixes:

Retrieve the content that matters from the iframe and remove everything else with javascript.
Move the agreement text to an HTML file in the project instead of keeping it in Mezzanine. This way we can easily include the text content in the agreement modal window.

No mobile phone support officially, that will be a larger design effort in the code

This is an issue in non-mobile browsers, too
gharchive/issue
2018-12-19T16:11:37
2025-04-01T04:34:32.429397
{ "authors": [ "Maurier", "cuahsimarko" ], "repo": "hydroshare/hydroshare", "url": "https://github.com/hydroshare/hydroshare/issues/3062", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
440893155
Release Checklist 1.23

Issues in Release

[x] #2790 No message to user when resource folder creation fails due to duplicate name @pkdash
[ ] #1505 Order recent activity in groups from last to first @mraocuahsi
[ ] #2871 No user message during processing for resource creation/deletion @cuahsimarko
[ ] #3267 Capitalization issue - "Programming language" in the resource landing page Resource Specific metadata @cuahsimarko
[ ] #2915 Inconsistent capitalization approach on user profile page @cuahsimarko
[ ] #3229 Remove old NetCDF template contents @cuahsimarko
[ ] #3273 The "user type" on the hydroshare profile page is incorrectly formatted @mraocuahsi
[ ] #2777 Python strftime can't handle dates before 1900 @mraocuahsi
[ ] #2871 No user message during processing for resource creation/deletion @mraocuahsi
[ ] #3292 colon in discover @alvacouch
[ ] #3273 The "user type" on the hydroshare profile page is incorrectly formatted @mraocuahsi
[x] #2405 Resource Landing Page: Performance Enhancements Needed @pkdash
[ ] #3059 Dashboard UI and Backend @engineSound @alvacouch
[x] #2756 Delete resource metadata confirmation @engineSound
[ ] #3242 "Create" should be moved to the top navigation with a dropdown @Maurier
[x] #3362 Group searches UI need inform user when no result found @engineSound
[ ] #3310 Spatial coverage place/area name not displayed in resource edit mode @Maurier
[x] #2556 Redirect To Resource After Login @pkdash
[ ] #3226 Searching users to add as contributors does not work in IE11 @Maurier
[ ] #3324 Res landing page crashed with UnicodeEncodeError after being made Public @sblack-usu
[ ] #3120 New owner requires refresh before it shows up on landing page @mraocuahsi
[x] #3388 textual change: "Show get started" to "Show getting started" in dashboard @engineSound
[x] #3392 Without user logged in, the hydroshare page url should not redirect @engineSound
[ ] #3382 Anomaly in community resource listing @alvacouch

Beta Deployment

[ ] Diff RC to master to identify and manually make changes in the following files: hsctl config/hydroshare-config.yaml hydroshare/local_settings.py hydroshare/settings.py nginx/config-files/hydroshare-ssl-nginx.conf.template scripts/templates/docker-compose.template
[ ] Deployed to Beta
[ ] check_resource beta results match current www results
[ ] Review the search and discovery pages
[ ] Create a new user and update profile
[ ] Create iROD account, test connection and delete iROD account
[ ] Create a new resource, check sharing/permission settings, delete new resource
[ ] Developers test around issues
[ ] QA testing around issues
[ ] Stakeholders approval

Production Deployment

[ ] Deployed to Production Make manual changes to files identified in Beta Deployment
[ ] Maps API key is correct and maps are displaying correctly
[ ] check_resource www results match pre-deployment www results
[ ] Review the search and discovery pages
[ ] Create a new user and update profile
[ ] Create iROD account, test connection and delete iROD account
[ ] Create a new resource, check sharing/permission settings, delete new resource
[ ] Developers test around issue

Notes relevant to deployment

[Enter Notes here]

#3400 reports a regression that is in my opinion deploy blocking.
#3397 is also a regression that I think should be a hotfix to beta.

#3400 reports a regression that is in my opinion deploy blocking. This has been resolved.

#2777 and #2871 have been removed from the checklist because they are not complete. The code changes from these two issues are not creating any regressions.

#3382 - is dark code for communities, so I'm verifying it.
gharchive/issue
2019-05-06T21:10:26
2025-04-01T04:34:32.443717
{ "authors": [ "dtarb", "sblack-usu" ], "repo": "hydroshare/hydroshare", "url": "https://github.com/hydroshare/hydroshare/issues/3380", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
434888979
[#3330] - push local_settings import to bottom

Pull Request Checklist:

[x] Positive Test Case Written by Dev
[x] Automated Testing
[x] Sufficient User and Developer Documentation
[ ] Passing Jenkins Build
[ ] Peer Code review and approval

Positive Test Case

When deployed, ensure local_settings overwrite settings. (I tested this on beta testing HSWS_ACTIVATED)

@phuongdm - Could you verify whether there is more work to be done with deployments for this change to stick? My understanding is settings.py isn't touched on deployment anymore since we moved all machine-specific settings to local_settings.

Yes, settings.py does not have any effect on the deployment process
gharchive/pull-request
2019-04-18T17:58:00
2025-04-01T04:34:32.448288
{ "authors": [ "phuongdm", "sblack-usu" ], "repo": "hydroshare/hydroshare", "url": "https://github.com/hydroshare/hydroshare/pull/3331", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
205418165
Use addEventListener for popstate/hashchange events With this approach if user will set his own handler on popstate/hashchange event it will not reset library behaviour Nice pick up @itrelease mate! Consider this merged! :100: addEventListener is not available in IE8, which is why I had it like that, but @itrelease brought a good point which I had not considered. Thanks! We should look at settling on a "this is what we're going to support", so I don't merge stuff again.... But you're right in that regard. @maraisr That's alright, you made the right choice!
gharchive/pull-request
2017-02-05T11:07:05
2025-04-01T04:34:32.452888
{ "authors": [ "itrelease", "jbucaran", "maraisr" ], "repo": "hyperapp/hyperapp", "url": "https://github.com/hyperapp/hyperapp/pull/26", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
315981987
Accepting connections could be faster with level polling Disclaimer: I'm new to rust. I've been developing a "sort of" http server (it just uses the same protocol, but in reality it only get GET requests, so no body, nothing complex really) which really needs high performance. The earlier version was written in Go, actually performs really well (benchmarks down), but I just really don't like the occasional spikes in CPU usage due to GC. That led me to Rust. I wanted to try it out years ago, but the lack of tooling/docs just scared me away. But! I thought now is the time, let's do it. I heard many good things about it, specially it being really fast. Even saw benchmarks (like the TechEmpower one), where Rust really dominates. I was like YEAH, NICE! That's all I need. Now here comes the problem: my app always closes the connection, sends the Connection: close HTTP header. But... those benchmarks uses keep-alive. So I did some simple "hello world" benchmarking with wrk (since my app is basically an http server), and here's my problem: hyper > wrk -t2 -c4 -d20s -H "Connection: close" http://127.0.0.1:8080 Running 20s test @ http://127.0.0.1:8080 2 threads and 4 connections Thread Stats Avg Stdev Max +/- Stdev Latency 138.03us 332.44us 9.08ms 96.40% Req/Sec 11.95k 1.01k 13.53k 86.32% 477757 requests in 20.10s, 43.28MB read Requests/sec: 23769.81 Transfer/sec: 2.15MB Avg. CPU usage during test: 360%. Go > wrk -t2 -c4 -d20s -H "Connection: close" http://127.0.0.1:13370 Running 20s test @ http://127.0.0.1:13370 2 threads and 4 connections Thread Stats Avg Stdev Max +/- Stdev Latency 1.03ms 9.72ms 173.51ms 98.64% Req/Sec 15.67k 2.63k 21.31k 76.88% 621721 requests in 20.00s, 56.33MB read Requests/sec: 31085.62 Transfer/sec: 2.82MB Avg. CPU usage during test: 105% with occasional spikes to ~130% (GC). Safe to say that's not really promising... So I went ahead, thought it might be an issue (?) in Hyper, and wrote a bare minimum implementation of the same thing on top of tokio to see if there's any difference. Tokio > wrk -t2 -c4 -d20s -H "Connection: close" http://127.0.0.1:8080 Running 20s test @ http://127.0.0.1:8080 2 threads and 4 connections Thread Stats Avg Stdev Max +/- Stdev Latency 140.13us 347.23us 10.41ms 96.41% Req/Sec 11.98k 1.09k 16.11k 86.28% 478077 requests in 20.10s, 43.31MB read Requests/sec: 23786.07 Transfer/sec: 2.15MB Avg. CPU usage during test: 360%. Basically it's the same as hyper - which is not surprising since hyper is built on top of tokio. Now, as I said earlier I'm very new to Rust, read the book yesterday, so I'm probably doing something wrong. Is there some kind of option I'm missing? Thanks! Edit: forgot to mention I'm using --release when building. Would you be able to share what versions of hyper and tokio are being used? The exact versions can be found in the Cargo.lock file. @seanmonstar Absolutely! 
[[package]] name = "hyper" version = "0.11.25" source = "registry+https://github.com/rust-lang/crates.io-index" dependencies = [ "base64 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)", "bytes 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)", "futures 0.1.21 (registry+https://github.com/rust-lang/crates.io-index)", "futures-cpupool 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)", "httparse 1.2.4 (registry+https://github.com/rust-lang/crates.io-index)", "iovec 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)", "language-tags 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)", "log 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)", "mime 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)", "percent-encoding 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)", "relay 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)", "time 0.1.39 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-core 0.1.17 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-io 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-proto 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-service 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)", "unicase 2.1.0 (registry+https://github.com/rust-lang/crates.io-index)", ] [[package]] name = "tokio" version = "0.1.5" source = "registry+https://github.com/rust-lang/crates.io-index" dependencies = [ "futures 0.1.21 (registry+https://github.com/rust-lang/crates.io-index)", "mio 0.6.14 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-executor 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-io 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-reactor 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-tcp 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-threadpool 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-timer 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-udp 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)", ] This is just a hunch, and may not be the issue, but I've seen an impact on benchmarks before: try pinning to v0.1.12 of tokio-core. So, in your Cargo.toml, have the line for tokio be tokio-core = "=0.1.12". After that version, tokio upgraded to a new reactor implementation. It's actually better in real world situations, since it more fairly distributes work on your system, but when benchmarking with wrk, where the work is always exactly the same, you only notice the cost of a fairer reactor/executor. Sadly it doesn't seem to compile with 0.1.12, throws a whole bunch of errors. Hmmmm, I think for the TechEmpower benchmark, I also had to pin tokio-io directly, I remember some errors... As seen here: https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/Rust/hyper/Cargo.toml Yeah, that worked, made a huge difference as well. Still a ~15% off though. wrk -t2 -c4 -d20s -H "Connection: close" http://127.0.0.1:8080 Running 20s test @ http://127.0.0.1:8080 2 threads and 4 connections Thread Stats Avg Stdev Max +/- Stdev Latency 100.47us 63.45us 4.02ms 92.08% Req/Sec 13.60k 1.11k 16.16k 64.68% 543793 requests in 20.10s, 77.27MB read Requests/sec: 27055.20 Transfer/sec: 3.84MB CPU usage went down from 360% to 100%. Hm, how many threads is the Go version allowed to use? 
I assume if using tokio-core, you've stuck to 1 thread. That could be a difference. The Go version is single-threaded. Technically it uses runtime.GOMAXPROCS(runtime.NumCPU()), but it's using a single-threaded event loop and just barfing out hello world on the same thread. So even if I increase threads and connections in wrk, the CPU usage stays around 100% (increases when GC running, but that's on another thread afaik) and the throughput is around the same with less connections and threads: wrk -t8 -c16 -d20s -H "Connection: close" http://127.0.0.1:13370 Running 20s test @ http://127.0.0.1:13370 8 threads and 16 connections Thread Stats Avg Stdev Max +/- Stdev Latency 3.49ms 25.75ms 344.71ms 98.09% Req/Sec 4.80k 711.35 5.89k 86.02% 758862 requests in 20.10s, 68.75MB read Requests/sec: 37754.70 Transfer/sec: 3.42MB I'd think that call would actually make the Go app use as many threads as you have CPUs. At least, that's what the docs of that method says. As to hyper itself, I haven't really benchmarked with 1 request per connection, so it is possible there is some low hanging fruit. If I had to guess, it's probably not anything with accepting the connection, and probably something about setting up the HTTP state machine after accepted (and read and write buffers). Just to be sure I set GOMAXPROCS to 1, got the same result. Under the hood that app uses https://github.com/tidwall/evio, which - as states - is a single-threaded event loop. To be fair, the pure tokio implementation (without hyper) does the same as hyper. I really don't have any deep knowledge about this, but wouldn't it suggest that the bottleneck is either in tokio or in mio? Oh interesting. If the version not using hyper performs the same, then yea, it must be at a lower level than hyper. Do you reckon I should open an issue about this somewhere else? And if so, where? @seanmonstar Tbh I don't feel comfortable enough in Rust yet to be sure that my tokio/mio implementation would be perfect, so I've ran multiple benchmarks for all test cases again except tokio - so hyper old/latest, and go. Done flamegraph for all options, and also used strace if there's anything interesting happening there. All these things were running separately, so while benchmarking throughput I didn't use strace or perf and vica versa. Go wrk -t2 -c4 -d20s -H "Connection: close" http://127.0.0.1:13370 Running 20s test @ http://127.0.0.1:13370 2 threads and 4 connections Thread Stats Avg Stdev Max +/- Stdev Latency 55.62us 51.12us 3.09ms 87.85% Req/Sec 18.71k 2.47k 23.86k 60.60% 746448 requests in 20.10s, 67.63MB read Requests/sec: 37137.91 Transfer/sec: 3.36MB Avg. CPU load: 100% (single thread). 
strace % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 98.72 0.656000 7718 85 6 futex 0.62 0.004110 0 23435 epoll_wait 0.27 0.001764 0 23405 close 0.19 0.001245 0 23403 write 0.09 0.000565 0 46824 fcntl 0.04 0.000292 0 70222 epoll_ctl 0.04 0.000235 0 23412 accept 0.02 0.000160 0 23407 read 0.02 0.000127 0 23407 getsockname 0.00 0.000006 2 4 sched_yield 0.00 0.000000 0 2 pselect6 0.00 0.000000 0 5 epoll_pwait ------ ----------- ----------- --------- --------- ---------------- 100.00 0.664504 257611 6 total graph https://cdn.rawgit.com/orangesoup/09c33197de50615d0527a5088543affd/raw/351666be5c759a7c39780063208a9bc6a48f1a40/flamegraph_go.svg hyper (tokio-core = "=0.1.12"; tokio-io = "=0.1.4") wrk -t2 -c4 -d20s -H "Connection: close" http://127.0.0.1:8080 Running 20s test @ http://127.0.0.1:8080 2 threads and 4 connections Thread Stats Avg Stdev Max +/- Stdev Latency 98.89us 65.99us 4.25ms 92.33% Req/Sec 13.47k 1.21k 15.62k 69.15% 538551 requests in 20.10s, 76.53MB read Requests/sec: 26793.80 Transfer/sec: 3.81MB Avg. CPU load: 100% (as you said, this version is single-threaded). strace % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 43.89 0.008857 0 124116 close 33.18 0.006696 0 124112 writev 5.62 0.001135 0 155298 31183 accept4 4.65 0.000939 0 248230 setsockopt 4.64 0.000937 0 124115 readv 3.21 0.000648 0 93556 epoll_wait 2.83 0.000572 0 124115 epoll_ctl 1.96 0.000396 0 124115 ioctl 0.00 0.000000 0 8 3 read 0.00 0.000000 0 3 write 0.00 0.000000 0 1 open 0.00 0.000000 0 2 fstat 0.00 0.000000 0 1 lseek 0.00 0.000000 0 1 mmap 0.00 0.000000 0 1 munmap ------ ----------- ----------- --------- --------- ---------------- 100.00 0.020180 1117674 31186 total graph https://cdn.rawgit.com/orangesoup/c68ef2fe9cbe0fb43bf01f7b883d8096/raw/0de030d2f0e3465d7466ca9350b8982317b1a322/flamegraph_hyper_old.svg hyper (latest, tokio 0.1.5) wrk -t2 -c4 -d20s -H "Connection: close" http://127.0.0.1:8080 Running 20s test @ http://127.0.0.1:8080 2 threads and 4 connections Thread Stats Avg Stdev Max +/- Stdev Latency 121.06us 105.47us 7.58ms 97.88% Req/Sec 12.00k 1.16k 13.51k 73.38% 480046 requests in 20.10s, 68.21MB read Requests/sec: 23883.08 Transfer/sec: 3.39MB Avg. CPU load: 115%. strace % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 40.55 0.009221 0 112527 close 29.99 0.006820 0 112523 writev 9.18 0.002087 0 225052 epoll_ctl 5.94 0.001350 0 141657 29131 accept4 4.49 0.001021 0 113994 1378 futex 4.08 0.000928 0 225052 setsockopt 3.94 0.000895 0 112526 readv 1.83 0.000417 0 112526 ioctl 0.00 0.000000 0 2 read 0.00 0.000000 0 1 open 0.00 0.000000 0 2 fstat 0.00 0.000000 0 1 lseek 0.00 0.000000 0 1 mmap 0.00 0.000000 0 1 munmap ------ ----------- ----------- --------- --------- ---------------- 100.00 0.022739 1155865 30509 total graph https://cdn.rawgit.com/orangesoup/dedaab5cd10a26ececb80b86e7b3e3aa/raw/9bcd8d92b8669f38cc75d7b7017776845b0d35f4/flamegraph_hyper_latest.svg The flamegraphs suggest that a lot of time is spent in __libc_close... I'm just guessing wildly at this point, but I wonder if something like setting TCP_NODELAY would have any impact... I think I've tried using that option with the pure tokio implementation to see if there's any difference, but didn't change much really. But then again, it might be that I just messed that up totally. 
Is there like a pure tokio "hello world" example that's not an echo server? So I've gone a little deeper now, wrote this little app with mio: extern crate mio; use std::io::{Write}; use std::net::Shutdown; use mio::net::{TcpListener}; use mio::*; const SERVER: Token = Token(0); fn main() { let addr = "127.0.0.1:8080".parse().unwrap(); let server = TcpListener::bind(&addr).unwrap(); let poll = Poll::new().unwrap(); poll.register( &server, SERVER, Ready::readable(), PollOpt::edge(), ).unwrap(); let mut events = Events::with_capacity(1024); loop { poll.poll(&mut events, None).unwrap(); for event in events.iter() { match event.token() { SERVER => { let (mut stream, _) = server.accept().unwrap(); let _ = stream.set_keepalive(None); let _ = stream.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 11\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nhello world"); let _ = stream.shutdown(Shutdown::Read); } _ => unreachable!(), } } } } And this got me to even lower: 22k req/s (although only ~50% cpu usage). But... this example uses edge-triggered epoll. So I've swapped out PollOpt::edge() for PollOpt::empty(), and got... 53k req/s with only 90% avg cpu usage. This is just superb now. I'm guessing under the hood tokio is using edge-triggered epoll, but I honestly really think this should be an available option to set at least in tokio. Probably wouldn't make much sense for hyper since most of its users will serve HTTP 1.1/2 clients with keep-alive. Wow, fantastic find. With that, a case could be made on the tokio repo that the TcpListener should perhaps register on level triggered events, instead of edge... Or at least be an option. Care would be needed that someone is actually accepting the sockets as fast as possible, or else epoll would start to waste CPU alerting to an event that the user isn't ready to handle... @orangesoup you probably get better performance w/ level because the edge version is broken. With edge triggered, you must drain the resource. In your case, you only try accepting one socket. There are probably more sockets waiting to be accepted, but you loop back into poll. So, sockets are pending being accepted while you block in poll.
gharchive/issue
2018-04-19T18:04:27
2025-04-01T04:34:32.475614
{ "authors": [ "carllerche", "orangesoup", "seanmonstar" ], "repo": "hyperium/hyper", "url": "https://github.com/hyperium/hyper/issues/1493", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
173383988
configure ssl Cargo.toml contains the dependencies below:

[dependencies]
hyper="0.9.10"

When I type cargo run in the CLI, the error below shows up:

error: failed to run custom build command for `openssl v0.7.14`

So how do I configure SSL for hyper? Could you give some examples, or should I compile hyper locally and then introduce it into my project, rather than getting hyper with cargo from crates.io? It seems to work, but how do I add an SSL certificate?

If you want to figure out how to get openssl building, look at https://github.com/sfackler/rust-openssl

If you like to disable SSL, you can try this in your Cargo.toml:

[dependencies]
hyper = { version="0.9.10", default-features=false }
gharchive/issue
2016-08-26T07:06:03
2025-04-01T04:34:32.480413
{ "authors": [ "fanyer", "seanmonstar" ], "repo": "hyperium/hyper", "url": "https://github.com/hyperium/hyper/issues/903", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
557460605
Add a blocking hello_world server example. A blocking server example to go along with the client will be helpful. Hello! I'd like to take a shot at this issue. I believe the new example can be placed in the file https://github.com/hyperium/tonic/blob/master/examples/helloworld-tutorial.md. I intend to use futures::executor::block_on as executor, so as to make it blocking in nature. Thanks. @govardhangdg sounds good, you will still need to use the tokio runtime. So for that I would use https://docs.rs/tokio/0.2.13/tokio/runtime/struct.Runtime.html#method.block_on
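To illustrate the suggestion above, here is a rough sketch (my own, hedged example) of driving an async tonic server from a blocking main via tokio's Runtime::block_on. The MyGreeter service and the generated GreeterServer type are assumed to come from the standard tonic helloworld tutorial; this is not the official example, just one way it could look:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build a Tokio runtime explicitly instead of using #[tokio::main],
    // then drive the async server future to completion with block_on.
    let mut rt = tokio::runtime::Runtime::new()?;
    rt.block_on(async {
        let addr = "[::1]:50051".parse()?;
        let greeter = MyGreeter::default();
        tonic::transport::Server::builder()
            .add_service(GreeterServer::new(greeter))
            .serve(addr)
            .await?;
        Ok::<(), Box<dyn std::error::Error>>(())
    })?;
    Ok(())
}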
gharchive/issue
2020-01-30T12:27:02
2025-04-01T04:34:32.482883
{ "authors": [ "LucioFranco", "amrx101", "govardhangdg" ], "repo": "hyperium/tonic", "url": "https://github.com/hyperium/tonic/issues/253", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1946791274
Refactor: Replace Manual Validation with Joi Monika Pull Request (PR)

What feature/issue does this PR add

Replace manual validation with Joi validation for probe data input.

How did you implement / how did you fix it

Change manual validation with Joi validation. Remove the following unknown properties from Symon:

incidentThreshold
locations
notifications
projectId
recoveryThreshold
requests[0].alerts[0].createdAt
requests[0].alerts[0].updatedAt
requests[0].createdAt
requests[0].ipAddress
requests[0].order
requests[0].port
requests[0].probeId
requests[0].protocol
requests[0].updatedAt

How to test

Run Monika with invalid configuration.
Run Monika with valid configuration.
Run Monika in Symon mode.

Screenshot

Before After

Demo Symon & Monika mode

https://github.com/hyperjumptech/monika/assets/15191978/e58145a2-0fbf-44d2-848e-fe98b1f4856a

Could you update Joi to the latest version too?
gharchive/pull-request
2023-10-17T07:58:23
2025-04-01T04:34:32.488985
{ "authors": [ "haricnugraha", "kevinhermawan" ], "repo": "hyperjumptech/monika", "url": "https://github.com/hyperjumptech/monika/pull/1145", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
461513098
Load SPID as text (#85) This PR allows to store SPID as text format and performs conversion to binary format within ercc. Signed-off-by: Marcus Brandenburger bur@zurich.ibm.com you require now goimports as part of builds, whereas goimports is added only later via target 'gotools' in top-level Makefile. Maybe gotools should be a target unconditionally added as dependency to build target in build.mk? well my IDE runs goimports. Also, while it works in simulator mode, i couldn't get it to work for either linkable or unlinkable attestation mode as get always a 400 from IAS (and yes, i did update spid.txt :-). Looking at debug output i also see a printed out SPID which is evidentally wrong (but maybe that's before decoding?) I fixed that by moving the decoding to the decorator for the moment. The problem was, that the decoding was not even called since ERCCstub in ecc also reads from the decorator instead invoking getSPID at ercc. I created an issue for that #100. This will be fixed with refactoring ercc. Actually it was signed-off but now DCO checks runs through.
gharchive/pull-request
2019-06-27T12:55:41
2025-04-01T04:34:32.493588
{ "authors": [ "mbrandenburger" ], "repo": "hyperledger-labs/fabric-private-chaincode", "url": "https://github.com/hyperledger-labs/fabric-private-chaincode/pull/100", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1994862786
test: add k6 credential definition performance scenario Overview Adds K6 performance scenario to measure credential definition endpoint Checklist My PR contains... [x] No code changes (changes to documentation, CI, metadata, etc.) [ ] Bug fixes (non-breaking change which fixes an issue) [ ] Improvements (misc. changes to existing features) [ ] Features (non-breaking change which adds functionality) My changes... [ ] are breaking changes [x] are not breaking changes [ ] If yes to above: I have updated the documentation accordingly Documentation [x] My changes do not require a change to the project documentation [ ] My changes require a change to the project documentation [ ] If yes to above: I have updated the documentation accordingly Tests [x] My changes can not or do not need to be tested [ ] My changes can and should be tested by unit and/or integration tests [ ] If yes to above: I have added tests to cover my changes [ ] If yes to above: I have taken care to cover edge cases in my tests Linter failure is OK, we can't fix it. I've picked up bringing this work into the current release in this new PR https://github.com/hyperledger-labs/open-enterprise-agent/pull/865
gharchive/pull-request
2023-11-15T14:14:17
2025-04-01T04:34:32.511439
{ "authors": [ "antonbaliasnikov", "davidpoltorak-io" ], "repo": "hyperledger-labs/open-enterprise-agent", "url": "https://github.com/hyperledger-labs/open-enterprise-agent/pull/787", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2047941125
Issuing credentials with optional values In Indy SDK it was allowed to issue a credential based on a credential definition and omitting certain fields from the credential, thus making those values optional. You could then use attribute markers in a proof request to make sure the value is defined. This feature seems to have gotten lost in AnonCreds RS and will throw an error "AnoncredsError: Invalid state: Credential attribute 'age' value not provided". It seems like it's a valid flow to issue a credential with optional fields, and currently you'd have to use an empty string or something to be able to allow for this (but this is of course not very clean). Not sure if this got lost in the move from indy-sdk -> indy-credx, or indy-credx -> anoncreds-rs? (@andrewwhitehead @berendsliedrecht) This was updated in anoncreds-clsignatures (https://github.com/hyperledger/anoncreds-clsignatures-rs/pull/21) as credentials with missing attributes were not considered valid when it came to presentations, but they could still be issued (and processed) without an error being raised. When signing the messages there isn't any special handling for omitted messages, and I believe they would effectively be mapped to the zero scalar. Given that an integer zero is mapped to the same value, this seems ripe for abuse. @andrewwhitehead could you elaborate on why this would be abusable? If this would allow for incorrect verification, we definitely should not add this back. Would the abuse just be that if some wants to check whether your age is non-negative (for some reason...) and the age property was not issued, it will verify anyways? To illustrate a use case, take for instance the W3C Citizenship vocabulary that supports a permanent resident credential with "minimal" set of attributes and a "full" set of attributes such as maritalStatus, marriageCertificateNumber, and marriageLocation. marriageCertificateNumber and marriageLocation would be optional if the person is unmarried. I realize AnonCreds and W3C VCDM are different animals but it's likely that implementers may have similar predicaments when working with AnonCreds implementations. Is it still reasonable to expect defining schema variants for this type of scenario?
gharchive/issue
2023-12-19T05:18:45
2025-04-01T04:34:32.515979
{ "authors": [ "TimoGlastra", "andrewwhitehead", "berendsliedrecht", "jorgefl0" ], "repo": "hyperledger/anoncreds-rs", "url": "https://github.com/hyperledger/anoncreds-rs/issues/290", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
734912269
Failing hive consensus tests There are currently 258 hive consensus tests failing as shown here https://hivetests.ethdevops.io/?page=v-pills-results-tab&suite=1604262811-5fefe26d48ca5648989132f32b1bc576.json Closed for latest hive
gharchive/issue
2020-11-02T23:41:54
2025-04-01T04:34:32.528183
{ "authors": [ "davemec", "non-fungible-nelson" ], "repo": "hyperledger/besu", "url": "https://github.com/hyperledger/besu/issues/1520", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1388286542
Warn: --tx-pool-future-max-by-account has been deprecated Mainnet v22.7.4 Checkpoint sync Bonsai Paired with Lighthouse On startup 2022-09-27 12:12:17.399+02:00 | main | WARN | Besu | --tx-pool-future-max-by-account has been deprecated, use --tx-pool-limit-by-account-percentage instead. This is the correct behavior. We intend to deprecate this flag.
gharchive/issue
2022-09-27T19:54:46
2025-04-01T04:34:32.529644
{ "authors": [ "estensen", "non-fungible-nelson" ], "repo": "hyperledger/besu", "url": "https://github.com/hyperledger/besu/issues/4452", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2064241887
Under heavy load eth_estimateGas returns INTERNAL_ERROR Description Running a moderately high workload using caliper, after ~ 2000 transactions eth_estimateGas returns INTERNAL_ERROR to the caliper WS client Command that was run to start Besu (single validator): besu --data-path=data --genesis-file=../genesis.json --min-gas-price=0 --sync-mode=FULL --rpc-http-enabled --rpc-ws-enabled --rpc-http-apis=DEBUG,TXPOOL,WEB3,ETH,TRACE,QBFT,ADMIN --tx-pool-limit-by-account-percentage=0.20 --rpc-ws-max-active-connections=300 --tx-pool=sequenced -l TRACE Logs I've pinned it down to this code block: final BlockHeader blockHeader = blockHeader(); if (blockHeader == null) { return errorResponse(requestContext, RpcErrorType.INTERNAL_ERROR); } https://github.com/hyperledger/besu/blob/e0cd89f9b5ca27763ba4b7b2589d486df81a9626/ethereum/api/src/main/java/org/hyperledger/besu/ethereum/api/jsonrpc/internal/methods/EthEstimateGas.java#L53 It was difficult to pin down the offending line using TRACE but adding a new Exception().printStackTrace() to the JsonRpcErrorResponse(...) constructor gave the stack trace: java.lang.Exception at org.hyperledger.besu.ethereum.api.jsonrpc.internal.response.JsonRpcErrorResponse.<init>(JsonRpcErrorResponse.java:47) at org.hyperledger.besu.ethereum.api.jsonrpc.internal.methods.AbstractEstimateGas.errorResponse(AbstractEstimateGas.java:135) at org.hyperledger.besu.ethereum.api.jsonrpc.internal.methods.AbstractEstimateGas.errorResponse(AbstractEstimateGas.java:130) at org.hyperledger.besu.ethereum.api.jsonrpc.internal.methods.EthEstimateGas.response(EthEstimateGas.java:53) at org.hyperledger.besu.ethereum.api.jsonrpc.execution.BaseJsonRpcProcessor.process(BaseJsonRpcProcessor.java:44) at org.hyperledger.besu.ethereum.api.jsonrpc.execution.JsonRpcExecutor.execute(JsonRpcExecutor.java:92) at org.hyperledger.besu.ethereum.api.jsonrpc.websocket.WebSocketMessageHandler.lambda$handle$1(WebSocketMessageHandler.java:90) at io.vertx.core.impl.ContextBase.lambda$null$0(ContextBase.java:137) at io.vertx.core.impl.ContextInternal.dispatch(ContextInternal.java:264) at io.vertx.core.impl.ContextBase.lambda$executeBlocking$1(ContextBase.java:135) at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:833) Once I've got a fix in place I might add some additional TRACE logs in EthEstimateGas wherever an INTERNAL_ERROR is returned, to make diagnosing similar issues a little quicker. Looks to be the same issue as https://github.com/hyperledger/besu/pull/6143
gharchive/issue
2024-01-03T15:39:36
2025-04-01T04:34:32.533727
{ "authors": [ "matthew1001" ], "repo": "hyperledger/besu", "url": "https://github.com/hyperledger/besu/issues/6344", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
513065545
[Docs] RPC port numbers It seems there are no docs for the port numbers of RPCs such as web3, grpc, etc. Also, I'm not sure if they are configurable or not.

We'd love a quick PR on this if you were able. The ports are, indeed, configurable, via the config file you use. We dump defaults into the default config file provided when burrow config is run. If you'd like to see the defaults quickly in your terminal you can run burrow spec -f1 | burrow configure -s-

I would like to work on this if nobody has started working on this.

Please go ahead @deepakchethan!
gharchive/issue
2019-10-28T02:30:33
2025-04-01T04:34:32.535899
{ "authors": [ "compleatang", "conanoc", "deepakchethan", "gregdhill" ], "repo": "hyperledger/burrow", "url": "https://github.com/hyperledger/burrow/issues/1301", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2144401077
refactor(gui): gui framework change from solid to react changed gui framework to react @petermetz Yes, the point of changing is that React is more common among people. It will be easier to develop the package for newcomers too because imo React has that advantage over Solid it is more documented/supported and more people tend to know it.
gharchive/pull-request
2024-02-20T13:36:02
2025-04-01T04:34:32.536986
{ "authors": [ "rwat17" ], "repo": "hyperledger/cacti", "url": "https://github.com/hyperledger/cacti/pull/3028", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2488124184
feat(core-api): add createIsJwsGeneralTypeGuard, createAjvTypeGuard createAjvTypeGuard() is the lower level utility which can be used to construct the more convenient, higher level type predicates/type guards such as createIsJwsGeneralTypeGuard() which uses createAjvTypeGuard under the hood. This commit is also meant to be establishing a larger, more generic pattern of us being able to create type guards out of the Open API specs in a convenient way instead of having to write the validation code by hand. An example usage of the new createAjvTypeGuard() utility is the createIsJwsGeneralTypeGuard() function itself. An example usage of the new createIsJwsGeneralTypeGuard() can be found in packages/cactus-plugin-consortium-manual/src/main/typescript/plugin-consortium-manual.ts The code documentation contains examples as well for maximum discoverabilty and I'll also include it here: import { JWSGeneral } from "@hyperledger/cactus-core-api"; import { createIsJwsGeneralTypeGuard } from "@hyperledger/cactus-core-api"; export class PluginConsortiumManual { private readonly isJwsGeneral: (x: unknown) => x is JWSGeneral; constructor() { // Creating the type-guard function is relatively costly due to the Ajv schema // compilation that needs to happen as part of it so it is good practice to // cache the type-guard function as much as possible, for examle by adding it // as a class member on a long-lived object such as a plugin instance which is // expected to match the life-cycle of the API server NodeJS process itself. // The specific anti-pattern would be to create a new type-guard function // for each request received by a plugin as this would affect performance // negatively. this.isJwsGeneral = createIsJwsGeneralTypeGuard(); } public async getNodeJws(): Promise<JWSGeneral> { // rest of the implementation that produces a JWS ... const jws = await joseGeneralSign.sign(); if (!this.isJwsGeneral(jws)) { throw new TypeError("Jose GeneralSign.sign() gave non-JWSGeneral type"); } return jws; } } Relevant discussion took place here: https://github.com/hyperledger/cacti/pull/3471#discussion_r1731894747 Signed-off-by: Peter Somogyvari peter.somogyvari@accenture.com Pull Request Requirements [x] Rebased onto upstream/main branch and squashed into single commit to help maintainers review it more efficient and to avoid spaghetti git commit graphs that obfuscate which commit did exactly what change, when and, why. [x] Have git sign off at the end of commit message to avoid being marked red. You can add -s flag when using git commit command. You may refer to this link for more information. [x] Follow the Commit Linting specification. You may refer to this link for more information. Character Limit [x] Pull Request Title and Commit Subject must not exceed 72 characters (including spaces and special characters). [x] Commit Message per line must not exceed 80 characters (including spaces and special characters). A Must Read for Beginners For rebasing and squashing, here's a must read guide for beginners. cc: @RafaelAPB @eduv09 Please take a look at how the casting to JWSGeneral can be eliminated once this PR gets merged (look at the diff for the file packages/cactus-plugin-consortium-manual/src/main/typescript/plugin-consortium-manual.ts in this PR to see what I mean exactly) The cast LGTM, thanks @petermetz. @RafaelAPB You got it! Any other casting/error handling related stuff you need help with just let me know! 
It is important for production deployments to have these nailed down and I want to help as much as I can!
gharchive/pull-request
2024-08-27T02:26:12
2025-04-01T04:34:32.545453
{ "authors": [ "petermetz" ], "repo": "hyperledger/cacti", "url": "https://github.com/hyperledger/cacti/pull/3494", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1054705287
Fix generator integration test Downgrade to v3 Yeoman generator in dependencies Downgrade to v3 Yeoman generator in integration test Note: Yeoman v4 requires Node.js v14 LTS, bump back versions once Node version is also bumped. Signed-off-by: Attila Klenik a.klenik@gmail.com Well, the generator parts work fine, despite the bogus Fabric network init (#1177 ) addresses #1157
gharchive/pull-request
2021-11-16T10:26:12
2025-04-01T04:34:32.547560
{ "authors": [ "aklenik", "davidkel" ], "repo": "hyperledger/caliper", "url": "https://github.com/hyperledger/caliper/pull/1176", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
604627298
FAB-17777 Create basic settings.yaml This represents the current settings of this repo. Signed-off-by: Ry Jones ry@linux.com /azp run
gharchive/pull-request
2020-04-22T10:04:41
2025-04-01T04:34:32.548841
{ "authors": [ "btl5037", "ryjones" ], "repo": "hyperledger/fabric-chaincode-go", "url": "https://github.com/hyperledger/fabric-chaincode-go/pull/24", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
235510164
master Description Motivation and Context Fixes #

How Has This Been Tested?

Checklist:

[ ] I have added a Signed-off-by.
[ ] I have either added documentation to cover my changes or this change requires no new documentation.
[ ] I have either added unit tests to cover my changes or this change requires no new tests.
[ ] I have run golint and have fixed valid warnings in code I have added or modified. This tool generates false positives so you may choose to ignore some warnings. The goal is clean, consistent, and readable code.

Signed-off-by:

Thank you for your contribution! This is a read only mirror, however. Please submit your change using gerrit. You would need to check out the repo here: https://gerrit.hyperledger.org/r/#/admin/projects/fabric and make your edits, pushing them to gerrit. Here is a walkthrough for zephyrproject which is much the same WRT setting up an LFID, adding ssh keys, etc: https://www.zephyrproject.org/doc/1.1.0/collaboration/code/gerrit_accounts.html Feel free to ask on rocket.chat - https://chat.hyperledger.org/ - discuss in #fabric. One note: when you set up your LFID do not use social logins the first time, create the account and you can add social logins later.
gharchive/pull-request
2017-06-13T10:37:00
2025-04-01T04:34:32.553758
{ "authors": [ "aditya095", "rjones-lf" ], "repo": "hyperledger/fabric", "url": "https://github.com/hyperledger/fabric/pull/56", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2623781236
docs: adding Pluto readme https://github.com/hyperledger/identus-edge-agent-sdk-ts/issues/260

Starting a README for Pluto, trying to better explain the options available.

Pull Request Test Coverage Report for Build 11592275437

Details

1 of 1 (100.0%) changed or added relevant line in 1 file are covered.
3 unchanged lines in 1 file lost coverage.
Overall coverage decreased (-0.06%) to 72.788%

Files with Coverage Reduction (New Missed Lines, %):
src/domain/utils/DER.ts: 3, 35.14%

Totals

Change from base Build 11556271983: -0.06%
Covered Lines: 3434
Relevant Lines: 4514

💛 - Coveralls
gharchive/pull-request
2024-10-30T11:28:38
2025-04-01T04:34:32.560329
{ "authors": [ "coveralls", "curtis-h" ], "repo": "hyperledger/identus-edge-agent-sdk-ts", "url": "https://github.com/hyperledger/identus-edge-agent-sdk-ts/pull/314", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1110527872
Inconsistent versioning of non-release python packages cause resolution issues for newer versions of pip Example message from pip (for indy-plenum): WARNING: Requested python3-indy==1.15.0-dev-1625 from https://files.pythonhosted.org/packages/a0/78/74c1d6206c1eae93f677abd6ff03bd6357412716615f0df6858bd655b6a1/python3-indy-1.15.0-dev-1625.tar.gz#sha256=821729e796a47fd591520c8c73fefc91dce7bfc9f784c6702ca9d03d82f2e76a (from indy-plenum==1.13.0.dev0), but installing version 1.15.0 WARNING: Discarding https://files.pythonhosted.org/packages/a0/78/74c1d6206c1eae93f677abd6ff03bd6357412716615f0df6858bd655b6a1/python3-indy-1.15.0-dev-1625.tar.gz#sha256=821729e796a47fd591520c8c73fefc91dce7bfc9f784c6702ca9d03d82f2e76a (from https://pypi.org/simple/python3-indy/). Requested python3-indy==1.15.0-dev-1625 from https://files.pythonhosted.org/packages/a0/78/74c1d6206c1eae93f677abd6ff03bd6357412716615f0df6858bd655b6a1/python3-indy-1.15.0-dev-1625.tar.gz#sha256=821729e796a47fd591520c8c73fefc91dce7bfc9f784c6702ca9d03d82f2e76a (from indy-plenum==1.13.0.dev0) has inconsistent version: filename has '1.15.0.dev1625', but metadata has '1.15.0' ERROR: Could not find a version that satisfies the requirement python3-indy==1.15.0-dev-1625 (from indy-plenum[tests]) (from versions: ... ... 1.12.0.dev1370, 1.12.0.dev1371, 1.12.0.dev1373, 1.12.0.dev1379, 1.12.0.dev1382, 1.12.0.dev1386, 1.12.0rc95, 1.12.0rc96, 1.12.0, 1.13.0.dev1387, 1.13.0.dev1389, 1.13.0.dev1390, 1.13.0.dev1391, 1.13.0.dev1396, 1.13.0.dev1397, 1.13.0.dev1400, 1.13.0.dev1402, 1.13.0.dev1404, 1.13.0.dev1414, 1.13.0.dev1415, 1.13.0.dev1420, 1.13.0.dev1423, 1.13.0rc111, 1.13.0, 1.14.0.dev1424, 1.14.0rc117, 1.14.0, 1.14.1.dev1425, 1.14.1.dev1427, 1.14.1.dev1432, 1.14.1.dev1433, 1.14.1.dev1437, 1.14.1.dev1440, 1.14.1.dev1450, 1.14.1.dev1454, 1.14.1.dev1459, 1.14.1.dev1467, 1.14.1rc120, 1.14.1, 1.14.2.dev1496, 1.14.2.dev1498, 1.14.2.dev1499, 1.14.2.dev1500, 1.14.2.dev1504, 1.14.2.dev1507, 1.14.2.dev1510, 1.14.2.dev1523, 1.14.2.dev1524, 1.14.2rc123, 1.14.2, 1.14.3rc124, 1.14.3rc127, 1.14.3, 1.14.4rc130, 1.14.4rc131, 1.14.4rc135, 1.14.4rc137, 1.14.4rc138, 1.14.4rc139, 1.15.0.dev1528, 1.15.0.dev1532, 1.15.0.dev1533, 1.15.0.dev1535, 1.15.0.dev1536, 1.15.0.dev1541, 1.15.0.dev1542, 1.15.0.dev1543, 1.15.0.dev1544, 1.15.0.dev1545, 1.15.0.dev1546, 1.15.0.dev1547, 1.15.0.dev1548, 1.15.0.dev1549, 1.15.0.dev1551, 1.15.0.dev1552, 1.15.0.dev1553, 1.15.0.dev1554, 1.15.0.dev1555, 1.15.0.dev1557, 1.15.0.dev1558, 1.15.0.dev1560, 1.15.0.dev1563, 1.15.0.dev1565, 1.15.0.dev1567, 1.15.0.dev1568, 1.15.0.dev1595, 1.15.0.dev1597, 1.15.0.dev1604, 1.15.0.dev1605, 1.15.0.dev1606, 1.15.0.dev1607, 1.15.0.dev1608, 1.15.0.dev1609, 1.15.0.dev1618, 1.15.0.dev1624, 1.15.0.dev1625, 1.15.0.dev1626, 1.15.0.dev1627, 1.15.0.dev1628, 1.15.0.dev1629, 1.15.0rc144, 1.15.0, 1.15.0.post17, 1.15.0.post31, 1.15.0.post37, 1.16.0.dev1631, 1.16.0.dev1632, 1.16.0.dev1633, 1.16.0.dev1634, 1.16.0.dev1636, 1.16.0.dev1638, 1.16.0rc161, 1.16.0rc162, 1.16.0rc170, 1.16.0rc172, 1.16.0, 1.16.0.post40, 1.16.0.post47, 1.16.0.post51, 1.16.0.post54, 1.16.0.post56, 1.16.0.post59, 1.16.0.post60, 1.16.0.post64, 1.16.0.post80) ERROR: No matching distribution found for python3-indy==1.15.0-dev-1625 This error is caused by an inconsistency in the package version and the version listed in the package's setup.py. 
The package version is set correctly by the build: However the version in setup.py is not updated with the matching version number: The build process and/or code needs to be updated to set the code's version numbers consistently. indy-plenum uses more consistent approach to versioning (though more complicated). indy-node uses the same approach as indy-plenum. I think the problem lies here in L936: https://github.com/hyperledger/indy-sdk/blob/a1095be324d4fd6e678fdcb73476ee6d5130ba86/.github/workflows/main.yml#L934-L937 The Version is set via the environment variable PACKAGE_VERSION. But on the enduser side this variable isn't set and the version is set to the fallback from: https://github.com/hyperledger/indy-sdk/blob/a1095be324d4fd6e678fdcb73476ee6d5130ba86/wrappers/python/setup.py#L4 Is it possible to store the env.publish version variable in a file and in the setup.py read the version from that file? The only problem i see with that is, that the version is set by the action but the file in the repository if tracked at all would be out of date. The new packages to fix this issue start here: https://pypi.org/project/python3-indy/1.16.0.post220/ https://pypi.org/project/python3-indy/1.16.0.dev1651/
gharchive/issue
2022-01-21T14:26:08
2025-04-01T04:34:32.570336
{ "authors": [ "WadeBarnes", "pSchlarb" ], "repo": "hyperledger/indy-sdk", "url": "https://github.com/hyperledger/indy-sdk/issues/2473", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
293819651
Fix examples, add valid keys case Fix comma issue as described in https://github.com/hyperledger/iroha/issues/779 Replace ... with valid keys Any progress on that?
gharchive/pull-request
2018-02-02T09:10:56
2025-04-01T04:34:32.571850
{ "authors": [ "grimadas", "l4l" ], "repo": "hyperledger/iroha-api", "url": "https://github.com/hyperledger/iroha-api/pull/56", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1961603830
2024 update calendar Adding updates for 2024 @tkuhrt plz review @tkuhrt if this is ready, go ahead and merge
gharchive/pull-request
2023-10-25T14:44:48
2025-04-01T04:34:32.572939
{ "authors": [ "ryjones" ], "repo": "hyperledger/toc", "url": "https://github.com/hyperledger/toc/pull/176", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
308442226
VisNetworkService fails after NgOnDestroy

[x] I've read, understood, and done my best to follow the *CONTRIBUTING guidelines.

What did you do?

I'm trying to use the getConnectedNodes method through the visNetworkService.

What happened instead?

This only happens when you change routes in Angular through the router-outlet and then go back to the network view page; I get the error below. On the first click it fails, then when I click again it suddenly works. Any idea?

Code:

html =>

<div #treeview id="mynetwork" class="network-canvas h-100 d-block" [visNetwork]="visNetwork" [visNetworkData]="visNetworkData" [visNetworkOptions]="visNetworkOptions" (initialized)="networkInitialized()"></div>

typescript =>

public networkInitialized(): void {
    this.visNetwork = 'mynetwork';
    this.visNetworkService.on(this.visNetwork, 'click');
    this.visNetworkService.click
      .subscribe((eventData: PropertiesExt[]) => {
        const selectedNode: string = this.setupService.getSelectedNode(eventData);
        const connectedNodes = this.visNetworkService.getConnectedNodes(this.visNetwork, selectedNode);
      });
}

public ngOnDestroy(): void {
    this.visNetworkService.destroy(this.visNetwork);
    this.visNetwork = null;
}

Your Environment

Angular CLI: 1.6.5
Node: 8.9.1
OS: win32 x64
Angular: 5.2.8
ngx-vis: 0.2.0

close due to no response
gharchive/issue
2018-03-26T06:00:35
2025-04-01T04:34:32.647206
{ "authors": [ "hypery2k", "sarahm7" ], "repo": "hypery2k/ngx-vis", "url": "https://github.com/hypery2k/ngx-vis/issues/77", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1888578357
Mouse jumps to center of monitor on click while playing Eve. Hyprland Version Hyprland, built from branch main at commit 664827473583f8e986f9fb2a37a13e9b3a232cc2 dirty (fix: focusWindow on hidden workspace triggers another focusWindow. (3216)). Tag: v0.29.1-33-g66482747 Bug or Regression? Bug Description Hello, Bit of a weird bug. Whenever I click on anything in Eve Online my mouse will immediately jump to the center of the current monitor. How to reproduce Download Eve and try to click something. Crash reports, logs, images, videos https://github.com/hyprwm/Hyprland/assets/110136044/d221aa18-6284-481d-b148-6c670195cb2f Seems like another case of https://github.com/hyprwm/Hyprland/issues/3190. You could either try the patch there or revert https://github.com/hyprwm/Hyprland/commit/28a90d6055f7b616c611c839967765f6536a7cd9 until this is fixed.
gharchive/issue
2023-09-09T04:59:03
2025-04-01T04:34:32.721757
{ "authors": [ "Eckstrom13", "tchofy" ], "repo": "hyprwm/Hyprland", "url": "https://github.com/hyprwm/Hyprland/issues/3222", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2177372978
Unable to update hyprpm Hyprland Version System/Version info <Paste the output of the command here> Bug or Regression? Bug Description I wanted to get static tiling on Hyprland using a plugin, following the install guide. hyprpm update output: ! Cloning https://github.com/hyprwm/hyprland, this might take a moment. ✔ cloned ✔ checked out to running ver ! configuring Hyprland ✔ configured Hyprland ✖ failed to install headers with error code 2 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5 / 5 Failed ✖ Headers missing. Please run hyprpm update to fix those. How to reproduce hyprpm update Crash reports, logs, images, videos hyprland.log Also, I got Hyprland from pacman if that helps. you're most likely missing deps, please rtfm: https://wiki.hyprland.org/Plugins/Using-Plugins/#hyprpm I do have the deps. really? post the output with -v I did have the dependencies; doing hyprpm update gives me an error about headers missing, but after doing hyprpm update -v it works now. Thanks.
gharchive/issue
2024-03-09T19:38:30
2025-04-01T04:34:32.727737
{ "authors": [ "Z0achary", "vaxerski" ], "repo": "hyprwm/Hyprland", "url": "https://github.com/hyprwm/Hyprland/issues/5046", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2192715341
Monitors Use 60Hz Refresh Rate Despite Configuration Hyprland Version System/Version info Hyprland, built from branch HEAD at commit 84ab8d11e8951a6551d1e1bf87796a8589da6d47 (props: bump ver to 0.35.0). Date: Mon Feb 5 01:59:02 2024 Tag: v0.35.0 flags: (if any) System Information: System name: Linux Node name: rfad Release: 6.7.9-arch1-1 Version: #1 SMP PREEMPT_DYNAMIC Fri, 08 Mar 2024 01:59:01 +0000 GPU information: 07:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2080] [10de:1e82] (rev a1) (prog-if 00 [VGA controller]) os-release: NAME="Arch Linux" PRETTY_NAME="Arch Linux" ID=arch BUILD_ID=rolling ANSI_COLOR="38;2;23;147;209" HOME_URL="https://archlinux.org/" DOCUMENTATION_URL="https://wiki.archlinux.org/" SUPPORT_URL="https://bbs.archlinux.org/" BUG_REPORT_URL="https://gitlab.archlinux.org/groups/archlinux/-/issues" PRIVACY_POLICY_URL="https://terms.archlinux.org/docs/privacy-policy/" LOGO=archlinux-logo plugins: Bug or Regression? Bug Description In my hyprland config file, I have specified the following values:
monitor=DP-3, 3840x2160@120, 2560x0, 1
monitor=DP-2, 2560x1440@120, 0x0, 1
However, hyprctl still seems to be reporting the monitors as operating around 60Hz: Monitor DP-2 (ID 0): 2560x1440@59.95100 at 0x0 description: Dell Inc. S2719DGF GR3P7P2 (DP-2) make: Dell Inc. model: S2719DGF serial: GR3P7P2 active workspace: 2 (2) special workspace: 0 () reserved: 0 30 0 0 scale: 1.00 transform: 0 focused: no dpmsStatus: 1 vrr: 0 activelyTearing: false Monitor DP-3 (ID 1): 3840x2160@60.00000 at 2560x0 description: LG Electronics LG ULTRAFINE 312NTNHAE479 (DP-3) make: LG Electronics model: LG ULTRAFINE serial: 312NTNHAE479 active workspace: 1 (1) special workspace: 0 () reserved: 0 30 0 0 scale: 1.00 transform: 0 focused: yes dpmsStatus: 1 vrr: 0 activelyTearing: false Why might this be? I recently changed my monitor setup, but am still using two DP cables. Before that change, both monitors were registered at the refresh rate that I desired. How to reproduce Start up a Hyprland session; hyprctl reports the monitors at 60Hz. Crash reports, logs, images, videos No response. for the love of god update, this is really outdated hyprland. At any rate, this is not a hyprland bug. The driver reports what modes a CRTC supports, and we adhere to that. If you update, it is likely that your changes to the setup have made the driver consider the 120Hz configuration invalid.
gharchive/issue
2024-03-18T16:37:44
2025-04-01T04:34:32.731538
{ "authors": [ "lhearachel", "vaxerski" ], "repo": "hyprwm/Hyprland", "url": "https://github.com/hyprwm/Hyprland/issues/5158", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2235728650
crash after screen unlock and turning on dpms Hyprland Version System/Version info Hyprland, built from branch at commit 303b9956b2ae15508b09dffae602550ca17e6539 (). Date: 2024-04-10 Tag: , commits: @COMMITS@ flags: (if any) System Information: System name: Linux Release: 6.8.4 GPU information: 0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 31 [Radeon RX 7900 XT/7900 XTX/7900M] [1002:744c] (rev c8) (prog-if 00 [VGA controller]) os-release: NAME=NixOS PRETTY_NAME="NixOS 24.05 (Uakari)" VERSION="24.05 (Uakari)" VERSION_CODENAME=uakari VERSION_ID="24.05" plugins: Bug or Regression? Bug Description After the screen is unlocked (through hyprlock) and dpms is turned back on, randomly firefox (and emacs) scaling looks completely broken. Afterwards it leads to an immediate crash, in other cases the crash is delayed or it doesn't happen at all. Log below shows: [ERR] BUG THIS: No CWLSurface for surface in damageSurface!!! How to reproduce I can eventually replicate the issue by running loginctl lock-session; sleep 5; hyprctl dispatch dpms off; sleep 15; hyprctl dispatch dpms on; sleep 5; kill -SIGUSR1 $(pidof hyprlock) a bunch of times. Crash reports, logs, images, videos hyprlandCrashReport506900.txt Another crash report that is different. hyprlandCrashReport587062.txt can you try systemd's coredumpctl debug Hyprland with bt -full? the stacktraces kinda poop The backtrace is useless, just prints "no symbol table info available" for all threads. I'll run a debug build and try to reproduce. Unlocked the screen, emacs scaling was fucked, closed it, opened rofi and crash. hyprlandCrashReport52152.txt #0 0x00007f96270a2efc in __pthread_kill_implementation () from /nix/store/ddwyrxif62r8n6xclvskjyy6szdhvj60-glibc-2.39-5/lib/libc.so.6 No symbol table info available. #1 0x00007f9627052e86 in raise () from /nix/store/ddwyrxif62r8n6xclvskjyy6szdhvj60-glibc-2.39-5/lib/libc.so.6 No symbol table info available. #2 0x00007f962703b935 in abort () from /nix/store/ddwyrxif62r8n6xclvskjyy6szdhvj60-glibc-2.39-5/lib/libc.so.6 No symbol table info available. #3 0x00000000006dc57e in handleUnrecoverableSignal(int) () No symbol table info available. #4 <signal handler called> No symbol table info available. #5 0x00000000020250d8 in ?? () No symbol table info available. #6 0x00007f9627dd1a0c in wl_signal_emit_mutable () from /nix/store/blw10rx1cayp2n2pkmyihpipifzgj2xq-wayland-1.22.0/lib/libwayland-server.so.0 No symbol table info available. #7 0x00007f9627d14393 in output_bind () from /nix/store/sgavsg936bdg00cs3lg011ydxinjiv04-wlroots-hyprland-2024-03-09_50eae51/lib/libwlroots.so.13 No symbol table info available. #8 0x00007f96274d1052 in ffi_call_unix64 () from /nix/store/f8ipgi6l1n1c0wr1r5aj40phnd6fkmv8-libffi-3.4.6/lib/libffi.so.8 No symbol table info available. #9 0x00007f96274ceee5 in ffi_call_int () from /nix/store/f8ipgi6l1n1c0wr1r5aj40phnd6fkmv8-libffi-3.4.6/lib/libffi.so.8 No symbol table info available. #10 0x00007f96274cfad8 in ffi_call () from /nix/store/f8ipgi6l1n1c0wr1r5aj40phnd6fkmv8-libffi-3.4.6/lib/libffi.so.8 No symbol table info available. #11 0x00007f9627dd5841 in wl_closure_invoke () from /nix/store/blw10rx1cayp2n2pkmyihpipifzgj2xq-wayland-1.22.0/lib/libwayland-server.so.0 No symbol table info available. #12 0x00007f9627dd0c4b in wl_client_connection_data () from /nix/store/blw10rx1cayp2n2pkmyihpipifzgj2xq-wayland-1.22.0/lib/libwayland-server.so.0 No symbol table info available. 
#13 0x00007f9627dd38f2 in wl_event_loop_dispatch () from /nix/store/blw10rx1cayp2n2pkmyihpipifzgj2xq-wayland-1.22.0/lib/libwayland-server.so.0 No symbol table info available. #14 0x00007f9627dd1455 in wl_display_run () from /nix/store/blw10rx1cayp2n2pkmyihpipifzgj2xq-wayland-1.22.0/lib/libwayland-server.so.0 No symbol table info available. #15 0x000000000090aa7a in CEventLoopManager::enterLoop(wl_display*, wl_event_loop*) () No symbol table info available. #16 0x00000000006e1b8f in CCompositor::startCompositor() () No symbol table info available. #17 0x00000000008bd640 in main () No symbol table info available. After a bisect session this is the commit introducing the problem: c3882bb83240b602277f2d22f21d71690531f62e No idea what's happening here. I don't have any hyprcursor theme, just using xcursor. I tried building Hyprland with latest hyprcursor, setting an hyprcursor theme, not setting any xcursor theme. In all cases was able to replicate. Maybe ASan will show something interesting? https://wiki.hyprland.org/Crashes-and-Bugs/#building-the-wayland-stack-with-asan Asan log seems pretty useless 😥 ================================================================= ==207007==ERROR: AddressSanitizer: SEGV on unknown address 0x617000051798 (pc 0x617000051798 bp 0x7ffffffed160 sp 0x7ffffffed058 T0) ==207007==The signal is caused by a READ memory access. ==207007==Hint: PC is at a non-executable region. Maybe a wild jump? #0 0x617000051798 (<unknown module>) #1 0x7ffff7544c80 in output_bind ../types/output/output.c:126 #2 0xe770d2 in registry_bind ../src/wayland-server.c:1023 #3 0x7ffff6db8051 in ffi_call_unix64 (/nix/store/f8ipgi6l1n1c0wr1r5aj40phnd6fkmv8-libffi-3.4.6/lib/libffi.so.8+0xa051) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV (<unknown module>) ==207007==ABORTING This is what it looks like after unlocking, the browser window just gets blurred (it happens in emacs as well). It seems that the issue isn't related to hyprcursor though, this just happened on 0.36.0 although it didn't lead to a crash and it's way less prevalent (been running it for some days now without triggering). Also in the case of firefox the blurriness fixed itself after I interacted with the window. Any idea about which part of the code could be related to this? I'm using fractional scaling in case that's relevant. Nevermind, it did crash eventually: #0 | /nix/store/fx7plpzxdnbffm97b84mw6qc6m1d2irf-hyprland-0.36.0+date=2024-04-14_ff39ac1/bin/.Hyprland-wrapped_(_Z12getBacktracev+0x48) [0x5788f8] getBacktrace() ??:? #1 | /nix/store/fx7plpzxdnbffm97b84mw6qc6m1d2irf-hyprland-0.36.0+date=2024-04-14_ff39ac1/bin/.Hyprland-wrapped_(_ZN13CrashReporter18createAndSaveCrashEi+0x6b5) [0x52b7f5] CrashReporter::createAndSaveCrash(int) ??:? #2 | /nix/store/fx7plpzxdnbffm97b84mw6qc6m1d2irf-hyprland-0.36.0+date=2024-04-14_ff39ac1/bin/.Hyprland-wrapped_(_Z25handleUnrecoverableSignali+0x44) [0x4a2fb4] handleUnrecoverableSignal(int) ??:? #3 | /nix/store/ddwyrxif62r8n6xclvskjyy6szdhvj60-glibc-2.39-5/lib/libc.so.6(+0x3ff30) [0x7fd61c252f30] ?? ??:0 #4 | /nix/store/blw10rx1cayp2n2pkmyihpipifzgj2xq-wayland-1.22.0/lib/libwayland-server.so.0(wl_list_insert+0x14) [0x7fd61cfce494] ?? ??:0 #5 | /nix/store/blw10rx1cayp2n2pkmyihpipifzgj2xq-wayland-1.22.0/lib/libwayland-server.so.0(wl_signal_emit_mutable+0x39) [0x7fd61cfc99c9] ?? ??:0 #6 | /nix/store/h147jd0fh2qyg2k24cxf6z5av7jx9y9y-wlroots-hyprland-2024-02-21_0cb091f/lib/libwlroots.so.13(wlr_output_schedule_frame+0x9) [0x7fd61cf0cc59] ?? 
??:0 #7 | /nix/store/fx7plpzxdnbffm97b84mw6qc6m1d2irf-hyprland-0.36.0+date=2024-04-14_ff39ac1/bin/.Hyprland-wrapped_(_ZN13CHyprRenderer9damageBoxEP4CBox+0xa8) [0x653478] CHyprRenderer::damageBox(CBox*) ??:? #8 | /nix/store/fx7plpzxdnbffm97b84mw6qc6m1d2irf-hyprland-0.36.0+date=2024-04-14_ff39ac1/bin/.Hyprland-wrapped_(_ZN6Events27listener_commitLayerSurfaceEPvS0_+0xaa) [0x54fe9a] Events::listener_commitLayerSurface(void*, void*) ??:? #9 | /nix/store/fx7plpzxdnbffm97b84mw6qc6m1d2irf-hyprland-0.36.0+date=2024-04-14_ff39ac1/bin/.Hyprland-wrapped_(_ZN15CHyprWLListener4emitEPv+0x3b) [0x591fdb] CHyprWLListener::emit(void*) ??:? #10 | /nix/store/fx7plpzxdnbffm97b84mw6qc6m1d2irf-hyprland-0.36.0+date=2024-04-14_ff39ac1/bin/.Hyprland-wrapped_(_Z13handleWrappedP11wl_listenerPv+0x3f) [0x59384f] handleWrapped(wl_listener*, void*) ??:? #11 | /nix/store/blw10rx1cayp2n2pkmyihpipifzgj2xq-wayland-1.22.0/lib/libwayland-server.so.0(wl_signal_emit_mutable+0x7c) [0x7fd61cfc9a0c] ?? ??:0 #12 | /nix/store/h147jd0fh2qyg2k24cxf6z5av7jx9y9y-wlroots-hyprland-2024-02-21_0cb091f/lib/libwlroots.so.13(+0x8533d) [0x7fd61cf2233d] ?? ??:0 #13 | /nix/store/f8ipgi6l1n1c0wr1r5aj40phnd6fkmv8-libffi-3.4.6/lib/libffi.so.8(+0xa052) [0x7fd61c6ff052] ?? ??:0 I also tried running with damage tracking disabled for a while and could still reproduce. Haven't replicated so far on latest, will close and reopen if necessary.
gharchive/issue
2024-04-10T14:22:16
2025-04-01T04:34:32.741990
{ "authors": [ "andresilva", "vaxerski" ], "repo": "hyprwm/Hyprland", "url": "https://github.com/hyprwm/Hyprland/issues/5535", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1906997166
Double-sided ImagePlanes and Slits Currently our ImagePlanes & Slits only detect ray collisions from one side, and don't collide with rays from the other side. I think I figured out how to implement double-sided collision though. The reason this was never merged was that tests like PlaneGratingDeviationDefault fail when we do it. This stems from the fact that the ImagePlane (the second object in the RML file) has a lower z position than the Grating (the first object in the RML file), and thus the rays would hit the ImagePlane first before hitting the grating. (This doesn't happen on Ray-UI due to sequential tracing) How do we want to address this? @treegardel The easiest solution is probably to force ray-x to do sequential tracing in tests that directly compare with Ray-UI? Implemented in 7c650d3. Closing.
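To illustrate the idea only (sketched in TypeScript for brevity; the actual rayx intersection code is C++/GPU and is not reproduced here): a double-sided hit test keeps only the t > 0 condition and ignores the sign of dot(direction, normal), whereas a single-sided element would additionally reject rays that approach from behind the normal.

interface Vec3 { x: number; y: number; z: number; }

const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });

// Returns the ray parameter t of the hit, or null if the ray misses the plane.
// Double-sided: the sign of dot(dir, planeNormal) is irrelevant, only t > 0 matters.
function intersectPlaneDoubleSided(
  origin: Vec3, dir: Vec3, planePoint: Vec3, planeNormal: Vec3,
): number | null {
  const denom = dot(dir, planeNormal);
  if (Math.abs(denom) < 1e-9) return null; // ray runs parallel to the plane
  const t = dot(sub(planePoint, origin), planeNormal) / denom;
  return t > 1e-9 ? t : null;              // accept hits from either side, in front of the ray
}

Making elements hittable from both directions is also why ordering can no longer come from surface orientation alone, which is presumably what the sequential-tracing change referenced above addresses.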
gharchive/issue
2023-09-21T13:32:12
2025-04-01T04:34:32.754761
{ "authors": [ "memoryleak47" ], "repo": "hz-b/rayx", "url": "https://github.com/hz-b/rayx/issues/182", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1842554608
Umlauts break links in posts See this post: https://nostrudel.ninja/#/n/nevent1qqsv3340guhmqadgxhe8cvshnx304mjtss2r37pn3rvlnu5jrrg7vwspzemhxue69uhhyetvv9ujumn0wd68ytnzv9hxgqgnwaehxw309aex2mrp09skymr99ehhyecpnksay The link at the bottom stops at the umlaut. This is something that's been on my todo list for a while. @ also breaks links. The issue is how I have the link RegExp written. To fix this I need to search for a better link regexp that supports non-English characters and other symbols. Can you post this regex (or code location)? Maybe I can find something. The regexp is located here: https://github.com/hzrd149/nostrudel/blob/master/src/helpers/embeds.ts#L56 It needs to be simplified and Unicode support added. However, it also needs to avoid false positives as much as possible. An example of a false positive would be http://sub.example.verylongingaliddomain/index.html or https://example.com/???test=0 Or even two URLs back to back without a space http://example.comhttp://example.com I haven't figured out a good regexp yet, but I know one has to exist. If not, how would other social media sites auto-detect links? Thanks, I'll try some stuff over the next days. So, I tried some stuff - best I came up with is this: https?:\/\/([\w \.-]+\.\w+)(\S*) Check it here: https://regex101.com/r/J4hHHn/1 http://sub.example.verylongingaliddomain/index.html can be valid as there are TLDs like ".cancerresearch" which is already 15 chars long. https://example.com/???test=0 seems to be a special case. You can prevent it but I guess it's not worth the effort. http://example.comhttp://example.com must be valid because of this: https://example.com?host=https://example.com What do you think? That's a good start, although the use of \S (not white space) picks up some characters like ) or , after the URL which people normally add. Also, I don't think it would break anything, but \w includes _ which is technically invalid for domains. I replaced the use of \w with a-zA-Z0-9 so it would not include _, and \S with \p{Letter}\p{Number}, which should include any Unicode letter or number characters (not just English) https://www.regular-expressions.info/unicode.html#category I also added a lot more examples of URLs and false positives https://regex101.com/r/GOWi8J/1 What do you think? Can you think of any other strange URL formats that might need to be considered? I think you're right on the domain part with \w, but we need \S* for the path, because it's perfectly fine to use ();,._[] etc. in URL params. Please check https://regex101.com/r/GOWi8J/2 - I added just two real links, which aren't working. Fixing both links is pretty easy, I just needed to add _ and , to the list of accepted characters, although it will break the , separated URLs (but that's fine because GitHub does not even support that). I'm not sure about using ();,._[] in URL params though, I know they can be used, but I believe they have to be escaped. Either way, I've seen more markdown and links surrounded by () than I've seen those characters used in URL params. I'm hesitant to use \S because it covers too much, and I think it would be better for a few links to be broken than to have it select some of the text after the link. Test https://example.com,https://example.com https://example.com) I'm with you. Let's just add _ and , to your last version: https://regex101.com/r/GOWi8J/4 This should fix most issues. Forgot to close this issue, but the fix for this was released a few days ago. I have to reopen this. Several news sites use tildes in image links.
Can we include "~" in the regex? Sample: https://nostrudel.ninja/#/n/note1fquun6a9hjcsv0lcd8fafx53zqepqwf5xm6arez7sdzxuscpz28sq79c2g Man, it seems to be specifically news sites that are the worst offenders when it comes to breaking the URL spec. Fix should be out in the alpha version.
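To make the thread concrete, here is a TypeScript sketch of what the combined suggestions could look like (an illustration only; the accepted punctuation set is an assumption based on the discussion above, not the exact expression that shipped in embeds.ts): an ASCII host part, and a path/query part built from \p{L} and \p{N} plus common URL punctuation including ~, _ and comma, while excluding ) and whitespace so trailing parentheses and following prose are not swallowed.

// Sketch only; requires the `u` flag for the Unicode property escapes.
const linkRegExp =
  /https?:\/\/[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}(?:[\/?#][\p{L}\p{N}\-._~,!$&'*+;=:@%\/?#]*)?/gu;

const sample =
  "Bild: https://example.com/bilder/~r/große-übersicht.jpg (siehe https://example.com/page) und mehr Text";

for (const match of sample.matchAll(linkRegExp)) {
  console.log(match[0]);
  // -> https://example.com/bilder/~r/große-übersicht.jpg
  // -> https://example.com/page
}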
gharchive/issue
2023-08-09T06:09:23
2025-04-01T04:34:32.793200
{ "authors": [ "hzrd149", "psic4t" ], "repo": "hzrd149/nostrudel", "url": "https://github.com/hzrd149/nostrudel/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
199673841
Make Labels and Shape Views' Backgrounds Transparent Perhaps this should be an option when instantiating things, but this was a quick fix for a use case I needed. Thanks so much for your work. Thanks! Makes sense! What do you need clipToBounds for? Could you please remove the fork info from the readme? After the merge it's not in the fork anymore :) Also, can you commit only the files that actually changed? It seems you rebased, or amended and changed the hash of all the files...
gharchive/pull-request
2017-01-09T21:59:15
2025-04-01T04:34:32.799892
{ "authors": [ "carbamide", "i-schuetz" ], "repo": "i-schuetz/ChartLegends", "url": "https://github.com/i-schuetz/ChartLegends/pull/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
54758518
Fix for error thrown by JSON.parse(body) in sync.js Fixed a fatal error thrown by "JSON.parse(body)" if "body" is not a valid JSON object. Published v1.6.2 of the remote sync backend.
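For context, this is the general shape of such a guard, written as a TypeScript sketch of the pattern rather than the literal patch that landed in sync.js; the fallback value is an assumption.

function safeParse(body: string): unknown {
  try {
    return JSON.parse(body);
  } catch (err) {
    // an invalid or empty response body should not crash the backend
    console.error('remote resource is not valid JSON:', (err as Error).message);
    return {}; // hypothetical fallback; the real backend may report an error instead
  }
}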
gharchive/pull-request
2015-01-19T12:34:31
2025-04-01T04:34:32.800823
{ "authors": [ "faecke", "jamuhl" ], "repo": "i18next/i18next-node", "url": "https://github.com/i18next/i18next-node/pull/167", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }