432601370
Bug at coco dataset: line bug at https://github.com/HRNet/HRNet-Object-Detection/blob/4d257bf0a41d265f3b262347fb67b8f6330b86b7/mmdet/datasets/coco.py#L39 reads def lget_ann_info(self, idx): instead of def get_ann_info(self, idx):
Thanks, this line was modified by accidental operations. I'll fix it now.
fixed
gharchive/issue
2019-04-12T14:42:39
2025-04-01T04:55:09.316309
{ "authors": [ "E1eMenta", "wondervictor" ], "repo": "HRNet/HRNet-Object-Detection", "url": "https://github.com/HRNet/HRNet-Object-Detection/issues/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
38641489
I'd love to be able to compare side by side stats for a website against average The stats feature is great. But I'd love to be able to compare an individual website to see how it stacks up against average statistics across the web. For example, are my transfer sizes larger than average, etc. You can view individual stats for a website. I don't think plotting that against the average across the entire web is very useful. If we could do it vs competitors or a specific vertical market perhaps. Closing.
gharchive/issue
2014-07-24T15:14:12
2025-04-01T04:55:09.338685
{ "authors": [ "nolanvinny", "stevesouders" ], "repo": "HTTPArchive/httparchive", "url": "https://github.com/HTTPArchive/httparchive/issues/31", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
221010492
Detect publishing platforms

Similar to https://github.com/HTTPArchive/httparchive/issues/77, detect the presence of publishing platforms like Wordpress and Drupal. A secondary goal would be to detect themes and plugins. Unlike #77, the key metric here would just be a single string/enum value representing the detected platform, as opposed to a list. For accurate detection, we need to come up with a list of signals for each platform. This may be hard to achieve through custom metrics alone. For example, if it requires introspection of a script file's comments, that wouldn't be possible with client-side JS alone. We may need to do post-processing on response bodies.

Ideally, we want a breakdown similar to https://trends.builtwith.com/cms/. However, as a starting point, I think we can focus our conversation on WordPress and figure out the requirements and pipeline for that. With that in mind, a few thoughts... There are two (complementary) ways we can attempt to detect these platforms:

1. At crawl runtime, by looking for some platform-specific JS objects or signatures
2. Post-crawl, by analyzing response headers and bodies

My hunch is that we'll get the most mileage from focusing on (2). In the context of WordPress:

- Extract the value of <meta generator...> in the HTML
- Look for response headers like Link rel=shortlink, e.g. WordPress VIP appends X-Hacker and a Link: wp.me shortlink in the response
- Look for wp-... references in the HTML and resource names; this is how https://allthingsblogging.com/wordpress-theme-plugin-statistics/ is detecting themes and plugins
- ... and other heuristics

There may also be runtime-specific signals we can extract, but I propose we focus on (2) as a starting point and see how far that gets us. Also, the other benefit of (2) is that we can update the logic and rerun the analysis on past crawls, giving us access to trending data, and all the rest. Last but not least, we shouldn't restrict ourselves to a single label. In some cases we may be able to extract the version number and other meta-data, so I think we should think of the output as another bag of values: {platform: x, version: y, theme: z, plugins: [...]}. Concretely, we can extend the current DataFlow pipeline with an extra step and start encoding these rules there. For prototyping we can also run queries directly in BigQuery.

Working on a proof of concept: https://github.com/rviscomi/httparchive/blob/pub-cm/custom_metrics/publishing-platform.js

Generated a httparchive:scratchspace.response_headers table with the response headers of 100k pages:

```sql
SELECT page, JSON_EXTRACT(payload, '$.response.headers') AS response_headers
FROM [httparchive:har.2017_03_15_chrome_requests]
LIMIT 100000
```

Then ran this query on it:

```sql
SELECT page, response_headers
FROM (
  SELECT page, response_headers, REGEXP_MATCH(response_headers, 'X-Hacker') AS wordpress
  FROM [httparchive:scratchspace.response_headers]
)
WHERE wordpress = true
```

No results. Changed the regexp pattern to 'wp\.me[^}]+rel=shortlink' and got 20 results: https://bigquery.cloud.google.com/savedquery/226352634162:91fdb59df0cd4af2ade704c45cf70d66 Seems like not a strong signal. WDYT?

Hmm. We could sanity check against: https://vip.wordpress.com/clients/

curl -L -vv http://motori.virgilio.it/
< X-hacker: If you're reading this, you should visit automattic.com/jobs and apply to join the fun, mention this header.
< Link: <http://wp.me/6uzoc>; rel=shortlink

On the other hand, lots of sites on that client list don't deliver the above header either..

curl -L -vv http://www.nationalpost.com/
li data-src-fullsize="http://nationalpostcom.files.wordpress.com/2013/01/

^ perhaps we should also look for files.wordpress.com, although I'm not sure if that's VIP-only or true for any wordpress.com hosted site.

Oh, I didn't do a case-insensitive search. I'll try that to include X-hacker.

Ok, I updated the query to be case-insensitive and to match both the X-Hacker and rel=shortlink patterns. I stuffed those 27 results into httparchive:scratchspace.wordpress_headers and used that to generate another table, httparchive:scratchspace.wordpress_response_bodies:

```sql
SELECT page, url, body
FROM [httparchive:har.2017_03_15_chrome_requests_bodies]
WHERE page IN (SELECT page FROM [httparchive:scratchspace.wordpress_headers])
```

It joins the pages with WP headers with the corresponding response bodies. Finally, I queried this table with the same signals as in the custom metric POC:

```sql
SELECT COUNT(0), page
FROM [httparchive:scratchspace.wordpress_response_bodies]
WHERE REGEXP_MATCH(body, r'(?i)(<meta[^>]*WordPress|<link[^>]*wlwmanifest|src=[\'"]?[^\'"]*wp-includes)')
GROUP BY page
```

See https://bigquery.cloud.google.com/savedquery/226352634162:6f88d370ccec4fe59d45dc28040f9982

Of the 100,000 pages sampled, 27 pages were detected with WP headers. 25 of those also had corresponding WP signals in the response body. The discrepancy seems to be due to a conflicting use of the X-hacker header for non-WordPress purposes. That said, it seems like markup analysis is no worse of a signal than header analysis. So I ran a related query to see how much better markup analysis is:

```sql
SELECT COUNT(0), page
FROM [httparchive:har.2017_03_15_chrome_requests_bodies]
WHERE REGEXP_MATCH(body, r'(?i)(<meta[^>]*WordPress|<link[^>]*wlwmanifest|src=[\'"]?[^\'"]*wp-includes)')
  AND page IN (SELECT page FROM [httparchive:scratchspace.response_headers])
GROUP BY page
```

See https://bigquery.cloud.google.com/savedquery/226352634162:e34f729043dc4ca18719ca716d3a4642

This looks only at the 100,000 pages sampled by the header analysis and runs the body analysis. There are 9,677 results, or about 10%. It's still half as much as reported elsewhere, so it seems like there are other strong signals we're missing. To recap:

- header analysis is a very weak signal compared to body analysis
- body analysis is something we can do easily in a custom metric without altering the pipeline
- need to shore up the signals in the POC for parity with prior research

As a meta thing, it'd be nice to start building a list of test cases and explanations for each pattern:

- <meta generator> --> list of URLs
- wlwmanifest --> list of URLs
- ...

Otherwise, based on past experience, you quickly end up with unwieldy regexes that break easily and are impossible to maintain long-term.

Definitely. I'd first like to figure out which signals are weak/strong/redundant and narrow it down to a minimal list of strong signals. The /docs would be a good place to explain what each signal in that list is measuring and its efficacy. "Weak" can mean that the signal has a high number of false positives or a low number of true positives. E.g. X-Hacker seems to be a weak signal for the latter reason. There may still be some value in these types of weak signals, for example if many of them combined produce a significant number of detections.

Good news! Someone has already thought about this 😄 See AliasIO/Wappalyzer
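For illustration, the body signals discussed above boil down to a single regex test. Here is a minimal standalone sketch (my own, not the linked POC or Wappalyzer) of the same three patterns applied in Python; the function name is illustrative:

```python
import re

# The three body signals from the thread: a WordPress <meta> generator tag,
# the wlwmanifest <link>, and wp-includes resource references.
WP_BODY_PATTERN = re.compile(
    r'(?i)(<meta[^>]*WordPress|<link[^>]*wlwmanifest|src=[\'"]?[^\'"]*wp-includes)'
)

def detect_wordpress(body: str) -> bool:
    """Return True if the HTML body carries any of the WordPress signals."""
    return WP_BODY_PATTERN.search(body) is not None

# Illustrative usage:
html = '<link rel="wlwmanifest" type="application/wlwmanifest+xml" href="/wp-includes/wlwmanifest.xml">'
print(detect_wordpress(html))  # True
```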
gharchive/issue
2017-04-11T16:35:02
2025-04-01T04:55:09.357832
{ "authors": [ "igrigorik", "rviscomi" ], "repo": "HTTPArchive/httparchive", "url": "https://github.com/HTTPArchive/httparchive/issues/90", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2403833953
API does not support bulk synchronizing of data

Issues:

- Currently, external callers can only add/update one record at a time. This will likely fail at scale.
- Results for bulk insert/update must return bulk results, i.e. an array of success/fails in the same order as what was sent. The HTTP result code is insufficient since it speaks to the overall request, not individual records inside the request.
- Only the fields that are known to be modified need be sent for an update. E.g. an ElementName might be required for an insert, but for an update, if it wasn't changed, it can be omitted. This means that the required fields for inserts are different from the required fields for updates. Likely, the only required fields for an update are the primary id and a revision number (see below), plus the subset of changed fields.
- Updates might require a per-record revisioning mechanism for merge conflict checks. This might be done using a LastMod timestamp or a specific new field with an incremented version number. The fundamental idea is that updates include a revision number (or the prior LastMod). If the data in the DB has a newer revision number than what is coming from the update, then the operation fails because someone else already modified it.
- We should probably clearly define what "UPSERT" means. Does it mean a bulk operation that includes a mix of inserts/updates/deletes? Or the database concept of upserts? A DB upsert often means NOT knowing the record ID when sending the data. (The DB unique-keys concept, where a combination of fields makes a row unique, might make sense? But it would need to be defined for each table and is probably a hard problem.)

I think it's OK to say the first iteration of the API is intended for client-to-server communications where the HMIS server is the "source of truth" for the data. This means that it's responsible for generating new record ids for create operations, and for generating timestamps for fields like DateCreated, DateModified and DateDeleted.

Maybe we can add a "Revisit in the future" or "Future work" github label?
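To make the per-record results and revision checks above concrete, here is a minimal sketch (my own, not part of any HMIS spec); all names are hypothetical, and server-side id generation for creates is elided:

```python
def apply_bulk(store: dict, operations: list[dict]) -> list[dict]:
    """Apply a batch of inserts/updates, returning one result per operation,
    in the same order as the input (the HTTP status only covers the batch)."""
    results = []
    for op in operations:
        rec_id = op.get("id")
        if rec_id not in store:
            # Insert: the server is the source of truth, so it assigns revision 1.
            store[rec_id] = {**op.get("fields", {}), "revision": 1}
            results.append({"id": rec_id, "status": "inserted", "revision": 1})
        elif op.get("revision") != store[rec_id]["revision"]:
            # Optimistic concurrency: someone else already modified the record.
            results.append({"id": rec_id, "status": "conflict",
                            "revision": store[rec_id]["revision"]})
        else:
            # Update: only the changed subset of fields needs to be sent.
            store[rec_id].update(op.get("fields", {}))
            store[rec_id]["revision"] += 1
            results.append({"id": rec_id, "status": "updated",
                            "revision": store[rec_id]["revision"]})
    return results
```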
gharchive/issue
2024-07-11T18:29:05
2025-04-01T04:55:09.371633
{ "authors": [ "TomNUSDS" ], "repo": "HUD-Data-Lab/Data.Exchange.and.Interoperability", "url": "https://github.com/HUD-Data-Lab/Data.Exchange.and.Interoperability/issues/20", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
453907622
J1J2 exact example leads to NaN values

The simple J1-J2 example with exact optimization (j1j2_2d_exact_4.py) seems to lead to NaN / overflow values after a few iterations:

/cs/labs/shashua/ors07/Pyket/src/pyket/optimization/exact_variational.py:92: RuntimeWarning: overflow encountered in multiply
np.multiply(np.real(self.naive_local_energy_minus_energy), np.real(self.naive_local_energy_minus_energy), out=self.naive_local_energy_minus_energy_squared)
/cs/labs/shashua/ors07/Pyket/src/pyket/optimization/exact_variational.py:93: RuntimeWarning: invalid value encountered in multiply
np.multiply(self.naive_local_energy_minus_energy_squared, self.exact_variational.probs, out=self.probs_mult_local_energy_variance)

Is this example fixed? Can we close this issue?
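For reference, a standalone sketch (my own, not FlowKet code) of the probability-weighted variance the warnings above are computing, carried out in float64 to reduce the overflow risk; the true NaNs may of course come from diverging parameters rather than precision alone:

```python
import numpy as np

def weighted_energy_variance(local_energy: np.ndarray,
                             probs: np.ndarray) -> float:
    """Probability-weighted variance of the local energy:
    sum_i p_i * (Re(E_loc,i) - <E>)**2, accumulated in float64."""
    le = np.real(local_energy).astype(np.float64)
    p = probs.astype(np.float64)
    mean = np.dot(p, le)          # <E>
    centered = le - mean          # E_loc - <E>
    return float(np.dot(p, centered ** 2))

# Illustrative usage with toy numbers:
le = np.array([1.0, 2.0, 3.0])
p = np.array([0.2, 0.5, 0.3])
print(weighted_energy_variance(le, p))
```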
gharchive/issue
2019-06-09T15:13:41
2025-04-01T04:55:09.374387
{ "authors": [ "orsharir" ], "repo": "HUJI-Deep/FlowKet", "url": "https://github.com/HUJI-Deep/FlowKet/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
134425371
AnExceptionReportsDuration Test fails I'm submitting a fix shortly This was fixed with PR 32: https://github.com/Haacked/Scientist.net/pull/32 so closing
gharchive/issue
2016-02-17T22:52:41
2025-04-01T04:55:09.377975
{ "authors": [ "jtreuting" ], "repo": "Haacked/Scientist.net", "url": "https://github.com/Haacked/Scientist.net/issues/31", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
332631605
Update README.md: add in a link to repos by OWASP

Thanks for the PR. Sorry this couldn't be merged, as adding the OWASP org on GitHub doesn't make any sense. It's an organization and not an awesome list.

Two of their major yearly contributions are top ten web application security checkLISTS...
gharchive/pull-request
2018-06-15T03:03:51
2025-04-01T04:55:09.401993
{ "authors": [ "Chan9390", "sammysfo" ], "repo": "Hack-with-Github/Awesome-Hacking", "url": "https://github.com/Hack-with-Github/Awesome-Hacking/pull/35", "license": "cc0-1.0", "license_type": "permissive", "license_source": "bigquery" }
57799491
nogc related compilation error

I get the following error after the latest commit:

• dub
libdparse: ["libdparse"]
dfmt: ["dfmt", "libdparse"]
Target is up to date. Using existing build in /home/edwin/.dub/packages/libdparse-master/.dub/build/library-debug-linux.posix-x86_64-dmd-E198791B6673AEAE0964E351E0B4BB0A/. Use --force to force a rebuild.
Building dfmt configuration "application", build type debug.
Compiling...
src/dfmt.d(499): Error: @nogc function 'dfmt.TokenFormatter!(LockingTextWriter).TokenFormatter.expressionLength' cannot call non-@nogc function 'std.d.lexer.isBasicType'
src/dfmt.d(73): Error: template instance dfmt.format!(LockingTextWriter) error instantiating
src/dfmt.d(40): Error: function D main no return exp; or assert(0); at end of function
FAIL .dub/build/application-debug-linux.posix-x86_64-dmd-D80DC97764A9DF7D438DB64CB56228E4 dfmt executable
Error executing command run: DMD compile run failed with exit code 1

D version information:

• dmd
DMD64 D Compiler v2.066.1-devel
Copyright (c) 1999-2014 by Digital Mars written by Walter Bright
Documentation: http://dlang.org/

dfmt builds successfully for me with dmd and ldc. Can you confirm that you're still having build problems?

I now seem to get the following different error (still on 2.066.1). It might be interesting to get Travis CI integrated to test the builds automatically. I'd be happy to submit a pull request with a travis file for that purpose.

dfmt
dub
WARNING: A deprecated branch based version specification is used for the dependency libdparse. Please use numbered versions instead. Also note that you can still use the dub.selections.json file to override a certain dependency to use a branch instead.
Target libdparse ~master is up to date. Use --force to rebuild.
Building dfmt 0.2.2 configuration "application", build type debug.
Compiling using dmd...
src/dfmt/formatter.d(1373): Error: @nogc function 'dfmt.formatter.canFindIndex' cannot call non-@nogc function 'std.range.assumeSorted!("a < b", const(ulong)[]).assumeSorted'
FAIL .dub/build/application-debug-linux.posix-x86_64-dmd_2066-6D71006389B3841AD8C75F8D43E64D44/ dfmt executable
Error executing command run: dmd failed with exit code 1.

https://travis-ci.org/Hackerpilot/dfmt/builds I even have the build status badge on the project's front page...

Sorry, am currently cursed with a very slow network, so I didn't take the time to double-check the front page :( If it works on Travis it should be fine for me (I am on Linux as well). Will try to hunt it down myself and report back.

for me, using dub build -b release --combined made it work

The problem is that SortedRange.this is not nogc in -debug mode, because it calls dbgVerifySorted, which calls uniform, which is not nogc. It may be worth removing the @nogc annotation from dfmt until this is fixed.

@yannick thanks, it helps.

I'm going to close this because the code works for me (and the CI system).
gharchive/issue
2015-02-16T12:39:45
2025-04-01T04:55:09.418302
{ "authors": [ "BlackEdder", "Hackerpilot", "dmi7ry", "yannick", "yebblies" ], "repo": "Hackerpilot/dfmt", "url": "https://github.com/Hackerpilot/dfmt/issues/13", "license": "BSL-1.0", "license_type": "permissive", "license_source": "github-api" }
1854594411
/usr/share/sangfor/EasyConnect/resources/conf/ECDomainFile domain socket connect failed, errno:2, errstr:No such file or directory. Is this error normal?

Yes, it shows up during the login process, right?

I get the same message. By the way, after you start Docker and log in to EasyConnect, can you access the intranet?

What do you mean by accessing the intranet? My VPN doesn't work at all right now and I don't know why.

Have you managed to log in to your account in the web page served over VNC yet? (I log in with a certificate.) I can log in successfully, but then it's unusable.

Yes, VNC works too, and Clash is configured successfully, yet it's still unusable. The same username and password are reported as incorrect by the Docker CLI, while logging in locally with the EC client works fine. I don't know how to debug this.

Why can the GUI log in normally while the CLI says the username or password is incorrect?

One machine should be able to run multiple ec-docker instances, as long as the ports are changed, right?

Do you have one ec-docker working now?

I also have this problem; the error appears after clicking log in.

One works, but the other one doesn't.

When I keep one running I also hit https://github.com/Hagb/docker-easyconnect/issues/285#issue-1854594411 and can't log in. May I ask how the OP solved it?

I got it working here: in the same container I tried starting, stopping and restarting several times, over and over, and in the end it succeeded. Hooray~ I hope this helps anyone who hasn't succeeded yet. !!! Boycott software that stands on the opposite side of network freedom !!!

Same situation here: after logging in via the web it prompts for an update every time; after updating it says login succeeded, but after a short while this message appears, and a bit later the page shows that I've been logged out.
gharchive/issue
2023-08-17T09:16:09
2025-04-01T04:55:09.427911
{ "authors": [ "Akira-TL", "Hagb", "Michaelzhouisnotwhite", "bran-nie", "l1teng" ], "repo": "Hagb/docker-easyconnect", "url": "https://github.com/Hagb/docker-easyconnect/issues/285", "license": "WTFPL", "license_type": "permissive", "license_source": "github-api" }
2330685697
Multiplying 10**gamSta by electron number density

I see that the Stark broadening parameter 10**gamSta in the gamma_ functions in atomll.py isn't multiplied by the electron number density, although the Vald3 format page specifies their units as '(s*Ne)^-1', 'where Ne is the electron number density'. Is this a mistake? Thanks!

@chonma0ctopus Can you check it?

Thank you for pointing it out! I will check it in a day.

Sorry for the confusion. It was not the version I processed this time; it was giving the error before I edited it. I haven't been able to pinpoint at which point in development the cause was introduced. I thought it might be at Issue #488, but it wasn't; at 49b9e3283ea64b3cee8197a3ebe058b442b9eaad we already get an AssertionError. Do you happen to know any smart way to track down at what point the difference occurred? Or do you think there is no need to track it down and I can just update the assert statement?
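For illustration of the unit fix discussed at the top of this thread, a minimal sketch (hypothetical names, not the actual exojax atomll API): if the VALD3 column stores log10 of a per-electron rate in (s*Ne)^-1, then converting it to a width in s^-1 requires scaling by Ne:

```python
import numpy as np

def gamma_stark(gam_sta: np.ndarray, n_e: float) -> np.ndarray:
    """Stark broadening gamma in s^-1: the tabulated value 10**gamSta
    has units (s * Ne)^-1, so it must be multiplied by the electron
    number density n_e (in the unit system the table assumes)."""
    return 10.0 ** gam_sta * n_e

# Example: gamSta = -5.0 and n_e = 1e14 gives 1e9 s^-1.
print(gamma_stark(np.array([-5.0]), 1e14))
```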
gharchive/issue
2024-06-03T09:50:38
2025-04-01T04:55:09.437814
{ "authors": [ "HajimeKawahara", "chonma0ctopus", "code29563" ], "repo": "HajimeKawahara/exojax", "url": "https://github.com/HajimeKawahara/exojax/issues/488", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
58457092
Remove unneeded version_compares Because hoa/core ~2.0 requires PHP >=5.4.0, Php-Metrics ended up requiring that PHP version as well so it seems to me that we have some unneeded version_compare(PHP_VERSION, '5.4.0') >= 0 Perhaps we should remove those. :+1: If we do so, we should IMHO add php >= 5.4 to our dependencies. hoa/core might change to php >= 5.33 without a bc break, so it's up to us to ensure that phpmetrics runs once it's installed. There is also a version_compare('5.0.4', PHP_VERSION), this might be removed also. So it seems to me that this leaves us with two options: Php-Metrics should support PHP >= 5.3 and we should lower hoa/core dependency. Php-Metrics should support PHP >= 5.4 and we should remove every line of code that is <= PHP 5.4 related. (this does not affect the target version its just the PHP dependency for Php-Metrics it self to run) @Halleck45 which option will we choose? I vote to leave PHP 5.3 support. The less code the better. What do you think @UFOMelkor? This might help to decide http://php.net/supported-versions.php Hum... I think also that we should leave PHP 5.3 support. Who want make a litte PR to change minimum version in composer.json? :) @Halleck45 Like this #144? PHP 5.4 is now required All version_compare are preserved: even if we need to have PHP 5.4 to install phpmetrics, today phar archive continues to work with PHP 5.3
gharchive/issue
2015-02-21T12:00:57
2025-04-01T04:55:09.459257
{ "authors": [ "Halleck45", "TomasVotruba", "UFOMelkor", "fonsecas72" ], "repo": "Halleck45/PhpMetrics", "url": "https://github.com/Halleck45/PhpMetrics/issues/101", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
758761658
Hero promo core This was a mistaken pull request while trying to get Github to function - you can remove this, since it's requesting to merge into the wrong branch. This was a bad request based on merging into the wrong branch, so closing this.
gharchive/pull-request
2020-12-07T18:56:04
2025-04-01T04:55:09.498272
{ "authors": [ "Ruduen" ], "repo": "Handelabra/WorkshopSample", "url": "https://github.com/Handelabra/WorkshopSample/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
653191599
TR-DE-ES Language Only DE-ES language has been added to the HandyControl project. @Taiizor thank you if you can please add de-es languages for demo @Taiizor thank you if you can please add de-es languages for demo Not now, but I'll add soon. would you pls add de-es languages for demo? 🙏 if you have no time, we can only close this pr. 😭
gharchive/pull-request
2020-07-08T10:47:18
2025-04-01T04:55:09.501374
{ "authors": [ "NaBian", "Taiizor", "ghost1372" ], "repo": "HandyOrg/HandyControl", "url": "https://github.com/HandyOrg/HandyControl/pull/429", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1276004619
Use correct translation when author isn't watching anything title Sorreh, already locally 🤌
gharchive/pull-request
2022-06-19T09:32:14
2025-04-01T04:55:09.502467
{ "authors": [ "NoahvdAa", "kennytv" ], "repo": "HangarMC/Hangar", "url": "https://github.com/HangarMC/Hangar/pull/693", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1976613367
🛑 Assets CDN server is down In bea026e, Assets CDN server (https://cdn.assets.scratch.mit.edu) was down: HTTP code: 503 Response time: 1388 ms Resolved: Assets CDN server is back up in ac494ca after 17 minutes.
gharchive/issue
2023-11-03T17:08:53
2025-04-01T04:55:09.510565
{ "authors": [ "Auto5958" ], "repo": "Hans5958/Scratch-Upptime-2", "url": "https://github.com/Hans5958/Scratch-Upptime-2/issues/264", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1986423675
🛑 Assets server is down In 84bef1b, Assets server (https://assets.scratch.mit.edu) was down: HTTP code: 503 Response time: 1401 ms Resolved: Assets server is back up in 2c86cc2 after 1 hour, 17 minutes.
gharchive/issue
2023-11-09T21:23:30
2025-04-01T04:55:09.513176
{ "authors": [ "Auto5958" ], "repo": "Hans5958/Scratch-Upptime", "url": "https://github.com/Hans5958/Scratch-Upptime/issues/381", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1989527017
🛑 Assets server is down In c7b3f33, Assets server (https://assets.scratch.mit.edu) was down: HTTP code: 503 Response time: 1471 ms Resolved: Assets server is back up in 59593c5 after 36 minutes.
gharchive/issue
2023-11-12T18:59:48
2025-04-01T04:55:09.515929
{ "authors": [ "Auto5958" ], "repo": "Hans5958/Scratch-Upptime", "url": "https://github.com/Hans5958/Scratch-Upptime/issues/445", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2041741999
🛑 Assets server is down In 4b4e451, Assets server (https://assets.scratch.mit.edu) was down: HTTP code: 503 Response time: 1422 ms Resolved: Assets server is back up in 9da0027 after 11 minutes.
gharchive/issue
2023-12-14T13:49:15
2025-04-01T04:55:09.518257
{ "authors": [ "Auto5958" ], "repo": "Hans5958/Scratch-Upptime", "url": "https://github.com/Hans5958/Scratch-Upptime/issues/805", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2260478627
🛑 CDN server 1 is down In 20b4f9a, CDN server 1 (https://cdn.scratch.mit.edu/scratchr2/static/867ec47c1657f9fde21932c086a84195/images/logo_sm.png) was down: HTTP code: 503 Response time: 462 ms Resolved: CDN server 1 is back up in cfc7618 after 11 minutes.
gharchive/issue
2024-04-24T06:40:10
2025-04-01T04:55:09.520762
{ "authors": [ "Auto5958" ], "repo": "Hans5958/Scratch-Upptime", "url": "https://github.com/Hans5958/Scratch-Upptime/issues/822", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1976008534
🛑 Japanese is down In e8dae26, Japanese (https://ja.scratch-wiki.info) was down: HTTP code: 508 Response time: 1746 ms Resolved: Japanese is back up in aee6b9a after 8 minutes.
gharchive/issue
2023-11-03T11:44:54
2025-04-01T04:55:09.523079
{ "authors": [ "Auto5958" ], "repo": "Hans5958/Scratch-Wiki-Upptime", "url": "https://github.com/Hans5958/Scratch-Wiki-Upptime/issues/1075", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1977482971
🛑 French is down In 7db73d2, French (https://fr.scratch-wiki.info) was down: HTTP code: 508 Response time: 817 ms Resolved: French is back up in 4c9d9c9 after 2 hours, 28 minutes.
gharchive/issue
2023-11-04T19:46:29
2025-04-01T04:55:09.525589
{ "authors": [ "Auto5958" ], "repo": "Hans5958/Scratch-Wiki-Upptime", "url": "https://github.com/Hans5958/Scratch-Wiki-Upptime/issues/1124", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2421553085
🛑 Test is down In e83b24c, Test (https://test.scratch-wiki.info) was down: HTTP code: 508 Response time: 492 ms Resolved: Test is back up in 6ada479 after 16 minutes.
gharchive/issue
2024-07-21T18:30:58
2025-04-01T04:55:09.528055
{ "authors": [ "Auto5958" ], "repo": "Hans5958/Scratch-Wiki-Upptime", "url": "https://github.com/Hans5958/Scratch-Wiki-Upptime/issues/4044", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1892524743
🛑 Indonesian is down In 7cf4b82, Indonesian (https://id.scratch-wiki.info) was down: HTTP code: 508 Response time: 649 ms Resolved: Indonesian is back up in 5ae2788 after 17 minutes.
gharchive/issue
2023-09-12T13:26:27
2025-04-01T04:55:09.530422
{ "authors": [ "Auto5958" ], "repo": "Hans5958/Scratch-Wiki-Upptime", "url": "https://github.com/Hans5958/Scratch-Wiki-Upptime/issues/413", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2454398314
Question: tabular representation of a report

I aim to train a ML model to predict future financial reports based on historical data. To achieve this, I need a tabular representation of the financial statements, as follows:

- tabular representation of the financial statements with this structure: cik, period_start_date, period_length, ..., IS_col1, IS_col2, ..., IS_colx, CF_col1, CF_col2, ..., CF_colx, BS_col1, BS_col2, ..., BS_colx
- period_start_date, period_length, cik as the key.
- standardized CF, IS for the specified reporting period.
- standardized BS reports are for the time point = period_start_date + period_length

related to https://github.com/HansjoergW/sec-fincancial-statement-data-set/issues/5#issuecomment-2274358132

I think it would be better to think of "enddate" instead of period_start_date, since this is the date of the report. So I would use cik, ddate, qtrs_is_cf, is_cols..., cf_cols, bs_cols. ddate is the end date of the period, and qtrs is the period length in quarters (but only relevant for IS and CF). As written in the other thread, for IS and CF you need to join on qtrs and ddate, but for BS you just need to join on ddate. Thanks for the post.

Hi @ebadi. Yes, date is a real date format, so that can be easily used when plotting. I kept ddate since it is just an integer and we only have end-of-month dates. Hence, date calculations are probably a little faster if I do them directly as int, instead of converting all the ddate values to a real date format and doing the operations with the datetime API. However, I didn't measure it. Moreover, the ddate could also be removed for the final standardized presentation.

Indeed, it would make sense to add filed to the final result. I mean, you could create a hashmap with adsh as key and filed as value by reading all sub_txt files and then merging that into the final result. However, as a quick solution, I would suggest you write your own "presenting" method. Just extract it from the standardizing logic, and use as input the dataframe and the standardizer you want to use. Something like that:

```python
def present(standardizer, databag: JoinedDataBag) -> pd.DataFrame:
    standardized_df = standardizer.process(databag.pre_num_df)

    data_to_merge_df = databag.sub_df[['adsh', 'cik', 'form', 'fye', 'fy', 'fp', 'filed']].copy()

    # The name of a company can change during its lifetime. However, we want to have the
    # same name for the same cik in all entries. Therefore, we first have to find the
    # latest name of the company.
    # first, create a cik-name look up table,
    df_latest = (databag.sub_df[['cik', 'name', 'period']].sort_values('period')
                 .drop_duplicates('cik', keep='last'))

    # Create the dictionary
    cik_name_dict = dict(zip(df_latest['cik'], df_latest['name']))
    data_to_merge_df['name'] = data_to_merge_df['cik'].map(cik_name_dict)

    merged_df = pd.merge(data_to_merge_df, standardized_df, on='adsh', how='inner')

    # create the date column and sort by date
    merged_df['date'] = pd.to_datetime(merged_df['ddate'], format='%Y%m%d')

    # sort the columns
    merged_df = merged_df[['adsh', 'cik', 'name', 'form', 'fye', 'fy', 'fp', 'date']
                          + standardized_df.columns.tolist()[1:]]

    result = merged_df.sort_values(by='date')

    # store it in the standardizer object as new result
    standardizer.result = result
    return result
```

You could also write your own subclasses of the three standardizers and overwrite the present method, or implement your own. In order to add additional tags, you would need to write your own Standardizer. So you basically copy the code of the BS, IS, or CF standardizer, and if you just want to add a new tag (if we assume that the tag is very common and widely used) you could just add the tag name to the "final_tags" definition.

Thanks for the explanation.

Adding the filed column: updating standardizing.py as you suggested did the trick:

data_to_merge_df = databag.sub_df[['adsh', 'cik', 'form', 'fye', 'fy', 'fp', 'filed']].copy()

and not to forget this line as well:

merged_df = merged_df[['adsh', 'cik', 'name', 'form', 'fye', 'fy', 'fp', 'date', 'filed'] + standardized_df.columns.tolist()[1:]]

Adding a new tag: adding 'NumberOfSharesIssued' to final_tags solved my issue as well. I used https://www.ifrs.org/content/dam/ifrs/standards/taxonomy/2024/taxonomy-illustrated/taxonomy-iti-2024-by-fs.xlsx and https://github.com/Nneoma-Ihueze/SEC-Mapping/blob/main/xbrl_to_fin-statement_mapping.json to find the right name for the tag.

I modified my package and added NumberOfSharesIssued to the final_tags, but now noticed that the NumberOfSharesIssued column contains only 0. Any idea how to fix this?

@ebadi I have to look into it. The tag names should be WeightedAverageNumberOfSharesOutstandingBasic and WeightedAverageNumberOfDilutedSharesOutstanding. But just adding them doesn't work. I will let you know when I find time to have a look at the problem.

@ebadi I found the problem, but I will need to create a new minor version for it.

Thank you so much. I would also appreciate it if you could document how you find these tag names. I am using this excel sheet and I cannot find WeightedAverageNumberOfSharesOutstandingBasic and WeightedAverageNumberOfDilutedSharesOutstanding there.

@ebadi In order to find used tags, go to the SEC Edgar page and open a report, e.g. https://www.sec.gov/cgi-bin/browse-edgar?CIK=0000320193&owner=exclude, and select the "interactive data" of a 10-Q or 10-K. Then select the "Financial Statements" section and choose a statement. Select the position you are interested in and expand the "Details" as shown here:

Instead of using "interactive data", you can also open the "document" and then browse the section with the financial statement. You can click on any value in the document to get the details of that position.

When you have a tag name, you can also ask ChatGPT if there are other tags referencing the same/similar position, or if that tag belongs to a "hierarchy". ChatGPT often comes up with good answers, but you still have to check if these tags really exist in the data. And of course, there is also the official documentation for the standard, which can be browsed online. But I haven't really used that.

@ebadi I released a new version 1.6.1.

Thanks. I just tested it and everything works (except minor changes needed for the missing uom column). I forgot to thank you for your clear instruction on how to find the tags. I noticed that there are two columns, EarningsPerShareBasic/EarningsPerShareDiluted, that can be calculated if we have NetIncomeLoss and WeightedAverageNumberOfSharesOutstandingBasic/WeightedAverageNumberOfDilutedSharesOutstanding. It is up to you whether to keep the columns.
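As an illustration of that last remark, a minimal pandas sketch (my own, using the column names from the thread; the DataFrame layout is assumed) of deriving the two EPS columns:

```python
import pandas as pd

def add_eps_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Derive basic/diluted EPS from net income and weighted share counts.
    Assumes the standardized IS columns discussed above are present."""
    df = df.copy()
    df["EarningsPerShareBasic"] = (
        df["NetIncomeLoss"] / df["WeightedAverageNumberOfSharesOutstandingBasic"]
    )
    df["EarningsPerShareDiluted"] = (
        df["NetIncomeLoss"] / df["WeightedAverageNumberOfDilutedSharesOutstanding"]
    )
    return df
```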
gharchive/issue
2024-08-07T21:36:01
2025-04-01T04:55:09.543962
{ "authors": [ "HansjoergW", "ebadi" ], "repo": "HansjoergW/sec-fincancial-statement-data-set", "url": "https://github.com/HansjoergW/sec-fincancial-statement-data-set/issues/8", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1838871802
🛑 Deepl is down In 3c79248, Deepl (https://deep.haoda.repl.co/) was down: HTTP code: 502 Response time: 404 ms Resolved: Deepl is back up in e71816e.
gharchive/issue
2023-08-07T07:38:51
2025-04-01T04:55:09.547399
{ "authors": [ "Haoqi7" ], "repo": "Haoqi7/uptime", "url": "https://github.com/Haoqi7/uptime/issues/2466", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1848536453
🛑 AI对话 is down In 602cb2d, AI对话 (https://haoqi7-question.hf.space/) was down: HTTP code: 503 Response time: 44 ms Resolved: AI绘画 is back up in ae679d6.
gharchive/issue
2023-08-13T10:49:50
2025-04-01T04:55:09.549894
{ "authors": [ "Haoqi7" ], "repo": "Haoqi7/uptime", "url": "https://github.com/Haoqi7/uptime/issues/3052", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1853465890
🛑 AI对话 is down In ae1540e, AI对话 (https://haoqi7-question.hf.space/) was down: HTTP code: 503 Response time: 48 ms Resolved: AI绘画 is back up in c5d4e99.
gharchive/issue
2023-08-16T15:25:19
2025-04-01T04:55:09.552428
{ "authors": [ "Haoqi7" ], "repo": "Haoqi7/uptime", "url": "https://github.com/Haoqi7/uptime/issues/3361", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1867762131
🛑 AI对话 is down In c9d72d1, AI对话 (https://haoqi7-question.hf.space/) was down: HTTP code: 503 Response time: 45 ms Resolved: AI绘画 is back up in 2f6b56e after 196 days, 8 hours, 27 minutes.
gharchive/issue
2023-08-25T22:51:33
2025-04-01T04:55:09.554740
{ "authors": [ "Haoqi7" ], "repo": "Haoqi7/uptime", "url": "https://github.com/Haoqi7/uptime/issues/4232", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1174619091
🛑 Happy Vibes Bot 3 is down In 5c88219, Happy Vibes Bot 3 ($BOT3) was down: HTTP code: 0 Response time: 0 ms Resolved: Happy Vibes Bot 3 is back up in e140dbb.
gharchive/issue
2022-03-20T17:16:44
2025-04-01T04:55:09.558391
{ "authors": [ "samosaman73" ], "repo": "Happy-Vibes-Bot/status", "url": "https://github.com/Happy-Vibes-Bot/status/issues/1126", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1393867401
🛑 Happy Vibes Bot 1 is down In 2fc3dc6, Happy Vibes Bot 1 ($BOT1) was down: HTTP code: 0 Response time: 0 ms Resolved: Happy Vibes Bot 1 is back up in 29d1471.
gharchive/issue
2022-10-02T19:06:35
2025-04-01T04:55:09.560456
{ "authors": [ "samosaman73" ], "repo": "Happy-Vibes-Bot/status", "url": "https://github.com/Happy-Vibes-Bot/status/issues/3427", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1044146618
🛑 Happy Vibes Bot 3 is down In e508ef6, Happy Vibes Bot 3 ($BOT3) was down: HTTP code: 502 Response time: 844 ms Resolved: Happy Vibes Bot 3 is back up in cfd602f.
gharchive/issue
2021-11-03T21:49:19
2025-04-01T04:55:09.562490
{ "authors": [ "samosaman73" ], "repo": "Happy-Vibes-Bot/status", "url": "https://github.com/Happy-Vibes-Bot/status/issues/517", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1827806968
🛑 Happy Vibes Bot 1 is down In 314cf70, Happy Vibes Bot 1 ($BOT1) was down: HTTP code: 502 Response time: 361 ms Resolved: Happy Vibes Bot 1 is back up in 7adceb7.
gharchive/issue
2023-07-30T05:11:07
2025-04-01T04:55:09.564749
{ "authors": [ "samosaman73" ], "repo": "Happy-Vibes-Bot/status", "url": "https://github.com/Happy-Vibes-Bot/status/issues/5476", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
79756541
Building the MVP Here we can discuss the technical details of building the minimum viable product. For background reading, see the following threads: https://github.com/Harvard-Open-Data-Project/droid/issues/12 https://github.com/Harvard-Open-Data-Project/droid/issues/4 What are we looking at building? A WordPress site with embedded Dataverse widgets (effectively a rebranded Dataverse) or a more full-featured site that talks to Dataverse under the hood while exposing a different interface? Some undergrads are interested in getting started building an MVP this summer so the sooner we have an idea of what to build the better. I'd say that we already have an MVP-- take a look at https://dataverse.harvard.edu/dataverse/harvardopendata Which is up, running, and even contains some APIs. While we might be able to build something that we like better with some combination of workpress and the like, this approach means that we don't need to find someone to run the result. Believe me, that is a HUGE advantage. There are some things that we need to do before publishing this. Getting a better skin on the dataverse would be nice; someone working with the Dataverse team could get to work on this (bringing in the HPAC folks, as well). We will also need to construct a script that will check to make sure that APIs, especially, are still live; something that will go through the published APIs, collect the endpoints, call those endpoints, and report if there is no response. This will help to maintain the site; the problem with sites like this (which is why the Mass. open data site has been shut down) is that links get stale, data ceases to be updated, and things fall apart. Then there is the question of getting and cataloging more data. The more data we can get and the better the descriptions, the more likely it is that people will come to the site. I'm in the process of getting the Harvard course catalog, but other data sources would be great to add. @hathix @jimwaldo -- I'd suggest we re-open this issue and continue this conversation. At the most recent meet-up on this project, we attempted to enter a couple dozen data sets into Dataverse, and it became clear that it is really not designed to be a open data catalog (as opposed to data repository software). @hathix -- can you give some specific examples here about the limitations of Dataverse? I'd suggest we explore the use of CKAN again, as a service, hosted on AWS, or on Harvard infrastructure.
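As a sketch of the liveness-checking script proposed above (the endpoint list and names are placeholders, not an existing project tool):

```python
import urllib.request

def check_endpoints(endpoints: list[str], timeout: int = 10) -> dict[str, str]:
    """Call each published API endpoint and report the ones with no usable
    response, so stale links can be flagged before the catalog rots."""
    report = {}
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                report[url] = "ok" if resp.status == 200 else f"http {resp.status}"
        except Exception as exc:
            report[url] = f"unreachable: {exc}"
    return report

# Illustrative usage:
print(check_endpoints(["https://dataverse.harvard.edu/api/info/version"]))
```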
gharchive/issue
2015-05-23T06:51:56
2025-04-01T04:55:09.598270
{ "authors": [ "hathix", "jimwaldo", "nsinai" ], "repo": "Harvard-Open-Data-Project/hodp", "url": "https://github.com/Harvard-Open-Data-Project/hodp/issues/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
139449353
get rid of imap4d We may someday have a MTA but it's not likely to be this old piece of cruft. Signed-off-by: Ronald G. Minnich rminnich@gmail.com LGTM @rminnich needs to be updated This CR is probably corrupting #87 since I also removed imap4d. Shall I remove that commit first? Yes, please remove all of imap4d from 87 and apply Ron's suggestions. Then we could merge 87 too. I have applied Ron's suggestions (unless I have missed something). I don't have access to git right now but consider it done in a couple of hours. Great! thanks so do I remove this CL or merge it or ... They are now detached so it can be merged.
gharchive/pull-request
2016-03-09T01:52:30
2025-04-01T04:55:09.601475
{ "authors": [ "elbing", "grd", "rminnich", "sevki" ], "repo": "Harvey-OS/harvey", "url": "https://github.com/Harvey-OS/harvey/pull/94", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
674332852
A question about switching stack to mirror sources

I configured stack following the official guide and replaced the contents of stack's config.yaml, but after running stack setup it keeps failing with the following error:

Exception while reading snapshot from lts-16.8:
HttpExceptionRequest Request {
  host = "raw.githubusercontent.com"
  port = 443
  secure = True
  requestHeaders = [("User-Agent","Haskell pantry package")]
  path = "/commercialhaskell/stackage-snapshots/master/lts/16/8.yaml"
  queryString = ""
  method = "GET"
  proxy = Nothing
  rawBody = False
  redirectCount = 10
  responseTimeout = ResponseTimeoutDefault
  requestVersion = HTTP/1.1
}
(ConnectionFailure user error (Network.Socket.gai_strerror not supported: 11004))

How do I solve this?

host = "raw.githubusercontent.com"

The host raw.githubusercontent.com is not accessible from mainland China.

🤦 🤦 It's the firewall. If stack is already installed, this can be solved by using the TUNA Stackage mirror together with the Hackage mirror. https://mirrors.tuna.tsinghua.edu.cn/help/hackage/ This repo seems to be mainly a dump of the old forum's content. For discussing Haskell in Chinese, I recommend asking under the related topics on https://zhihu.com or in QQ group 72874436.

Wow, I just noticed the OP linked the "China-based users" section of the stack docs. Here 👋

Same here, but I have a global proxy running and still hit the problem. I have configured the TUNA stackage and hackage mirrors.

@SnowOnion It's the firewall. If Haskell stack is already installed, you can use the TUNA Stackage mirror + Hackage mirror to work around it. (I remember that in 2019, installing Haskell stack itself also required getting past the firewall, and that could not be solved by switching mirrors; maybe that is no longer necessary.) https://mirrors.tuna.tsinghua.edu.cn/help/stackage/ https://mirrors.tuna.tsinghua.edu.cn/help/hackage/

@OkitaSan @leihao0401 @freizl This is my ~/.stack/config.yaml:

```yaml
package-indices:
  - download-prefix: http://mirrors.tuna.tsinghua.edu.cn/hackage/
    hackage-security:
      keyids:
        - 0a5c7ea47cd1b15f01f5f51a33adda7e655bc0f0b0615baa8e271f4c3351e21d
        - 1ea9ba32c526d1cc91ab5e5bd364ec5e9e8cb67179a471872f6e26f0ae773d42
        - 280b10153a522681163658cb49f632cde3f38d768b736ddbc901d99a1a772833
        - 2a96b1889dc221c17296fcc2bb34b908ca9734376f0f361660200935916ef201
        - 2c6c3627bd6c982990239487f1abd02e08a02e6cf16edb105a8012d444d870c3
        - 51f0161b906011b52c6613376b1ae937670da69322113a246a09f807c62f6921
        - 772e9f4c7db33d251d5c6e357199c819e569d130857dc225549b40845ff0890d
        - aa315286e6ad281ad61182235533c41e806e5a787e0b6d1e7eef3f09d137d2e9
        - fe331502606802feac15e514d9b9ea83fee8b6ffef71335479a2e68d84adc6b0
      key-threshold: 3 # number of keys required
      # ignore expiration date, see https://github.com/commercialhaskell/stack/pull/4614
      ignore-expiry: no
setup-info-locations: ["http://mirrors.tuna.tsinghua.edu.cn/stackage/stack-setup.yaml"]
urls:
  latest-snapshot: http://mirrors.tuna.tsinghua.edu.cn/stackage/snapshots.json
```

The error message:

Exception while reading snapshot from https://raw.githubusercontent.com/commercialhaskell/stackage-snapshots/master/lts/16/20.yaml:
HttpExceptionRequest Request {
  host = "raw.githubusercontent.com"
  port = 443
  secure = True
  requestHeaders = [("User-Agent","Haskell pantry package")]
  path = "/commercialhaskell/stackage-snapshots/master/lts/16/20.yaml"
  queryString = ""
  method = "GET"
  proxy = Nothing
  rawBody = False
  redirectCount = 10
  responseTimeout = ResponseTimeoutDefault
  requestVersion = HTTP/1.1
}
(InternalException (HandshakeFailed (Error_Misc "Network.Socket.recvBuf: resource vanished (Connection reset by peer)")))

My browser, on the other hand, can open the https://raw.githubusercontent.com/commercialhaskell/stackage-snapshots/master/lts/16/20.yaml mentioned in the error message just fine.

@TENX-S I ran into the same problem: I configured the Tsinghua mirrors and get the same error as you. Have you solved it by now?

@liyingjack I solved it by making the shell go through a proxy.

What kind of proxy do you mean? One for getting over the firewall?

Yes. On macOS, even with a global proxy enabled, bash does not go through the proxy by default.

@TENX-S Hi, I'd like to ask: when I run stack install it can install ghc-8.10.4, but right after that there is an error while downloading 7z.dll. How can I solve this?

stack install
Warning: http://mirrors.tuna.tsinghua.edu.cn/stackage/stack-setup.yaml: Unrecognized field in GHCDownloadInfo: version
Preparing to install GHC to an isolated location. This will not interfere with any system-level installation.
Downloaded ghc-8.10.4.
Preparing to download 7z.dll ...
Download expectation failure:
HttpExceptionRequest Request {
  host = "github.com"
  port = 443
  secure = True
  requestHeaders = [("User-Agent","The Haskell Stack")]
  path = "/fpco/minghc/blob/master/bin/7z.dll"
  queryString = "?raw=true"
  method = "GET"
  proxy = Nothing
  rawBody = False
  redirectCount = 10
  responseTimeout = ResponseTimeoutDefault
  requestVersion = HTTP/1.1
}
ConnectionTimeout

@attackon1point6 Hi, there are perhaps two ways:

1. A global proxy (recommended). Are you on macOS or Windows? If macOS, set http_proxy and https_proxy in the terminal to your proxy's port, e.g. in bash: export http_proxy=127.0.0.1:1087; export https_proxy=127.0.0.1:1087. My listening port here is 1087; check what yours is.
2. Modify the hosts file (tutorial). On macOS, the hosts file is located under /etc.

@TENX-S I'm on Windows 10 and just starting out with Haskell, so I don't really know how to set up a proxy. Thanks for your reply; I'll look up tutorials on this.

Bro, did you solve it? I ran into the same problem... I've ruled out the firewall: even with Clash for Windows running it still fails. Two kinds of errors alternate: (InternalException Network.Socket.recvBuf: invalid argument (Invalid argument)) and (InternalException (HandshakeFailed (Error_Misc "Network.Socket.recvBuf: invalid argument (Invalid argument)"))). Could you point me in the right direction? Much appreciated.

On Windows: either use ss + proxifier for a global proxy, or use WSL and work in a Linux environment. I recommend the second.

Thank you very much for your reply, but the problem had already been solved before that. I had considered the following approaches in case 7zip could not be downloaded:

1. Find a way to make the network able to download 7zip (roughly your first method).
2. Download 7z.dll and 7z.exe myself, put them into the folder they would have been downloaded into, and skip the check.
3. Give up on Windows and use a Linux system (your second method).

I solved it with the second approach. I had not been able to find where that "download folder" was, until I saw this article: https://memcpy0.blog.csdn.net/article/details/118878150. The installation output mentioned there shows that the downloaded files all go to C:\Users\21839\AppData\Local\Programs\stack\x86_64-windows\. Combined with the URL path in the error message, you can see that 7z.dll and 7z.exe are downloaded from GitHub; download them directly from that URL and put them into the path above, and this download step is skipped. Nevertheless, thank you for your patient reply!

Could I ask how exactly to do this? I can't seem to find those two files.
gharchive/issue
2020-08-06T13:53:54
2025-04-01T04:55:09.640645
{ "authors": [ "OkitaSan", "SMSQO", "SnowOnion", "TENX-S", "YouWillBe", "attackon1point6", "freizl", "leihao0401", "liyingjack", "yy025" ], "repo": "HaskellCNOrg/haskellcn", "url": "https://github.com/HaskellCNOrg/haskellcn/issues/171", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2080349308
Add an HTTPException to the query endpoint to handle engine failure

📝 Indexing I'm indexing your repository.
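A minimal sketch of what the requested change could look like, assuming the API is a FastAPI app (the endpoint and engine call are illustrative, not the repo's actual code):

```python
from fastapi import FastAPI, HTTPException

app = FastAPI()

def run_query_engine(q: str) -> str:
    # Hypothetical stand-in for the repo's retrieval logic.
    raise RuntimeError("engine not configured")

@app.post("/query")
async def query(q: str):
    try:
        return {"response": run_query_engine(q)}
    except Exception as exc:
        # Surface engine failure as a proper HTTP error instead of a raw 500.
        raise HTTPException(status_code=502, detail=f"Query engine failure: {exc}")
```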
gharchive/issue
2024-01-13T15:35:58
2025-04-01T04:55:09.643197
{ "authors": [ "Haste171" ], "repo": "Haste171/llamaindex-retrieval-api", "url": "https://github.com/Haste171/llamaindex-retrieval-api/issues/21", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
128358599
Docker-compose / Docker Hi guys, hope you are doing well! Is there any Dockerfile or docker-compose template available for deepdive? That would be amazing :-) Cheers, Luc Michalski There is this link, please see if it is useful.. for me somehow, the linking between Postgres and Docker was not working.. http://deepdive.stanford.edu/doc/advanced/docker.html If you make it work, please let me know how it works. Thanks Krishna See also #231 for past attempts at Docker support. We will be looking into this at some point, but if you can contribute a Dockerfile that would be awesome! I have a working Dockerfile to commit, but it's having a problem with the spouse_example in 0.8 that I'm troubleshooting first. @joshneland @netj Is there any progress on this task? I have some free time this weekend, and I already have a Deepdive Docker to share with you. Just need some time to verify and add Docker Compose.
gharchive/issue
2016-01-23T23:10:02
2025-04-01T04:55:09.688153
{ "authors": [ "joshneland", "lanphan", "lucmichalski", "netj", "skprasadu" ], "repo": "HazyResearch/deepdive", "url": "https://github.com/HazyResearch/deepdive/issues/467", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
333313732
Bug in brat.py

Hello, at line 561 of contrib/brat/brat.py:

anno_filelist = set([os.path.basename(fn).strip(".ann") for fn in glob.glob(input_dir + "/*.ann")])

I think the method strip is inappropriate here. According to the documentation, strip removes chars from the beginning and the end of the string. So the string "news01.ann" would be transformed into "ews01" instead of "news01". It prevents the file from being found. I think a fix would be to use os.path.basename(fn)[:-4].

Hi @tuxmam Good catch! We'll file this as a bug.
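A quick standalone demonstration of the difference (os.path.splitext(fn)[0] would be the idiomatic alternative to the slice):

```python
import os

fn = "some/dir/news01.ann"

# str.strip removes any of the given characters from both ends,
# so it also eats the leading "n" of the stem:
print(os.path.basename(fn).strip(".ann"))  # "ews01"  (wrong)

# Slicing off the 4-character extension keeps the stem intact:
print(os.path.basename(fn)[:-4])           # "news01" (correct)
```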
gharchive/issue
2018-06-18T15:14:44
2025-04-01T04:55:09.690701
{ "authors": [ "jason-fries", "tuxmam" ], "repo": "HazyResearch/snorkel", "url": "https://github.com/HazyResearch/snorkel/issues/952", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
544615253
Navigation drawer closing issue

In KivyMD 0.103.0, when I click on the navigation drawer's OneLineAvatarListItem, it does not close the navigation drawer automatically; every time I need to click on the close button. Why is it not closing automatically on a click on OneLineAvatarListItem?

@navjotcis This behavior is not implemented because the NavigationDrawer is an empty MDCard class. You must handle closing yourself.
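A minimal sketch of wiring that up by hand (my own, not KivyMD code; the close_drawer callable is hypothetical and stands for whatever your close button already invokes):

```python
from kivymd.uix.list import OneLineAvatarListItem

def build_drawer_item(close_drawer, text: str) -> OneLineAvatarListItem:
    """Create a drawer item that also closes the navigation drawer when
    tapped; KivyMD 0.103.0 leaves this to the application."""
    item = OneLineAvatarListItem(text=text)
    item.bind(on_release=lambda *args: close_drawer())
    return item
```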
gharchive/issue
2020-01-02T15:05:15
2025-04-01T04:55:09.692359
{ "authors": [ "HeaTTheatR", "navjotcis" ], "repo": "HeaTTheatR/KivyMD", "url": "https://github.com/HeaTTheatR/KivyMD/issues/155", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
211946130
Fix overflowing lines with manual linebreaks

Card descriptions with a manual linebreak (starting with [x]) tend to overflow the body. We should render them and then check whether they exceed the body. If they do, either resize the sprite or just rerender with a slightly smaller font size until they are contained.

Dup of #9
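For illustration, the render-measure-shrink loop proposed above as a standalone Python sketch (Sunwell itself is TypeScript; measure_height is a placeholder for the renderer's own text measurement):

```python
def fit_font_size(text: str, body_height: int, start_size: int,
                  measure_height, min_size: int = 8) -> int:
    """Decrease the font size until the rendered text fits the card body.
    measure_height(text, size) stands in for the canvas measurement call."""
    size = start_size
    while size > min_size and measure_height(text, size) > body_height:
        size -= 1  # rerender with a slightly smaller font, as proposed above
    return size

# Toy measurement: assume each 20 chars wrap to one line of `size` pixels.
toy_measure = lambda text, size: (len(text) // 20 + text.count("\n") + 1) * size
print(fit_font_size("[x] line one\nline two\nline three",
                    body_height=40, start_size=16, measure_height=toy_measure))
```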
gharchive/issue
2017-03-05T11:16:18
2025-04-01T04:55:09.698523
{ "authors": [ "beheh", "jleclanche" ], "repo": "HearthSim/Sunwell", "url": "https://github.com/HearthSim/Sunwell/issues/37", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1074347945
#CRASH

Smartphone (please complete the following information):
- 47% iOS 15
- 47% iOS 14
- 6% iPadOS 14

Additional context

Crashed: com.apple.main-thread
0 UIKitCore 0x3e37f4 -[UIView(AdditionalLayoutSupport) _uili_existingBaseFrameVariables] + 4
1 UIKitCore 0x1cf200 __57-[UIView(AdditionalLayoutSupport) _switchToLayoutEngine:]_block_invoke + 452
2 UIKitCore 0x3bd804 -[UIView(AdditionalLayoutSupport) _switchToLayoutEngine:] + 216
3 UIKitCore 0x1e1748 __57-[UIView(AdditionalLayoutSupport) _switchToLayoutEngine:]_block_invoke_2 + 188
4 CoreAutoLayout 0x85f8 -[NSISEngine withBehaviors:performModifications:] + 88
5 UIKitCore 0x1cf26c __57-[UIView(AdditionalLayoutSupport) _switchToLayoutEngine:]_block_invoke + 560
6 UIKitCore 0x3bd804 -[UIView(AdditionalLayoutSupport) _switchToLayoutEngine:] + 216
7 UIKitCore 0x1dfdbc __45-[UIView(Hierarchy) _postMovedFromSuperview:]_block_invoke + 100
8 CoreAutoLayout 0x85f8 -[NSISEngine withBehaviors:performModifications:] + 88
9 UIKitCore 0x26eff4 -[UIView(Hierarchy) _postMovedFromSuperview:] + 836
10 UIKitCore 0x18b8d4 -[UIView(Internal) _addSubview:positioned:relativeTo:] + 2148
11 UIKitCore 0x3b3c2c -[UITableView _addSubview:positioned:relativeTo:] + 108
12 UIKitCore 0x24bcf8 CreateScrollIndicator + 364
13 UIKitCore 0x31e11c -[UIScrollView _adjustScrollerIndicators:alwaysShowingThem:] + 276
14 UIKitCore 0x254334 -[UITableView _updateShowScrollIndicatorsFlag] + 376
15 HWPanModal 0xd658 -[HWPanModalPresentableHandler trackScrolling:] + 340 (HWPanModalPresentableHandler.m:340)
16 HWPanModal 0xd858 -[HWPanModalPresentableHandler didPanOnScrollViewChanged:] + 386 (HWPanModalPresentableHandler.m:386)
17 HWPanModal 0x15a9c -[KeyValueObserver didChange:] + 75 (KeyValueObserver.m:75)
18 Foundation 0x3204c NSKeyValueNotifyObserver + 292
19 Foundation 0x1d460 NSKeyValueDidChange + 356
20 Foundation 0x2bf78 -[NSObject(NSKeyValueObservingPrivate) _changeValueForKeys:count:maybeOldValuesDict:maybeNewValuesDict:usingBlock:] + 644
21 Foundation 0x21c5c -[NSObject(NSKeyValueObservingPrivate) _changeValueForKey:key:key:usingBlock:] + 72
22 Foundation 0x1fc8c _NSSetPointValueAndNotify + 328
23 UIKitCore 0x1c9c0c -[UIScrollView _setContentOffset:animated:animationCurve:animationAdjustsForContentOffsetDelta:animation:animationConfigurator:] + 824
24 HWPanModal 0xd6cc -[HWPanModalPresentableHandler haltScrolling:] + 349 (HWPanModalPresentableHandler.m:349)
25 HWPanModal 0xd858 -[HWPanModalPresentableHandler didPanOnScrollViewChanged:] + 386 (HWPanModalPresentableHandler.m:386)
26 HWPanModal 0x15a9c -[KeyValueObserver didChange:] + 75 (KeyValueObserver.m:75)
27 Foundation 0x3204c NSKeyValueNotifyObserver + 292
28 Foundation 0x1d460 NSKeyValueDidChange + 356
29 Foundation 0x2bf78 -[NSObject(NSKeyValueObservingPrivate) _changeValueForKeys:count:maybeOldValuesDict:maybeNewValuesDict:usingBlock:] + 644
30 Foundation 0x21c5c -[NSObject(NSKeyValueObservingPrivate) _changeValueForKey:key:key:usingBlock:] + 72
31 Foundation 0x1fc8c _NSSetPointValueAndNotify + 328
32 UIKitCore 0x2ae9ac -[UITableView _updateVisibleCellsNow:] + 2356
33 UIKitCore 0x2ae9e0 -[UITableView _updateVisibleCellsNow:] + 2408
34 UIKitCore 0x2ae9e0 -[UITableView _updateVisibleCellsNow:] + 2408
35 UIKitCore 0x2ae9e0 -[UITableView _updateVisibleCellsNow:] + 2408
36 UIKitCore 0x2ae9e0 -[UITableView _updateVisibleCellsNow:] + 2408
37 UIKitCore 0x2ae9e0 -[UITableView _updateVisibleCellsNow:] + 2408
38 UIKitCore 0x2ae9e0 -[UITableView _updateVisibleCellsNow:] + 2408
39 UIKitCore 0x2ae9e0 -[UITableView _updateVisibleCellsNow:] + 2408
40 UIKitCore 0x2ae9e0 -[UITableView _updateVisibleCellsNow:] + 2408
41 UIKitCore 0x2ae9e0 -[UITableView _updateVisibleCellsNow:] + 2408
42 UIKitCore 0x2ae9e0 -[UITableView _updateVisibleCellsNow:] + 2408
43 UIKitCore 0x17d8a8 -[UITableView layoutSubviews] + 456
44 UIKitCore 0x18ded8 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 2620
45 QuartzCore 0x3fe24 CA::Layer::layout_if_needed(CA::Transaction*) + 536
46 QuartzCore 0x32644 CA::Layer::layout_and_display_if_needed(CA::Transaction*) + 144
47 QuartzCore 0x46c6c CA::Context::commit_transaction(CA::Transaction*, double, double*) + 524
48 QuartzCore 0x4f560 CA::Transaction::commit() + 680
49 QuartzCore 0x31dac CA::Transaction::flush_as_runloop_observer(bool) + 88
50 CoreFoundation 0x41560 CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION + 36
51 CoreFoundation 0x10844 __CFRunLoopDoObservers + 572
52 CoreFoundation 0xb8dc __CFRunLoopRun + 1052
53 CoreFoundation 0x1f3b8 CFRunLoopRunSpecific + 600
54 GraphicsServices 0x138c GSEventRunModal + 164
55 UIKitCore 0x5196a8 -[UIApplication _run] + 1100
56 UIKitCore 0x2987f4 UIApplicationMain + 2092
57 *** 0xb9b4 main + 42 (AppDelegate.swift:42)
58 ??? 0x103ff1a24 (Missing)

I haven't found a way to reproduce this yet.

I ran into the same problem too. Has it been fixed in your use case?

Crash demo: HWPanModal Crash Demo. And I created a PR to fix this issue.

Nice. As I cannot reproduce this crash, this issue could not be resolved on my side. Later I will check the PR. Thank you so much.

HWPanModal (0.9.4) successfully published, fixing this crash. If this crash shows up again, pls reopen it. @devSC @iBlackStone @lchenfox @GandjaFuzz

It still crashes in production. Is there a solution? This crash currently ranks first in our Firebase reports.

Issue: d613e968aae6ac852c7ef1e792783c8d
Session: 596156ebd9c440faa9ff4359c5d6dd0c_DNE_0_v2
Date: Sun Jan 07 2024 04:35:08 GMT+0800 (China Standard Time)

Crashed: com.apple.main-thread
0 libsystem_c.dylib 0x9800 localeconv_l + 4
1 libsystem_c.dylib 0x1f74 _vsnprintf + 224
2 libsystem_c.dylib 0x5df8 snprintf_l + 32
3 CoreFoundation 0x50f30 __CFStringAppendFormatCore + 9436
4 CoreFoundation 0x841d4 _CFStringCreateWithFormatAndArgumentsReturningMetadata + 184
5 CoreFoundation 0x80aac _CFStringCreateWithFormatAndArgumentsAux2 + 44
6 Foundation 0x1aeb8 +[NSString stringWithFormat:] + 68
7 FBSDKCoreKit 0x61880 +[FBSDKSwizzler object:ofClass:isCallingSelector:] + 339 (FBSDKSwizzler.m:339)
8 FBSDKCoreKit 0x624e8 fb_findSwizzle + 67 (FBSDKSwizzler.m:67)
9 FBSDKCoreKit 0x61dc8 fb_swizzledMethod_2 + 90 (FBSDKSwizzler.m:90)
10 UIKitCore 0x1d0c8 -[UIView(Internal) _didMoveFromWindow:toWindow:] + 1812
11 UIKitCore 0x1cc6c -[UIView(Internal) _didMoveFromWindow:toWindow:] + 696
12 UIKitCore 0xd0248 __45-[UIView(Hierarchy) _postMovedFromSuperview:]_block_invoke + 112
13 CoreAutoLayout 0x4e70 -[NSISEngine withBehaviors:performModifications:] + 84
14 UIKitCore 0x105d858 -[UIView _postMovedFromSuperview:] + 672
15 UIKitCore 0x1ddf4 -[UIView(Internal) _addSubview:positioned:relativeTo:] + 1952
16 UIKitCore 0x93af0 -[UITableView _addSubview:positioned:relativeTo:] + 100
17 UIKitCore 0x145058 CreateScrollIndicator + 292
18 UIKitCore 0x52a6c -[UIScrollView _adjustScrollerIndicators:alwaysShowingThem:] + 240
19 UIKitCore 0x102740 -[UITableView _updateShowScrollIndicatorsFlag] + 308
20 HelloTalk_Binary 0x210c244 -[HWPanModalPresentableHandler trackScrolling:] + 301 (HWPanModalPresentableHandler.m:301)
21 HelloTalk_Binary 0x210c3a0 -[HWPanModalPresentableHandler didPanOnScrollViewChanged:] + 336 (HWPanModalPresentableHandler.m:336)
22 HelloTalk_Binary 0x21134bc -[KeyValueObserver didChange:] + 69 (KeyValueObserver.m:69)
23 Foundation 0x3c0d0 NSKeyValueNotifyObserver + 252
24 Foundation 0x52618 NSKeyValueDidChange + 356
25 Foundation 0x3f518 -[NSObject(NSKeyValueObservingPrivate) _changeValueForKeys:count:maybeOldValuesDict:maybeNewValuesDict:usingBlock:] + 680
26 Foundation 0x3f248 -[NSObject(NSKeyValueObservingPrivate) _changeValueForKey:key:key:usingBlock:] + 64
27 Foundation 0x3f1cc _NSSetPointValueAndNotify + 300
28 UIKitCore 0x55784 -[UIScrollView _setContentOffset:animated:animationCurve:animationAdjustsForContentOffsetDelta:animation:animationConfigurator:] + 588
29 HelloTalk_Binary 0x210c2c4 -[HWPanModalPresentableHandler haltScrolling:] + 319 (HWPanModalPresentableHandler.m:319)
30 HelloTalk_Binary 0x210c3d4 -[HWPanModalPresentableHandler didPanOnScrollViewChanged:] + 357 (HWPanModalPresentableHandler.m:357)
31 HelloTalk_Binary 0x21134bc -[KeyValueObserver didChange:] + 69 (KeyValueObserver.m:69)
32 Foundation 0x3c0d0 NSKeyValueNotifyObserver + 252
33 Foundation 0x52618 NSKeyValueDidChange + 356
34 Foundation 0x3f518 -[NSObject(NSKeyValueObservingPrivate) _changeValueForKeys:count:maybeOldValuesDict:maybeNewValuesDict:usingBlock:] + 680
35 Foundation 0x3f248 -[NSObject(NSKeyValueObservingPrivate) _changeValueForKey:key:key:usingBlock:] + 64
36 Foundation 0x3f1cc _NSSetPointValueAndNotify + 300
37 UIKitCore 0x56ae4 -[UITableView _updateVisibleCellsNow:] + 1784
38 UIKitCore 0x56b0c -[UITableView _updateVisibleCellsNow:] + 1824
39 UIKitCore 0x56b0c -[UITableView _updateVisibleCellsNow:] + 1824
40 UIKitCore 0x56b0c -[UITableView _updateVisibleCellsNow:] + 1824
41 UIKitCore 0x56b0c -[UITableView _updateVisibleCellsNow:] + 1824
42 UIKitCore 0x56b0c -[UITableView _updateVisibleCellsNow:] + 1824
43 UIKitCore 0x56b0c -[UITableView _updateVisibleCellsNow:] + 1824
44 UIKitCore 0x56b0c -[UITableView _updateVisibleCellsNow:] + 1824
45 UIKitCore 0x56b0c -[UITableView _updateVisibleCellsNow:] + 1824
46 UIKitCore 0x56b0c -[UITableView _updateVisibleCellsNow:] + 1824
47 UIKitCore 0x56b0c -[UITableView _updateVisibleCellsNow:] + 1824
48 UIKitCore 0x56320 -[UITableView layoutSubviews] + 148
49 UIKitCore 0x4420 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1992
50 QuartzCore 0x9f30 CA::Layer::layout_if_needed(CA::Transaction*) + 500
51 UIKitCore 0xceb30 -[UIView(Hierarchy) layoutBelowIfNeeded] + 296
52 HelloTalk_Binary 0x213fd14 -[UIViewController(PanModalDefault) allowsExtendedPanScrolling] + 104 (UIViewController+PanModalDefault.m:104)
53 HelloTalk_Binary 0x210c890 -[HWPanModalPresentableHandler configureViewLayout] + 419 (HWPanModalPresentableHandler.m:419)
54 HelloTalk_Binary 0x2110658 -[HWPanModalPresentationController configureViewLayout] + 315 (HWPanModalPresentationController.m:315)
55 HelloTalk_Binary 0x210f6dc -[HWPanModalPresentationController setNeedsLayoutUpdate] + 151 (HWPanModalPresentationController.m:151)
56 HelloTalk_Binary 0x2140388 -[UIViewController(Presentation) hw_panModalSetNeedsLayoutUpdate] + 41 (UIViewController+Presentation.m:41)
57 HelloTalk_Binary 0x3cf5aa8 partial apply for closure #1 in VoiceRVisitorViewController.bindViewModel() + 73 (VoiceRVisitorViewController.swift:73)
58 HelloTalk_Binary 0x5063e50 partial apply for closure #1 in ObservableType.subscribe(_:) + 22 (ObservableType+Extensions.swift:22)
59 HelloTalk_Binary 0x50177f0 AnonymousObserver.onCore(_:) + 23 (AnonymousObserver.swift:23)
60 HelloTalk_Binary 0x50667bc ObserverBase.on(_:) + 16 (ObserverBase.swift:16)
61 HelloTalk_Binary 0x5066900 protocol witness for ObserverType.on(_:) in
conformance ObserverBase + 6102948 (:6102948) 62 HelloTalk_Binary 0x5065a70 closure #1 in ObserveOnSerialDispatchQueueSink.init(scheduler:observer:cancel:) + 196 (ObserveOn.swift:196) 63 HelloTalk_Binary 0x506638c partial apply for thunk for @escaping @callee_guaranteed (@guaranteed ObserveOnSerialDispatchQueueSink, @in_guaranteed Event<A.ObserverType.Element>) -> (@out Disposable) + 6101552 (:6101552) 64 HelloTalk_Binary 0x50454d8 partial apply for closure #1 in DispatchQueueConfiguration.schedule(:action:) + 27 (DispatchQueueConfiguration.swift:27) 65 HelloTalk_Binary 0x8477c thunk for @escaping @callee_guaranteed () -> () + 4330915708 (:4330915708) 66 libdispatch.dylib 0x2320 _dispatch_call_block_and_release + 32 67 libdispatch.dylib 0x3eac _dispatch_client_callout + 20 68 libdispatch.dylib 0x126a4 _dispatch_main_queue_drain + 928 69 libdispatch.dylib 0x122f4 _dispatch_main_queue_callback_4CF + 44 70 CoreFoundation 0x98c28 CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE + 16 71 CoreFoundation 0x7a560 __CFRunLoopRun + 1992 72 CoreFoundation 0x7f3ec CFRunLoopRunSpecific + 612 73 GraphicsServices 0x135c GSEventRunModal + 164 74 UIKitCore 0x39cf58 -[UIApplication _run] + 888 75 UIKitCore 0x39cbbc UIApplicationMain + 340 76 HelloTalk_Binary 0x54ff7f8 main + 25 (main.m:25) 77 ??? 0x1e1174dec (缺少)
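For readers hitting the same trace: both stacks show -setContentOffset: being called from inside the contentOffset KVO callback (didPanOnScrollViewChanged: -> haltScrolling:), which re-enters the observer while UIKit is still laying out the table view. Below is a minimal sketch of the kind of reentrancy guard such a fix implies; it is not the library's actual patch, and the isAdjusting / lastTrackedOffset names are hypothetical.

// Illustrative sketch only, not HWPanModal's actual fix: suppress the
// nested KVO notification that fires when we reset contentOffset from
// inside the contentOffset observer itself.
- (void)haltScrolling:(UIScrollView *)scrollView {
    if (self.isAdjusting) {
        return; // ignore the change we cause ourselves below
    }
    self.isAdjusting = YES;
    // setContentOffset: re-triggers the KVO callback; the flag prevents
    // the recursive _updateVisibleCellsNow: cascade seen in the trace.
    [scrollView setContentOffset:self.lastTrackedOffset animated:NO];
    scrollView.showsVerticalScrollIndicator = NO;
    self.isAdjusting = NO;
}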
gharchive/issue
2021-12-08T12:17:52
2025-04-01T04:55:09.742251
{ "authors": [ "HeathWang", "devSC", "iBlackStone", "lchenfox", "pandaleecn" ], "repo": "HeathWang/HWPanModal", "url": "https://github.com/HeathWang/HWPanModal/issues/107", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1528597856
TS2430: Interface 'JoiPasswordExtend' incorrectly extends interface 'Root'.

joi-password works as expected, but when I have a pre-push husky check, I get the following error. How do I fix it? Thanks.

node_modules/joi-password/lib/index.d.ts:28:18 - error TS2430: Interface 'JoiPasswordExtend' incorrectly extends interface 'Root'.
  The types returned by 'string().validate(...)' are incompatible between these types.
    Type 'ValidationResult<string>' is not assignable to type 'ValidationResult<TSchema>'.
      Type '{ error: undefined; warning?: ValidationError | undefined; value: string; }' is not assignable to type 'ValidationResult<TSchema>'.
        Type '{ error: undefined; warning?: ValidationError | undefined; value: string; }' is not assignable to type '{ error: undefined; warning?: ValidationError | undefined; value: TSchema; }'.
          Types of property 'value' are incompatible.
            Type 'string' is not assignable to type 'TSchema'.
              'TSchema' could be instantiated with an arbitrary type which could be unrelated to 'string'.

28 export interface JoiPasswordExtend extends joi.Root {
                    ~~~~~~~~~~~~~~~~~

Found 1 error.

husky - pre-push hook exited with code 1 (error)

Hello @chuasonglin1995. I don't know where the errors come from. However, you can add this to your tsconfig.json to get around the problem:

{
  "compilerOptions": {
    "skipLibCheck": true
  },
  ...
}

Thanks @Heaty566, I shall do that then. Thanks for the package, it's lovely.

Hi @Heaty566, I figured out that it could be because the package is not fully TS compatible. In your validate functions, you are returning value as any, so when it compiled, the compiler didn't infer that the value is a string and left it as any.
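Following up on that diagnosis, one way the declaration could avoid the TS2430 error is to stop redeclaring string() on an interface that extends joi.Root, and instead extend StringSchema and intersect with Root. This is a hedged sketch against joi's published typings; the method names mirror joi-password's documented API but should be checked against the real index.d.ts.

// Sketch only: keep joi.Root's generic string().validate(...) intact by
// extending StringSchema instead of re-typing Root's members.
import * as joi from "joi";

interface JoiStringPassword extends joi.StringSchema {
  minOfSpecialCharacters(min: number): JoiStringPassword;
  minOfLowercase(min: number): JoiStringPassword;
  minOfUppercase(min: number): JoiStringPassword;
  minOfNumeric(min: number): JoiStringPassword;
  noWhiteSpaces(): JoiStringPassword;
}

// The intersection leaves every other Root member (and its generics)
// untouched, so ValidationResult<TSchema> no longer collapses to
// ValidationResult<string>.
type JoiPasswordExtend = joi.Root & { string(): JoiStringPassword };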
gharchive/issue
2023-01-11T08:05:01
2025-04-01T04:55:09.745830
{ "authors": [ "Heaty566", "chuasonglin1995" ], "repo": "Heaty566/joi-password", "url": "https://github.com/Heaty566/joi-password/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1740683675
Allow proxying to custom LLM APIs

Currently, Helicone only allows people to proxy to the following services: https://github.com/Helicone/helicone/blob/868d3b7e424c938067611ecbd5c8d37459bdc3ff/worker/src/lib/HeliconeProxyRequest/mapper.ts#L209-L214

However, there are many other OpenAI-compatible services, and people are building OpenAI interfaces to open-source models like Llama, so Helicone could provide metrics for them without any code modifications.

I would really like to see this functionality as well so I could use Helicone with something like LocalAI.

Hi! We are adding this super soon! Next week this should be merged in.

@alexkreidler and @coreywagehoft: @colegottdank is working on this right now, actually :)

Hi. May I know whether PaLM 2 support was added, or is it already in some branch pending merge to main?

Was this finished? I need to track my users' token consumption.

@leandrosilvaferreira, hi, you can use our gateway integration! It allows any target URL. We maintain a whitelist of providers, and if a provider is not part of that whitelist, we allow up to 10k requests per day. If your desired provider is not there and is valid, we can add it.
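For anyone landing here, a rough sketch of what a gateway-style call looks like. The gateway host and header names below are from my reading of Helicone's docs at the time and may have changed, so treat them as assumptions and verify against the current documentation; the target URL and model name are placeholders.

# Sketch, not authoritative: route an OpenAI-compatible request through
# the Helicone gateway at an arbitrary target URL.
curl https://gateway.helicone.ai/v1/chat/completions \
  -H "Helicone-Auth: Bearer $HELICONE_API_KEY" \
  -H "Helicone-Target-Url: https://my-llm-host.example.com" \
  -H "Authorization: Bearer $PROVIDER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "my-local-model", "messages": [{"role": "user", "content": "Hello"}]}'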
gharchive/issue
2023-06-05T00:28:22
2025-04-01T04:55:09.766646
{ "authors": [ "alexkreidler", "chingweesze-oursky", "chitalian", "colegottdank", "coreywagehoft", "leandrosilvaferreira" ], "repo": "Helicone/helicone", "url": "https://github.com/Helicone/helicone/issues/464", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
422325426
Showing UNDEFINED-TRANSLATION in large image section and rest image

Comparing the src/config.js and i18n/enUS.json files, the translation of the large image description and the translation of the rest description fail due to typos in the key names. I would have opened a pull request for this, but I don't have access to do so.

To fix the problem, simply change the following lines in i18n/enUS.json:

line 69: "config-smallImage-desc": "Change content of large image on Rich Presence",
line 95: "config-rest-description": "Change behaviour while you're resting",

to:

line 69: "config-largeImage-desc": "Change content of large image on Rich Presence",
line 95: "config-rest-desc": "Change behaviour while you're resting",

And that's all. I checked the other files and I don't see these lines in them anyway.

Edit: I have managed to create a pull request by forking the repository.

Fixed by #84
gharchive/issue
2019-03-18T16:49:20
2025-04-01T04:55:09.774263
{ "authors": [ "GamesProSeif", "HelloWorld017" ], "repo": "HelloWorld017/atom-discord", "url": "https://github.com/HelloWorld017/atom-discord/issues/83", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
208474703
Priority of snippets

Excuse me, could you set a lower priority like -50 (the same as vim-snippets) or -49 as the default? It would be more compatible with other UltiSnips plugins, and users could override your snippets with the default priority ฅ' ω 'ฅ

By the way, how about setting a tabstop at the end of the line, or on a new line, for the "import reference" snippet? Like:

/// \<reference path='${1:file}' />$0

or:

/// \<reference path='${1:file}' />
$0

Hi, thanks for trying out yats. I don't know where I can set snippet priority. According to the doc https://github.com/SirVer/ultisnips/blob/master/doc/UltiSnips.txt#L224-L245, it seems you can specify your snippet loading order. Built-in snippets cannot be customized; you can write your own and then load them to override yats.

You can set snippet priority with the 'priority' keyword at the beginning of a line. It looks like this: https://github.com/honza/vim-snippets/blob/master/UltiSnips/javascript.snippets

Maybe this paragraph would help (´・ω・)ノ https://github.com/SirVer/ultisnips/blob/master/doc/UltiSnips.txt#L619

Thanks for pointing that out! I didn't know that! A negative priority seems to be the better practice. Will add it!

Thanks for adopting it, too (๑• ω •๑) And your proposed /// reference snippet is definitely better than the current one!

Also added! Thanks!
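To make the two suggestions concrete, here is a minimal UltiSnips snippet-file sketch combining them: a file-level negative priority so user snippets (priority 0 by default) win, and a final tabstop on a new line after the reference tag. Syntax follows the UltiSnips documentation linked above; the trigger name is just an example.

priority -50

snippet ref "triple-slash import reference" b
/// <reference path='${1:file}' />
$0
endsnippet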
gharchive/issue
2017-02-17T15:31:11
2025-04-01T04:55:09.814832
{ "authors": [ "HerringtonDarkholme", "shirohana" ], "repo": "HerringtonDarkholme/yats.vim", "url": "https://github.com/HerringtonDarkholme/yats.vim/issues/17", "license": "Vim", "license_type": "permissive", "license_source": "github-api" }
85616047
Optimizing Images (via ImageOptim).

Reduce the images in size using lossless techniques. Images will often contain data that's not part of the displayed information. This was performed using ImageOptim.

For some reason this PR failed the JSLint check. That was not your fault. I will accept this and fix the other problem, probably independently. Thanks for your expert support on this.
gharchive/pull-request
2015-06-05T19:34:33
2025-04-01T04:55:09.872570
{ "authors": [ "alansouzati", "mattfarina" ], "repo": "HewlettPackard/grommet", "url": "https://github.com/HewlettPackard/grommet/pull/31", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1769397728
Error when starting the second SN (SMLETHNode: Transferring 100 ETH to local account failed) - MNIST example

Issue description

- issue description: error obtained when starting the second Swarm Network node.
- occurrence - consistent or rare:
- error messages: SMLETHNode: Transferring 100 ETH to local account failed
- commands used for starting containers: the ones provided in the MNIST example (https://github.com/HewlettPackard/swarm-learning/blob/master/examples/mnist/README.md)
- docker logs [APLS, SPIRE, SN, SL, SWCI]:

######################################################################
HPE SWARM LEARNING SN NODE
######################################################################
© Copyright 2019-2022 Hewlett Packard Enterprise Development LP
######################################################################
2023-06-22 10:19:21,321 : swarm.blCnt : INFO : Setting up blockchain layer for the swarm node: START
2023-06-22 10:19:22,628 : swarm.blCnt : INFO : Creating Autopass License Provider
2023-06-22 10:19:23,407 : swarm.blCnt : INFO : Creating license server
2023-06-22 10:19:23,407 : swarm.blCnt : INFO : Setting license servers
2023-06-22 10:19:23,421 : swarm.blCnt : INFO : Acquiring floating license 1100000380:1
2023-06-22 10:19:24,047 : swarm.SN : INFO : Using URL : https://213.227.143.136:30304/is_up
2023-06-22 10:19:24,170 : swarm.SN : INFO : Sentinel Node is UP!
2023-06-22 10:19:43,727 : swarm.SN : INFO : SMLETHNode: Starting GETH ...
2023-06-22 10:22:16,547 : swarm.SN : ERROR : SMLETHNode: Transferring 100 ETH to local account failed
Traceback (most recent call last):
  File "", line 1, in
  File "start_swarm_sn.py", line 196, in start_swarm_sn.main
  File "swarmfactory.py", line 615, in swarmfactory.createBCFullNodeForContainer
  File "swarmbcnode.py", line 739, in swarmbcnode.smlethnode.initialize
  File "swarmutils.py", line 678, in swarmutils.swarmlogger.emitError
RuntimeError: SMLETHNode: Transferring 100 ETH to local account failed
2023-06-22 10:22:16,556 : swarm.blCnt : WARNING : Releasing license

Swarm Learning Version

- Find the docker tag of the Swarm images ( $ docker images | grep hub.myenterpriselicense.hpe.com/hpe_eval/swarm-learning ): Version 2.0.0

OS and ML Platform

- details of host OS: Ubuntu 20.04.6 LTS
- details of ML platform used:
- details of Swarm Learning cluster (number of machines, SL nodes, SN nodes): 2 hosts, exactly the same as the MNIST example

Quick Checklist: Respond [Yes/No]

- APLS server web GUI shows available licenses? Yes
- If multiple systems are used, can each system access every other system? Yes
- Is password-less SSH configuration set up for all the systems? Yes
- If GPU or other protected resources are used, does the account have sufficient privileges to access and use them? Is the user id a member of the docker group? Yes

Additional notes

- Are you running the documented example without any modification? Yes, just modifying the IPs of host 1 and host 2
- Add any additional information about the use case or any notes which support the issue investigation: All the steps 1-9 and 11 from the README (https://github.com/HewlettPackard/swarm-learning/blob/master/examples/mnist/README.md) are followed correctly, but the error appears in step 10. I think the issue is related to Ethereum and the creation of the blockchain layer, but I do not have more information about the error.

Can you check if the systems on which the nodes are running are time synchronized?

@joaquingarciaatos please confirm whether the issue is resolved after time synchronization using NTP.

> Can you check if the systems on which the nodes are running are time synchronized?

Hi! I checked whether the nodes are time synchronized, and they are. But it didn't solve anything about the issue... Do you have any idea what the issue could be?

The error might occur due to unsynchronized time between nodes, where even a slight time difference of a few milliseconds can cause the issue. To resolve this, you can synchronize the nodes using NTP (Network Time Protocol). Afterward, restart the Docker service and try running the example again.
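For reference, a short sketch of the check-and-fix sequence described above, assuming Ubuntu hosts with systemd-timesyncd; the exact commands depend on your distribution and NTP setup.

# Verify clock sync on every host (look for "System clock synchronized: yes")
timedatectl status

# Enable NTP synchronization if it is off
sudo timedatectl set-ntp true

# Restart Docker afterwards, as suggested above, then re-run the example
sudo systemctl restart docker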
gharchive/issue
2023-06-22T10:28:34
2025-04-01T04:55:09.888989
{ "authors": [ "Deepthiappasani", "iArpanPatel", "joaquingarciaatos" ], "repo": "HewlettPackard/swarm-learning", "url": "https://github.com/HewlettPackard/swarm-learning/issues/181", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
480217117
Getting Service Unavailable when trying to import existing infra

terraform version:

Terraform v0.11.14
+ provider.oneview (unversioned)

Your version of Terraform is out of date! The latest version
is 0.12.6. You can update by downloading from www.terraform.io/downloads.html

I am trying to terraform import as you described here, and this is the result:

Error: provider.oneview: Post https://10.XX.XX.XX/rest/login-sessions: Service Unavailable

Here is my main.tf:

provider "oneview" {
  ov_username   = "BLABLABLA"
  ov_password   = "BLABLABLA"
  ov_endpoint   = "https://10.XX.XX.XX"
  ov_sslverify  = false
  ov_apiversion = 600
  ov_domain     = "LOCAL"
  ov_ifmatch    = "*"
}

resource "oneview_logical_interconnect_group" "logical_interconnect_group" {
}

resource "oneview_logical_interconnect" "li" {
}

I can log in to the OneView UI using a web browser and also used curl to verify; actually, that is how I discovered the ov_apiversion and ov_domain values:

curl -kL --noproxy "*" -H "X-Api-Version: 600" -H "Content-Type: application/json" -d '{"userName":"BLABLABLA","password":"BLABLABLA","authLoginDomain":"LOCAL","loginMsgAck":null}' https://10.XX.XX.XX/rest/login-sessions

And I get a session ID:

{"sessionID":"LTM5MDIwXXXXXXXXXXXXXXXXXXXXXXXXXXX","partnerData":{}}

So the service is available. Could you please advise?

I have realized that the OneView IP addresses were not defined in NO_PROXY; once they were defined, the import worked.
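Spelled out as commands, the resolution looks roughly like this; the address is the redacted one from the report, and the import ID is a placeholder, since the exact resource ID format depends on the provider version.

# Exempt the OneView appliance from the proxy before running terraform
export NO_PROXY="10.XX.XX.XX,$NO_PROXY"
export no_proxy="$NO_PROXY"   # some tools only honour the lowercase variant

# Placeholder ID; check the provider docs for the expected import ID format
terraform import oneview_logical_interconnect_group.logical_interconnect_group <lig-name>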
gharchive/issue
2019-08-13T15:18:28
2025-04-01T04:55:09.892891
{ "authors": [ "Baykonur" ], "repo": "HewlettPackard/terraform-provider-oneview", "url": "https://github.com/HewlettPackard/terraform-provider-oneview/issues/127", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1127728645
Update dependencies

This pull request introduces updates for the skeleton's dependencies.

@adam-arold Could you merge it? 🙂 I don't have the rights to do so.

Done!
gharchive/pull-request
2022-02-08T20:31:16
2025-04-01T04:55:09.938005
{ "authors": [ "adam-arold", "falconepl" ], "repo": "Hexworks/zircon.skeleton.scala", "url": "https://github.com/Hexworks/zircon.skeleton.scala/pull/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
826799959
Delta to Axis not appearing

I've done everything described in the text and imported the plugins into the folder, but nothing changes; there's nothing new.

Right-click the DLL, choose Properties, and make sure it's not blocked.
gharchive/issue
2021-03-09T23:33:43
2025-04-01T04:55:09.947874
{ "authors": [ "LInus123456789", "redcubie" ], "repo": "HidWizards/UCR-Plugins", "url": "https://github.com/HidWizards/UCR-Plugins/issues/25", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
205410143
SharePoint monitor should run on an actual SP server

It only runs remotely right now.

It works fine on the server; it just requires the regular remote sessions to be set up. This is fine anyway in a multi-server farm, but it might be an issue for single-server farms, as it's definitely overkill then.

Closing for now, pending demand.
gharchive/issue
2017-02-05T08:18:01
2025-04-01T04:55:09.954099
{ "authors": [ "HiltonGiesenow" ], "repo": "HiltonGiesenow/PoShMon", "url": "https://github.com/HiltonGiesenow/PoShMon/issues/89", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
993791168
[BUG] 25 words instead of 24 when creating a new private key

Describe the bug
I was playing around with adding a new wallet to my Chia. I clicked on "Create new private key", but instead of 24 fields with 24 words, I saw 25 of them.

To Reproduce
Steps to reproduce the behavior:
1. Click on "Create new private key", go back
2. Click on "Create new private key", go back
3. Click on "Create new private key", go back
4. Click on "Create new private key", go back
5. Click on "Import from mnemonic (24 words)", go back
6. Click on "Create new private key", go back
7. Click on "Create new private key", go back
8. Click on "Create new private key"
9. That additional 25th field should appear

Expected behavior
As far as I know, there should always be 24 words in a mnemonic.

Desktop
- OS: Windows 10 PRO
- OS Version: 21H1
- OS Build: 19043.1202
- CPU: Ryzen 7 3700x

Additional context
Please check the video I have sent to you in the zip file. The added steps for reproducing the issue are not 100% certain; it may depend on how many times you have to click on these buttons and go back.
chives w01.zip

It's fixed in Chia currently: https://github.com/Chia-Network/chia-blockchain-gui/commit/6edcd4878aacc5e561c2033023a3ddd758e00c23
gharchive/issue
2021-09-11T09:01:05
2025-04-01T04:55:09.969125
{ "authors": [ "LocoSlug", "harambasapp" ], "repo": "HiveProject2021/chives-blockchain", "url": "https://github.com/HiveProject2021/chives-blockchain/issues/34", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1786769552
After initial 401, error while trying to update token: 'EnphaseToken' object has no attribute 'update'

#4 is solved; this time I see a missing attribute 'update':

Setup Enphase gateway
HTTP GET Attempt #1: https://192.168.3.112/production.json: Header:{'Authorization': 'Bearer <TOKEN'}
Received status_code 401 from Gateway
Request header: {'Authorization': 'Bearer <TOKEN>'}
Trying to update token
Error while trying to update token: 'EnphaseToken' object has no attribute 'update'

After the HACS update and a restart, the log changed to:

[custom_components.enphase_gateway.enphase_token] New Enphase owner Token valid until: 2024-07-02 20:28:23+00:00
[custom_components.enphase_gateway.gateway_reader.gateway_reader] Setup Enphase gateway
[custom_components.enphase_gateway.gateway_reader.gateway_reader] HTTP GET Attempt #1: https://192.168.3.112/production.json: Header:{'Authorization': 'Bearer <token>'}
[custom_components.enphase_gateway.gateway_reader.gateway_reader] Received status_code 401 from Gateway
[custom_components.enphase_gateway.gateway_reader.gateway_reader] Request header: {'Authorization': 'Bearer <token>'}
[custom_components.enphase_gateway.gateway_reader.gateway_reader] Trying to update token
[py.warnings] /config/custom_components/enphase_gateway/gateway_reader/gateway_reader.py:289: RuntimeWarning: coroutine 'EnphaseToken.update' was never awaited
  self._enphase_token.update()

Silly mistake. Should be finally fixed now.
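The RuntimeWarning pinpoints the bug: EnphaseToken.update is a coroutine, but the call site invoked it like a plain method, so the refresh never actually ran. A minimal sketch of the corrected call site is below; the surrounding method name is hypothetical, and it assumes the caller is (or is made) async.

# Sketch, not the component's actual code: the coroutine must be awaited.
async def _refresh_token_and_retry(self) -> None:
    """Refresh the Enphase token after a 401, then retry the request."""
    # Before the fix this read `self._enphase_token.update()`, which only
    # created the coroutine object and triggered the RuntimeWarning above.
    await self._enphase_token.update()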
gharchive/issue
2023-07-03T20:22:43
2025-04-01T04:55:09.971050
{ "authors": [ "Hoffmann77", "catsmanac" ], "repo": "Hoffmann77/home_assistant_enphase_gateway", "url": "https://github.com/Hoffmann77/home_assistant_enphase_gateway/issues/7", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
181886871
Keg.relocate_text_files performance is awful

Keg.relocate_text_files reads every single text file and libtool file in a bottle, replaces the cellar, prefix and repository placeholders (if any), and rewrites the files that need modifications. This process is single-handedly inflating bottle pouring time 10–100 times. A few examples I tested:

- git: untar ~0.3s, relocate text files ~6s;
- go: untar ~1.6s, relocate text files ~70s;
- mysql: untar ~2.7s, relocate text files ~170s.

@zmwangx Thanks for filing this!

Looks like we're currently doing three separate passes over each file when we could just do one with multiple replacements. Not an asymptotic improvement, but ~3x isn't bad.

Scratch that; looks like the real bottleneck is in text_files, particularly running /usr/bin/file on every file and parsing the output. After some quick googling, I can't find a better way to determine whether a file is text or binary without bringing in external dependencies like ruby-filemagic.

A few options:

- do that in parallel to try and speed it up
- cache the files we actually rewrite in bottle.rb and store them inside the bottle
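To illustrate the first idea (one pass with multiple replacements instead of three), here is a rough Ruby sketch. The placeholder strings and method names are illustrative, not Homebrew's actual internals.

# Sketch of a single-pass relocation: one gsub over a union of placeholders
# instead of three sequential scans of each file's contents.
REPLACEMENTS = {
  "@@HOMEBREW_CELLAR@@"     => HOMEBREW_CELLAR.to_s,
  "@@HOMEBREW_PREFIX@@"     => HOMEBREW_PREFIX.to_s,
  "@@HOMEBREW_REPOSITORY@@" => HOMEBREW_REPOSITORY.to_s,
}.freeze

def relocate_contents(contents)
  pattern = Regexp.union(REPLACEMENTS.keys)
  contents.gsub(pattern) { |placeholder| REPLACEMENTS.fetch(placeholder) }
end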
gharchive/issue
2016-10-09T13:27:40
2025-04-01T04:55:09.989790
{ "authors": [ "MikeMcQuaid", "jawshooah", "zmwangx" ], "repo": "Homebrew/brew", "url": "https://github.com/Homebrew/brew/issues/1250", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1104774254
Unable to uninstall adoptopenjdk cask

brew config output

HOMEBREW_VERSION: 3.3.10-58-g069ab08
ORIGIN: https://github.com/Homebrew/brew
HEAD: 069ab087f99fd183aee599bf785cb835d5868407
Last commit: 21 hours ago
Core tap ORIGIN: https://github.com/Homebrew/homebrew-core
Core tap HEAD: 6e7ef3c8080bcb285cee2d1c6332c46562f49046
Core tap last commit: 6 minutes ago
Core tap branch: master
HOMEBREW_PREFIX: /usr/local
HOMEBREW_CASK_OPTS: []
HOMEBREW_CORE_GIT_REMOTE: https://github.com/Homebrew/homebrew-core
HOMEBREW_DISPLAY: /private/tmp/com.apple.launchd.ElGdl325Tp/org.xquartz:0
HOMEBREW_EDITOR: vim
HOMEBREW_MAKE_JOBS: 4
Homebrew Ruby: 2.6.8 => /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.8/bin/ruby
CPU: quad-core 64-bit kabylake
Clang: 12.0.5 build 1205
Git: 2.30.1 => /Library/Developer/CommandLineTools/usr/bin/git
Curl: 7.64.1 => /usr/bin/curl
macOS: 11.4-x86_64
CLT: 12.5.0.22.11
Xcode: N/A

brew doctor output

Please note that these warnings are just used to help the Homebrew maintainers with debugging if you file an issue. If everything you use Homebrew for is working fine: please don't worry or file an issue; just ignore this. Thanks!

Warning: A newer Command Line Tools release is available.
Update them from Software Update in System Preferences or run:
  softwareupdate --all --install --force
If that doesn't show you any updates, run:
  sudo rm -rf /Library/Developer/CommandLineTools
  sudo xcode-select --install
Alternatively, manually download them from:
  https://developer.apple.com/download/all/.
You should download the Command Line Tools for Xcode 13.1.

Warning: Some installed kegs have no formulae!
This means they were either deleted or installed with `brew diy`.
You should find replacements for the following formulae:
  python@2

Warning: Some installed formulae are deprecated or disabled.
You should find replacements for the following formulae:
  ilmbase
  sdl
  sdl_image
  sdl_mixer
  sdl_ttf
  sshfs

Warning: Unbrewed header files were found in /usr/local/include.
If you didn't put them there on purpose they could cause problems when building Homebrew formulae, and may need to be deleted.
Unexpected header files:
  /usr/local/include/ffi-x86_64.h
  /usr/local/include/ffi.h
  /usr/local/include/ffitarget-x86_64.h
  /usr/local/include/ffitarget.h

Warning: Unbrewed '.pc' files were found in /usr/local/lib/pkgconfig.
If you didn't put them there on purpose they could cause problems when building Homebrew formulae, and may need to be deleted.
Unexpected '.pc' files:
  /usr/local/lib/pkgconfig/libffi.pc

Warning: You have unlinked kegs in your Cellar.
Leaving kegs unlinked can lead to build-trouble and cause formulae that depend on those kegs to fail to run properly once built. Run `brew link` on these:
  python@2
  imath
  six

Verification

[X] I ran brew update and am still able to reproduce my issue.
[X] I have resolved all warnings from brew doctor and that did not fix my problem.

What were you trying to do (and why)?

I am trying to uninstall adoptopenjdk because I don't need it.

What happened (include all command output)?

The cask was reported as not installed.

What did you expect to happen?

The cask to uninstall and not show up in brew list --cask.

Step-by-step reproduction instructions (by running brew commands)

❯ brew list --cask
adobe-acrobat-reader curseforge eclipse-java grammarly minecraft qbittorrent steam vlc
adoptopenjdk daisydisk fontforge graphiql osxfuse rar sublime-text webstorm
balenaetcher discord google-chrome iterm2 parsec shottr teamviewer zoom
cloudflare-warp docker google-drive league-of-legends pycharm spotify termius

❯ brew uninstall adoptopenjdk
Error: Cask 'adoptopenjdk' is not installed.

Please run the command rm -r "$(brew --prefix)/Caskroom/adoptopenjdk" which should fix your issue. brew uninstall should probably know to fall back on that, I think.

> brew uninstall should probably know to fall back on that, I think.

This was very helpful for me as well. Thank you!
gharchive/issue
2022-01-15T16:25:47
2025-04-01T04:55:09.995363
{ "authors": [ "PythonCoderAS", "carlocab", "miccal", "surety" ], "repo": "Homebrew/brew", "url": "https://github.com/Homebrew/brew/issues/12726", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
201108488
brew reports minimum supported version of Xcode, not actual version

When running brew doctor or brew config I see the following:

Warning: Your Xcode (8.0) is outdated.
Please update to Xcode 8.2 (or delete it).
Xcode can be updated from the App Store.
...
Xcode: 8.0 => /Users/Samantha/Desktop/faux-code.app/Contents/Developer
...

This can be demonstrated by taking any system with 10.12.x without Xcode.app installed (but with CLT), making a copy of any application, and changing the bundle identifier to com.apple.dt.Xcode so it gets picked up as "Xcode". In this circumstance brew will not read the actual version number of the bundle and assumes that it must be 8.0 (the minimum version supported on 10.12).

(I was going to submit a PR with a fix for this, but the related code seems a bit too complicated for my ability with Ruby; it seems related to the defaults in the code in Library/Homebrew/os/mac/xcode.rb and Library/Homebrew/extend/os/mac/diagnostic.rb.)

I'm not sure I understand the issue. Why is the bundle identifier matching Xcode if it's not Xcode?

That is done to demonstrate the problem: the version reported by brew isn't actually representative of the version of the Xcode it found.

What version of Xcode is it? Thanks!

I don't understand your question; my problem is that brew is telling me it found Xcode but is not reporting the version number from the Info.plist at the path it found. I don't see how that relates to what the actual number is in this specific install on my machine?

> I don't see how that relates to what the actual number is in this specific install on my machine?

I'm trying to understand what the actual bug is that doesn't involve incorrectly setting bundle identifiers. If there's a version of Xcode installed in a weird location: what version is it?

Let me try to explain this again, because it seems like I did so poorly the first time. The behavior I am observing from brew is:

1. It reports that it has detected an installation of Xcode when there is an application bundle with the identifier com.apple.dt.Xcode that can be found. (expected behavior)
2. It reports that the version found is not the most recent version, and that I should update it. (expected behavior)
3. Based on the code, it doesn't seem that brew is actually querying the version number from the Info.plist of the bundle that has the identifier com.apple.dt.Xcode. It instead assumes that it is the minimum version supported on the OS version you are running. (unexpected behavior)

So, as a result, brew is showing me the number 8.0 in the reports I posted in this issue originally. The problem isn't the specific version number I have, but that brew gives me misleading/incorrect information about that number.

Ok, I think I understand. We don't query the Info.plist but instead use either the xcodebuild in the Xcode.app, or the one on the system if that's not found. That's why I'm asking about the Xcode version.

And if that is not found, what do you do? Because right now it seems like it reports incorrectly if it cannot locate a version of xcodebuild.

Yes, that's probably correct. Again, though, can we bring this back to a real-world example? Do you have a working, normal Xcode where it can't find xcodebuild? If so, what version is it?

Why would one expect an app that isn't Xcode, but has been cleverly spoofed to look like Xcode, to be correctly identified as a fake by Homebrew?

I don't understand why everyone is getting stuck on this "fake Xcode" thing. The path you are identifying by looking up the bundle identifier has nothing to do with the selected Xcode version via xcode-select (which was fixed only recently by @MikeMcQuaid). The issue I'm filing here is that brew reports on the found Xcode.app bundle in a way that is incorrect and misrepresentative to the user.

Sorry, @samdmarshall, but if you're not going to answer the questions I've asked, I'm not going to be able to even attempt to fix this.

Sorry? I'm pointing out that whatever xcodebuild returns is irrelevant, as that uses a completely different means of lookup from the mdfind query, so using that as a means of reporting the version is incorrect.

I need answers to these questions if you'd like this fixed:

- What version of Xcode is it?
- Do you have a working, normal Xcode where it can't find xcodebuild? If so, what version is it?

Without answers to them, as far as I can tell the issue is "if I set a bundle identifier to the same as Xcode's, weird things happen", which I'm considering a WONTFIX.

Also, to be explicit (and I know this needs to be documented): closed issues on Homebrew don't necessarily mean "this is not a bug" but just "this is not something we plan on addressing in the near future". We're extremely understaffed on Homebrew/brew given the scale of the project, and I need to try to keep the number of issues here manageable and triaged accordingly.
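For what it's worth, reading the version straight from the bundle that matched the identifier is a one-liner; a sketch using standard macOS tools, with the path being whatever the mdfind lookup returned (here, the spoofed bundle from the report):

# Read the bundle's actual marketing version instead of assuming a minimum.
/usr/libexec/PlistBuddy -c "Print :CFBundleShortVersionString" \
  "/Users/Samantha/Desktop/faux-code.app/Contents/Info.plist"

# Equivalent alternative:
defaults read "/Users/Samantha/Desktop/faux-code.app/Contents/Info" CFBundleShortVersionString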
gharchive/issue
2017-01-16T20:08:06
2025-04-01T04:55:10.007730
{ "authors": [ "MikeMcQuaid", "chdiza", "samdmarshall" ], "repo": "Homebrew/brew", "url": "https://github.com/Homebrew/brew/issues/1858", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
216038913
brew tap-new creates a broken Travis config

When a user (like me) runs brew tap-new, a YAML file is created which will cause builds to fail with an error like so:

$ ln -s "$PWD" "$HOMEBREW_TAP_DIR"
ln: /usr/local/Homebrew/Library/Taps/killerswan/homebrew-sample: No such file or directory
The command "ln -s "$PWD" "$HOMEBREW_TAP_DIR"" failed and exited with 1 during .

I believe the code which creates the .travis.yml file is here in tap-new.rb. In that script, $TRAVIS_REPO_SLUG is a path like killerswan/homebrew-sample, and $HOMEBREW_TAP_DIR is a path like /usr/local/Homebrew/Library/Taps/killerswan/homebrew-sample.

If /usr/local/Homebrew/Library/Taps/killerswan doesn't already exist, that call to ln -s is trying to create a link at killerswan/homebrew-sample but cannot find killerswan, so it fails. One workaround is to insert some $(dirname ...) calls.

What directory are you in, as in what is the value of "$PWD"?
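Spelling out the proposed workaround: create the missing parent directory before linking. Whether this belongs in before_install or another phase depends on how the generated .travis.yml is laid out, so treat the YAML below as a sketch.

before_install:
  - mkdir -p "$(dirname "$HOMEBREW_TAP_DIR")"   # ensure .../Taps/killerswan exists
  - ln -s "$PWD" "$HOMEBREW_TAP_DIR"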
gharchive/issue
2017-03-22T12:02:00
2025-04-01T04:55:10.011875
{ "authors": [ "JCount", "killerswan" ], "repo": "Homebrew/brew", "url": "https://github.com/Homebrew/brew/issues/2378", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1069774382
Bypass searching for open Issues when failing to install a formula without a tap

Given a formula with a broken build, if you install it locally with brew install path/to/formula.rb, when the build fails, Homebrew will attempt to search for open issues on the tap; but in this case formula.tap is nil and tap.full_name would raise a NoMethodError, yielding this output:

$ brew install ./erg.rb
Error: Failed to load cask: ./erg.rb
Cask 'erg' is unreadable: wrong constant name #<Class:0x00007fd89b246cd0>
Warning: Treating ./erg.rb as a formula.
==> Downloading https://github.com/square/erg/archive/v1.1.1.tar.gz
Already downloaded: /Users/lail/Library/Caches/Homebrew/downloads/54e3fce84302901d76d50b53aa7fe760cfcf81c9842a232d6b29e771de6ec9c5--erg-1.1.1.tar.gz
Warning: Cannot verify integrity of '54e3fce84302901d76d50b53aa7fe760cfcf81c9842a232d6b29e771de6ec9c5--erg-1.1.1.tar.gz'.
No checksum was provided for this resource.
For your reference, the checksum is:
  sha256 "8dbcff3dfd67b8f6e8f2dfd4f57cf818ce0cd6ce4b52566611e698fc8778507f"
==> go get github.com/square/erg
Last 15 lines from /Users/lail/Library/Logs/Homebrew/erg/01.go:
2021-12-02 16:45:31 +0000
go get github.com/square/erg
go: downloading github.com/square/erg v1.2.1
go: downloading vbom.ml/util v0.0.3
go: downloading vbom.ml/util/sortorder v1.0.2
go: downloading github.com/square/grange v0.0.0-20201015231752-48d66acdd125
go: downloading github.com/deckarep/golang-set v0.0.0-20170202203032-fc8930a5e645
go: downloading github.com/fvbommel/sortorder v1.0.1
go: downloading github.com/orcaman/concurrent-map v0.0.0-20160823150647-8bf1e9bacbf6
github.com/square/erg imports
  vbom.ml/util/sortorder: cannot find module providing package vbom.ml/util/sortorder

Do not report this issue to Homebrew/brew or Homebrew/core!

/usr/local/Homebrew/Library/Homebrew/utils/github.rb:64:in `issues_for_formula': undefined method `full_name' for nil:NilClass (NoMethodError)
  from /usr/local/Homebrew/Library/Homebrew/exceptions.rb:489:in `fetch_issues'
  from /usr/local/Homebrew/Library/Homebrew/exceptions.rb:485:in `issues'
  from /usr/local/Homebrew/Library/Homebrew/exceptions.rb:539:in `dump'
  from /usr/local/Homebrew/Library/Homebrew/brew.rb:155:in `rescue in <main>'
  from /usr/local/Homebrew/Library/Homebrew/brew.rb:143:in `<main>'
/usr/local/Homebrew/Library/Homebrew/formula.rb:2306:in `block in system': Failed executing: go get github.com/square/erg (BuildError)
  from /usr/local/Homebrew/Library/Homebrew/formula.rb:2242:in `open'
  from /usr/local/Homebrew/Library/Homebrew/formula.rb:2242:in `system'
  from /Users/lail/Code/homebrew-formulas/erg.rb:13:in `install'
  from /usr/local/Homebrew/Library/Homebrew/build.rb:172:in `block (3 levels) in install'
  from /usr/local/Homebrew/Library/Homebrew/utils.rb:588:in `with_env'
  from /usr/local/Homebrew/Library/Homebrew/build.rb:134:in `block (2 levels) in install'
  from /usr/local/Homebrew/Library/Homebrew/formula.rb:1297:in `block in brew'
  from /usr/local/Homebrew/Library/Homebrew/formula.rb:2472:in `block (2 levels) in stage'
  from /usr/local/Homebrew/Library/Homebrew/utils.rb:588:in `with_env'
  from /usr/local/Homebrew/Library/Homebrew/formula.rb:2471:in `block in stage'
  from /usr/local/Homebrew/Library/Homebrew/resource.rb:126:in `block (2 levels) in unpack'
  from /usr/local/Homebrew/Library/Homebrew/download_strategy.rb:115:in `chdir'
  from /usr/local/Homebrew/Library/Homebrew/download_strategy.rb:115:in `chdir'
  from /usr/local/Homebrew/Library/Homebrew/download_strategy.rb:102:in `stage'
  from /usr/local/Homebrew/Library/Homebrew/resource.rb:122:in `block in unpack'
  from /usr/local/Homebrew/Library/Homebrew/mktemp.rb:63:in `block in run'
  from /usr/local/Homebrew/Library/Homebrew/mktemp.rb:63:in `chdir'
  from /usr/local/Homebrew/Library/Homebrew/mktemp.rb:63:in `run'
  from /usr/local/Homebrew/Library/Homebrew/resource.rb:208:in `mktemp'
  from /usr/local/Homebrew/Library/Homebrew/resource.rb:121:in `unpack'
  from /usr/local/Homebrew/Library/Homebrew/resource.rb:96:in `stage'
  from /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.8/lib/ruby/2.6.0/forwardable.rb:230:in `stage'
  from /usr/local/Homebrew/Library/Homebrew/formula.rb:2451:in `stage'
  from /usr/local/Homebrew/Library/Homebrew/formula.rb:1290:in `brew'
  from /usr/local/Homebrew/Library/Homebrew/build.rb:129:in `block in install'
  from /usr/local/Homebrew/Library/Homebrew/utils.rb:588:in `with_env'
  from /usr/local/Homebrew/Library/Homebrew/build.rb:124:in `install'
  from /usr/local/Homebrew/Library/Homebrew/build.rb:224:in `<main>'

[x] Have you followed the guidelines in our Contributing document?
[x] Have you checked to ensure there aren't other open Pull Requests for the same change?
[x] Have you added an explanation of what your changes do and why you'd like us to include them?
[ ] Have you written new tests for your changes? Here's an example.
[x] Have you successfully run brew style with your changes locally?
[x] Have you successfully run brew typecheck with your changes locally?
[x] Have you successfully run brew tests with your changes locally?

Semantically, I feel it makes more sense to move the check to whatever is calling this. GitHub.issues_for_formula("hello", tap: nil) sounds wrong to me.

@Bo98, I can see that. Maybe this change instead?

  def fetch_issues
+   return [] unless formula.tap
+
    GitHub.issues_for_formula(formula.name, tap: formula.tap, state: "open")
  rescue GitHub::API::RateLimitExceededError => e
    opoo e.message
    []
  end

LGTM :+1:

Thanks again @boblail!
gharchive/pull-request
2021-12-02T16:55:01
2025-04-01T04:55:10.018749
{ "authors": [ "Bo98", "MikeMcQuaid", "boblail" ], "repo": "Homebrew/brew", "url": "https://github.com/Homebrew/brew/pull/12509", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
734021271
Update display of requirements

[x] Have you followed the guidelines in our Contributing document?
[x] Have you checked to ensure there aren't other open Pull Requests for the same change?
[x] Have you added an explanation of what your changes do and why you'd like us to include them?
[x] Have you written new tests for your changes? Here's an example.
[x] Have you successfully run brew style with your changes locally?
[x] Have you successfully run brew tests with your changes locally?
[x] Have you successfully run brew man locally and committed any changes?

Update how requirements are displayed in brew info [--json], when calling their inspect() methods, and their error messages when they aren't met. I added capitalization and version numbers where possible to distinguish them from formula names at a glance.

Thanks again @EricFromCanada!
gharchive/pull-request
2020-11-01T17:37:22
2025-04-01T04:55:10.022947
{ "authors": [ "EricFromCanada", "MikeMcQuaid" ], "repo": "Homebrew/brew", "url": "https://github.com/Homebrew/brew/pull/9021", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
736500381
Add autoremove

Follow up to #8683

[x] Have you followed the guidelines in our Contributing document?
[ ] Have you checked to ensure there aren't other open Pull Requests for the same change?
[ ] Have you added an explanation of what your changes do and why you'd like us to include them?
[x] Have you written new tests for your changes? Here's an example.
[x] Have you successfully run brew style with your changes locally?
[x] Have you successfully run brew tests with your changes locally?
[ ] Have you successfully run brew man locally and committed any changes?

Is there a mechanism to mark a non-leaf package as installed on request? If not, it would be a good target for a follow-up pull request.

brew install it. Even if it's already installed, it'll be marked as "installed on request".

I was thinking about this as well. How about the reverse? Is there a way to unmark it?

> Is there a way to unmark it?

No, not really. brew uninstall it, let it be pulled in by a brew install or brew upgrade.

(run brew man again and commit the result)

Thanks for getting this over the line, great work @tie624!
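As commands, the workflow discussed above looks like this. The formula name is a placeholder, and --dry-run is how I recall brew autoremove exposing a preview, so double-check brew autoremove --help.

brew install foo          # re-running install marks foo as "installed on request"
brew autoremove --dry-run # preview which unrequested dependencies would be removed
brew autoremove           # actually remove them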
gharchive/pull-request
2020-11-04T23:53:48
2025-04-01T04:55:10.028666
{ "authors": [ "MikeMcQuaid", "tie624" ], "repo": "Homebrew/brew", "url": "https://github.com/Homebrew/brew/pull/9047", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
760309805
Revert "untap: add --force switch" Reverts Homebrew/brew#9422 Review period skipped due to critical label. Review period skipped due to critical label.
gharchive/pull-request
2020-12-09T13:01:50
2025-04-01T04:55:10.030050
{ "authors": [ "BrewTestBot", "MikeMcQuaid" ], "repo": "Homebrew/brew", "url": "https://github.com/Homebrew/brew/pull/9483", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
341681792
Error: undefined method `undent'

Hi, I ran brew cask cleanup and brew cask cleanup. One of them returned an error with undent, and Homebrew asked me to report the error, so here it is :)

Error: undefined method `undent' for #<String:0x0000000102c8a160>
Please report this bug:
  https://github.com/Homebrew/homebrew-bundle/issues/
/usr/local/opt/sonar-runner/.brew/sonar-runner.rb:23:in `caveats'
/usr/local/Homebrew/Library/Homebrew/formula.rb:1591:in `to_hash'
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-bundle/lib/bundle/brew_dumper.rb:89:in `block in formulae_info'
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-bundle/lib/bundle/brew_dumper.rb:89:in `map'
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-bundle/lib/bundle/brew_dumper.rb:89:in `formulae_info'
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-bundle/lib/bundle/brew_dumper.rb:18:in `formulae'
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-bundle/lib/bundle/brew_dumper.rb:40:in `cask_requirements'
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-bundle/lib/bundle/dumper.rb:22:in `dump_brewfile'
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-bundle/lib/bundle/commands/dump.rb:9:in `run'
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-bundle/cmd/brew-bundle.rb:63:in `<top (required)>'
/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
/usr/local/Homebrew/Library/Homebrew/utils.rb:18:in `require?'
/usr/local/Homebrew/Library/Homebrew/brew.rb:106:in `<main>'

BTW, $ brew doctor reports no problems.

Do you mean you ran brew bundle cleanup or brew bundle dump?

Hi, sorry, I don't recall exactly... hampered by the fact that I use a script to update brew regularly...

brew update
brew upgrade
brew cask cleanup
brew cleanup
brew bundle dump --force --file=~/Dropbox/Apps/"$hostname"_Brewfile

Those are the commands I run. One of them resulted in the exception. So it looks like it was $ brew bundle dump, to answer your question. Thanks for answering.

Should be fixed now.
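Context for anyone hitting the same trace: the first frame points at the formula's caveats calling String#undent, a Homebrew string extension that newer Homebrew versions removed in favour of Ruby's squiggly heredoc. A hedged sketch of the usual migration is below; the caveats text is invented for illustration and is not the actual sonar-runner formula.

# Before (breaks once String#undent is gone):
#   def caveats
#     <<-EOS.undent
#       ...
#     EOS
#   end

# After: Ruby's <<~ heredoc strips leading indentation natively.
def caveats
  <<~EOS
    Example caveats text; the real formula's wording will differ.
  EOS
end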
gharchive/issue
2018-07-16T21:15:20
2025-04-01T04:55:10.035909
{ "authors": [ "MikeMcQuaid", "jschank" ], "repo": "Homebrew/homebrew-bundle", "url": "https://github.com/Homebrew/homebrew-bundle/issues/347", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
796180974
Recommended way to have previously installed (outside of bundle) cask applications treated as installed?

My use case is running a Brewfile on a machine that already has some of the cask dependencies installed. Running brew bundle install several times results in output like the following every time:

==> Downloading https://downloads.slack-edge.com/releases/macos/4.12.2/prod/x64/Slack-4.12.2-macOS.dmg
Already downloaded: /Users/simensen/Library/Caches/Homebrew/downloads/427b2130fb060c1fb365430410d4f02271976427a444f21289213a0713894ee9--Slack-4.12.2-macOS.dmg
==> Installing Cask slack
Error: It seems there is already an App at '/Applications/Slack.app'.
==> Purging files for version 4.12.2 of Cask slack
Installing slack has failed!

I get that it is deciding not to overwrite the app. That makes sense. However, it would be nice to have either of these two options at this point:

- Somehow inform brew bundle that this dependency was already installed
- Somehow inform brew bundle install to install it anyway, likely with something like --force?

I tried brew bundle install --force but that did not seem to do what I'd expect. It seems like in #258, brew cask install slack --force would "fix" this in that it would actually replace the already existing installed application. I'd actually be fine with that behavior. However, it doesn't seem like brew bundle install --force is doing that. I'm... not exactly sure what --force means in this context, either, since the help docs are hard for me to understand.

It looks like --force isn't actually doing anything at all for brew bundle install, but I could be reading it incorrectly.

> It looks like --force isn't actually doing anything at all for brew bundle install, but I could be reading it incorrectly.

Correct. This isn't something that's currently supported (as you've noticed) and, given there's a workaround (run the brew install --force command or manually delete it), I'm going to close this out. If you're interested, this does feel like something that would be a nice PR, though!
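For anyone needing the workaround spelled out (cask name taken from the report; verify the --force semantics with brew install --help):

# Either force-adopt the existing app...
brew install --cask slack --force

# ...or remove it and let the Brewfile reinstall it:
rm -rf /Applications/Slack.app && brew bundle install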
gharchive/issue
2021-01-28T16:54:35
2025-04-01T04:55:10.041013
{ "authors": [ "MikeMcQuaid", "simensen" ], "repo": "Homebrew/homebrew-bundle", "url": "https://github.com/Homebrew/homebrew-bundle/issues/893", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
180220002
stone-soup: fix sqlite link error

A kind of hot fix for the sqlite linker error on Sierra (#691), by making it build its own static version of sqlite. It seems to work fine, at least on my local machine. Still, we'd better understand the root cause of the problem.

Merged. Thank you for your contribution to Homebrew!
gharchive/pull-request
2016-09-30T04:58:01
2025-04-01T04:55:10.339037
{ "authors": [ "apjanke", "tomyun" ], "repo": "Homebrew/homebrew-games", "url": "https://github.com/Homebrew/homebrew-games/pull/692", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
79491994
Yarp compilation failed in Yosemite

Hi All! I'm trying to compile yarp for OS X (Yosemite) but I'm having this issue:

ws120582:~ David$ brew install yarp
==> Installing yarp from homebrew/homebrew-x11
==> Downloading https://github.com/robotology/yarp/archive/v2.3.63.tar.gz
Already downloaded: /Library/Caches/Homebrew/yarp-2.3.63.tar.gz
==> cmake -DCMAKE_C_FLAGS_RELEASE= -DCMAKE_CXX_FLAGS_RELEASE= -DCMAKE_INSTALL_PREFIX=/usr/local/Cellar/yarp/2.3.63 -DCMAKE_BUILD_TYPE=Release -DCMAKE_
  linked by target "gyarpmanager" in directory /tmp/yarp20150522-31453-8yzzi8/yarp-2.3.63/src/yarpmanager/gymanager
-- Configuring incomplete, errors occurred!
See also "/tmp/yarp20150522-31453-8yzzi8/yarp-2.3.63/CMakeFiles/CMakeOutput.log".
See also "/tmp/yarp20150522-31453-8yzzi8/yarp-2.3.63/CMakeFiles/CMakeError.log".
READ THIS: https://git.io/brew-troubleshooting
If reporting this issue please do so at (not Homebrew/homebrew):
  https://github.com/homebrew/homebrew-x11/issues

No idea how to make it compile properly. Thanks a lot for the help.
David

Could you post brew gist-logs yarp?

Ping?
gharchive/issue
2015-05-22T15:15:04
2025-04-01T04:55:10.433827
{ "authors": [ "Dazzid", "DomT4", "dunn" ], "repo": "Homebrew/homebrew-x11", "url": "https://github.com/Homebrew/homebrew-x11/issues/76", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
99870570
libewf fails if CLT not installed

brew install --build-from-source libewf is failing with a link error for me on 10.9 and 10.11 w/o CLT installed. Works on a CLT-installed 10.9 box.

https://gist.github.com/53c26e62c1cd57d3023e

/bin/sh ../libtool --tag=CC --mode=link clang -g -O2 -Wall -o ewfacquire byte_size_string.o digest_hash.o device_handle.o ewfacquire.o ewfinput.o ewfoutput.o guid.o imaging_handle.o log_handle.o platform.o process_status.o storage_media_buffer.o ../libodraw/libodraw.la ../libsmdev/libsmdev.la ../libsmraw/libsmraw.la ../libhmac/libhmac.la -lcrypto -ldl ../libcsystem/libcsystem.la ../libcsplit/libcsplit.la ../libcdatetime/libcdatetime.la ../libewf/libewf.la ../libcnotify/libcnotify.la ../libclocale/libclocale.la ../libcerror/libcerror.la ../libcstring/libcstring.la -lbz2
libtool: link: clang -g -O2 -Wall -o .libs/ewfacquire byte_size_string.o digest_hash.o device_handle.o ewfacquire.o ewfinput.o ewfoutput.o guid.o imaging_handle.o log_handle.o platform.o process_status.o storage_media_buffer.o ../libodraw/.libs/libodraw.a ../libsmdev/.libs/libsmdev.a ../libsmraw/.libs/libsmraw.a ../libhmac/.libs/libhmac.a -lcrypto ../libcsystem/.libs/libcsystem.a ../libcsplit/.libs/libcsplit.a ../libcdatetime/.libs/libcdatetime.a ../libewf/.libs/libewf.dylib -L/usr/lib -lz -ldl -lpthread ../libcnotify/.libs/libcnotify.a ../libclocale/.libs/libclocale.a ../libcerror/.libs/libcerror.a ../libcstring/.libs/libcstring.a -lbz2
Undefined symbols for architecture x86_64:
  "_ERR_remove_thread_state", referenced from:
      _libhmac_md5_initialize in libhmac.a(libhmac_md5.o)
      _libhmac_md5_free in libhmac.a(libhmac_md5.o)
      _libhmac_sha1_initialize in libhmac.a(libhmac_sha1.o)
      _libhmac_sha1_free in libhmac.a(libhmac_sha1.o)
      _libhmac_sha256_initialize in libhmac.a(libhmac_sha256.o)
      _libhmac_sha256_free in libhmac.a(libhmac_sha256.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [ewfacquire] Error 1
make: *** [install-recursive] Error 1

I cannot reproduce this issue.
gharchive/issue
2015-08-09T06:33:51
2025-04-01T04:55:10.464454
{ "authors": [ "apjanke", "ilovezfs" ], "repo": "Homebrew/legacy-homebrew", "url": "https://github.com/Homebrew/legacy-homebrew/issues/42682", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
1458312714
Add conda-store https://github.com/Quansight/conda-store Stay with Poetry, which is very mature.
gharchive/issue
2022-11-21T17:24:45
2025-04-01T04:55:10.466984
{ "authors": [ "Hongbo-Miao" ], "repo": "Hongbo-Miao/hongbomiao.com", "url": "https://github.com/Hongbo-Miao/hongbomiao.com/issues/6075", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
165387233
Make the Save (Add, Update) buttons always visible The request is that any button to Save (Add or Update) would always be visible. Users would not need to scroll to the bottom of a page to find the button. This could mean placing a second copy of the button at the top of the screen. The request is to make this a standard across HospitalRun. @jglovier what do you think about this idea? Any thoughts on how we might accomplish this - eg floating button as you scroll, or just a button at the top? We can push the button near the header once the user clicks edit/save. @ShadabQ I'm not sure I follow what you are suggesting. Can you explain further what you mean? @jkleinsc I am suggesting something like this. @ShadabQ I like where you are heading with that concept. I think we could rethink the screen header a bit (as is, that would mean three equal-weight buttons which are not all grouped together), but the idea of solving this problem by putting the button in that header area could work. Honestly, I don't think we need the new patient button in that region anyway. If anything, the action there should probably be view all or something that takes you back to the patient index. Or, the save button could just be the primary header call to action. Agree with @jglovier. New patient is not needed. I really do not think you are going to edit a patient, then move to adding new patients, from a workflow perspective. @jglovier @mnorbeck I agree, seems logical. Additionally, we do have a new patient option in the left-hand menu. Is anybody taking up this issue? If not, I would like to take it up. @tangollama @kartik95 It looks like you are working on this one. Is that right? Just wanted to make sure two people aren't working on the same issue unknowingly. So, what I get is that we need to have the add or update buttons always at the top, and the 'new user' button is not required while editing or creating a user. Right? @ShadabQ @jglovier @kartik95 you may find the suggestions over here https://github.com/HospitalRun/hospitalrun-frontend/pull/656 @kartik95 were you able to complete this one? There have been a couple PRs against this issue and I've come to the realization that this issue really needs some better requirements and/or architectural design before work is started on it. I am pulling the help wanted/hacktoberfest labels off of this issue until we can sort out, architecturally/UX/requirements wise, what we want to do. Please don't work on this issue until this has been clarified. The main problem is that the location @jglovier has suggested to place the buttons is controlled by the "section" template, but placing buttons in there really messes up firing actions on those buttons, because hierarchically speaking the section is at a higher level than the edit screen, so you need some way to push the action down to the controller. Ideally, the "section"/module idea needs to be refactored into components so that we can properly handle actions. Also, the title/button bar probably belongs in the edit-panel/item-listing components as opposed to the section, which would make this problem easier to solve. If I may, I would suggest having that button (or buttons) at the bottom right end of the page, always visible if the page doesn't scroll (i.e. is smaller than the screen). If the page is longer than the screen, the button is visible on load, hides when you start scrolling down, and shows back up when you scroll back up.
This way you can have as many buttons as you need without them being obtrusive. It would also be nice if the showing/hiding of those buttons were slightly but visibly animated in some nice Googley Material style. Plus, if the app is somehow packaged for a touch-screen device, this makes those buttons instantly accessible to a thumb, while still easy to reach for cursor-navigating users.
gharchive/issue
2016-07-13T18:18:52
2025-04-01T04:55:10.492466
{ "authors": [ "Marinlemaignan", "ShadabQ", "jglovier", "jkleinsc", "jscottchapman", "kartik95", "mdarmani", "mnorbeck" ], "repo": "HospitalRun/hospitalrun-frontend", "url": "https://github.com/HospitalRun/hospitalrun-frontend/issues/569", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1420701020
Tenants Describe the feature I would like to use Houdini in the project I am currently working on. The system with the GraphQL API the client would connect to has different tenants. A client must set the header X-Tenant with the ID of the tenant to use a specific tenant when making a request. The tenant the user is using is decided by the URL, as each path is prefixed with /tenant/[tenantID]. This works very nicely with SvelteKit, as every page is simply located under /src/tenant/[tenantID]. However, I do not see any obvious way to pass the tenant ID to the fetchQuery function in client.js. I have thought about a few workarounds, but every one of them has failed, since certain users have access to more than one tenant and they need to be able to switch between tenants within the SPA. This creates a case where Houdini's built-in caching needs to be aware of which tenant the users are requesting data for, so that responses can be cached separately for each tenant. One method I have tried is to get the tenant from the URL in the Variables function and set it as a variable. However, this would mean that some logic needs to be implemented to set the tenant in each GraphQL query that is made, which is repetitive. Additionally, not all queries will have variables, which would mean variables need to be added unnecessarily. Also, in principle, a tenant ID is not really a variable, as it is not passed as a variable to the GraphQL query on the server but handled using different logic. To me, metadata looks like a good match for a location where the tenant could be placed. However, I am using the standard declarative form of defining that a page requests a query, i.e: import type { PageData } from "./$houdini"; export let data: PageData; $: ({ GetEntities } = data); // When using the data in the view. $GetEntities.data?.entities You have a guide on your website where you describe that metadata can be used with the more imperative .fetch() method, but I cannot find any information about the standard declarative form. I have researched your source code to some extent, and to me it looks like this is currently not possible. Also, does the caching mechanism take metadata into account? Because, if not, then using metadata to specify the tenant would not be possible. Preferably, I would like an API where it would be possible to add the tenant ID to the metadata in a load function located in /src/tenant/[tenantID]/+page.ts, as this would mean the tenant would be cascaded to all pages. As a plan B, would it otherwise be possible to have a middleware where a variable could be injected? This would preferably be in /src/tenant/[tenantID]/+page.ts, just as in the case described above. However, a global middleware would be fine if it is invoked on each page change, not just on the server but on the client as well. Such a solution would be okay for me even though a tenant ID is not a variable in principle. I am going to continue investigating your source code and might propose a solution if I find one. Criticality: waiting for it to switch all my projects to Houdini

I researched the code further and found out that only the variables are used when caching, not the metadata. Thus, it seems like some kind of variable middleware would be needed.

Oh, very interesting 🤔 I'll have to think a little bit about how to support this. You're correct that there isn't an immediately obvious solution. Are the IDs globally unique across tenants?
Once we do have a way to point your queries to the right API, we need to make sure that the IDs are unique so the cache doesn't get confused.

The IDs for the entities belonging to a certain tenant are globally unique across tenants in our backend implementation, as they refer to database keys of rows that also each have a foreign key to a tenant. However, I need to support operations that do not include a key, such as listing all entries of a certain entity within the tenant (with or without pagination). In this case, no ID is included as a query variable. Additionally, a popular alternative to the described tenant implementation is to have one database per tenant, which means the entry IDs would not be unique.

I researched the code further and saw that logic exists to only cache variables that are used by the query, so even though I have managed to inject the tenant ID into the variables, it won't be cached. Yet, as I've understood it, we users can pass any arbitrary data as metadata, so if the metadata is cached and can be provided from /src/tenant/[tenantID]/+page.ts, it could be a good match for specifying a tenant ID. I am going to investigate today how metadata could be cached. I would like to propose that two callbacks could be injected: one to add or modify the metadata, and one to compare two metadata objects so that the caching mechanism could differentiate between two metadata instances and cache the results separately. Additionally, since probably only a subset of the library's users will use this functionality, I propose that metadata caching would only be done if a comparator callback has been provided. What do you think about this?

Sorry, I just realized what you are talking about. Since you cache the result on an entry basis, errors would be introduced if you just looked at the IDs of the individual entries without regarding the tenant. Simply "caching the metadata" as I proposed would not work here. In my case, this would not be an issue since the IDs are unique anyway, but it would not work if the tenants were stored in different databases. I guess each entry would have to be identified using a pair containing the tenant ID and the entity ID. Nevertheless, I think being able to inject metadata when using the declarative form would still be a good idea, as it would mean the metadata feature would be accessible with this type of syntax and not just from .fetch().

I have managed to create a solution where it's possible to inject variables through a middleware while ensuring that these extra variables are included in the cache key. You can see it at this link: https://github.com/oborgen/houdini Only packages/houdini-svelte/src/runtime/stores/query.ts and packages/houdini/src/runtime/cache/stuff.ts have been updated. Notice that entries with the same ID in different tenants would probably not work.

@AlecAivazis Have you thought anything more about this? I have a proposal: the file packages/houdini/src/runtime/cache/index.ts currently contains a global instance of Cache. This could be replaced with an object that maps tenants (as strings) to Cache instances. Then, getCache() would have to accept a tenant, which it could use to look up the correct Cache instance. This would ensure isolation between tenants from what I can see. However, each time anything is queried, mutated, subscribed to, etc., the tenant would have to be specified (with a default value, of course, as most users will not have to deal with tenants).
It would be quite nice to have some way to set the tenant once (based on what page is loaded, etc.) instead of having to set it on each request. One way this might be possible would be to use Svelte's setContext/getContext, which would mean users could set the tenant with setContext when a page is loading, and then getCache could retrieve it using getContext so that the right cache could be returned. However, an immediate roadblock to this is that LayoutSessionStore.getConfig calls getCache outside of any component initialization, which causes an exception to be thrown when calling getContext. The configuration could probably be something the Cache instances could share, but I haven't investigated how that currently works. Another potential problem with setContext/getContext might be that many stores seem to be created as global instances, which might cause problems. However, to me, most of them seem to be bound to specific pages, which possibly could make it so the correct tenants would be returned from getContext. What's your opinion on this?

Sorry I haven't responded here @samuel-utbult-oborgen. I've been super busy with work and the 1.0 release. My overall feeling is that your situation is very specific and it's hard to imagine that there would be a universal way to address this as part of houdini's core. I think the best approach would be to put together a custom plugin that does whatever is necessary. One thing you could explore is having a plugin that changes the logic of the generated runtime. That would let you customize the behavior to fit your exact needs. Let me know if that sounds like something you want to explore and we can talk through the options/steps.

It's fine. I have been busy with other things as well. My current workaround is to put data-sveltekit-reload on any link that changes which tenant the user is currently using. This effectively means the cache is cleared when switching tenants. As you write, some kind of plugin could be useful as long as it could override the behavior of getCache to maintain separate cache instances for the different tenants. Do you have any plans to introduce support for plugins to the library? I am not a JavaScript expert, but as far as I understand, patterns like monkey-patching can be used to interfere with the behavior of existing functions. I can look into that when I have time.

Do you have any plans to introduce support for plugins to the library? Yes! In fact, the svelte bindings are one such plugin to the core houdini infrastructure. There are already a few people in the wild who have custom plugins to fit their needs, so it's definitely doable. The only big hurdle is that it's not really documented yet, since I want to reserve the ability to change it, but if this is something you want to explore I can point you in the right direction (or we can chat on discord if that works better for you). In https://github.com/HoudiniGraphql/houdini/tree/main/packages, you can already see a few plugin examples: houdini-svelte houdini-plugin-svelte-global-stores (not released yet)

Thank you. I can look into how those plugins work to see how a tenants plugin could be implemented.

Awesome! For a better overview of all of the hooks that a plugin can implement, here is a diagram that should be up to date: https://app.excalidraw.com/l/5mJ4nb9Inaz/7QsnYkLMbK3
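To make the tenant-to-Cache map proposed above concrete, here is a minimal sketch. It is an illustration only: the Cache import path, the getCache() signature, and the "default" tenant key are assumptions, not Houdini's actual runtime API.

```typescript
// Minimal sketch of the tenant-scoped cache idea discussed above.
// Assumptions (not Houdini's real API): the Cache import path, the
// getCache() signature, and the "default" tenant key are hypothetical.
import { Cache } from './cache'

// One cache instance per tenant, created lazily on first access.
const caches = new Map<string, Cache>()

export function getCache(tenant: string = 'default'): Cache {
  let cache = caches.get(tenant)
  if (cache === undefined) {
    cache = new Cache()
    caches.set(tenant, cache)
  }
  return cache
}
```

A custom plugin (or a monkey-patch, as mentioned above) could then route all cache lookups through such a function, so records from one tenant never collide with records from another even when entity IDs are reused across tenant databases.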
gharchive/issue
2022-10-24T11:46:24
2025-04-01T04:55:10.512027
{ "authors": [ "AlecAivazis", "jycouet", "samuel-utbult-oborgen" ], "repo": "HoudiniGraphql/houdini", "url": "https://github.com/HoudiniGraphql/houdini/issues/637", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
192870305
Report all ajax errors to Sentry This helps us debug errors by including more context about any failed requests. cc @gchomatas @jonathanwgoodwin Decided that I don't actually need this anymore
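Although the author ultimately closed this PR, the pattern it describes is straightforward to sketch. The following is a hypothetical illustration rather than Blazar's actual code; the use of axios and @sentry/browser here is an assumption.

```typescript
// Hypothetical sketch: report every failed ajax request to Sentry,
// attaching request context (URL, method, status) to aid debugging.
// Not Blazar's actual implementation; axios and @sentry/browser assumed.
import axios from 'axios'
import * as Sentry from '@sentry/browser'

axios.interceptors.response.use(
  (response) => response,
  (error) => {
    Sentry.captureException(error, {
      extra: {
        url: error.config?.url,
        method: error.config?.method,
        status: error.response?.status,
      },
    })
    // Re-reject so callers still see and handle the failure themselves.
    return Promise.reject(error)
  },
)
```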
gharchive/pull-request
2016-12-01T15:03:09
2025-04-01T04:55:10.538907
{ "authors": [ "andyhuang91" ], "repo": "HubSpot/Blazar", "url": "https://github.com/HubSpot/Blazar/pull/306", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
300815891
Missing 0.19.0 on docker hub We were working on some integration tests with 0.19.0 and found that there is no release here for the 0.19.0 release: https://hub.docker.com/r/hubspot/singularityservice/tags/ That repo has been very handy for testing things in the past. Should we expect an update now that 0.19.0 is out? Odd, I thought I already pushed those images. Will get those up there in the morning 👍 :+1: thanks so much. Just pushed up the images 👍
gharchive/issue
2018-02-27T22:21:53
2025-04-01T04:55:10.541898
{ "authors": [ "carlf", "ssalinas" ], "repo": "HubSpot/Singularity", "url": "https://github.com/HubSpot/Singularity/issues/1742", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1782832835
Command to run two nodes and a relay fails on second run Overview Attempts to provision the same node twice. Steps to Reproduce Run ❯ kurtosis run . '{"action":"setup_relay","relay":{"name":"btp","links": {"src": "icon", "dst": "eth"},"bridge":"false"}}' --enclave btp twice Expected Behavior Should either only attempt to provision if the node is not already running, or provision new nodes. Device Information Desktop (please complete the following information): OS: MacOS Monterey Version 12.2.1 Additional Context There was an error validating Starlark code Error while validating instruction upload_files(src="github.com/hugobyte/chain-package/services/jvm/icon/static-files/config/", name="config-files-0"). The instruction can be found at github.com/hugobyte/chain-package/services/jvm/icon/src/node-setup/start_icon_node.star[29:22] Caused by: There was an error validating 'upload_files' as artifact name 'config-files-0' already exists There was an error validating Starlark code Error while validating instruction upload_files(src="github.com/hugobyte/chain-package/services/jvm/icon/static-files/contracts/", name="contracts-0"). The instruction can be found at github.com/hugobyte/chain-package/services/jvm/icon/src/node-setup/start_icon_node.star[30:22] Caused by: There was an error validating 'upload_files' as artifact name 'contracts-0' already exists There was an error validating Starlark code Error while validating instruction add_service(name="icon-node-0", config=ServiceConfig(image="iconloop/goloop-icon:v1.3.5", ports={"rpc": PortSpec(number=9080, transport_protocol="TCP", application_protocol="http")}, public_ports={"rpc": PortSpec(number=8090, transport_protocol="TCP", application_protocol="http")}, files={"/goloop/config/": "config-files-0", "/goloop/contracts/": "contracts-0"}, cmd=["/bin/sh", "-c", "/goloop/config/start-icon-0.sh"], env_vars={"GOLOOP_LOG_LEVEL": "trace", "GOLOOP_P2P": ":8080", "GOLOOP_P2P_LISTEN": ":7080", "GOLOOP_RPC_ADDR": ":9080", "ICON_CONFIG": "/goloop/config/icon_config.json"})). The instruction can be found at github.com/hugobyte/chain-package/services/jvm/icon/src/node-setup/start_icon_node.star[55:50] Caused by: There was an error validating 'add_service' as service 'icon-node-0' already exists Error encountered running Starlark code. Hi @CyrusVorwald, running the same command twice without cleaning the Kurtosis engine will result in this error. Currently the implementation requires manually cleaning Kurtosis before executing the kurtosis run command. This will be automated in the next release with CLI integration, where kurtosis clean will execute before kurtosis run. Understood, thanks
gharchive/issue
2023-06-30T16:41:33
2025-04-01T04:55:10.557418
{ "authors": [ "CyrusVorwald", "hemz10" ], "repo": "HugoByte/DIVE", "url": "https://github.com/HugoByte/DIVE/issues/23", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
73903711
Rename NodeJS to Node.js This is seriously not a real issue, but rather a little correction I would like to suggest. Don't take it too seriously ;) The official name is Node.js, not NodeJS. Reference: http://en.wikipedia.org/wiki/Node.js https://nodejs.org/about/ Best, Moritz Good point. I'll tackle this. That was pretty fast! I'll also fix it right away in the German translation.
gharchive/issue
2015-05-07T08:53:12
2025-04-01T04:55:10.560382
{ "authors": [ "HugoGiraudel", "morkro" ], "repo": "HugoGiraudel/sass-guidelines", "url": "https://github.com/HugoGiraudel/sass-guidelines/issues/196", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
54171606
Respond-to mixin suggestion From my experience, on big projects, you may need custom breakpoint. And I don't want to include that in breakpoints map since you're using it only once. For that reason, I think it's maybe better to leave the place for custom breakpoints in the mixin. So if the mixin doesn't recognise breakpoint from the map, it will add breakpoint you passed to media like it is. I changed example so that you can see what i mean. Once again, it's up to you. :) I am not willing to introduce such a thing as is, but I do want to add a warning about custom breakpoints and everything. I'll update the content. :) Great. Glad I could help. :) There. What do you think? :) It's great. Thanks.
gharchive/pull-request
2015-01-13T10:09:02
2025-04-01T04:55:10.562681
{ "authors": [ "HugoGiraudel", "goschevski" ], "repo": "HugoGiraudel/sass-guidelines", "url": "https://github.com/HugoGiraudel/sass-guidelines/pull/78", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
610157543
Question about train-validation split Hi, thank you very much for your work. Mine is just a question for clarification: can you please explain to me how exactly the train-validation split is done? I cannot really understand what is meant in this line https://github.com/HuguesTHOMAS/KPConv/blob/132fdc628fb4850548e931c8b02c6325e7cac85e/datasets/Semantic3D.py#L145 Many thanks! Hi @Esteban-25, I just separate the training dataset in 6 groups of scenes. Each scene is assigned a group idx between 0 and 5. Then you can choose the group used for validation with: https://github.com/HuguesTHOMAS/KPConv/blob/132fdc628fb4850548e931c8b02c6325e7cac85e/datasets/Semantic3D.py#L146 Thank you very much for such a quick answer. Perhaps dumb question, but is there a reason for it be 15 elements in that array? I am just trying to use a custom dataset that has more training files than Semantic3D and am having troubles with running the training file. The error I get leads me back to the line I sent and was wondering if the number of training files had sth to do with the number of elements in the self.all_splits array. There are 15 training files in Semantic3D, so 15 elements in this array. Just add as many elements as necessary to this array, according to the number of training files you have Thanks a lot for the help.
gharchive/issue
2020-04-30T15:37:54
2025-04-01T04:55:10.566676
{ "authors": [ "Esteban-25", "HuguesTHOMAS" ], "repo": "HuguesTHOMAS/KPConv", "url": "https://github.com/HuguesTHOMAS/KPConv/issues/72", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
427881264
Tabular display of provenance information Provenance information is currently shown as nested JSON. This should be rendered as a table. Implemented in fbe3c3c8ea
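For readers interested in how such a view could be built, here is a minimal, hypothetical sketch of one way to turn nested provenance JSON into table rows by flattening it into dotted key paths; the function name, row shape, and example data are illustrative assumptions, not the platform's actual code.

```typescript
// Hypothetical sketch: flatten nested provenance JSON into rows that a
// simple two-column (key/value) table component can render directly.
type Row = { key: string; value: string }

function flattenProvenance(value: unknown, prefix = ''): Row[] {
  if (value === null || typeof value !== 'object') {
    // Leaf: emit one table row with the dotted path and the raw value.
    return [{ key: prefix, value: String(value) }]
  }
  return Object.entries(value as Record<string, unknown>).flatMap(
    ([k, v]) => flattenProvenance(v, prefix ? `${prefix}.${k}` : k),
  )
}

// Example:
// flattenProvenance({ tool: { name: 'NEST', version: '2.14.0' } })
// -> [ { key: 'tool.name', value: 'NEST' },
//      { key: 'tool.version', value: '2.14.0' } ]
```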
gharchive/issue
2019-04-01T19:49:09
2025-04-01T04:55:10.610232
{ "authors": [ "apdavison" ], "repo": "HumanBrainProject/hbp_neuromorphic_platform", "url": "https://github.com/HumanBrainProject/hbp_neuromorphic_platform/issues/31", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1367438524
New 'celltype' suggestions We forward these term suggestions that EBRAINS received from data providers:

| name | suggestedTerminology | interlex ID | Notes | definition |
| --- | --- | --- | --- | --- |
| cortical neuron | cellType | | | |
| hippocampal neuron | cellType | | | A neuron located in the hippocampus |
| layer 6 neuron | cellType | | There is cortical layer 6 in UBERONParcellation | neurons in the cortical layer 6 |
| zinc containing neurons and glia | cellType | | (can be split into two) | Neurons and glia containing zinc. The zinc containing neurons are a subtype of glutamatergic neurons |

@archgogo thank you for these suggestions. @UlrikeS91 , @tgbugs this issue should be reviewed in respect to the discussion in #149 and #257 @archgogo thank you for being patient with this. For cortical neuron, hippocampal neuron, and layer 6 neuron, please use the combination of 'neuron' with the anatomical region/entity you need (e.g., 'neuron' and 'hippocampus'). TODO: add 'zinc releasing neuron' @lzehl, I couldn't find any references for 'zinc containing neuron' or 'zinc releasing neuron' on ontology sites. Should I still add it in 'cellType'? @archgogo yes. Just leave the ontology links empty (null). They are on their way ;) Nonetheless, please provide a definition
gharchive/issue
2022-09-09T07:54:39
2025-04-01T04:55:10.616736
{ "authors": [ "archgogo", "lzehl" ], "repo": "HumanBrainProject/openMINDS_controlledTerms", "url": "https://github.com/HumanBrainProject/openMINDS_controlledTerms/issues/297", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
539159979
Remediate High Protobuf vulnerability Snyk reports the following High severity vulnerability in HumanCellAtlas/data-consumer-vignettes. Please remediate by the end of Q1 Milestone 1. Description com.google.protobuf:protobuf-java Suggested Remediation Upgrade com.google.protobuf:protobuf-java to version 3.4.0 or higher. Details com.google.protobuf:protobuf-java is Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data. Affected versions of this package are vulnerable to Integer Overflow, allowing remote authenticated attackers to cause a heap-based buffer overflow in the serialisation process. PR #83 should remediate this vulnerability; please re-open if not
gharchive/issue
2019-12-17T16:00:20
2025-04-01T04:55:10.619235
{ "authors": [ "Lilalamar", "chmreid" ], "repo": "HumanCellAtlas/data-consumer-vignettes", "url": "https://github.com/HumanCellAtlas/data-consumer-vignettes/issues/79", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
377581483
Internal security review
[x] Deliver Tech stack documentation for each service to Albano
[x] Assessment of the test cases which are run, especially with regards to testing secure coding
[x] Set up Codacy (static code analysis)
[x] Ready teams for security audit
[x] Permissions are removed so that individuals' access is based on the principle of least privilege
[ ] ~Pen tests are run against both API as well as UI components~
[x] Results of the Audit are shared with the team and action items are created:
[x] ~Whitelist accepted URLs to secure bundle_fqids_url input parameter~
DCP Security Tracking Scorecard
11/27/18 Update: Decided to leave the URL input parameter unconstrained to preserve usability. Also identified that there is no practical risk of leaving this parameter unconstrained.
Pen tests awaiting Broad funding proposal re: HCA before moving forward (per David Bernick)
gharchive/issue
2018-11-05T21:03:44
2025-04-01T04:55:10.633579
{ "authors": [ "calvinnhieu", "parthshahva" ], "repo": "HumanCellAtlas/matrix-service", "url": "https://github.com/HumanCellAtlas/matrix-service/issues/124", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
705362006
Unable to use after updating to the new version. After updating to the new version, the resource sites are not displayed (the box in the upper left is empty) and searching for resources does not work (the box in the upper right does not respond). The problem persists even after uninstalling and reinstalling. Same here. On Windows, please clear the AppData\Roaming\zy folder and try again. Deleting the C:\Users\Administrator\AppData\Roaming\zy folder fixes it.
gharchive/issue
2020-09-21T07:32:12
2025-04-01T04:55:10.636484
{ "authors": [ "cuiocean", "diermozu", "easonzhang1992", "jiangfu204" ], "repo": "Hunlongyu/ZY-Player", "url": "https://github.com/Hunlongyu/ZY-Player/issues/231", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
768694975
Korean translation Hello. I'd like to translate this tutorial for my school assignment. Can I simply translate a few things? You don't have to merge. Hello, You can do what you want with these Notebooks, but you should remember to say where the original code is from. Some of the Notebooks you are working on are outdated: number 3 on Pretty Tensor and number 3B on the Layers API. I am hosting the Chinese translation in my own github account, because the original translator suddenly disappeared and I didn't have permission to change the repo when another translator wanted to continue the work. https://github.com/Hvass-Labs/TensorFlow-Tutorials-Chinese I'm not going to merge your pull request. But if you make a github repository in your own account with the Korean translation and name it TensorFlow-Tutorials-Korean, then I can take a look and add a link to the README if the quality looks good. I don't think there's any need to host it in my own github account.
gharchive/issue
2020-12-16T10:34:38
2025-04-01T04:55:10.663013
{ "authors": [ "Hvass-Labs", "woogyeong23" ], "repo": "Hvass-Labs/TensorFlow-Tutorials", "url": "https://github.com/Hvass-Labs/TensorFlow-Tutorials/issues/127", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2610518088
AxiosError: Request failed with status code 404 AxiosError: Request failed with status code 404 at settle (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\axios\dist\node\axios.cjs:2019:12) at IncomingMessage.handleStreamEnd (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\axios\dist\node\axios.cjs:3135:11) at IncomingMessage.emit (node:events:530:35) at endReadableNT (node:internal/streams/readable:1696:12) at process.processTicksAndRejections (node:internal/process/task_queues:82:21) at Axios.request (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\axios\dist\node\axios.cjs:4287:41) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async run (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\koishi-plugin-yesimbot\lib\index.js:113:22) at async C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\koishi-plugin-yesimbot\lib\index.js:563:22 at async next (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\@koishijs\core\lib\index.cjs:824:16) at async next (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\@koishijs\core\lib\index.cjs:824:16) at async next (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\@koishijs\core\lib\index.cjs:824:16) at async next (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\@koishijs\core\lib\index.cjs:824:16) at async Processor._handleMessage (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\@koishijs\core\lib\index.cjs:835:22) Not sure whether this is a network environment issue? The API used is GPT God's GPT-4o-mini. Thanks. What version is this? Please share your API-related configuration. What version is this? 1.4.0 Please share your API-related configuration. Try setting APIType to OpenAI, with https://api.gptgod.online Try setting APIType to OpenAI, with https://api.gptgod.online Now it reports 403 instead 😂 AxiosError: Request failed with status code 403 at settle (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\axios\dist\node\axios.cjs:2019:12) at BrotliDecompress.handleStreamEnd (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\axios\dist\node\axios.cjs:3135:11) at BrotliDecompress.emit (node:events:530:35) at endReadableNT (node:internal/streams/readable:1696:12) at process.processTicksAndRejections (node:internal/process/task_queues:82:21) at Axios.request (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\axios\dist\node\axios.cjs:4287:41) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async run (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\koishi-plugin-yesimbot\lib\index.js:113:22) at async C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\koishi-plugin-yesimbot\lib\index.js:632:22 at async next (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\@koishijs\core\lib\index.cjs:824:16) at async next (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\@koishijs\core\lib\index.cjs:824:16) at async next (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\@koishijs\core\lib\index.cjs:824:16) at async next (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\@koishijs\core\lib\index.cjs:824:16) at async Processor._handleMessage (C:\Users\Xuan\AppData\Roaming\Koishi\Desktop\data\instances\default\node_modules\@koishijs\core\lib\index.cjs:835:22) Try setting APIType to OpenAI, with https://api.gptgod.online It works now. I solved it by changing AIModel to all-lowercase gpt-4o-mini 😂
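For context, here is a hypothetical sketch of the kind of OpenAI-compatible request involved in this thread; the /v1 endpoint path and payload shape are assumptions, not koishi-plugin-yesimbot's actual code. It also illustrates why the fix above works: the model string is sent verbatim, so a gateway that only knows gpt-4o-mini can reject GPT-4o-mini.

```typescript
// Hypothetical sketch of an OpenAI-compatible chat request; the /v1 path
// and payload shape are assumptions, not koishi-plugin-yesimbot's code.
import axios from 'axios'

const client = axios.create({
  baseURL: 'https://api.gptgod.online/v1',
  headers: { Authorization: `Bearer ${process.env.API_KEY ?? ''}` },
})

async function chat(prompt: string): Promise<string> {
  const res = await client.post('/chat/completions', {
    // Model names are matched verbatim by the server, so casing matters:
    // "gpt-4o-mini" may work where "GPT-4o-mini" is rejected.
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: prompt }],
  })
  return res.data.choices[0].message.content
}
```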
gharchive/issue
2024-10-24T05:33:21
2025-04-01T04:55:10.681508
{ "authors": [ "HydroGest", "XUANHLGG" ], "repo": "HydroGest/YesImBot", "url": "https://github.com/HydroGest/YesImBot/issues/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
623322205
Unable to access Product Dashboard, redirecting to login page Unable to access any Product Dashboard created; it always redirects back to the login page and doesn't allow me to log back in. Logs show this error: ERROR c.c.d.rest.RestApiExceptionHandler - Required request body is missing: public java.lang.Iterable<com.capitalone.dashboard.model.PipelineResponse> com.capitalone.dashboard.rest.PipelineController.searchPipelines(com.capitalone.dashboard.request.PipelineSearchRequest) throws com.capitalone.dashboard.misc.HygieiaException Agreed, a fix is being worked on. Expect it sometime next week. Can you please let us know if the fix has been implemented?
gharchive/issue
2020-05-22T16:18:22
2025-04-01T04:55:10.684157
{ "authors": [ "guruprasadsane", "rvema" ], "repo": "Hygieia/Hygieia", "url": "https://github.com/Hygieia/Hygieia/issues/3256", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1885041322
Question about some parameters set in GRUB_CMDLINE. Hello! I tried to build Hyperenclave for an AMD R7-6800H and changed the GRUB_CMDLINE_LINUX parameter to the following: memmap=4G\\\$0x100000000 amd_iommu=off mem_encrypt=off intremap=off no5lvl Despite a crash (general protection fault: 0000 [#1] SMP NOPTI) after the booting process of hyperenclave-driver, I still successfully ran the SGX code you offered in the Docker image with the aid of Hyperenclave. But the fault is still a bit annoying, so I wonder if you can offer me the specific GRUB_CMDLINE_LINUX parameters you are using on AMD processors. I've referred to the version offered in the atc-22 repos, but I don't know why the crash happens a while after setting up Hyperenclave. So I suspect that maybe I've made some mistake in setting GRUB_CMDLINE_LINUX. Besides, I am currently reading the source code of hyperenclave-driver and have some questions that I can't figure out yet. I wonder why we should turn amd_iommu and intremap off, and why we should disable the 5-level page mechanism? Could you please explain the reasons behind this? Looking forward to your reply. Besides, in the hyperenclave-driver project, I came across the following code in the function he_cmd_enable in hyperenclave-driver/driver/main.c. num_iomem = get_iomem_num(); /* * memmap region should be removed from iomem regions, * so the max num of mem_regions is iomem_num + nr_rsrv_mem. */ mem_regions = kvmalloc(sizeof(*mem_regions) * (num_iomem + nr_rsrv_mem), GFP_KERNEL); I am a little confused whether the flag amd_iommu=off or intel_iommu=off has some impact on the function get_iomem_num, therefore triggering a wrong number of mem_regions? int get_iomem_num(void) { int num; struct resource *child; num = 0; child = iomem_resource.child; while (child) { num++; child = child->sibling; } return num; } Looking forward to your reply~ Hello! I tried to build Hyperenclave for an AMD R7-6800H and changed the GRUB_CMDLINE_LINUX parameter to the following: memmap=4G\\\$0x100000000 amd_iommu=off mem_encrypt=off intremap=off no5lvl Despite a crash (general protection fault: 0000 [#1] SMP NOPTI) after the booting process of hyperenclave-driver, I still successfully ran the SGX code you offered in the Docker image with the aid of Hyperenclave. But the fault is still a bit annoying, so I wonder if you can offer me the specific GRUB_CMDLINE_LINUX parameters you are using on AMD processors. I've referred to the version offered in the atc-22 repos, but I don't know why the crash happens a while after setting up Hyperenclave. So I suspect that maybe I've made some mistake in setting GRUB_CMDLINE_LINUX. The GRUB_CMDLINE_LINUX you used is correct. Please provide more log output about the hyperenclave setup and the crash. Besides, I am currently reading the source code of hyperenclave-driver and have some questions that I can't figure out yet. I wonder why we should turn amd_iommu and intremap off, and why we should disable the 5-level page mechanism? Could you please explain the reasons behind this? turn amd_iommu and intremap to off In our design, hyperenclave restricts the physical memory accessed by the peripherals with the support of the IOMMU. Hyperenclave is responsible for IOMMU hardware initialization, so we need to turn amd_iommu and intremap off for the host Linux. disable 5-level page Currently, hyperenclave doesn't support a 5-level page table. Some hardware platforms may use a 5-level page table, such as Intel Ice Lake.
So we just add no5lvl to disable it. Looking forward to your reply. Besides, in the hyperenclave-driver project, I came across the following code in the function he_cmd_enable in hyperenclave-driver/driver/main.c. num_iomem = get_iomem_num(); /* * memmap region should be removed from iomem regions, * so the max num of mem_regions is iomem_num + nr_rsrv_mem. */ mem_regions = kvmalloc(sizeof(*mem_regions) * (num_iomem + nr_rsrv_mem), GFP_KERNEL); I am a little confused whether the flag amd_iommu=off or intel_iommu=off has some impact on the function get_iomem_num, therefore triggering a wrong number of mem_regions? The IOMMU parameter amd_iommu=off has nothing to do with the function get_iomem_num. Function get_iomem_num is used to get the physical memory information. There may be some other issue triggering the crash. As I said above, we need the hyperenclave setup log and the crash log to figure out the reason. int get_iomem_num(void) { int num; struct resource *child; num = 0; child = iomem_resource.child; while (child) { num++; child = child->sibling; } return num; } Looking forward to your reply~ Thank you for your answer and patience! Here is the log from dmesg; it seems that although it crashes at the beginning, it still successfully handled the SGX code somehow: [ 229.301014] hyper_enclave: loading out-of-tree module taints kernel. [ 229.301162] hyper_enclave: module verification failed: signature and/or required key missing - tainting kernel [ 229.312632] HE: cpu_vendor_detect: 39. Vendor ID: AuthenticAMD [ 229.325575] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x0000000000000000 -> 0x000000000009f000], type: System RAM [ 229.325577] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x000000000009f000 -> 0x00000000000c0000], type: Reserved [ 229.325578] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x0000000000100000 -> 0x0000000009b00000], type: System RAM [ 229.325579] HE: get_convertible_memory: 136.
BIOS E820 table from firmware: [0x00000000b2f7f000 -> 0x00000000baf7f000], type: ACPI Non-volatile Storage [ 229.325587] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000baf7f000 -> 0x00000000bafff000], type: ACPI Tables [ 229.325588] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000bafff000 -> 0x00000000bb000000], type: System RAM [ 229.325589] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000bb000000 -> 0x00000000bc000000], type: Reserved [ 229.325589] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000bce00000 -> 0x00000000c0000000], type: Reserved [ 229.325590] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000fde00000 -> 0x00000000fdf00000], type: Reserved [ 229.325590] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000fed80000 -> 0x00000000fed81000], type: Reserved [ 229.325591] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000ff000000 -> 0x0000000100000000], type: Reserved [ 229.325592] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x0000000100000000 -> 0x000000041e300000], type: System RAM [ 229.325593] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x000000041f340000 -> 0x0000000460200000], type: Reserved [ 229.325596] HE: get_convertible_memory: 213. Convertible Memory[ 0]: 0x0000000000000000 -> 0x000000000009f000 [ 229.325596] HE: get_convertible_memory: 213. Convertible Memory[ 1]: 0x0000000000100000 -> 0x0000000009b00000 [ 229.325597] HE: get_convertible_memory: 213. Convertible Memory[ 2]: 0x0000000009e00000 -> 0x0000000009f00000 [ 229.325598] HE: get_convertible_memory: 213. Convertible Memory[ 3]: 0x0000000009f28000 -> 0x00000000a07ff000 [ 229.325599] HE: get_convertible_memory: 213. Convertible Memory[ 4]: 0x00000000a0800000 -> 0x00000000a2364000 [ 229.325599] HE: get_convertible_memory: 213. Convertible Memory[ 5]: 0x00000000a4564000 -> 0x00000000a456d000 [ 229.325600] HE: get_convertible_memory: 213. Convertible Memory[ 6]: 0x00000000a4570000 -> 0x00000000b077f000 [ 229.325600] HE: get_convertible_memory: 213. Convertible Memory[ 7]: 0x00000000bafff000 -> 0x00000000bb000000 [ 229.325601] HE: get_convertible_memory: 213. Convertible Memory[ 8]: 0x0000000100000000 -> 0x000000041e300000 [ 229.325602] HE: get_convertible_memory: 218. Convertible Memory size: 0x3cc4f3000 [ 229.325603] HE: get_valid_rsrv_mem: 285. Reserved Memory[ 0]: 0x100000000 -> 0x200000000 [ 229.325604] HE: get_valid_rsrv_mem: 290. Reserved Memory size: 0x100000000 [ 229.325606] HE: get_sme_mask: 63. CPU does not enable SME [ 229.325649] HE: mem_test: 48. Memory[0x100000000 - 0x200000000] test begin [ 230.196819] HE: mem_test: 78. Memory[0x100000000 - 0x200000000] test pass [ 230.211666] HE: get_hv_heap_size: 375. Hypervisor heap size: 0x43800000 [ 230.211668] HE: get_hv_cmrm_size: 387. Hypervisor cmrm size: 0x62d5000 [ 230.211669] HE: get_hv_frame_size: 400. Hypervisor frame size: 0x1c00000 [ 230.211669] HE: get_hypervisor_size: 413. Hv_core_and_percpu_size: 0xe40000, Hypervisor size: 0x80000000 [ 230.211670] HE: he_cmd_enable: 302. hypervisor size: 0x80000000 [ 230.211672] HE: get_sme_mask: 63. CPU does not enable SME [ 230.394700] HE: he_cmd_enable: 352. config_size: 1860 [ 230.417661] HE: add_epc_pages: 43. total_epc_pages: 0x80000, free_epc_pages: 0x80000 [ 230.417663] HE: init_enclave_page: 317. epc ranges: [0x180000000-0x200000000], 0x80000000 [ 230.417664] HE: init_enclave_page: 333. 
Initialized EPC ranges size: 0x80000000 [ 230.417665] HE: he_cmd_enable: 383. config_header load_addr: 0xffffff0000e40000 [ 230.417695] HE: he_cmd_enable: 404. mem_region load_addr: 0xffffff0000e40124 [ 230.417696] HE: inspect_tpm: 206. using fake tpm [ 230.417697] HE: he_cmd_enable: 411. tpm mmio type=8,size=0 pa=ffffffff [ 230.574385] HE: init_cmrm: 448. Initialize [0x0 -> 0x41e300000]'s CMRM [ 230.574631] HE: he_cmd_enable: 483. The hyperenclave is opening. [ 240.645555] [0] Activating hypervisor on CPU 0... [ 240.645557] [1] Activating hypervisor on CPU 1... [ 240.645558] [2] Activating hypervisor on CPU 2... [ 240.645559] [3] Activating hypervisor on CPU 3... [ 240.645560] [4] Activating hypervisor on CPU 4... [ 240.645561] [5] Activating hypervisor on CPU 5... [ 240.645561] [6] Activating hypervisor on CPU 6... [ 240.645562] [7] Activating hypervisor on CPU 7... [ 240.645563] [8] Activating hypervisor on CPU 8... [ 240.645564] [9] Activating hypervisor on CPU 9... [ 240.645564] [10] Init HHBox log feature ok [ 240.645565] [10] Init HHBox crash feature ok [ 240.645565] [10] tpm_detect starting.... [ 240.645565] [10] fake tpm is detected and initialized [ 240.645566] [10] FAKE TPM: tpm signing key pub x [ 240.645566] [10] C29974C9F1090FA4A10E9990620E91828B593A7211E2468450E3DC96DD5933FB [ 240.645567] [10] size= :0x20 [ 240.645567] [10] FAKE TPM: tpm signing key pub y [ 240.645568] [10] 402206ECCC5479289F33668EAAB85527ABBBB9F7B41CEB71551027D57AF28267 [ 240.645568] [10] size= :0x20 [ 240.645568] [10] FAKE TPM: root secret is generated and sealed [ 240.645569] [10] FAKE TPM: hypervisor AK pub x= [ 240.645569] [10] 3D9BB7BA028C5F97AC5AB1619336D9ED23E86858DDBDC23B510D5F0EBA8FF338 [ 240.645569] [10] size= :0x20 [ 240.645570] [10] FAKE TPM: hypervisor AK pub y= [ 240.645570] [10] 0B28428BDA30B2800FCB032ABCED81071B5F0DCB1A02B22AFF56B7DD22E52522 [ 240.645571] [10] size= :0x20 [ 240.645571] [10] FAKE TPM: hash of he_ak_pub extended to PCR 13: [ 240.645571] [10] AAA056CA1F030B7BD6C4089C2AEEC36D01173B46E0FD2B4C1BD2C14649B66539 [ 240.645572] [10] size= :0x20 [ 240.645572] [10] HyperEnclave: root of trust initialized! [ 240.645572] [10] Activating hypervisor on CPU 10... [ 240.645573] [11] Activating hypervisor on CPU 11... [ 240.645574] [12] Activating hypervisor on CPU 12... [ 240.645575] [13] Activating hypervisor on CPU 13... [ 240.645575] [14] Activating hypervisor on CPU 14... [ 240.645576] [15] Activating hypervisor on CPU 15... 
[ 254.042706] general protection fault: 0000 [#1] SMP NOPTI [ 254.042715] CPU: 0 PID: 1595 Comm: upowerd Tainted: G OE 5.4.0-050400-generic #201911242031 [ 254.042717] Hardware name: HONOR GLO-NX6/GLO-NX6-PCB, BIOS 1.10 06/13/2023 [ 254.042724] RIP: 0010:acpi_ex_system_memory_space_handler+0x239/0x2b5 [ 254.042728] Code: 02 00 00 00 00 41 83 fc 20 74 25 77 12 41 83 fc 08 74 17 41 83 fc 10 75 58 41 0f b7 06 eb 14 41 83 fc 40 75 4c 49 8b 06 eb 09 <41> 0f b6 06 eb 03 41 8b 06 49 89 02 eb 3c 41 83 fc 20 74 2d 77 15 [ 254.042730] RSP: 0018:ffffbbb7c231f8a8 EFLAGS: 00010246 [ 254.042733] RAX: ffffbbb7c03f937e RBX: 00000000fe80037e RCX: 0000000000000080 [ 254.042734] RDX: 00000000fe800400 RSI: 00000000000000f4 RDI: 0000000000000033 [ 254.042735] RBP: ffffbbb7c231f8d8 R08: 0000000000000000 R09: ffff9e4ccfa56b00 [ 254.042736] R10: ffffbbb7c231fa08 R11: ffff9e4cc9c95038 R12: 0000000000000008 [ 254.042737] R13: 0000000000000000 R14: ffffbbb7c03f937e R15: ffff9e4ccfa027e0 [ 254.042739] FS: 00007f90743d1080(0000) GS:ffff9e4cd2400000(0000) knlGS:0000000000000000 [ 254.042740] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 254.042742] CR2: 00007fc1bd15a9dc CR3: 00000003da4fa000 CR4: 0000000000740ef0 [ 254.042743] PKRU: 55555554 [ 254.042744] Call Trace: [ 254.042751] acpi_ev_address_space_dispatch+0x2f7/0x39f [ 254.042754] ? acpi_ex_prep_field_value+0x513/0x513 [ 254.042756] acpi_ex_access_region+0x454/0x4ed [ 254.042759] acpi_ex_field_datum_io+0x18a/0x42d [ 254.042762] acpi_ex_extract_from_field+0xff/0x320 [ 254.042764] ? acpi_ev_acquire_global_lock+0x1de/0x1e6 [ 254.042767] ? acpi_ex_acquire_mutex_object+0x115/0x11f [ 254.042769] acpi_ex_read_data_from_field+0x30f/0x361 [ 254.042771] acpi_ex_resolve_node_to_value+0x3a7/0x4dd [ 254.042773] acpi_ex_resolve_to_value+0x3c3/0x472 [ 254.042776] acpi_ds_evaluate_name_path+0xb1/0x169 [ 254.042779] ? 
acpi_db_single_step+0x1f/0x252 [ 254.042781] acpi_ds_exec_end_op+0x118/0x76b [ 254.042784] acpi_ps_parse_loop+0x84b/0x920 [ 254.042786] acpi_ps_parse_aml+0x1af/0x550 [ 254.042789] acpi_ps_execute_method+0x208/0x2ca [ 254.042791] acpi_ns_evaluate+0x34e/0x4f0 [ 254.042793] acpi_evaluate_object+0x18e/0x3b4 [ 254.042796] acpi_battery_get_state+0x94/0x220 [ 254.042798] acpi_battery_get_property+0x4f/0x3e2 [ 254.042803] power_supply_get_property.part.0+0x15/0x20 [ 254.042805] power_supply_get_property+0x18/0x30 [ 254.042807] power_supply_show_property+0x9d/0x300 [ 254.042811] dev_attr_show+0x1d/0x40 [ 254.042815] sysfs_kf_seq_show+0xa1/0x100 [ 254.042817] kernfs_seq_show+0x27/0x30 [ 254.042820] seq_read+0xdc/0x430 [ 254.042822] kernfs_fop_read+0x35/0x190 [ 254.042826] __vfs_read+0x1b/0x40 [ 254.042828] vfs_read+0xab/0x160 [ 254.042830] ksys_read+0x67/0xe0 [ 254.042832] __x64_sys_read+0x1a/0x20 [ 254.042836] do_syscall_64+0x57/0x190 [ 254.042841] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [ 254.042843] RIP: 0033:0x7f9074f4b3cc [ 254.042846] Code: ec 28 48 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 89 fc ff ff 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 30 44 89 c7 48 89 44 24 08 e8 bf fc ff ff 48 [ 254.042848] RSP: 002b:00007ffcd424da40 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 [ 254.042850] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f9074f4b3cc [ 254.042851] RDX: 0000000000001000 RSI: 000055ab9cd11f00 RDI: 000000000000000a [ 254.042852] RBP: 000055ab9ccdbb30 R08: 0000000000000000 R09: 0000000000001000 [ 254.042853] R10: 000055ab9cc56010 R11: 0000000000000246 R12: 00007ffcd424db70 [ 254.042854] R13: 0000000000000000 R14: 0000000000000000 R15: 000000000000000a [ 254.042856] Modules linked in: sm3_generic hyper_enclave(OE) rfcomm xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c xt_addrtype iptable_filter bpfilter br_netfilter bridge stp llc ccm cmac algif_hash algif_skcipher af_alg overlay bnep kvm_amd ccp joydev kvm irqbypass snd_hda_codec_generic snd_hda_codec_hdmi nls_iso8859_1 snd_hda_intel snd_intel_nhlt rt2800usb snd_hda_codec rt2x00usb rt2800lib snd_hda_core rt2x00lib snd_hwdep crct10dif_pclmul uvcvideo videobuf2_vmalloc ghash_clmulni_intel snd_pcm mac80211 videobuf2_memops videobuf2_v4l2 btusb videobuf2_common btrtl btbcm snd_seq_midi btintel snd_seq_midi_event bluetooth snd_rawmidi cfg80211 aesni_intel snd_seq videodev huawei_wmi crypto_simd snd_seq_device cryptd ecdh_generic ledtrig_audio glue_helper snd_timer mc hid_multitouch libarc4 input_leds ecc sparse_keymap serio_raw wmi_bmof snd soundcore snd_pci_acp3x mac_hid acpi_tad sch_fq_codel [ 254.042902] parport_pc ppdev lp parport ramoops drm reed_solomon efi_pstore ip_tables x_tables autofs4 hid_generic crc32_pclmul nvme i2c_piix4 nvme_core wmi video i2c_hid hid [ 254.042917] ---[ end trace 174b1af698bdf677 ]--- [ 257.158924] RIP: 0010:acpi_ex_system_memory_space_handler+0x239/0x2b5 [ 257.158932] Code: 02 00 00 00 00 41 83 fc 20 74 25 77 12 41 83 fc 08 74 17 41 83 fc 10 75 58 41 0f b7 06 eb 14 41 83 fc 40 75 4c 49 8b 06 eb 09 <41> 0f b6 06 eb 03 41 8b 06 49 89 02 eb 3c 41 83 fc 20 74 2d 77 15 [ 257.158935] RSP: 0018:ffffbbb7c231f8a8 EFLAGS: 00010246 [ 257.158938] RAX: ffffbbb7c03f937e RBX: 00000000fe80037e RCX: 0000000000000080 [ 257.158940] RDX: 00000000fe800400 RSI: 00000000000000f4 RDI: 0000000000000033 [ 257.158941] RBP: ffffbbb7c231f8d8 R08: 0000000000000000 R09: ffff9e4ccfa56b00 [ 
257.158942] R10: ffffbbb7c231fa08 R11: ffff9e4cc9c95038 R12: 0000000000000008 [ 257.158943] R13: 0000000000000000 R14: ffffbbb7c03f937e R15: ffff9e4ccfa027e0 [ 257.158945] FS: 00007f90743d1080(0000) GS:ffff9e4cd2400000(0000) knlGS:0000000000000000 [ 257.158947] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 257.158948] CR2: 00007fc1bd15a9dc CR3: 00000003da4fa000 CR4: 0000000000740ef0 [ 257.158949] PKRU: 55555554 [ 261.125550] [0] [WARN][0] #VMEXIT(NPF) @ NptViolationInfo { [ 261.125551] guest_paddr: 0xfe80037e, [ 261.125552] present: false, [ 261.125552] write: false, [ 261.125552] user_mode: true, [ 261.125553] reserved_bits_used: false, [ 261.125553] execute: false, [ 261.125554] shadow_stack_access: false, [ 261.125554] final_translation: true, [ 261.125554] } RIP(0xffffffff9cbf91e5) [ 261.125555] [0] [WARN][0] #VMEXIT handler returned Err([src/arch/x86_64/amd/vmexit.rs:167:9] Function not implemented): [ 261.125555] VmExitInfo { [ 261.125556] exit_code: Ok( [ 261.125556] NPF, [ 261.125556] ), [ 261.125557] exit_info_1: 0x100000004, [ 261.125557] exit_info_2: 0xfe80037e, [ 261.125558] guest_rip: 0xffffffff9cbf91e5, [ 261.125558] } [ 261.125558] Guest State Dump: [ 261.125559] Vcpu { [ 261.125559] guest_regs: GuestRegisters { [ 261.125559] rax: 0xffffbbb7c03f937e, [ 261.125560] rcx: 0x80, [ 261.125560] rdx: 0xfe800400, [ 261.125560] rbx: 0xfe80037e, [ 261.125561] _unused_rsp: 0x0, [ 261.125561] rbp: 0xffffbbb7c231f8d8, [ 261.125561] rsi: 0xf4, [ 261.125562] rdi: 0x33, [ 261.125562] r8: 0x0, [ 261.125562] r9: 0xffff9e4ccfa56b00, [ 261.125563] r10: 0xffffbbb7c231fa08, [ 261.125563] r11: 0xffff9e4cc9c95038, [ 261.125563] r12: 0x8, [ 261.125564] r13: 0x0, [ 261.125564] r14: 0xffffbbb7c03f937e, [ 261.125564] r15: 0xffff9e4ccfa027e0, [ 261.125565] }, [ 261.125565] rip: 0xffffffff9cbf91e5, [ 261.125565] rsp: 0xffffbbb7c231f8a8, [ 261.125566] rflags: INTERRUPT_FLAG | ZERO_FLAG | PARITY_FLAG | 0x0x2, [ 261.125566] cr0: PROTECTED_MODE_ENABLE | MONITOR_COPROCESSOR | NUMERIC_ERROR | WRITE_PROTECT | ALIGNMENT_MASK | PAGING | 0x0x10, [ 261.125567] cr3: 0x3da4fa000, [ 261.125568] cr4: PAGE_SIZE_EXTENSION | PHYSICAL_ADDRESS_EXTENSION | MACHINE_CHECK_EXCEPTION | PAGE_GLOBAL | OSFXSR | OSXMMEXCPT_ENABLE | USER_MODE_INSTRUCTION_PREVENTION | OSXSAVE | SUPERVISOR_MODE_EXECUTION_PROTECTION | SUPERVISOR_MODE_ACCESS_PREVENTION | PROTECTION_KEY, [ 261.125568] cs: VmcbSegment { [ 261.125569] selector: 0x10, [ 261.125569] attr: 0x29b, [ 261.125569] limit: 0xffffffff, [ 261.125570] base: 0x0, [ 261.125570] }, [ 261.125570] } [ 261.125571] [0] [ERROR][0] Failed to handle VM exit, inject fault to guest... 
[ 261.125571] [src/arch/x86_64/amd/vmexit.rs:167:9] Function not implemented [ 261.125572] [0] [WARN][0] VCPU fault: PerCpu { [ 261.125572] cpu_id: 0x0, [ 261.125572] state: HvEnabled, [ 261.125572] vcpu: Vcpu { [ 261.125573] guest_regs: GuestRegisters { [ 261.125573] rax: 0xffffbbb7c03f937e, [ 261.125574] rcx: 0x80, [ 261.125574] rdx: 0xfe800400, [ 261.125574] rbx: 0xfe80037e, [ 261.125575] _unused_rsp: 0x0, [ 261.125575] rbp: 0xffffbbb7c231f8d8, [ 261.125575] rsi: 0xf4, [ 261.125576] rdi: 0x33, [ 261.125576] r8: 0x0, [ 261.125576] r9: 0xffff9e4ccfa56b00, [ 261.125577] r10: 0xffffbbb7c231fa08, [ 261.125577] r11: 0xffff9e4cc9c95038, [ 261.125577] r12: 0x8, [ 261.125578] r13: 0x0, [ 261.125578] r14: 0xffffbbb7c03f937e, [ 261.125578] r15: 0xffff9e4ccfa027e0, [ 261.125579] }, [ 261.125579] rip: 0xffffffff9cbf91e5, [ 261.125579] rsp: 0xffffbbb7c231f8a8, [ 261.125580] rflags: INTERRUPT_FLAG | ZERO_FLAG | PARITY_FLAG | 0x0x2, [ 261.125580] cr0: PROTECTED_MODE_ENABLE | MONITOR_COPROCESSOR | NUMERIC_ERROR | WRITE_PROTECT | ALIGNMENT_MASK | PAGING | 0x0x10, [ 261.125581] cr3: 0x3da4fa000, [ 261.125582] cr4: PAGE_SIZE_EXTENSION | PHYSICAL_ADDRESS_EXTENSION | MACHINE_CHECK_EXCEPTION | PAGE_GLOBAL | OSFXSR | OSXMMEXCPT_ENABLE | USER_MODE_INSTRUCTION_PREVENTION | OSXSAVE | SUPERVISOR_MODE_EXECUTION_PROTECTION | SUPERVISOR_MODE_ACCESS_PREVENTION | PROTECTION_KEY, [ 261.125582] cs: VmcbSegment { [ 261.125583] selector: 0x10, [ 261.125583] attr: 0x29b, [ 261.125583] limit: 0xffffffff, [ 261.125584] base: 0x0, [ 261.125584] }, [ 261.125584] }, [ 261.125585] enclave_thread: Inactive, [ 261.125585] } [ 472.325810] HE: he_cmd_encl_create: 226. encl: 0xffff9e4cc8094000 [ 472.325837] HE: he_cmd_encl_create: 259. encl: 0xffff9e4cc8094000, encl.start_gva=0x7fd49ec15000, encl_size: 0x1000000 [ 472.427682] HE: shared_memory_destroy: 327. mmu_notifier_unregister [ 472.428057] HE: he_encl_cleanup: 966. nr_free_epc_page: 0x80000, encl: 0xffff9e4cc8094000 [ 875.504580] HE: he_cmd_encl_create: 226. encl: 0xffff9e4c120e4000 [ 875.504594] HE: he_cmd_encl_create: 259. encl: 0xffff9e4c120e4000, encl.start_gva=0x7fb9a4f4e000, encl_size: 0x200000000 [ 917.053466] HE: shared_memory_destroy: 327. mmu_notifier_unregister [ 917.094283] HE: he_encl_cleanup: 966. nr_free_epc_page: 0x80000, encl: 0xffff9e4c120e4000 [ 1097.502777] HE: he_cmd_encl_create: 226. encl: 0xffff9e4c1a218000 [ 1097.502792] HE: he_cmd_encl_create: 259. encl: 0xffff9e4c1a218000, encl.start_gva=0x7f7a57d3e000, encl_size: 0x200000000 [ 1115.746853] HE: shared_memory_destroy: 327. mmu_notifier_unregister [ 1115.788011] HE: he_encl_cleanup: 966. nr_free_epc_page: 0x80000, encl: 0xffff9e4c1a218000 Thank u for your answer and patience! Here is the log in dmesg, it seems that although it crashes at the beginning, it still successfully handled sgx code somehow: [ 229.301014] hyper_enclave: loading out-of-tree module taints kernel. [ 229.301162] hyper_enclave: module verification failed: signature and/or required key missing - tainting kernel [ 229.312632] HE: cpu_vendor_detect: 39. Vendor ID: AuthenticAMD [ 229.325575] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x0000000000000000 -> 0x000000000009f000], type: System RAM [ 229.325577] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x000000000009f000 -> 0x00000000000c0000], type: Reserved [ 229.325578] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x0000000000100000 -> 0x0000000009b00000], type: System RAM [ 229.325579] HE: get_convertible_memory: 136. 
BIOS E820 table from firmware: [0x0000000009b00000 -> 0x0000000009e00000], type: Reserved [ 229.325580] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x0000000009e00000 -> 0x0000000009f00000], type: System RAM [ 229.325580] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x0000000009f00000 -> 0x0000000009f28000], type: ACPI Non-volatile Storage [ 229.325581] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x0000000009f28000 -> 0x00000000a07ff000], type: System RAM [ 229.325582] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000a07ff000 -> 0x00000000a0800000], type: Reserved [ 229.325582] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000a0800000 -> 0x00000000a2364000], type: System RAM [ 229.325583] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000a2364000 -> 0x00000000a4564000], type: Reserved [ 229.325584] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000a4564000 -> 0x00000000a456d000], type: System RAM [ 229.325584] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000a456d000 -> 0x00000000a4570000], type: Reserved [ 229.325585] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000a4570000 -> 0x00000000b077f000], type: System RAM [ 229.325586] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000b077f000 -> 0x00000000b2f7f000], type: Reserved [ 229.325587] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000b2f7f000 -> 0x00000000baf7f000], type: ACPI Non-volatile Storage [ 229.325587] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000baf7f000 -> 0x00000000bafff000], type: ACPI Tables [ 229.325588] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000bafff000 -> 0x00000000bb000000], type: System RAM [ 229.325589] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000bb000000 -> 0x00000000bc000000], type: Reserved [ 229.325589] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000bce00000 -> 0x00000000c0000000], type: Reserved [ 229.325590] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000fde00000 -> 0x00000000fdf00000], type: Reserved [ 229.325590] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000fed80000 -> 0x00000000fed81000], type: Reserved [ 229.325591] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x00000000ff000000 -> 0x0000000100000000], type: Reserved [ 229.325592] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x0000000100000000 -> 0x000000041e300000], type: System RAM [ 229.325593] HE: get_convertible_memory: 136. BIOS E820 table from firmware: [0x000000041f340000 -> 0x0000000460200000], type: Reserved [ 229.325596] HE: get_convertible_memory: 213. Convertible Memory[ 0]: 0x0000000000000000 -> 0x000000000009f000 [ 229.325596] HE: get_convertible_memory: 213. Convertible Memory[ 1]: 0x0000000000100000 -> 0x0000000009b00000 [ 229.325597] HE: get_convertible_memory: 213. Convertible Memory[ 2]: 0x0000000009e00000 -> 0x0000000009f00000 [ 229.325598] HE: get_convertible_memory: 213. Convertible Memory[ 3]: 0x0000000009f28000 -> 0x00000000a07ff000 [ 229.325599] HE: get_convertible_memory: 213. Convertible Memory[ 4]: 0x00000000a0800000 -> 0x00000000a2364000 [ 229.325599] HE: get_convertible_memory: 213. 
Convertible Memory[ 5]: 0x00000000a4564000 -> 0x00000000a456d000 [ 229.325600] HE: get_convertible_memory: 213. Convertible Memory[ 6]: 0x00000000a4570000 -> 0x00000000b077f000 [ 229.325600] HE: get_convertible_memory: 213. Convertible Memory[ 7]: 0x00000000bafff000 -> 0x00000000bb000000 [ 229.325601] HE: get_convertible_memory: 213. Convertible Memory[ 8]: 0x0000000100000000 -> 0x000000041e300000 [ 229.325602] HE: get_convertible_memory: 218. Convertible Memory size: 0x3cc4f3000 [ 229.325603] HE: get_valid_rsrv_mem: 285. Reserved Memory[ 0]: 0x100000000 -> 0x200000000 [ 229.325604] HE: get_valid_rsrv_mem: 290. Reserved Memory size: 0x100000000 [ 229.325606] HE: get_sme_mask: 63. CPU does not enable SME [ 229.325649] HE: mem_test: 48. Memory[0x100000000 - 0x200000000] test begin [ 230.196819] HE: mem_test: 78. Memory[0x100000000 - 0x200000000] test pass [ 230.211666] HE: get_hv_heap_size: 375. Hypervisor heap size: 0x43800000 [ 230.211668] HE: get_hv_cmrm_size: 387. Hypervisor cmrm size: 0x62d5000 [ 230.211669] HE: get_hv_frame_size: 400. Hypervisor frame size: 0x1c00000 [ 230.211669] HE: get_hypervisor_size: 413. Hv_core_and_percpu_size: 0xe40000, Hypervisor size: 0x80000000 [ 230.211670] HE: he_cmd_enable: 302. hypervisor size: 0x80000000 [ 230.211672] HE: get_sme_mask: 63. CPU does not enable SME [ 230.394700] HE: he_cmd_enable: 352. config_size: 1860 [ 230.417661] HE: add_epc_pages: 43. total_epc_pages: 0x80000, free_epc_pages: 0x80000 [ 230.417663] HE: init_enclave_page: 317. epc ranges: [0x180000000-0x200000000], 0x80000000 [ 230.417664] HE: init_enclave_page: 333. Initialized EPC ranges size: 0x80000000 [ 230.417665] HE: he_cmd_enable: 383. config_header load_addr: 0xffffff0000e40000 [ 230.417695] HE: he_cmd_enable: 404. mem_region load_addr: 0xffffff0000e40124 [ 230.417696] HE: inspect_tpm: 206. using fake tpm [ 230.417697] HE: he_cmd_enable: 411. tpm mmio type=8,size=0 pa=ffffffff [ 230.574385] HE: init_cmrm: 448. Initialize [0x0 -> 0x41e300000]'s CMRM [ 230.574631] HE: he_cmd_enable: 483. The hyperenclave is opening. [ 240.645555] [0] Activating hypervisor on CPU 0... [ 240.645557] [1] Activating hypervisor on CPU 1... [ 240.645558] [2] Activating hypervisor on CPU 2... [ 240.645559] [3] Activating hypervisor on CPU 3... [ 240.645560] [4] Activating hypervisor on CPU 4... [ 240.645561] [5] Activating hypervisor on CPU 5... [ 240.645561] [6] Activating hypervisor on CPU 6... [ 240.645562] [7] Activating hypervisor on CPU 7... [ 240.645563] [8] Activating hypervisor on CPU 8... [ 240.645564] [9] Activating hypervisor on CPU 9... [ 240.645564] [10] Init HHBox log feature ok [ 240.645565] [10] Init HHBox crash feature ok [ 240.645565] [10] tpm_detect starting.... 
[ 240.645565] [10] fake tpm is detected and initialized [ 240.645566] [10] FAKE TPM: tpm signing key pub x [ 240.645566] [10] C29974C9F1090FA4A10E9990620E91828B593A7211E2468450E3DC96DD5933FB [ 240.645567] [10] size= :0x20 [ 240.645567] [10] FAKE TPM: tpm signing key pub y [ 240.645568] [10] 402206ECCC5479289F33668EAAB85527ABBBB9F7B41CEB71551027D57AF28267 [ 240.645568] [10] size= :0x20 [ 240.645568] [10] FAKE TPM: root secret is generated and sealed [ 240.645569] [10] FAKE TPM: hypervisor AK pub x= [ 240.645569] [10] 3D9BB7BA028C5F97AC5AB1619336D9ED23E86858DDBDC23B510D5F0EBA8FF338 [ 240.645569] [10] size= :0x20 [ 240.645570] [10] FAKE TPM: hypervisor AK pub y= [ 240.645570] [10] 0B28428BDA30B2800FCB032ABCED81071B5F0DCB1A02B22AFF56B7DD22E52522 [ 240.645571] [10] size= :0x20 [ 240.645571] [10] FAKE TPM: hash of he_ak_pub extended to PCR 13: [ 240.645571] [10] AAA056CA1F030B7BD6C4089C2AEEC36D01173B46E0FD2B4C1BD2C14649B66539 [ 240.645572] [10] size= :0x20 [ 240.645572] [10] HyperEnclave: root of trust initialized! [ 240.645572] [10] Activating hypervisor on CPU 10... [ 240.645573] [11] Activating hypervisor on CPU 11... [ 240.645574] [12] Activating hypervisor on CPU 12... [ 240.645575] [13] Activating hypervisor on CPU 13... [ 240.645575] [14] Activating hypervisor on CPU 14... [ 240.645576] [15] Activating hypervisor on CPU 15... [ 254.042706] general protection fault: 0000 [#1] SMP NOPTI [ 254.042715] CPU: 0 PID: 1595 Comm: upowerd Tainted: G OE 5.4.0-050400-generic #201911242031 [ 254.042717] Hardware name: HONOR GLO-NX6/GLO-NX6-PCB, BIOS 1.10 06/13/2023 [ 254.042724] RIP: 0010:acpi_ex_system_memory_space_handler+0x239/0x2b5 [ 254.042728] Code: 02 00 00 00 00 41 83 fc 20 74 25 77 12 41 83 fc 08 74 17 41 83 fc 10 75 58 41 0f b7 06 eb 14 41 83 fc 40 75 4c 49 8b 06 eb 09 <41> 0f b6 06 eb 03 41 8b 06 49 89 02 eb 3c 41 83 fc 20 74 2d 77 15 [ 254.042730] RSP: 0018:ffffbbb7c231f8a8 EFLAGS: 00010246 [ 254.042733] RAX: ffffbbb7c03f937e RBX: 00000000fe80037e RCX: 0000000000000080 [ 254.042734] RDX: 00000000fe800400 RSI: 00000000000000f4 RDI: 0000000000000033 [ 254.042735] RBP: ffffbbb7c231f8d8 R08: 0000000000000000 R09: ffff9e4ccfa56b00 [ 254.042736] R10: ffffbbb7c231fa08 R11: ffff9e4cc9c95038 R12: 0000000000000008 [ 254.042737] R13: 0000000000000000 R14: ffffbbb7c03f937e R15: ffff9e4ccfa027e0 [ 254.042739] FS: 00007f90743d1080(0000) GS:ffff9e4cd2400000(0000) knlGS:0000000000000000 [ 254.042740] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 254.042742] CR2: 00007fc1bd15a9dc CR3: 00000003da4fa000 CR4: 0000000000740ef0 [ 254.042743] PKRU: 55555554 [ 254.042744] Call Trace: [ 254.042751] acpi_ev_address_space_dispatch+0x2f7/0x39f [ 254.042754] ? acpi_ex_prep_field_value+0x513/0x513 [ 254.042756] acpi_ex_access_region+0x454/0x4ed [ 254.042759] acpi_ex_field_datum_io+0x18a/0x42d [ 254.042762] acpi_ex_extract_from_field+0xff/0x320 [ 254.042764] ? acpi_ev_acquire_global_lock+0x1de/0x1e6 [ 254.042767] ? acpi_ex_acquire_mutex_object+0x115/0x11f [ 254.042769] acpi_ex_read_data_from_field+0x30f/0x361 [ 254.042771] acpi_ex_resolve_node_to_value+0x3a7/0x4dd [ 254.042773] acpi_ex_resolve_to_value+0x3c3/0x472 [ 254.042776] acpi_ds_evaluate_name_path+0xb1/0x169 [ 254.042779] ? 
acpi_db_single_step+0x1f/0x252 [ 254.042781] acpi_ds_exec_end_op+0x118/0x76b [ 254.042784] acpi_ps_parse_loop+0x84b/0x920 [ 254.042786] acpi_ps_parse_aml+0x1af/0x550 [ 254.042789] acpi_ps_execute_method+0x208/0x2ca [ 254.042791] acpi_ns_evaluate+0x34e/0x4f0 [ 254.042793] acpi_evaluate_object+0x18e/0x3b4 [ 254.042796] acpi_battery_get_state+0x94/0x220 [ 254.042798] acpi_battery_get_property+0x4f/0x3e2 [ 254.042803] power_supply_get_property.part.0+0x15/0x20 [ 254.042805] power_supply_get_property+0x18/0x30 [ 254.042807] power_supply_show_property+0x9d/0x300 [ 254.042811] dev_attr_show+0x1d/0x40 [ 254.042815] sysfs_kf_seq_show+0xa1/0x100 [ 254.042817] kernfs_seq_show+0x27/0x30 [ 254.042820] seq_read+0xdc/0x430 [ 254.042822] kernfs_fop_read+0x35/0x190 [ 254.042826] __vfs_read+0x1b/0x40 [ 254.042828] vfs_read+0xab/0x160 [ 254.042830] ksys_read+0x67/0xe0 [ 254.042832] __x64_sys_read+0x1a/0x20 [ 254.042836] do_syscall_64+0x57/0x190 [ 254.042841] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [ 254.042843] RIP: 0033:0x7f9074f4b3cc [ 254.042846] Code: ec 28 48 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 89 fc ff ff 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 30 44 89 c7 48 89 44 24 08 e8 bf fc ff ff 48 [ 254.042848] RSP: 002b:00007ffcd424da40 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 [ 254.042850] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f9074f4b3cc [ 254.042851] RDX: 0000000000001000 RSI: 000055ab9cd11f00 RDI: 000000000000000a [ 254.042852] RBP: 000055ab9ccdbb30 R08: 0000000000000000 R09: 0000000000001000 [ 254.042853] R10: 000055ab9cc56010 R11: 0000000000000246 R12: 00007ffcd424db70 [ 254.042854] R13: 0000000000000000 R14: 0000000000000000 R15: 000000000000000a [ 254.042856] Modules linked in: sm3_generic hyper_enclave(OE) rfcomm xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c xt_addrtype iptable_filter bpfilter br_netfilter bridge stp llc ccm cmac algif_hash algif_skcipher af_alg overlay bnep kvm_amd ccp joydev kvm irqbypass snd_hda_codec_generic snd_hda_codec_hdmi nls_iso8859_1 snd_hda_intel snd_intel_nhlt rt2800usb snd_hda_codec rt2x00usb rt2800lib snd_hda_core rt2x00lib snd_hwdep crct10dif_pclmul uvcvideo videobuf2_vmalloc ghash_clmulni_intel snd_pcm mac80211 videobuf2_memops videobuf2_v4l2 btusb videobuf2_common btrtl btbcm snd_seq_midi btintel snd_seq_midi_event bluetooth snd_rawmidi cfg80211 aesni_intel snd_seq videodev huawei_wmi crypto_simd snd_seq_device cryptd ecdh_generic ledtrig_audio glue_helper snd_timer mc hid_multitouch libarc4 input_leds ecc sparse_keymap serio_raw wmi_bmof snd soundcore snd_pci_acp3x mac_hid acpi_tad sch_fq_codel [ 254.042902] parport_pc ppdev lp parport ramoops drm reed_solomon efi_pstore ip_tables x_tables autofs4 hid_generic crc32_pclmul nvme i2c_piix4 nvme_core wmi video i2c_hid hid [ 254.042917] ---[ end trace 174b1af698bdf677 ]--- [ 257.158924] RIP: 0010:acpi_ex_system_memory_space_handler+0x239/0x2b5 [ 257.158932] Code: 02 00 00 00 00 41 83 fc 20 74 25 77 12 41 83 fc 08 74 17 41 83 fc 10 75 58 41 0f b7 06 eb 14 41 83 fc 40 75 4c 49 8b 06 eb 09 <41> 0f b6 06 eb 03 41 8b 06 49 89 02 eb 3c 41 83 fc 20 74 2d 77 15 [ 257.158935] RSP: 0018:ffffbbb7c231f8a8 EFLAGS: 00010246 [ 257.158938] RAX: ffffbbb7c03f937e RBX: 00000000fe80037e RCX: 0000000000000080 [ 257.158940] RDX: 00000000fe800400 RSI: 00000000000000f4 RDI: 0000000000000033 [ 257.158941] RBP: ffffbbb7c231f8d8 R08: 0000000000000000 R09: ffff9e4ccfa56b00 [ 
257.158942] R10: ffffbbb7c231fa08 R11: ffff9e4cc9c95038 R12: 0000000000000008 [ 257.158943] R13: 0000000000000000 R14: ffffbbb7c03f937e R15: ffff9e4ccfa027e0 [ 257.158945] FS: 00007f90743d1080(0000) GS:ffff9e4cd2400000(0000) knlGS:0000000000000000 [ 257.158947] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 257.158948] CR2: 00007fc1bd15a9dc CR3: 00000003da4fa000 CR4: 0000000000740ef0 [ 257.158949] PKRU: 55555554 [ 261.125550] [0] [WARN][0] #VMEXIT(NPF) @ NptViolationInfo { [ 261.125551] guest_paddr: 0xfe80037e, [ 261.125552] present: false, [ 261.125552] write: false, [ 261.125552] user_mode: true, [ 261.125553] reserved_bits_used: false, [ 261.125553] execute: false, [ 261.125554] shadow_stack_access: false, [ 261.125554] final_translation: true, [ 261.125554] } RIP(0xffffffff9cbf91e5) [ 261.125555] [0] [WARN][0] #VMEXIT handler returned Err([src/arch/x86_64/amd/vmexit.rs:167:9] Function not implemented): [ 261.125555] VmExitInfo { [ 261.125556] exit_code: Ok( [ 261.125556] NPF, [ 261.125556] ), [ 261.125557] exit_info_1: 0x100000004, [ 261.125557] exit_info_2: 0xfe80037e, [ 261.125558] guest_rip: 0xffffffff9cbf91e5, [ 261.125558] } [ 261.125558] Guest State Dump: [ 261.125559] Vcpu { [ 261.125559] guest_regs: GuestRegisters { [ 261.125559] rax: 0xffffbbb7c03f937e, [ 261.125560] rcx: 0x80, [ 261.125560] rdx: 0xfe800400, [ 261.125560] rbx: 0xfe80037e, [ 261.125561] _unused_rsp: 0x0, [ 261.125561] rbp: 0xffffbbb7c231f8d8, [ 261.125561] rsi: 0xf4, [ 261.125562] rdi: 0x33, [ 261.125562] r8: 0x0, [ 261.125562] r9: 0xffff9e4ccfa56b00, [ 261.125563] r10: 0xffffbbb7c231fa08, [ 261.125563] r11: 0xffff9e4cc9c95038, [ 261.125563] r12: 0x8, [ 261.125564] r13: 0x0, [ 261.125564] r14: 0xffffbbb7c03f937e, [ 261.125564] r15: 0xffff9e4ccfa027e0, [ 261.125565] }, [ 261.125565] rip: 0xffffffff9cbf91e5, [ 261.125565] rsp: 0xffffbbb7c231f8a8, [ 261.125566] rflags: INTERRUPT_FLAG | ZERO_FLAG | PARITY_FLAG | 0x0x2, [ 261.125566] cr0: PROTECTED_MODE_ENABLE | MONITOR_COPROCESSOR | NUMERIC_ERROR | WRITE_PROTECT | ALIGNMENT_MASK | PAGING | 0x0x10, [ 261.125567] cr3: 0x3da4fa000, [ 261.125568] cr4: PAGE_SIZE_EXTENSION | PHYSICAL_ADDRESS_EXTENSION | MACHINE_CHECK_EXCEPTION | PAGE_GLOBAL | OSFXSR | OSXMMEXCPT_ENABLE | USER_MODE_INSTRUCTION_PREVENTION | OSXSAVE | SUPERVISOR_MODE_EXECUTION_PROTECTION | SUPERVISOR_MODE_ACCESS_PREVENTION | PROTECTION_KEY, [ 261.125568] cs: VmcbSegment { [ 261.125569] selector: 0x10, [ 261.125569] attr: 0x29b, [ 261.125569] limit: 0xffffffff, [ 261.125570] base: 0x0, [ 261.125570] }, [ 261.125570] } [ 261.125571] [0] [ERROR][0] Failed to handle VM exit, inject fault to guest... 
[ 261.125571] [src/arch/x86_64/amd/vmexit.rs:167:9] Function not implemented [ 261.125572] [0] [WARN][0] VCPU fault: PerCpu { [ 261.125572] cpu_id: 0x0, [ 261.125572] state: HvEnabled, [ 261.125572] vcpu: Vcpu { [ 261.125573] guest_regs: GuestRegisters { [ 261.125573] rax: 0xffffbbb7c03f937e, [ 261.125574] rcx: 0x80, [ 261.125574] rdx: 0xfe800400, [ 261.125574] rbx: 0xfe80037e, [ 261.125575] _unused_rsp: 0x0, [ 261.125575] rbp: 0xffffbbb7c231f8d8, [ 261.125575] rsi: 0xf4, [ 261.125576] rdi: 0x33, [ 261.125576] r8: 0x0, [ 261.125576] r9: 0xffff9e4ccfa56b00, [ 261.125577] r10: 0xffffbbb7c231fa08, [ 261.125577] r11: 0xffff9e4cc9c95038, [ 261.125577] r12: 0x8, [ 261.125578] r13: 0x0, [ 261.125578] r14: 0xffffbbb7c03f937e, [ 261.125578] r15: 0xffff9e4ccfa027e0, [ 261.125579] }, [ 261.125579] rip: 0xffffffff9cbf91e5, [ 261.125579] rsp: 0xffffbbb7c231f8a8, [ 261.125580] rflags: INTERRUPT_FLAG | ZERO_FLAG | PARITY_FLAG | 0x0x2, [ 261.125580] cr0: PROTECTED_MODE_ENABLE | MONITOR_COPROCESSOR | NUMERIC_ERROR | WRITE_PROTECT | ALIGNMENT_MASK | PAGING | 0x0x10, [ 261.125581] cr3: 0x3da4fa000, [ 261.125582] cr4: PAGE_SIZE_EXTENSION | PHYSICAL_ADDRESS_EXTENSION | MACHINE_CHECK_EXCEPTION | PAGE_GLOBAL | OSFXSR | OSXMMEXCPT_ENABLE | USER_MODE_INSTRUCTION_PREVENTION | OSXSAVE | SUPERVISOR_MODE_EXECUTION_PROTECTION | SUPERVISOR_MODE_ACCESS_PREVENTION | PROTECTION_KEY, [ 261.125582] cs: VmcbSegment { [ 261.125583] selector: 0x10, [ 261.125583] attr: 0x29b, [ 261.125583] limit: 0xffffffff, [ 261.125584] base: 0x0, [ 261.125584] }, [ 261.125584] }, [ 261.125585] enclave_thread: Inactive, [ 261.125585] } [ 472.325810] HE: he_cmd_encl_create: 226. encl: 0xffff9e4cc8094000 [ 472.325837] HE: he_cmd_encl_create: 259. encl: 0xffff9e4cc8094000, encl.start_gva=0x7fd49ec15000, encl_size: 0x1000000 [ 472.427682] HE: shared_memory_destroy: 327. mmu_notifier_unregister [ 472.428057] HE: he_encl_cleanup: 966. nr_free_epc_page: 0x80000, encl: 0xffff9e4cc8094000 [ 875.504580] HE: he_cmd_encl_create: 226. encl: 0xffff9e4c120e4000 [ 875.504594] HE: he_cmd_encl_create: 259. encl: 0xffff9e4c120e4000, encl.start_gva=0x7fb9a4f4e000, encl_size: 0x200000000 [ 917.053466] HE: shared_memory_destroy: 327. mmu_notifier_unregister [ 917.094283] HE: he_encl_cleanup: 966. nr_free_epc_page: 0x80000, encl: 0xffff9e4c120e4000 [ 1097.502777] HE: he_cmd_encl_create: 226. encl: 0xffff9e4c1a218000 [ 1097.502792] HE: he_cmd_encl_create: 259. encl: 0xffff9e4c1a218000, encl.start_gva=0x7f7a57d3e000, encl_size: 0x200000000 [ 1115.746853] HE: shared_memory_destroy: 327. mmu_notifier_unregister [ 1115.788011] HE: he_encl_cleanup: 966. nr_free_epc_page: 0x80000, encl: 0xffff9e4c1a218000 It seems the normal vm access the memory with no npt mapping and then trigger the general protection fault. Please provide the physical memory layout of your machine with the following command: sudo cat /proc/iomem Thanks. Thank u 4 u reply~ The physical memory layout is shown below. 
00000000-00000fff : Reserved 00001000-0009efff : System RAM 0009f000-000bffff : Reserved 000a0000-000bffff : PCI Bus 0000:00 000c0000-000c3fff : PCI Bus 0000:00 000c4000-000c7fff : PCI Bus 0000:00 000c8000-000cbfff : PCI Bus 0000:00 000cc000-000cffff : PCI Bus 0000:00 000d0000-000d3fff : PCI Bus 0000:00 000d4000-000d7fff : PCI Bus 0000:00 000d8000-000dbfff : PCI Bus 0000:00 000dc000-000dffff : PCI Bus 0000:00 000e0000-000e3fff : PCI Bus 0000:00 000e4000-000e7fff : PCI Bus 0000:00 000e8000-000ebfff : PCI Bus 0000:00 000ec000-000effff : PCI Bus 0000:00 000f0000-000fffff : System ROM 00100000-09afffff : System RAM 09b00000-09dfffff : Reserved 09e00000-09efffff : System RAM 09f00000-09f27fff : ACPI Non-volatile Storage 09f28000-a07fefff : System RAM a07ff000-a07fffff : Reserved a0800000-a0d1c017 : System RAM a0d1c018-a0d26e57 : System RAM a0d26e58-a2363fff : System RAM a2364000-a4563fff : Reserved a4564000-a456cfff : System RAM a456d000-a456ffff : Reserved a4570000-b077efff : System RAM b077f000-b2f7efff : Reserved b1677000-b16c1fff : AMDI0100:00 b2ed2000-b2ed5fff : MSFT0101:00 b2ed2000-b2ed5fff : MSFT0101:00 b2ed6000-b2ed9fff : MSFT0101:00 b2ed6000-b2ed9fff : MSFT0101:00 b2f7f000-baf7efff : ACPI Non-volatile Storage baf7f000-baffefff : ACPI Tables bafff000-baffffff : System RAM bb000000-bbffffff : Reserved bce00000-bfffffff : Reserved c0000000-dfffffff : PCI Bus 0000:00 c0000000-c01fffff : PCI Bus 0000:01 c0000000-c01fffff : 0000:01:00.0 c0200000-c04fffff : PCI Bus 0000:04 c0200000-c02fffff : 0000:04:00.0 c0200000-c02fffff : xhci-hcd c0300000-c03fffff : 0000:04:00.3 c0300000-c03fffff : xhci-hcd c0400000-c04fffff : 0000:04:00.4 c0400000-c04fffff : xhci-hcd c0500000-c08fffff : PCI Bus 0000:03 c0500000-c05fffff : 0000:03:00.3 c0500000-c05fffff : xhci-hcd c0600000-c06fffff : 0000:03:00.4 c0600000-c06fffff : xhci-hcd c0700000-c07fffff : 0000:03:00.2 c0800000-c087ffff : 0000:03:00.0 c0880000-c08bffff : 0000:03:00.5 c08c0000-c08c7fff : 0000:03:00.6 c08c0000-c08c7fff : ICH HD audio c08c8000-c08cbfff : 0000:03:00.1 c08c8000-c08cbfff : ICH HD audio c08cc000-c08cdfff : 0000:03:00.2 c0900000-c09fffff : PCI Bus 0000:02 c0900000-c0903fff : 0000:02:00.0 c0900000-c0903fff : nvme f0000000-fdc00000 : PCI Bus 0000:00 fde00000-fdefffff : Reserved fde00000-fdefffff : pnp 00:00 fde10510-fde1053f : MSFT0101:00 fec00000-fec003ff : IOAPIC 0 fec01000-fec013ff : IOAPIC 1 fed00000-fed003ff : HPET 0 fed00000-fed003ff : PNP0103:00 fed45000-fed814ff : PCI Bus 0000:00 fed80000-fed80fff : Reserved fed81500-fed818ff : AMDI0030:00 fed81900-fed81fff : PCI Bus 0000:00 fedc0000-fedc0fff : PCI Bus 0000:00 fedc2000-fedc2fff : AMDI0010:00 fedc2000-fedc2fff : AMDI0010:00 fedc6000-fedc6fff : PCI Bus 0000:00 fee00000-fee00fff : Local APIC fee00000-fee00fff : pnp 00:00 ff000000-1ffffffff : Reserved 200000000-41e2fffff : System RAM 2cfe00000-2d0c00e90 : Kernel code 2d0c00e91-2d1655dff : Kernel data 2d1913000-2d1dfffff : Kernel bss 41e300000-41f33ffff : RAM buffer 41f340000-4601fffff : Reserved 460200000-7effffffff : PCI Bus 0000:00 7ee0000000-7ef01fffff : PCI Bus 0000:03 7ee0000000-7eefffffff : 0000:03:00.0 7ee0000000-7ee0bf3fff : efifb 7ef0000000-7ef01fffff : 0000:03:00.0 BTW: I am running hyperenclave in a machine with dual system, I don't know whether this might have some impact on the result especially for the crash, though it sounds so bizarre. Looking forward to your reply~ The physical memory 0xfe800400 accessed by the power management is not in the e820 table. 
It may be a BIOS bug; I suggest you check whether there is a newer BIOS and upgrade the BIOS to the latest version. Thank you for your reply! So maybe there is something wrong with the BIOS of my HONOR MagicBook 14. LMAO. Besides, the fault only occurs when starting HyperEnclave, right? Yes. The bug only occurs when I start HyperEnclave. Hi Unik-lif, about the crash issue, we propose another solution, which is to add mappings for the accessed memory that is not in the e820 table. You can use the hyperenclave patch 0001-Support-handling-NPF-for-linux-vm.patch to confirm whether the patch solves the issue. Looking forward to your test result. Thanks. OK, I have been a bit busy recently; as soon as I get some spare time, I will reply. Hi @Unik-lif, this repository has been archived and moved to a new location: https://github.com/asterinas/hyperenclave. If you still have this issue, please copy it there: https://github.com/asterinas/hyperenclave/issues and discuss it there.
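As background on the proposed patch mentioned above (0001-Support-handling-NPF-for-linux-vm.patch): the crash happens because the guest touches a physical address (0xfe800400) that firmware never reported in the e820 table, so the hypervisor has no NPT mapping for it and the unhandled #VMEXIT(NPF) turns into a fault injected back into the guest. Below is a minimal sketch of the fallback idea. HyperEnclave itself is written in Rust, and none of these names (in_e820, npt_map_identity, the flag constant) come from the actual patch; this is an illustrative C++ sketch only.

```cpp
#include <cstdint>
#include <vector>

struct Range { uint64_t start, end; };  // half-open [start, end), derived from e820

// True if the faulting guest-physical page lies inside any e820 region.
bool in_e820(const std::vector<Range>& e820, uint64_t pa) {
    for (const auto& r : e820)
        if (pa >= r.start && pa < r.end) return true;
    return false;
}

// Hypothetical NPF fallback: identity-map firmware/MMIO pages absent from
// e820 instead of failing with "Function not implemented".
int handle_npf(const std::vector<Range>& e820, uint64_t guest_paddr,
               int (*npt_map_identity)(uint64_t page, uint64_t flags)) {
    const uint64_t page = guest_paddr & ~0xFFFULL;         // align to 4 KiB
    if (!in_e820(e820, page))
        return npt_map_identity(page, /*read|write, uncached*/ 0x3);
    return -1;  // normal RAM should already be mapped; treat as a real fault
}
```

The design point is that ACPI/power-management regions are legitimately accessed by the guest even though e820 never lists them, so mapping them on demand and retrying the instruction is friendlier than injecting a general protection fault.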
gharchive/issue
2023-09-07T03:11:58
2025-04-01T04:55:10.713182
{ "authors": [ "Bonjourz", "Unik-lif", "cz-chenzhou" ], "repo": "HyperEnclave/hyperenclave", "url": "https://github.com/HyperEnclave/hyperenclave/issues/6", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
679803525
Infrequent NoSuchMethodError I'm running MC 1.16.2 and eldritch-mobs-1.3.1 and things will seem to be running OK and I do get "eldritch" mobs spawning... but every once in a while I get a crash like this: java.lang.NoSuchMethodError: net.minecraft.class_1299.method_5899(Lnet/minecraft/class_1937;Lnet/minecraft/class_2487;Lnet/minecraft/class_2561;Lnet/minecraft/class_1657;Lnet/minecraft/class_2338;Lnet/minecraft/class_3730;ZZ)Lnet/minecraft/class_1297; at net.hyper_pigeon.eldritch_mobs.mod_components.modifiers.DuplicatorComponent.useAbility(DuplicatorComponent.java:27) at net.hyper_pigeon.eldritch_mobs.mod_components.modifiers.ModifierComponent.useAbility(ModifierComponent.java:259) at net.hyper_pigeon.eldritch_mobs.EldritchMobsMod.useAbility(EldritchMobsMod.java:37) at net.minecraft.class_1308.handler$zfk000$ability_try(class_1308.java:1827) at net.minecraft.class_1308.method_5773(class_1308.java) at net.minecraft.class_3218.method_18762(class_3218.java:616) at net.minecraft.class_1937.method_18472(class_1937.java:561) at net.minecraft.class_3218.method_18765(class_3218.java:406) at net.minecraft.server.MinecraftServer.method_3813(MinecraftServer.java:868) at net.minecraft.server.MinecraftServer.method_3748(MinecraftServer.java:808) at net.minecraft.class_1132.method_3748(class_1132.java:92) at net.minecraft.server.MinecraftServer.method_29741(MinecraftServer.java:667) at net.minecraft.server.MinecraftServer.method_29739(MinecraftServer.java:254) at java.lang.Thread.run(Unknown Source) Not really sure what is the cause. If there's more information you need, just let me know. I'm using fabric-api-0.18.0+build.397-1.16 and Cardinal-Components-API-2.5.0 if that matters. Eldritch Mobs has not been updated to 1.16.2 yet, this is most likely the cause of the crash. Ah okay you're probably right. I disabled the Duplicator in the config and haven't had any crashes since. It does seem to generally work thankfully, though I haven't verified every single feature actually functions as expected. Sorry for opening on an unsupported version!
gharchive/issue
2020-08-16T18:36:33
2025-04-01T04:55:10.718843
{ "authors": [ "HyperPigeon", "nephatrine" ], "repo": "HyperPigeon/Eldritch-Mobs", "url": "https://github.com/HyperPigeon/Eldritch-Mobs/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
529819096
revolution3 - Improve the UI of the title screen, options screen, and output screen In the previously improved parts, the text was not yet aligned and there was no separation between output sections, so the output was hard to read while the game was in progress. To fix this, a printRow function was added that prints a divider line whenever the page changes (for example, when moving from the title screen to the options menu), and characters that did not fit the context were changed. I approve the merge
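The PR describes printRow only in prose, so here is a plausible minimal C++ reconstruction of such a divider helper; the 40-column width, the dash character, and the surrounding usage are my assumptions, not code from the PR.

```cpp
#include <iostream>

// Hypothetical reconstruction of the printRow helper described above:
// prints a horizontal divider so consecutive screens are visually separated.
void printRow(int width = 40, char mark = '-') {
    for (int i = 0; i < width; ++i)
        std::cout << mark;
    std::cout << '\n';
}

int main() {
    std::cout << "MineSweeper - Title Screen\n";
    printRow();                 // divider printed on a page transition
    std::cout << "Options Menu\n";
    return 0;
}
```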
gharchive/pull-request
2019-11-28T10:17:59
2025-04-01T04:55:10.723532
{ "authors": [ "ParkJae-sung", "gusdn3477" ], "repo": "HyperTech99/OSS-MineSweeper", "url": "https://github.com/HyperTech99/OSS-MineSweeper/pull/27", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
312277734
Not Working The bot no longer works: it guesses the Pokémon from the image's filename, which is now just a random mess of letters and numbers, e.g. p!catch UcdyiCKAZMY instead of p!catch vullaby. Yes, pretty much. So the selfbot's autocatcher mode is down for now. You can continue using the other features of the bot. More improvements and new features are coming soon. Stay tuned.
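To illustrate why randomized filenames break the autocatcher, here is a toy sketch of the filename heuristic the report describes; the URL format and the function name are invented for the example and are not from the bot's source.

```cpp
#include <iostream>
#include <string>

// Toy version of the heuristic: the catch guess is just the file stem.
std::string guess_from_url(const std::string& url) {
    const auto slash = url.find_last_of('/');
    const auto dot   = url.find_last_of('.');
    if (slash == std::string::npos || dot == std::string::npos || dot <= slash)
        return "";
    return url.substr(slash + 1, dot - slash - 1);
}

int main() {
    // Old filenames carried the answer directly:
    std::cout << guess_from_url("cdn.example/images/vullaby.png") << '\n';
    // New filenames are random, so the stem is useless as a guess:
    std::cout << guess_from_url("cdn.example/images/UcdyiCKAZMY.png") << '\n';
    return 0;
}
```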
gharchive/issue
2018-04-08T06:37:05
2025-04-01T04:55:10.724945
{ "authors": [ "EndingNight", "Hyperclaw79" ], "repo": "Hyperclaw79/PokeBall-SelfBot", "url": "https://github.com/Hyperclaw79/PokeBall-SelfBot/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
231585294
add spotify I've added a spotify status plugin. Close this in favor of #90
gharchive/pull-request
2017-05-26T10:19:11
2025-04-01T04:55:10.725982
{ "authors": [ "jgsqware" ], "repo": "Hyperline/hyperline", "url": "https://github.com/Hyperline/hyperline/pull/88", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1559321122
Upgrade pip packages Fixes #3127
Upgrade the following pip packages:
Production requirements
- Sentry = 1.9.5 - 1.14.0
- Babel = 2.9.1 - 2.11.0
- boto3 = 1.26.27 - 1.26.57
- celery = 5.2.2 - 5.2.7
- django-anymail = 8.4 - 9.0
- django-bleach = 3.0.0 - 3.0.1
- django-countries = 7.2.1 - 7.5
- django-extensions = 3.1.5 - 3.2.1
- django-file-form = 3.4.1 - 3.4.3
- django-fsm = 2.8.0 - 2.8.1
- django-hijack = 3.1.4 - 3.2.6
- django-redis = 5.1.0 - 5.2.0
- django-salesforce = 4.0 - 4.1
- django-select2 = 7.9.0 - 8.0
- django-slack = 5.17.7 - 5.18.0
- django-storages = 1.12.3 - 1.13.2
- django-tables2 = 2.4.1 - 2.5.1
- django-tinymce = 3.4.0 - 3.5.0
- djangorestframework-api-key = 2.2.0 - 2.3.0
- djangorestframework = 3.12.4 - 3.14.0
- drf-yasg = 1.20.0 - 1.21.4
- mailchimp3 = 3.0.16 - 3.0.17
- mistune = 2.0.3 - 2.0.4
- more-itertools = 8.12.0 - 9.0.0
- phonenumberslite = 8.12.39 - 8.13.4
- Pillow = 9.3.0 - 9.4.0
- tablib = 3.2.1 - 3.3.0
- xmltodict = 0.12.0 - 0.13.0
Dev requirements
- django-debug-toolbar = 3.6.0 - 3.8.1
- dslr = 0.3.1 - 0.4.0
- ruff = 0.0.206 - 0.0.236
- model-bakery = 1.7.0 - 1.10.1
- pytest-xdist[psutil] = 2.5.0 - 3.1.0
- responses = 0.21.0 - 0.22.0
Overall looks good to me; most of these are minor/patch releases. We can try to upgrade Django once wagtail 4 (https://github.com/HyphaApp/hypha/pull/3045) is merged.
gharchive/pull-request
2023-01-27T07:51:21
2025-04-01T04:55:10.735168
{ "authors": [ "sandeepsajan0", "theskumar" ], "repo": "HyphaApp/hypha", "url": "https://github.com/HyphaApp/hypha/pull/3130", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2445671301
🛑 I-GUIDE JupyterHub Platform is down In 3975299, I-GUIDE JupyterHub Platform (https://jupyter.iguide.illinois.edu/hub/login) was down: HTTP code: 0 Response time: 0 ms Resolved: I-GUIDE JupyterHub Platform is back up in ce12f89 after 28 minutes.
gharchive/issue
2024-08-02T19:52:11
2025-04-01T04:55:10.746647
{ "authors": [ "YunfanKang" ], "repo": "I-GUIDE/Status-Monitoring-with-Upptime", "url": "https://github.com/I-GUIDE/Status-Monitoring-with-Upptime/issues/57", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1090837335
Doesn't work Whenever I try to put my Discord token in, it says the syntax is wrong. Hello @jangodev, which software are you referring to? :D
gharchive/issue
2021-12-30T02:29:07
2025-04-01T04:55:10.753217
{ "authors": [ "I2rys", "jangodev" ], "repo": "I2rys/ODiscord", "url": "https://github.com/I2rys/ODiscord/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2664588829
horizon find and cartesian grid dump Dumping data on a Cartesian grid, with application to the horizon finder. I am not sure why we would want to convert the tracker list to a list of pointers, which is what is being done here. I mainly want to use std::vector because of the ease of accessing its elements. With std::list I need an iterator, which makes it more verbose to pass the position of the puncture into the horizon dump object.
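To make the std::vector vs. std::list point concrete, here is a minimal C++ sketch; the Tracker and HorizonDump types are placeholders made up for illustration, not AthenaK's actual classes.

```cpp
#include <iterator>
#include <list>
#include <vector>

struct Tracker { double x, y, z; };   // stand-in for a puncture tracker

struct HorizonDump {
    Tracker center{};
    void set_center(const Tracker& t) { center = t; }  // receives one puncture
};

int main() {
    std::vector<Tracker> vec = {{0, 0, 0}, {1, 0, 0}};
    std::list<Tracker>   lst = {{0, 0, 0}, {1, 0, 0}};
    HorizonDump dump;

    // With std::vector, "the n-th puncture" is a single indexed expression:
    dump.set_center(vec[1]);

    // With std::list, the same access needs an iterator walked forward:
    auto it = lst.begin();
    std::advance(it, 1);
    dump.set_center(*it);
    return 0;
}
```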
gharchive/pull-request
2024-11-16T16:15:38
2025-04-01T04:55:10.754447
{ "authors": [ "HengruiZhu99" ], "repo": "IAS-Astrophysics/athenak", "url": "https://github.com/IAS-Astrophysics/athenak/pull/616", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1394340651
Links not Working Description Here the links to Twitter and Facebook are not working and are redirecting us to the same page. Hey @IAmTamal can you please assign this issue to me? I would love to fix it for HACKTOBERFEST. Screenshots No response Additional information No response It's great having you contribute to Milan! This is an auto-generated text. The maintainers/owner will look into the issue soon, add relevant tags and proceed further; please have patience. Meanwhile, feel free to support by starring the repo and sharing it with your friends. :nerd_face: :rocket: A good issue. I want you to do the following changes in that case:
- Link Twitter to https://twitter.com/mrTamall
- Remove Facebook and GitHub, which link to this repo
- Change their colors to #9ac2fe
Hey @IAmTamal can you assign to me also? @Himanshi2511 can you please find some other issues? I have already made corrections and sent a PR
gharchive/issue
2022-10-03T08:24:45
2025-04-01T04:55:10.765408
{ "authors": [ "DhruvRathi2001", "Himanshi2511", "IAmTamal" ], "repo": "IAmTamal/Milan", "url": "https://github.com/IAmTamal/Milan/issues/346", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
273764051
Updated to Swift 4.0.2 Removed character reference and replaced with String(describing:). Updated Travis to Xcode 9.1; however, this may not come with Swift 4.0.2. The current beta of Xcode is 9.2 beta 1, which Travis can't use for testing as it's pre-release, and I have a suspicion this version of Xcode comes with 4.0.2. Linux build should grab 4.0.2 because of the .swift-version update. Compiles on my Mac on Swift 4.0.2.
Codecov Report: merging #64 into master will not change coverage. The diff coverage is 100%.
@@ Coverage Diff @@
##           master     #64   +/- ##
====================================
  Coverage   90.09%  90.09%
====================================
  Files           6       6
  Lines         323     323
====================================
  Hits          291     291
  Misses         32      32
Flag #CloudFoundryEnv: coverage 90.09% <100%> (ø) :arrow_up:
Impacted file Sources/CloudFoundryEnv/AppEnv.swift: 91.8% <100%> (ø) :arrow_up:
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update b58de4a...431d87c.
gharchive/pull-request
2017-11-14T11:48:38
2025-04-01T04:55:10.786939
{ "authors": [ "KyeMaloy97", "codecov-io" ], "repo": "IBM-Swift/Swift-cfenv", "url": "https://github.com/IBM-Swift/Swift-cfenv/pull/64", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1020699083
Review logging due to flagged security reviews. Review: https://sonarcloud.io/project/security_hotspots?id=compliance-trestle&hotspots=AXxeNG6mGpm9WPPYPiGO Done
gharchive/issue
2021-10-08T05:01:20
2025-04-01T04:55:10.791374
{ "authors": [ "butler54" ], "repo": "IBM/compliance-trestle", "url": "https://github.com/IBM/compliance-trestle/issues/768", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1439541532
Update operand version for 3.19.7 and update Go version to 1.18.8 https://github.ibm.com/IBMPrivateCloud/roadmap/issues/55874 https://github.ibm.com/IBMPrivateCloud/roadmap/issues/55268 /lgtm
gharchive/pull-request
2022-11-08T05:57:57
2025-04-01T04:55:10.794584
{ "authors": [ "PRTamilanban" ], "repo": "IBM/ibm-auditlogging-operator", "url": "https://github.com/IBM/ibm-auditlogging-operator/pull/256", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
852842798
ubi updates
- update ubi to latest - 8.3-298
- update configmapwatcher tag
/lgtm
gharchive/pull-request
2021-04-07T21:42:01
2025-04-01T04:55:10.795708
{ "authors": [ "swati-nair", "tthorpe2" ], "repo": "IBM/ibm-cert-manager-operator", "url": "https://github.com/IBM/ibm-cert-manager-operator/pull/119", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
309161265
Guestbook Preliminary scripts for the meetup/presentation that fetch the image from registry.ng.bluemix.net/ossdemo/guestbook:v1 and run the meetup Lab1 and Lab2. @duglin please review LGTM
gharchive/pull-request
2018-03-27T22:50:07
2025-04-01T04:55:10.797088
{ "authors": [ "brahmaroutu", "duglin" ], "repo": "IBM/kube101", "url": "https://github.com/IBM/kube101/pull/8", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
737885996
fix: Switching Carbon tabs can cause content to scroll off-viewport
carbon-components-react 7.23.0 (versus 7.22.0) had a few DOM-altering changes:
- a -> button for Tab child
- [aria-hidden=true] => [hidden]
This required a few changes to CSS rules and test selectors. Fixes #6014
Description of what you did:
My PR is a:
- [ ] 💥 Breaking change
- [x] 🐛 Bug fix
- [ ] 💅 Enhancement
- [ ] 🚀 New feature
Please confirm that your PR fulfills these requirements:
- [x] Multiple commits are squashed into one commit.
- [x] The commit message follows Conventional Commits, which allows us to autogenerate release notes; e.g. fix(plugins/plugin-k8s): fixed annoying bugs
- [x] All npm dependencies are pinned.
sigh, carbon 7.23 deprecated Tab props, but itself persisted in using those props. https://github.com/carbon-design-system/carbon/issues/7237 who knows when that fix will be released...
gharchive/pull-request
2020-11-06T16:23:05
2025-04-01T04:55:10.801672
{ "authors": [ "starpit" ], "repo": "IBM/kui", "url": "https://github.com/IBM/kui/pull/6126", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
871558780
chore: move publishing from bintray to maven
PR summary
Bintray support is going away; publishing needs to be done directly to Maven.
PR Checklist
Please make sure that your PR fulfills the following requirements:
- [ ] The commit message follows the Angular Commit Message Guidelines.
- [ ] Tests for the changes have been added (for bug fixes / features)
- [ ] Docs have been added / updated (for bug fixes / features)
PR Type
- [ ] Bugfix
- [ ] Feature
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] New tests
- [x] Build/CI related changes
- [ ] Documentation content changes
- [ ] Other (please describe)
What is the current behavior?
What is the new behavior?
Does this PR introduce a breaking change?
- [ ] Yes
- [ ] No
Other information
:tada: This PR is included in version 0.16.0 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
gharchive/pull-request
2021-04-29T21:39:53
2025-04-01T04:55:10.808012
{ "authors": [ "ajay-malhotra1", "kennburger" ], "repo": "IBM/networking-java-sdk", "url": "https://github.com/IBM/networking-java-sdk/pull/26", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }