id (string, lengths 4 to 10) | text (string, lengths 4 to 2.14M) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
1048655130
|
Expose Array.id() and Dictionary.id()
Also, now ids are int64_t, which integrate better with Variant/scripting.
This is for allowing non-deep comparison of arrays and dictionaries, like this: a.id() == b.id()
I think I'm dropping this for now.
|
gharchive/pull-request
| 2021-11-09T14:12:51 |
2025-04-01T04:34:23.326574
|
{
"authors": [
"RandomShaper"
],
"repo": "godotengine/godot",
"url": "https://github.com/godotengine/godot/pull/54804",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1111334959
|
Fix default input port hints for some modes in visual shader
Renamed the internal method get_input_port_default_hint to is_input_port_default and changed its type to boolean for unification; this fixes more places with incorrect default hints on certain modes.
Thanks!
|
gharchive/pull-request
| 2022-01-22T08:14:10 |
2025-04-01T04:34:23.327609
|
{
"authors": [
"Chaosus",
"akien-mga"
],
"repo": "godotengine/godot",
"url": "https://github.com/godotengine/godot/pull/57056",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1165855815
|
Fix documentation about depth and width of Height map
Fixes the wrong documentation part of #58933 in the 3.x branch.
Thanks!
Cherry-picked for 3.4.4.
|
gharchive/pull-request
| 2022-03-11T00:05:55 |
2025-04-01T04:34:23.328669
|
{
"authors": [
"Sauermann",
"akien-mga"
],
"repo": "godotengine/godot",
"url": "https://github.com/godotengine/godot/pull/59004",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1188476531
|
Fix Android double-tap not working.
#37158 added the event's button state to Android's onDoubleTap gesture processing. However, the button state is always zero. This broke double tapping (#8151, #46100, #46101).
As per the documentation, the onDoubleTap notification is "Triggered on the down event of second tap." Furthermore, since double-clicks are only detected on the left mouse button, even if the button state was correct, we can safely assume that the event pressed parameter will be true, button_index will be MouseButton::LEFT and button_mask will be MASK_LEFT.
Therefore this PR reverts #37158's addition of the button state to the Android onDoubleTap gesture processing.
Fixes #8151.
Provides a partial fix for #46100 and #46101 in as much as the output on Android will be
Event doubleclick = False, pressed = True, button_index = 1
Event doubleclick = False, pressed = False, button_index = 1
Event doubleclick = True, pressed = True, button_index = 1
Event doubleclick = False, pressed = True, button_index = 1
Event doubleclick = False, pressed = False, button_index = 1
vs the desktop output of:
Event doubleclick = False, pressed = True, button_index = 1
Event doubleclick = False, pressed = False, button_index = 1
Event doubleclick = True, pressed = True, button_index = 1
Event doubleclick = False, pressed = False, button_index = 1
In other words there is still an extra event, but at least there is a correct doubleclick event that can be detected appropriately.
Does this PR supersede https://github.com/godotengine/godot/pull/54225?
Yes.
@madmiraal Looks good; it just needs to be rebased prior to merging.
I've tested the current logic on a stock Android 12 on a Pixel 5, and despite what the documentation states, double clicks from a connected bluetooth mouse are also handled by the GestureHandler.
This PR doesn't prevent double-clicks from working. It just assumes all double-clicks are left mouse button clicks.
I think it's a bit presumptuous to close this issue before #65434 has been approved and merged.
|
gharchive/pull-request
| 2022-03-31T18:33:16 |
2025-04-01T04:34:23.334456
|
{
"authors": [
"Calinou",
"m4gr3d",
"madmiraal"
],
"repo": "godotengine/godot",
"url": "https://github.com/godotengine/godot/pull/59760",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1193493576
|
Fix Project Manager hard crashes due to invalid access to Editor Nodes
Resolves: https://github.com/godotengine/godot/issues/59869
Fixes incorrect usages in project manager referencing editor nodes from this PR: https://github.com/godotengine/godot/pull/59495
Unfortunately, while EditorFileSystem is a singleton, it is also a node that is supposed to be attached to the editor node tree. Its responsibility is more of a file system watch for the editor (should probably be renamed to avoid ambiguity).
Also did some minor cleanup for unnecessary string copies.
Could you squash the commits?
Thanks!
|
gharchive/pull-request
| 2022-04-05T17:18:31 |
2025-04-01T04:34:23.336710
|
{
"authors": [
"akien-mga",
"marstaik"
],
"repo": "godotengine/godot",
"url": "https://github.com/godotengine/godot/pull/59920",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1251912529
|
Portals - force full check on adding moving object
Moving objects being added during instance_moving_create() were incorrectly not forcing a full check to find which room they were within. This could result in moving objects being re-added not correctly identifying their current room, and thus culling incorrectly. This PR forces a full check on calling instance_moving_create.
Fixes #61447
Notes
This is a fairly simple bug; I had missed this case.
The p_force_reinsert was being set from _load_finalize_roaming() which was correctly setting the room for roaming objects that were present at start.
However this also needed to be set for the case of moving objects that were added during gameplay (or re-added), rather than present at initial room conversion.
This bug may not have always shown previously because, provided the object was moved enough, it would have moved outside the expanded bound and done a full check anyway.
Thanks!
Cherry-picked for 3.4.5.
|
gharchive/pull-request
| 2022-05-29T15:02:46 |
2025-04-01T04:34:23.339838
|
{
"authors": [
"akien-mga",
"lawnjelly"
],
"repo": "godotengine/godot",
"url": "https://github.com/godotengine/godot/pull/61523",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1457008073
|
GDScript: Cache scripts after parse error
@Rindbee pointed out that #68374 doesn't cache scripts after parsing errors like the previous logic did. This fixes that.
Not a functionality regression as far as I can tell, but it's incorrect as-is.
Thanks!
|
gharchive/pull-request
| 2022-11-20T18:55:21 |
2025-04-01T04:34:23.340919
|
{
"authors": [
"akien-mga",
"rune-scape"
],
"repo": "godotengine/godot",
"url": "https://github.com/godotengine/godot/pull/68927",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2000786128
|
Add methods to draw ellipses
Closes godotengine/godot-proposals#8461.
This simply modifies the formulas for draw_circle and draw_arc to generalize to ellipses and elliptical arcs.
Rebased, there were some changes in draw_circle that I replicated for ellipses.
Side note: while updating the class documentation, I noticed a typo in several draw_* methods; should I include this here as well? (Only draw_circle is relevant to this PR.)
I'd open a separate PR as it's an independent change.
|
gharchive/pull-request
| 2023-11-19T10:16:07 |
2025-04-01T04:34:23.343277
|
{
"authors": [
"Calinou",
"Cykyrios"
],
"repo": "godotengine/godot",
"url": "https://github.com/godotengine/godot/pull/85080",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2063148738
|
new version v2.6.1 is not compatible with version v2.5.6
What version of Go and system type/arch are you using?
go1.20.4 linux/amd64
What version of GoFrame are you using?
Upgraded from 2.5.6 to 2.6.1.
Can this bug be reproduced with the latest release?
1. After upgrading, when a POST form request (application/x-www-form-urlencoded or multipart/form-data) reaches the controller, r.GetBody(), r.GetBodyString() and r.Body all read no data; the only workaround is adding a middleware that sets r.MakeBodyRepeatableRead(true). This is incompatible with v2.5.6; will compatibility be restored later, or is this an intentional change?
2. gconv.Map() and MapDeep() are also incompatible with the previous version. In particular, the parameters of Map() have changed, so MapDeep() should presumably have been changed as well, yet it was not. After upgrading, filter := gconv.MapDeep(req.Filter) now has to be written as filter := gconv.Map(req.Filter, gconv.MapOption{Deep: true, OmitEmpty: true}); moreover, gconv.Map() defaulted to OmitEmpty=true before 2.5.6, but it is now false.
@JB-fy Fixed, but the OmitEmpty: true option for map converting must be set manually if necessary in the new version, from v2.6.2.
The GetBodyString() bug is still present; tested on version 2.6.3.
1. When POSTing a raw body, GetBodyString() retrieves the raw body correctly.
2. When the request is submitted as application/x-www-form-urlencoded, GetBodyString() returns no data inside the handling controller (handler) or in post-middleware; however, if a pre-middleware is added and GetBodyString() is called once there, the data is returned, and the controller and post-middleware can then read it as well.
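A minimal sketch of the middleware workaround described above, assuming the gf v2 import paths (the /echo handler is added purely for illustration):
package main
import (
	"github.com/gogf/gf/v2/frame/g"
	"github.com/gogf/gf/v2/net/ghttp"
)
func main() {
	s := g.Server()
	// Pre-middleware: mark the body repeatable-readable so that
	// r.GetBody()/r.GetBodyString() still return form data later
	// in the handler and in post-middlewares.
	s.Use(func(r *ghttp.Request) {
		r.MakeBodyRepeatableRead(true)
		r.Middleware.Next()
	})
	s.BindHandler("/echo", func(r *ghttp.Request) {
		r.Response.Write(r.GetBodyString())
	})
	s.Run()
}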
|
gharchive/issue
| 2024-01-03T02:01:32 |
2025-04-01T04:34:23.384406
|
{
"authors": [
"JB-fy",
"gqcn",
"l12ab"
],
"repo": "gogf/gf",
"url": "https://github.com/gogf/gf/issues/3237",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
127931561
|
Clone over HTTPS: HEAD refers to nonexistent ref, unable to checkout
If I check out any repository over HTTPS from Gogs on my server, I get
Checking connectivity... done.
warning: remote HEAD refers to nonexistent ref, unable to checkout.
Could this be a configuration issue?
[server]
DOMAIN = ssh.mydomain.com
HTTP_PORT = 80
ROOT_URL = https://mydomain.com/git
DISABLE_SSH = false
SSH_PORT = 22
OFFLINE_MODE = false
How did you set up the HTTPS? Reverse proxy?
Closing due to lack of feedback.
|
gharchive/issue
| 2016-01-21T14:08:32 |
2025-04-01T04:34:23.406911
|
{
"authors": [
"NiklasRosenstein",
"Unknwon"
],
"repo": "gogits/gogs",
"url": "https://github.com/gogits/gogs/issues/2451",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
105560609
|
Add allow attribute class to <code> tag
PR for #1608
I don't have a Mac environment here, so I can't use CodeKit; the highlight.js configuration also needs to be changed to
hljs.configure({
classPrefix: 'language-'
})
Don't know when this regression happened but there was already code to allow this.
https://github.com/gogits/gogs/pull/1442/files#diff-88e948dbecd728e2513ba9bafe876faaR30
I suggest not using this code but the one from the link I provided here. The code provided in this pull request is not safe.
No regression; the feature/highlight branch must simply not have been merged yet.
I'm interested in the reason why a class attribute could be exploited. Can you provide a link about it?
I think the best way to do language syntax highlighting is to do it in the browser, not on the backend. It has more features and is easier to modify. I also want to change the current way of rendering markdown to a front-end approach.
Maybe it's worth a discussion?
If you allow the class to be anything you can just use something like this as value:
"></code>And here I can add anything that wont be sanitized<code class="
input :
"></code><script>alert("test");</script><code class="
output (.AllowAttrs("class").OnElements("code")) :
"></code>
I can't provide you with a working XSS example. But if the bluemonday developers warn you against XSS attribute injection in their README, it is better, in my opinion, not to ignore them.
And I'm not sure what you have tested there. What you should test would be something like this.
<code class=""></code><script>alert("test");</script><code class=""></code>
Which is not invalid HTML at all. It would probably be sanitized correctly, but I'm not sure it is the correct test to do.
I would take the advice of bluemonday developers and use a regex filter to only allow safe inputs. In this case as really the only valid input should be "language-something", it can be restricted even more than with the suggested bluemonday regex.
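A sketch of such a regex-restricted policy using bluemonday's AllowAttrs/Matching/OnElements chain (the specific regexp here is an assumption for illustration, not the code that was merged):
package main
import (
	"fmt"
	"regexp"
	"github.com/microcosm-cc/bluemonday"
)
func main() {
	// Only allow class attributes of the form "language-xxx" on <code>,
	// instead of allowing any class value at all.
	p := bluemonday.UGCPolicy()
	p.AllowAttrs("class").
		Matching(regexp.MustCompile(`^language-[a-zA-Z0-9]+$`)).
		OnElements("code")
	fmt.Println(p.Sanitize(`<code class="language-go">fmt.Println()</code>`)) // class kept
	fmt.Println(p.Sanitize(`<code class="evil">x</code>`))                    // class stripped
}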
Though I believe there is no existing way to exploit the class attribute, since bluemonday uses a parser to analyze the HTML code, I agree that we'd better restrict the input format.
So a new discussion begins: is it better to do highlighting browser-side?
Thank you both!
But there is a confusion here for you @denghongcai :
highlight.js is now only used in the new Semantic UI pages (right now, only one page is using it: the webhook history). So currently changing its configuration does not help anything.
I also want to change the current way of rendering markdown to a front-end approach.
This is not likely going to happen unless the JS lib supports context-related rendering, like auto-rendering issues, @ mentions, commit IDs, etc.
I agree with @manfer about using https://github.com/gogits/gogs/pull/1442/files#diff-88e948dbecd728e2513ba9bafe876faaR30 for allowing code class attribute.
If the regex filter is added I would suggest merging this pull request, as the current prettyprint highlighter would probably be enhanced too if the code tags include the language class. I have not tested it but it should work better. Not sure if something else would be needed, but at most minor changes to the code that handles prettyprint.
A huge confusion :sob:
We can just use a JS lib to highlight code, because it's easy to use and easy to add more language support.
Thanks to @manfer; I will do some tests tomorrow and update progress here. Have a nice night.
@manfer yes, for current stage, merging this PR is 100% OK with me, actually.
@denghongcai we have not yet used highlight.js for code blocks; keeping this PR as it is is the best thing right now.
@denghongcai hmm... my bad, actually you use chain operation:
bluemonday.UGCPolicy().Allowxxx...
cleaned my code :smile:
Gogs.renderMarkdown = function() {
var $md = $('.markdown');
var $pre = $md.find('pre > code').parent();
$pre.addClass('prettyprint');
prettyPrint();
// Set anchor.
var headers = {};
$md.find('h1, h2, h3, h4, h5, h6').each(function() {
var node = $(this);
var val = encodeURIComponent(node.text().toLowerCase().replace(/[^\w\- ]/g, '').replace(/[ ]/g, '-'));
var name = val;
if (headers[val] > 0) {
name = val + '-' + headers[val];
}
if (headers[val] == undefined) {
headers[val] = 1;
} else {
headers[val] += 1;
}
node = node.wrap('<div id="' + name + '" class="anchor-wrap" ></div>');
node.append('<a class="anchor" href="#' + name + '"><span class="octicon octicon-link"></span></a>');
});
};
code-prettify was used here. I think it's easy to change it to highlight.js
another way is to add some extensions to code-prettify, because the lib used here works on:
The comments in prettify.js are authoritative but the lexer should work on a number of languages including C and friends, Java, Python, Bash, SQL, HTML, XML, CSS, Javascript, Makefiles, and Rust.
It works passably on Ruby, PHP, VB, and Awk and a decent subset of Perl and Ruby, but, because of commenting conventions, doesn't work on Smalltalk, OCaml, etc. without a language extension.
https://github.com/google/code-prettify#for-which-languages-does-it-work
Thanks again, merging...
|
gharchive/pull-request
| 2015-09-09T09:39:00 |
2025-04-01T04:34:23.419117
|
{
"authors": [
"Unknwon",
"denghongcai",
"manfer"
],
"repo": "gogits/gogs",
"url": "https://github.com/gogits/gogs/pull/1609",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
112138081
|
Fix import path
Hello,
After re-compiling gogs with Redis from the master branch today, I wasn't able to run gogs. The error message was "panic: cache: unknown adapter 'redis'(forgot to import?)"
It's a minor issue with new go-macaron path.
Kenno
Thanks!
No, thank you.
|
gharchive/pull-request
| 2015-10-19T12:53:22 |
2025-04-01T04:34:23.421013
|
{
"authors": [
"Unknwon",
"kenno"
],
"repo": "gogits/gogs",
"url": "https://github.com/gogits/gogs/pull/1803",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
216467986
|
Only first 6 seconds of video are saved to storage
I tried on both the Nexus 5X (7.1) and the LG K8 (6.0).
I do exactly as in the example:
cameraView.startRecordingVideo();
cameraView.postDelayed(new Runnable() {
@Override
public void run() {
cameraView.stopRecordingVideo();
}
}, 30*1000);
onVideoTaken gets called after 30 seconds, just like I want it. But regardless, only the first 6 seconds of the video are saved to storage.
Having the same issue here; only the first few seconds are recorded for playback.
Any updates or workarounds for this?
This is what the logs look like while recording:
03-30 21:47:14.121 28884-28884 I/ViewRootImpl: ViewRoot's Touch Event : ACTION_DOWN
03-30 21:47:14.125 28884-28884 V/MediaRecorder: constructor
03-30 21:47:14.133 28884-28884 D/MediaRecorder: mAudioZoomEnable = 1
03-30 21:47:14.133 28884-28884 V/MediaRecorder: doCleanUp
03-30 21:47:14.135 28884-28884 V/MediaRecorder: setListener
03-30 21:47:14.135 28884-28884 V/MediaRecorder: setClientName
03-30 21:47:14.139 28884-28884 V/MediaRecorder: setCamera(0x78b16fa800,0x78b16fae00)
03-30 21:47:14.142 28884-28884 V/MediaRecorder: setVideoSource(1)
03-30 21:47:14.142 28884-28884 V/MediaRecorder: Call init() since the media recorder is not initialized yet
03-30 21:47:14.142 28884-28884 V/MediaRecorder: init
03-30 21:47:14.149 28884-28884 V/MediaRecorder: setAudioSource(5)
03-30 21:47:14.155 28884-28884 V/MediaRecorder: setOutputFormat(2)
03-30 21:47:14.155 28884-28884 V/MediaRecorder: setVideoFrameRate(30)
03-30 21:47:14.156 28884-28884 V/MediaRecorder: setVideoSize(720, 480)
03-30 21:47:14.157 28884-28884 V/MediaRecorder: setVideoEncoder(2)
03-30 21:47:14.161 28884-28884 V/MediaRecorder: setAudioEncoder(3)
03-30 21:47:14.178 28884-28884 D/MediaRecorder: _setOutputFile E
03-30 21:47:14.178 28884-28884 V/MediaRecorder: setOutputFile(152, 0, 0)
03-30 21:47:14.180 28884-28884 D/MediaRecorder: _setOutputFile X
03-30 21:47:14.180 28884-28884 V/MediaRecorder: prepare
03-30 21:47:14.182 28884-28884 V/MediaRecorder: start
03-30 21:47:14.919 28884-28884 I/Choreographer: Skipped 47 frames! The application may be doing too much work on its main thread.
03-30 21:47:21.753 28884-29024 V/MediaRecorder: message received msg=2, ext1=801, ext2=0
03-30 21:47:21.753 28884-29024 V/MediaRecorder: callback application
03-30 21:47:21.754 28884-29024 V/MediaRecorder: back from callback
03-30 21:47:21.754 28884-29024 V/MediaRecorder: message received msg=101, ext1=268436456, ext2=0
03-30 21:47:21.754 28884-29024 V/MediaRecorder: callback application
03-30 21:47:21.754 28884-29024 V/MediaRecorder: back from callback
03-30 21:47:21.783 28884-28898 V/MediaRecorder: message received msg=101, ext1=536871912, ext2=0
03-30 21:47:21.783 28884-28898 V/MediaRecorder: callback application
03-30 21:47:21.784 28884-28898 V/MediaRecorder: back from callback
03-30 21:47:26.807 28884-28884 I/ViewRootImpl: ViewRoot's Touch Event : ACTION_UP
03-30 21:47:26.808 28884-28884 V/MediaRecorder: stop
03-30 21:47:27.309 28884-28884 V/MediaRecorder: doCleanUp
At around 6 seconds (highlighted above) the callback is called, even though the recording keeps going for 12 seconds.
Hi, the problem is in https://github.com/gogopop/CameraKit-Android/blob/master/camerakit/src/main/api16/com/flurgle/camerakit/Camera1.java. The max file size is set to 5MB and the max duration is set to 20 sec.
mMediaRecorder.setMaxDuration(20000);
mMediaRecorder.setMaxFileSize(5000000);
This is fixed in 0.9.16 coming later today. Thanks for the issue!
|
gharchive/issue
| 2017-03-23T15:25:12 |
2025-04-01T04:34:23.432390
|
{
"authors": [
"dwillmc",
"johntzan",
"mbernr",
"mishasrb"
],
"repo": "gogopop/CameraKit-Android",
"url": "https://github.com/gogopop/CameraKit-Android/issues/40",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
255795135
|
RSS/Atom support for data-driven content
Being able to use RSS and Atom feeds for data sources (i.e., a getFeed function) could be extremely useful, especially when combined with a cron job to trigger a build/deploy.
It looks like there are a few Go libraries which could handle this:
https://github.com/mmcdole/gofeed
https://github.com/SlyMarbo/rss
https://github.com/ungerik/go-rss
would be extremely useful
For what?
Hugo is and will be a static generator, I think. There is another issue about somehow generating pages "on the fly" (and I assume also not writing to disk) based on data files etc., but that will go into the normal Hugo "pipeline", which would end up in whatever output format you want (including RSS if you want).
I can think of lots of use cases off the top of my head. Building a news aggregator is pretty obvious. Or adding a "News" section to a site which pulls in articles off of Medium or from a Wordpress install. Or adding a bit of content to a sidebar instead of just a static link. Or list videos from your YouTube channel, or pictures from Instagram, etc. Maybe a company would want a "Careers" page which pulls their entries from Monster.com. Or pulling in the titles of recently active topics from a community forum.
And that's just the general purpose stuff. A lot of the uses will very be specific to the topic at hand. If you're making a page for a programming language, maybe you want to display the latest packages published to NPM/Crates/CPAN/PEAR/etc., or StackOverflow posts. Maybe a site for a musician would want to pull in a list of upcoming concerts from StubHub/TicketMaster/etc, or songs from SoundCloud. The more interesting use cases are almost always the stuff you don't anticipate.
Sure, this is stuff you could do manually by adding stuff to content/, but that requires specialized knowledge. Why force people to learn how a specific Hugo site works when adding content could be as easy as just adding a video to YouTube, a picture to Instagram, a blog entry to Medium, etc.? Besides, if you only have to add content in one place you don't have to worry about keeping it in sync.
I'm not sure why being a static generator would be a restriction. Just run it however often you want to check for new content. As an example, Planet Planet is a static generator. You just set up a cron job to automatically kick off a build… Travis CI makes it pretty easy without having to do anything locally, as long as daily is often enough, or if you prefer Netlify you can just GET a url from a cron job to kick off a build.
No, but it sets a scope for "what Hugo is". There is a saying about "doing one thing well..."
But this doesn't really alter the scope of Hugo. It's still a static site generator. All I'm suggesting is to add a getFeed function alongside the existing getJSON and getCSV functions.
getJSON and getCSV are already pushing it, and they fall into a more general domain than the getFeed thing. I don't want to maintain more getRemoteSomething functions.
I don't think maintenance would be much of an issue; the infrastructure for retrieving remote content is obviously already there (for getJSON/getCSV), and any bugs in the feed parsing belong in a library you don't maintain, so they wouldn't be your responsibility. It seems like this should only add a few lines of code to Hugo, since the hard parts are either already part of Hugo or in the feed library.
I can understand not wanting to pull in another library and bloat the executable, but IMHO the benefits from this vastly outweigh the costs there.
Since Hugo is your project obviously the final decision is yours, but I would suggest that you ask the community (on discourse, I guess) for opinions before closing this. If people are against it, or generally don't care, it's probably safe to close it, but I have a feeling a lot of people would be extremely interested in this.
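To illustrate how thin the wrapper could be, here is a sketch using the first library listed above (gofeed); this is roughly the work a hypothetical getFeed function would do under the hood, not an actual Hugo API:
package main
import (
	"fmt"
	"log"
	"github.com/mmcdole/gofeed"
)
func main() {
	// Fetch and parse a remote feed, then walk its items.
	fp := gofeed.NewParser()
	feed, err := fp.ParseURL("https://example.com/index.xml")
	if err != nil {
		log.Fatal(err)
	}
	for _, item := range feed.Items {
		fmt.Println(item.Title, item.Link)
	}
}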
|
gharchive/issue
| 2017-09-07T02:22:17 |
2025-04-01T04:34:23.483404
|
{
"authors": [
"bep",
"jetwash"
],
"repo": "gohugoio/hugo",
"url": "https://github.com/gohugoio/hugo/issues/3862",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
400887961
|
Support Apache MultiViews language negotiation file naming
It would be nice if Hugo could support the multilingual file naming convention described here:
https://www.w3.org/International/questions/qa-apache-lang-neg#naming
such as:
example.html.en, example.html.fr, example.html.de (language extention after .html),
or
example.en.html, example.fr.html, example.de.html (language extension before .html)
such that Hugo can be a drop-in replacement for websites that use such Apache MultiViews approach for language negotiation. For example, https://www.debian.org/
Content negotiation is not specific to Apache, by the way. It's "the right way" to serve multilingual pages, and it's sad that Hugo currently recommends using language-specific paths or domains instead.
|
gharchive/issue
| 2019-01-18T21:03:35 |
2025-04-01T04:34:23.487317
|
{
"authors": [
"afranke",
"anthonyfok"
],
"repo": "gohugoio/hugo",
"url": "https://github.com/gohugoio/hugo/issues/5618",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
502395578
|
After should accept 0 as an index
Issue Description
Currently after blows up if you pass in 0 as the index, which feels like a valid usage that should not crash things. The line that causes this is here.
If this gets maintainer approval I'm happy to make the change myself, given that it's only a line or two :).
What version of Hugo are you using (hugo version)?
$ hugo version
Hugo Static Site Generator v0.58.1-24277B92 linux/amd64 BuildDate: 2019-09-06T09:19:04Z
Does this issue reproduce with the latest release?
Yes
Approved, but please also add a test case.
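A minimal sketch of the shape of the fix (hypothetical code, not Hugo's actual implementation): reject only negative indices, so 0 passes through.
package main
import (
	"errors"
	"fmt"
)
// after returns all elements of seq following the first index elements.
// A check like index <= 0 would wrongly error on 0; index < 0 accepts it.
func after(index int, seq []interface{}) ([]interface{}, error) {
	if index < 0 {
		return nil, errors.New("sequence bounds out of range")
	}
	if index >= len(seq) {
		return []interface{}{}, nil
	}
	return seq[index:], nil
}
func main() {
	res, err := after(0, []interface{}{"a", "b", "c"})
	fmt.Println(res, err) // [a b c] <nil>
}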
|
gharchive/issue
| 2019-10-04T02:48:35 |
2025-04-01T04:34:23.489957
|
{
"authors": [
"bep",
"gnalck"
],
"repo": "gohugoio/hugo",
"url": "https://github.com/gohugoio/hugo/issues/6388",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
576442147
|
Consider not deprecating Mmark support
Math support (both MathJax and KaTeX) and other enhanced extensions have been working great by taking advantage of .mmark files since circa 2015. Many people rely on Mmark for their posts because of that. As per https://github.com/gohugoio/hugo/issues/6544#issuecomment-595323813, since release 0.60, Mmark has been marked deprecated, with potential future removal. The release notes give no rationale for the removal of Mmark; it's happening as a side effect of the replacement of Blackfriday by Goldmark, which serves only plain Markdown, not the Mmark dialect.
Given that Mmark has been available alongside Blackfriday all this time, wouldn't it be nicer to keep operating in the same model, leave Mmark alongside Goldmark, and break almost no users instead of breaking all Mmark users?
In case the removal is going to happen regardless, it would be good to have the reason published. I have not yet found a note on why Mmark should go along with Blackfriday, given that the packages are distinct.
The rationale:
The package is marked as deprecated upstream, so any bugs etc. will not be fixed.
Reduce package/binary size.
To reduce maintenance. We have a test suite that covers mmark that we need to maintain and check when we make changes elsewhere. Also, we have do support here and on the forum (answer questions, handle bug requests).
All of the above may not be too much for mmark alone, but it adds up in the long run. We're not staffed like Microsoft, so to speak, and you should have a look at my "notification board" related to Hugo ... I need less work, not more.
FWIW, it's the old package that is marked deprecated (and has been for a good while already), because there's a newer one, which I thought Hugo was making use of by now.
We don't. And it's not a "new package", it's a totally "new thing".
I don't get your tone, as it's a new package for sure (by the same author, for the same purpose, with the same math features). The main issue here is not what package is used, but dropping support for the dialect, which is what's user facing; the saddest part of all this is not being able to author in Mmark anymore. The usual approach in these situations is to stick to the old dependency until a migration to the new one finally lands at some point. I do get that you were not involved in the addition of Mmark when it was added (I started using Hugo when it was still under spf13's repository) and seem not to care about it enough to mind breaking the users that have been relying on it these years.
I know this is open source and you may be managing the project in your free time for free. Still, to have a user base broken this way, requiring all their files to be rewritten and file extensions changed, strongly suggests that the project isn't sanely reliable anymore.
I'm sorry that I was a little short. My inbox is long, which kind of illustrates the core of this problem.
The new dependency is something totally new, something completely different. If added, it should be added with the identifier "mmark2" or something. But that discussion needs to be raised in another issue.
|
gharchive/issue
| 2020-03-05T18:15:17 |
2025-04-01T04:34:23.496936
|
{
"authors": [
"bep",
"oblitum"
],
"repo": "gohugoio/hugo",
"url": "https://github.com/gohugoio/hugo/issues/7022",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2244019981
|
Update watchtestscripts.sh
slight grammar change
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
|
gharchive/pull-request
| 2024-04-15T15:45:01 |
2025-04-01T04:34:23.499455
|
{
"authors": [
"CLAassistant",
"broughtupsy"
],
"repo": "gohugoio/hugo",
"url": "https://github.com/gohugoio/hugo/pull/12377",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
710224190
|
New Theme : Bigspring Hugo Startup Theme
Theme submission
Please make sure that you have read the theme submission guidelines before submitting a theme. The guidelines provide all relevant information and requirements that have to be fulfilled before the submission. We strongly suggest that you test your theme with the Hugo Themes Build Script before submitting the theme for review. If a submission does not meet the criteria mentioned in the README it will be closed. You may re-submit once you fix the problems with your submission. However, please note that we have limited resources and can not provide help for general web development issues. For Hugo support questions please refer to the dedicated support forum.
Please tick the relevant boxes for your theme in the checklist below:
Link to my theme repository: https://github.com/themefisher/bigspring-hugo-startup-theme
I made sure that...
[x] the repository contains a good README.md describing my theme
[x] an open source license has been added to LICENSE.md
[x] all metadata have been added to theme.toml
[x] screenshots have been added in the images/ folder with the required dimensions
[x] in case I'm using a customized demo via the exampleSite folder that
[x] https://example.com is set as base url in exampleSite/config.{toml, yaml, json} to avoid the abuse of unused domains
[x] I tested that my theme's demo works with the content directory of gohugoio/HugoBasicExample
[x] I tested my theme against the gohugoio/HugoBasicExample
[x] I've checked the developer tools' console in my browser for error messages
[x] in case my theme is using Hugo Pipes features like toCSS and PostCSS that I have committed the /resources directory with all generated assets, for my theme to work in the basic version of Hugo
N.B. By submitting a theme to the Hugo Themes Showcase you understand that you need to maintain your theme. If a theme demo breaks and remains broken then at some point it will be removed from the list without prior warning. If you no longer wish to maintain a theme please let us know.
New themes will usually be promoted on Hugo's official Twitter account. If you would like to be mentioned in the tweet please add your Twitter username to this submission.
Link to my Twitter account (optional): http://twitter.com/themefisher
Feel free to ask questions. We're glad to help.
Hello Mehedi,
thanks for your / Themefisher's continuous contributions to the Hugo community 👍
While starting the review I noticed that the screenshots at images/ are saved as JPEG, not PNG. Please convert the images, otherwise the build script can't find them.
cc: @somratpro
Hey @digitalcraftsman
Thanks for your quick reply, I am very sorry about it. I replaced those images.
Thanks for fixing the images. Your theme is already live on Hugo's theme site and has been promoted on Hugo's official Twitter account.
Good to see our themes in the Hugo directory. The list is now 19 templates and it's growing. Maybe you know we changed all our themes' licenses, and all of them are now released under the MIT license. Here is the announcement: https://discourse.gohugo.io/t/themefisher-s-hugo-themes-are-now-licensed-under-mit/28528 .
I have a question: when we release a new theme, can we post it as an announcement in the Hugo forum?
There's a dedicated announcement section in the forum that could be used. But I've not been active in the forum for a while, so you might want to contact a forum moderator for further questions.
|
gharchive/issue
| 2020-09-28T12:31:31 |
2025-04-01T04:34:23.511016
|
{
"authors": [
"developer-evan",
"digitalcraftsman",
"mehedi-sharif"
],
"repo": "gohugoio/hugoThemes",
"url": "https://github.com/gohugoio/hugoThemes/issues/924",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1600051726
|
Add Arberia Theme for Hugo
Add Arberia Theme for Hugo.
Demo: https://arberiatheme.netlify.app/
Repository: https://github.com/antedoro/arberia
Hi @antedoro. I am closing this pull request (PR) for now.
Feel free to submit another PR, after you have fixed the issue.
We will be glad to review and add your theme.
|
gharchive/pull-request
| 2023-02-26T13:04:12 |
2025-04-01T04:34:23.513234
|
{
"authors": [
"antedoro",
"hugo-sid"
],
"repo": "gohugoio/hugoThemesSiteBuilder",
"url": "https://github.com/gohugoio/hugoThemesSiteBuilder/pull/273",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
161842050
|
piglow: presence of both Shutdown and Close confuses
Isn't it possible to do the shutdown in Close? Is there a reason why Shutdown has to be a separate method?
I was wondering the same. Is there value in shutting down without closing?
Shutdown puts the PiGlow into a "software shutdown" mode, but it does not sever the connection to the PiGlow. Most of the methods in the PiGlow package map directly to the device's register functions according to the API specification.
@zankich can you think of a good reason why someone would want to shut down and not close the device? If you do, we should probably keep both methods, but either way, I think that we should always shut down during closing though; what do you think?
@mattetti I can imagine a case where you want to set a persistent color on your PiGlow, but do not want to have a program running in the background. Some process may wake up, set the colors to a specific pattern and then exit; or, in a more complex system, a routine opens a connection to the PiGlow and then closes the connection when it's finished. If you sent the PiGlow a Shutdown, the lights would turn off.
Hmm that's very interesting. That's a totally realistic use case IMHO and does justify the 2 methods. Maybe some documentation might clarify why there are two methods. After all both Burcu and I were surprised.
Yeah the documentation could be clearer for sure. I took a first stab at it with my PR, but there is definitely room to improve.
You can close the file descriptor and it will keep displaying the latest state. These devices are state machines; they're not going to stop working when you close the I2C connection through devfs.
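A sketch of the resulting two-method design (interface and method names assumed for illustration; not the actual goiot/devices API):
package piglowsketch
// Device captures the distinction discussed above.
type Device interface {
	// Shutdown puts the device into software-shutdown mode (LEDs off)
	// but keeps the I2C connection open for further commands.
	Shutdown() error
	// Close severs the I2C connection; the device is a state machine,
	// so it keeps displaying its last state after Close.
	Close() error
}
// setAndExit sets a persistent pattern and exits without leaving a
// process running: Close without Shutdown, so the lights stay on.
func setAndExit(d Device) error {
	// ... write the desired LED pattern here ...
	return d.Close()
}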
|
gharchive/issue
| 2016-06-23T05:19:42 |
2025-04-01T04:34:23.517048
|
{
"authors": [
"mattetti",
"rakyll",
"zankich"
],
"repo": "goiot/devices",
"url": "https://github.com/goiot/devices/issues/25",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1025946876
|
Add backward compatibility check in pyfuncserver
What this PR does / why we need it:
Add backward compatibility in pyfuncserver if user specified certain version of merlin-sdk as pyfunc dependency
Which issue(s) this PR fixes:
Does this PR introduce a user-facing change?:
None
Checklist
[x] Added unit test, integration, and/or e2e tests
[x] Tested locally
[ ] Updated documentation
[ ] Update Swagger spec if the PR introduce API changes
[ ] Regenerated Golang and Python client if the PR introduce API changes
Can you also add an end to end test case for this?
|
gharchive/pull-request
| 2021-10-14T04:55:42 |
2025-04-01T04:34:23.520586
|
{
"authors": [
"pradithya",
"tiopramayudi"
],
"repo": "gojek/merlin",
"url": "https://github.com/gojek/merlin/pull/189",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
162820293
|
Questions about the function recursion example on the page 123 of book "the go programming language"
I am reading the book "The Go Programming Language" and am confused by the example on page 123, which traverses an HTML document tree to illustrate the concept of function recursion. In the example, a slice "stack" is passed to the callee and the book explains that "the callee receives a copy of stack and will not modify the initial stack". Based on my understanding, a slice should be a reference type and should be modified by the callee.
Thanks in advance.
Not the right place to ask about this. golang.org/wiki/Questions
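For completeness, a minimal sketch (not the book's code) of the answer: the callee receives a copy of the slice header (pointer, length, capacity), so growing it with append never changes the caller's length.
package main
import "fmt"
// visit appends to its own copy of the slice header; the append may
// even reallocate the backing array, but either way the caller's
// header (and thus its length) is untouched.
func visit(stack []string) {
	stack = append(stack, "child")
	_ = stack
}
func main() {
	stack := []string{"html", "body"}
	visit(stack)
	fmt.Println(stack) // still [html body]
}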
|
gharchive/issue
| 2016-06-29T01:06:01 |
2025-04-01T04:34:23.533593
|
{
"authors": [
"Almodovar",
"adg"
],
"repo": "golang/gddo",
"url": "https://github.com/golang/gddo/issues/419",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
98121033
|
api: go1.5beta3 API check exit status 1
on Mac OS X 10.10.4:
##### API check
Error running API checker: exit status 1
Go version is "go1.5beta3", ignoring -next /Users/ajstarks/go/api/next.txt
+pkg encoding/json, method (*Decoder) More() bool
+pkg encoding/json, method (*Decoder) Token() (Token, error)
+pkg encoding/json, method (Delim) String() string
+pkg encoding/json, type Delim int32
+pkg encoding/json, type Token interface {}
+pkg runtime, type MemStats struct, GCCPUFraction float64
exit status 1
2015/07/30 05:14:15 Failed: exit status 1
https://go-review.googlesource.com/12769
|
gharchive/issue
| 2015-07-30T09:17:35 |
2025-04-01T04:34:23.534935
|
{
"authors": [
"ajstarks",
"bradfitz"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/11935",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
122410198
|
net/http: flake in TestHijackAfterCloseNotifier?
Seen here: http://build.golang.org/log/fb09069fbdc77389a13a2e191d70eb8ce01b84e9 (linux/arm64) but also locally.
CC @bradfitz
I can reproduce at least on Linux/amd64.
Notes:
0 to 3 in 2000 runs fails for me.
creating a dedicated Transport just for this test (e.g. with newClientServerTest) doesn't fix it.
disabling the CloseNotify calls doesn't fix it.
closing Response.Bodies doesn't fix it.
copying the bytes from Response.Body to ioutil.Discard (always zero, always nil io.Copy error) doesn't fix it.
no races detected
even an http.Handler of this single line still fails:
w.Header().Set("X-Addr", r.RemoteAddr)
So this has nothing to do with either CloseNotifier, nor Hijack.
There are other tests which use this same pattern of doing two requests and verifying they report the same RemoteAddr as a proxy for determining whether the connection was re-used. One example is TestHandlerSetsBodyNil_h1 and _h2. It looks identical, but I can't get it to flake.
I'm very confused.
Ah, the difference between TestHijackAfterCloseNotifier and TestHandlerSetsBodyNil_h1 is that the latter writes non-zero response bytes. And indeed, that seems to be the cause of the flakiness: if there are zero response bytes and a Content-Length of 0, the Transport sometimes creates a new connection rather than re-using the one it just replied with.
Sent https://go-review.googlesource.com/#/c/17890/1
Thanks for the quick fix :-)
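The reuse-detection pattern described above, as a standalone sketch (not the actual stdlib test code): do two requests and compare the server-observed RemoteAddr.
package main
import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)
func main() {
	// The handler echoes the client address it sees back in a header.
	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Addr", r.RemoteAddr)
	}))
	defer ts.Close()
	get := func() string {
		res, err := http.Get(ts.URL)
		if err != nil {
			panic(err)
		}
		io.Copy(io.Discard, res.Body) // drain so the connection can be reused
		res.Body.Close()
		return res.Header.Get("X-Addr")
	}
	a1, a2 := get(), get()
	fmt.Println("connection reused:", a1 == a2)
}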
|
gharchive/issue
| 2015-12-16T01:57:53 |
2025-04-01T04:34:23.540147
|
{
"authors": [
"bradfitz",
"ianlancetaylor",
"mwhudson"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/13633",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
128672388
|
tour: [Grammar error]
Context: https://tour.golang.org/methods/8
If I'm not mistaken the word "to" from the following line should be removed.
"In general, all methods on a given type to should have either value or pointer receivers, but not a mixture of both. (We'll see why over the next few pages.)".
Fixed in #13951.
|
gharchive/issue
| 2016-01-25T23:23:37 |
2025-04-01T04:34:23.541921
|
{
"authors": [
"broady",
"mvescovo"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/14091",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
142543818
|
cmd/compile: remove bounds checking for sub-slices
Please answer these questions before submitting your issue. Thanks!
What version of Go are you using (go version)?
go version devel +259b7ed 2016-03-22 00:18:31 +0000 linux/amd64
What operating system and processor architecture are you using (go env)?
Linux Ava 4.4.5-1-ARCH #1 SMP PREEMPT Thu Mar 10 07:38:19 CET 2016 x86_64 GNU/Linux
What did you do?
func a(in []byte) uint64 {
return uint64(in[0]) | uint64(in[1])<<8 | uint64(in[2])<<16 | uint64(in[3])<<24 |
uint64(in[4])<<32 | uint64(in[5])<<40 | uint64(in[6])<<48 | uint64(in[7])<<56
}
func b(p []byte) uint64 {
p = p[:16]
return a(p[8:8])
}
What did you expect to see?
No check for p[8:8]
What did you see instead?
0x001a 00026 (blah.go:10) CMPQ CX, $8
0x001e 00030 (blah.go:10) JCS $0, 39
On a side note, by inlining that function by hand, some of my code is now faster than using unsafe, so there's that.
Amazing job with the SSA branch guys.
CC @randall77 @dr2chase
I think you mean p[8:16], not p[8:8]. p[8:8] is guaranteed to panic when given to a.
But I think your general issue is still there. p[8:16] looks ok, but p[7:15] has the same extra comparison in there that p[8:8] has. After we do cap(p)>=16, we know to get rid of another cap(p)>=16, but not a cap(p)>=15.
@brtzsnr
It should be p = p[:16:len(p)], otherwise you extend p beyond its original length. This is binary.BigEndian.Uint64(), right? Why not use that instead? We might be able to do something faster than your example.
There is some more opportunity here to optimize:
v9 = Const64 <int> [16]
v8 = SliceLen <int> v7
v17 = IsSliceInBounds <bool> v9 v8
So we have v8 >= v9. Later
v25 = Const64 <int> [8]
v35 = Eq64 <bool> v25 v8
v35 cannot be true. Related to #14900.
I think this is fixed after https://go-review.googlesource.com/#/c/21008/
Yep, I just confirmed, closing.
|
gharchive/issue
| 2016-03-22T04:12:59 |
2025-04-01T04:34:23.548506
|
{
"authors": [
"OneOfOne",
"brtzsnr",
"ianlancetaylor",
"randall77"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/14905",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
182027324
|
Spec inaccuracy: Handling panics
The return value of recover is nil if any of the following conditions holds:
* panic's argument was nil;
* the goroutine is not panicking;
* recover was not called directly by a deferred function.
In fact, the third condition is just a special case of the second one. I don't know whether this description is OK or not.
The third condition is not a special case of the second condition. Consider
package main
import "fmt"
func F() {
fmt.Println(recover())
}
func G() {
defer F()
panic(1)
}
func H() {
defer func() {
F()
}()
panic(2)
}
func main() {
G()
H()
}
Do you want to prove the recover in H will return nil? It returns 2 instead.
@golang101, you are mistaken. https://play.golang.org/p/jabvSNqPrZ
OK, got it. I never knew this.
But why so?
For questions, see https://golang.org/wiki/Questions
@ianlancetaylor, @spenczar,
I modified the example a little, by also deferring the inner F() call.
But the recover call in this F still returns nil.
The 3 conditions described in the spec don't cover this case.
package main
import "fmt"
func F() {
fmt.Println(recover())
}
func G() {
defer F()
panic(1)
}
func H() {
defer func() {
defer F() // I modify this line
}()
panic(2)
}
func main() {
fmt.Print("G(): ")
G()
fmt.Print("H(): ")
H()
}
This is expected. Please take discussion of how panic and recover work to a forum, not the issue tracker. See https://golang.org/wiki/Questions . Thanks.
|
gharchive/issue
| 2016-10-10T14:21:55 |
2025-04-01T04:34:23.552978
|
{
"authors": [
"bradfitz",
"golang101",
"ianlancetaylor",
"spenczar"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/17399",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
223698976
|
Dynamic type assertion for interface
Please answer these questions before submitting your issue. Thanks!
What version of Go are you using (go version)?
go version go1.6.4 windows/amd64
What operating system and processor architecture are you using (go env)?
set GOARCH=amd64
set GOBIN=C:\Projects\Go\bin
set GOEXE=.exe
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\Projects\Go
set GORACE=
set GOROOT=C:\Go
set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64
set GO15VENDOREXPERIMENT=1
set CC=gcc
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0
set CXX=g++
set CGO_ENABLED=1
What did you do?
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
Please note that the structs in this example are a sample; my struct is more complex than this.
https://play.golang.org/p/_t7gx3javb
Here I have a function that I will be using a lot of times. Each time I will be passing an object of a different struct type, so I declare the parameter as interface{}.
To solve the resulting error I need to do a type assertion, but here the interface can hold any struct type, so I am looking for a way to do a dynamic type assertion based on the reflect.TypeOf value.
What did you expect to see?
Copy of the struct
What did you see instead?
Error
For questions about Go, see https://golang.org/wiki/Questions.
Oh sorry, and thanks for your info. I'm new to Go; I will post it in the golang wiki.
No, the wiki is not the place to ask questions. See the link above.
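For completeness, a sketch (not from this thread) showing that reflect can copy a value of any struct type without a per-type assertion:
package main
import (
	"fmt"
	"reflect"
)
// copyValue returns a shallow copy of in, whatever its dynamic type:
// reflect.New allocates a fresh value of the same type, Set copies it.
func copyValue(in interface{}) interface{} {
	v := reflect.ValueOf(in)
	out := reflect.New(v.Type()).Elem()
	out.Set(v)
	return out.Interface()
}
type point struct{ X, Y int }
func main() {
	p := point{1, 2}
	q := copyValue(p).(point) // assert back to the concrete type at the call site
	fmt.Println(q)
}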
|
gharchive/issue
| 2017-04-24T04:01:14 |
2025-04-01T04:34:23.560133
|
{
"authors": [
"RajeshKumar1990",
"bradfitz"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/20093",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
317506744
|
cmd/link: include DWARF declaration position for types
The linker should include the declaration position for types so that we can use this information in editors and disambiguate where the types come from.
The easiest way to replicate the confusion is to have a project with multiple main packages and to try to go to a specific type declared in both of those main packages. In this case, the type carries only the main.TypeName information, so an editor would be forced to implement additional logic to infer where the main package is located in the binary.
Thank you.
Thank you for your quick reply. I realize that this is not necessarily a priority in the big picture. Would producing the package import path, for example github.com/dlsniper/demo/cmd/pkg.Type, be a viable workaround? This would allow consistency with all other package names/paths.
This isn't my area of expertise, but I believe that main is special and doesn't actually have an import path at all, since it's never imported. If it did, that would be a good solution.
Does DW_AT_compilation_dir help at all?
|
gharchive/issue
| 2018-04-25T07:22:37 |
2025-04-01T04:34:23.562774
|
{
"authors": [
"dlsniper",
"heschik"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/25064",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
376052251
|
semver versioning of tools used in the build
Modules solves the versioning issue for dependencies that are packages.
There does not seem to be a solution for tools used in the build process.
For example if I create build scripts that use go2xunit I must still use go get to install it.
If a new version of those projects is released it could break my build if "go get" is part of the build script.
If it is not part of the build script then it must be part of the build environment which makes builds environmentally sensitive. This defers but does not eliminate the problem. My build might not be reproducible if I set up a new environment in the future (using go get at that point).
Also raised on Stack Overflow, though that question seeks a practical solution for an earlier version of Go. This ticket is for Go itself.
Dup of https://github.com/golang/go/issues/25922 and https://github.com/golang/go/issues/27653
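For reference, the approach those issues converged on is the tools.go convention. A sketch (assuming go2xunit's canonical import path): a build-tagged file imports each tool so its version is pinned in go.mod.
//go:build tools
// +build tools

// Package tools records build-tool dependencies (such as go2xunit) in
// go.mod, so a fresh environment installs the pinned version with
// "go install github.com/tebeka/go2xunit" instead of a floating "go get".
package tools

import (
	_ "github.com/tebeka/go2xunit"
)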
|
gharchive/issue
| 2018-10-31T16:30:44 |
2025-04-01T04:34:23.565485
|
{
"authors": [
"KantarBruceAdams",
"myitcv"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/28512",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
394763787
|
path/filepath: EvalSymlinks fails when target is file
What version of Go are you using (go version)?
go version devel +d459962967 Fri Dec 28 22:14:11 2018 +0000 windows/amd64
Does this issue reproduce with the latest release?
No. This bug is not present in go1.11.
What operating system and processor architecture are you using (go env)?
go env Output
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\Alex\AppData\Local\go-build
set GOEXE=.exe
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=c:\users\alex\dev
set GOPROXY=
set GORACE=
set GOROOT=c:\users\alex\dev\go
set GOTMPDIR=
set GOTOOLDIR=c:\users\alex\dev\go\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\Alex\AppData\Local\Temp\go-build894807034=/tmp/go-build -gno-record-gcc-switches
What did you do?
I run this test
package main_test
import (
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strings"
"testing"
)
func TestNTNamespaceSymlink(t *testing.T) {
tmpdir, err := ioutil.TempDir("", "TestNTNamespaceSymlink")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpdir)
tmpfile := filepath.Join(tmpdir, "file")
err = ioutil.WriteFile(tmpfile, []byte(""), 0666)
if err != nil {
t.Fatal(err)
}
vol := filepath.VolumeName(tmpdir)
output, err := exec.Command("cmd", "/c", "mountvol", vol, "/L").CombinedOutput()
if err != nil {
t.Fatalf("failed to run mountvol %v /L: %v %q", vol, err, output)
}
target := strings.Trim(string(output), " \n\r")
target = target + tmpfile[3:]
link := filepath.Join(tmpdir, "link")
output, err = exec.Command("cmd", "/c", "mklink", link, target).CombinedOutput()
if err != nil {
t.Fatalf("failed to run mklink %v %v: %v %q", link, target, err, output)
}
got, err := filepath.EvalSymlinks(link)
if err != nil {
t.Fatal(err)
}
if want := tmpfile; got != want {
t.Errorf(`EvalSymlinks(%q): got %q, want %q`, link, got, want)
}
}
What did you expect to see?
I expect the test to PASS.
What did you see instead?
=== RUN TestNTNamespaceSymlink
--- FAIL: TestNTNamespaceSymlink (0.08s)
a_test.go:41: The system cannot find the path specified.
FAIL
I can make the test pass if I use a commit before b32ee0a3c004d4ef79d92bd63200008456da50f3.
/cc @Gnouc and @ianlancetaylor
Alex
Can you make sure mklink created a valid link?
Oh, it looks like a link to me. If I remove the defer os.RemoveAll(tmpdir) line, then I can look around and everything:
C:\>cd %TMP%
C:\Users\Alex\AppData\Local\Temp>dir
Volume in drive C has no label.
Volume Serial Number is 9012-A870
Directory of C:\Users\Alex\AppData\Local\Temp
29/12/2018 05:43 PM <DIR> .
29/12/2018 05:43 PM <DIR> ..
29/12/2018 10:16 AM <DIR> 3548C99F-693C-4805-99DA-8F47DEEEC977
0 File(s) 0 bytes
3 Dir(s) 299,225,292,800 bytes free
C:\Users\Alex\AppData\Local\Temp>u:\test
--- FAIL: TestNTNamespaceSymlink (0.09s)
a_test.go:41: The system cannot find the path specified.
FAIL
C:\Users\Alex\AppData\Local\Temp>dir TestNTNamespaceSymlink535467779
Volume in drive C has no label.
Volume Serial Number is 9012-A870
Directory of C:\Users\Alex\AppData\Local\Temp\TestNTNamespaceSymlink535467779
29/12/2018 05:43 PM <DIR> .
29/12/2018 05:43 PM <DIR> ..
29/12/2018 05:43 PM 0 file
29/12/2018 05:43 PM <SYMLINK> link [\\?\Volume{ea961e77-0000-0000-0000-501f00000000}\Users\Alex\AppData\Local\Temp\TestNTNamespaceSymlink535467779\file]
2 File(s) 0 bytes
2 Dir(s) 299,224,088,576 bytes free
C:\Users\Alex\AppData\Local\Temp>type TestNTNamespaceSymlink535467779\link
C:\Users\Alex\AppData\Local\Temp>
The error message indicates that the error is not syscall.ENOTDIR
The error message is syscall.ENOTDIR.
If you look in the $GOPATH/src/syscall/zerrors_windows.go file, you will see that syscall.ENOTDIR is actually Windows ERROR_PATH_NOT_FOUND. And the ERROR_PATH_NOT_FOUND error text is "The system cannot find the path specified." Search for ERROR_PATH_NOT_FOUND in https://docs.microsoft.com/en-us/windows/desktop/debug/system-error-codes--0-499-
Alex
@alexbrainman I tested it myself, and as I said above, the generated link is invalid. I don't have a Go environment set up on Windows, so I built this file and ran it on a VM:
package main
import (
"io/ioutil"
"log"
"os/exec"
"path/filepath"
"strings"
)
func main() {
tmpdir, err := ioutil.TempDir("", "TestNTNamespaceSymlink")
if err != nil {
log.Fatal(err)
}
println(tmpdir)
tmpfile := filepath.Join(tmpdir, "file")
err = ioutil.WriteFile(tmpfile, []byte(""), 0666)
if err != nil {
log.Fatal(err)
}
vol := filepath.VolumeName(tmpdir)
output, err := exec.Command("cmd", "/c", "mountvol", vol, "/L").CombinedOutput()
if err != nil {
log.Fatalf("failed to run mountvol %v /L: %v %q", vol, err, output)
}
target := strings.Trim(string(output), " \n\r")
println(tmpfile)
target = target + tmpfile[3:]
link := filepath.Join(tmpdir, "link")
output, err = exec.Command("cmd", "/c", "mklink", link, target).CombinedOutput()
if err != nil {
log.Fatalf("failed to run mklink %v %v: %v %q", link, target, err, output)
}
got, err := filepath.EvalSymlinks(link)
if err != nil {
log.Fatal(err)
}
if want := tmpfile; got != want {
log.Fatalf(`EvalSymlinks(%q): got %q, want %q`, link, got, want)
}
}
When I open Explorer and double-click the link, a popup is shown saying the shortcut is missing.
@alexbrainman I think the best way is to fix evalSymlinksUsingGetFinalPathNameByHandle to return an error for a path like C:\path\to\existing_file\, so we can remove the symlinkOrDir hack.
If yes, then if we remove the symlinkOrDir check, how can we solve the problem with the C:\path\to\existing_file\ case?
I don't have a solution at this moment. Maybe leave Windows out of your https://go-review.googlesource.com/c/go/+/155597 altogether. Maybe revert the CL 155597 symlink_windows.go changes and move TestIssue29372 into a non-Windows test file.
I think we should just revert CL 155597 for now.
Alex
@alexbrainman I did it in CL 155997; the builder failed due to a timeout problem.
@ianlancetaylor gentle ping
@ianlancetaylor With a path like /path/to/existing_file/, walkSymlinks will process each component of the path, first from /path, then /path/to and so on.
If any component of the part [is not symlink or directory],(https://github.com/golang/go/blob/master/src/path/filepath/symlink.go#L81) then the part is broken.
Now, on Windows, if walkSymlinks return an error, then evalSymlinksUsingGetFinalPathNameByHandle is used to check the path again. I'm not sure why we do that (I'm not familiar with Windows), but internally, evalSymlinksUsingGetFinalPathNameByHandle uses GetFinalPathNameByHandle, which somehow normalize /path/to/existing_file/ to become /path/to/existing_file, and return success.
I just check from cmd and Powershell on Windows, if you did:
C:\ ls C:\existing_dir\existing_file\
You will get path not found error.
The point is: if walkSymlinks returns syscall.ENOTDIR, there is nothing more to be done. In case of other errors, evalSymlinksUsingGetFinalPathNameByHandle is used as before.
@Gnouc OK, thanks.
Is there some way that we can fix 155997 so that we don't have to add more special cases for Windows? If we need a special test for "/." (and perhaps "/..") then let's do that.
@ianlancetaylor
Is there some way that we can fix 155997 so that we don't have to add more special cases for Windows?
I don't think there's any other way, except checking whether walkSymlinks returns syscall.ENOTDIR or not.
If we don't want special cases for Windows, then just leave the Windows behavior as it was before 155597. If we go that way, we must accept that EvalSymlinks("/path/to/existing_file/") returns an error on *nix, but EvalSymlinks("C:\path\to\existing_file\") succeeds on Windows.
Can we simply check explicitly for the cases that make a difference? Like check for a trailing slash and see whether it is a file?
Can we simply check explicitly for the cases that make a difference? Like check for a trailing slash and see whether it is a file?
Good idea. Here is my attempt https://go-review.googlesource.com/c/go/+/156398
Alex
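For readers following along, a minimal sketch of that trailing-slash check (a hypothetical standalone helper, not the code from the CL):

package main

import (
    "fmt"
    "os"
    "strings"
)

// endsInSlashButIsFile reports whether path has a trailing separator yet
// names a regular file, which is the case EvalSymlinks should reject.
func endsInSlashButIsFile(path string) (bool, error) {
    if !strings.HasSuffix(path, "/") && !strings.HasSuffix(path, `\`) {
        return false, nil
    }
    fi, err := os.Lstat(strings.TrimRight(path, `/\`))
    if err != nil {
        return false, err
    }
    return !fi.IsDir(), nil
}

func main() {
    bad, err := endsInSlashButIsFile(`C:\path\to\existing_file\`)
    fmt.Println(bad, err)
}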
@alexbrainman Your CL does the same thing as mine: if the error from walkSymlinks is syscall.ENOTDIR, just return; other errors are passed to evalSymlinksUsingGetFinalPathNameByHandle as before.
Also why named slashAfterFilePathError instead of plain syscall.ENOTDIR?
Because a slashAfterFilePathError error for a path like /path/to/existing_file/foo/bar would look weird.
|
gharchive/issue
| 2018-12-29T05:44:53 |
2025-04-01T04:34:23.582334
|
{
"authors": [
"Gnouc",
"alexbrainman",
"ianlancetaylor"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/29449",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
494821879
|
cmd/go: 'go build -compiler=gccgo' fails silently when the package path is 'm'
This program silently fails to emit output:
cd m
go build -compiler=gccgo
-- m/go.mod --
module m
-- m/hello.go --
package main
func main() { println("hello") }
Changing the module path to (seemingly) anything other than m causes the test to fail with the symptom in #30344.
If the source file is appropriately located within GOPATH/src, varying the value of GO111MODULE does not seem to make a difference.
CC @ianlancetaylor @thanm @cherrymui @jayconrod
This is happening because of the code in internal/goroot.(*gccgoDirs).isStandard that tries to determine whether a path exists in the standard library. The typical set of search directories for Go packages will include /usr/lib/x86_64-linux-gnu. Typically the file libm.so will exist in that directory. That will be enough for isStandard to decide that that standard library package exists.
The code there is a replica of what gccgo does internally. But gccgo goes on to actually open the file and look for gccgo export data. If it doesn't find any, it moves on. I think that for purposes of this code it will suffice to only look for .gox files.
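To make the failure mode concrete, here is a hedged sketch of that lookup (simplified; the real code lives in internal/goroot, and the function name here is invented):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// isStandardSketch mimics the search described above: for import path "m" it
// probes each gccgo search directory for m.gox or libm.so, so a stray
// /usr/lib/x86_64-linux-gnu/libm.so is enough to misclassify "m" as a
// standard-library package. Probing only .gox files avoids the false positive.
func isStandardSketch(searchDirs []string, pkg string) bool {
    for _, dir := range searchDirs {
        for _, name := range []string{pkg + ".gox", "lib" + pkg + ".so"} {
            if _, err := os.Stat(filepath.Join(dir, name)); err == nil {
                return true
            }
        }
    }
    return false
}

func main() {
    fmt.Println(isStandardSketch([]string{"/usr/lib/x86_64-linux-gnu"}, "m"))
}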
|
gharchive/issue
| 2019-09-17T19:57:08 |
2025-04-01T04:34:23.585663
|
{
"authors": [
"bcmills",
"ianlancetaylor"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/34358",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
512775463
|
x/build: frequent timeouts running js-wasm TryBots
I've noticed that the js-wasm TryBots on my recent CLs seem to start with one or more spurious failures before finally running successfully.
I suspect that that increases load on the TryBots generally, and may increase overall TryBot latency in some circumstances.
A log from a recent example is here (https://farmer.golang.org/temporarylogs?name=js-wasm&rev=9516a47489b92b64e9cf5633e3ab7e37def3353f&st=0xc00ab39b80):
builder: js-wasm
rev: 9516a47489b92b64e9cf5633e3ab7e37def3353f
buildlet: (nil *buildlet.Client)
started: 2019-10-26 01:37:51.137407213 +0000 UTC m=+287652.939853079
ended: 2019-10-26 01:38:21.20458221 +0000 UTC m=+287683.007028020
success: false
Events:
2019-10-26T01:37:51Z ask_maintner_has_ancestor
2019-10-26T01:38:21Z finish_ask_maintner_has_ancestor after 30s; err=context deadline exceeded
Build log:
Error checking whether commit 9516a47489b92b64e9cf5633e3ab7e37def3353f includes ancestor 3dced519cbabc213df369d9112206986e62687fa: context deadline exceeded
Error: context deadline exceeded
CC @dmitshur @toothrot @bradfitz
One change was that the builder was recently bumped to Node 13.
I think that's unrelated. When I've seen this, the error is also as Bryan quoted above: a timeout doing a gRPC call to maintner, asking whether we should even do this build. We used to need to skip builds for js/wasm if the git history of the rev to be built didn't include js-wasm support. But now that we've supported js/wasm for a number of releases (since Go 1.11) and we don't even support Go 1.10 (or even Go 1.11) any more, we can just remove that condition on that builder.
But also: maintner shouldn't be so slow at that query.
Not very scientific, but the has-ancestor RPCs are taking only 100ms for me. (includes TCP+TLS setup, etc) Not anywhere near 30 seconds.
$ time maintq has-ancestor 0ae9389609f23dc905c58fc2ad7bcc16b770f337 3dced519cbabc213df369d9112206986e62687fa
has_ancestor:true
real 0m0.105s
user 0m0.076s
sys 0m0.028s
@dmitshur - Would you be able to check if this is still happening?
@dmitshur Do you know if this is still an issue?
I feel like I've seen some discussion about this still happening occasionally quite recently, more so than the comments in this issue would suggest. Perhaps it was in another thread. I'll keep an eye on this.
Yes, it is still happening. Last week I caught the js-wasm trybot taking 25 minutes when most were done in 15,
and just now I caught it taking 34 minutes when everything else was done in 16.
That one was https://go-review.googlesource.com/c/go/+/266357 and I gave up waiting and just submitted the CL.
Any advice about what to do when js-wasm looks like it is stuck/very slow and how to debug further would be greatly appreciated.
Any advice about what to do when js-wasm looks like it is stuck/very slow and how to debug further would be greatly appreciated.
I suggest trying to determine if there is a specific place where the build progresses more slowly than others. I.e., is it perhaps that time to first test is delayed, then it's fast, or is one of the package tests very slow, while the rest is fast, or is slowness equally distributed across all packages?
I'll also try to look into logs for the two occurrences you've shared and see if I can determine it from that.
From CL 266374 where I recently started trybots:
In this instance, the js-wasm builder is in waiting_for_machine state for over 10 minutes. That is very unexpected—it's a GCP builder that we should be able to spin up without running out of physical machines. Perhaps we're running out of some quota. Not sure why only js-wasm and not other GCP builders. But this is a lead.
In several occurrences that I saw, the js-wasm TryBot started running and partially completed, then descheduled and started over again from the beginning. I think there is some sort of failure being buried by retry logic, rather than just a slow test.
If we don't have the bandwidth to diagnose this at the moment, perhaps we should temporarily demote js-wasm to an opt-in SlowBot?
I'm also experiencing js-wasm slowdowns for my trybot runs. Here is an example:
Log (https://farmer.golang.org/temporarylogs?name=js-wasm&rev=ff065bd219cd7a9df01466e338b28f1891671da3&st=0xc0184e91e0):
builder: js-wasm
rev: ff065bd219cd7a9df01466e338b28f1891671da3
buildlet: (nil *buildlet.Client)
started: 2020-10-30 16:46:48.188839196 +0000 UTC m=+727935.299326955
status: still running
Events:
2020-10-30T16:46:48Z checking_for_snapshot
2020-10-30T16:46:48Z finish_checking_for_snapshot after 23.9ms
2020-10-30T16:46:48Z get_buildlet
+2093.2s (now)
Build log:
(buildlet still starting; no live streaming. reload manually to see status)
Seems as though it is not getting past "get buildlet"
I don't know much about farmer, but on https://farmer.golang.org/#sched it seems to me as if multiple trybot runs are waiting on the js-wasm tests, but I can only see very few buildlets with the prefix GCE VM: buildlet-js-wasm-. Is something preventing these buildlets from spawning?
Is something preventing these buildlets from spawning?
It seems so. I don't have an idea about what it might be yet. It may be related to #42285. I'm going to look more later today.
I extracted some raw data from some Wasm trybot runs, which I was going to analyze more and post something here, but then this work got pre-empted by other work, and by now I've lost it due to a forced browser update.
From memory, the high-level summary was that the Wasm trybot runs are generally quick, with two components that contribute a fairly significant proportion of the overall time:
test directory
reboot test
There's a commonly used dist-test adjust policy fasterTrybots that skips precisely those two components during pre-submit tests (but not during post-submit tests), which is an option available to speed up Wasm trybots.
However, it seems the root problem here was with the scheduling of the Wasm builders and starting the trybot runs, not the length of trybot test execution itself under a typical "happy" run. There are many possible places to look next. As @bcmills also pointed out in #42699, there is a problem of stalls sometimes resulting in extended test executions that eventually complete instead of what should be failures. Stalls could be due to something Wasm-specific (e.g., #31282) or something more general. We also have some possible blindspots on the side of the coordinator (e.g., #39349, #39665). As I've recently experienced during https://github.com/golang/go/issues/42379#issuecomment-722054437, it's possible for stalls to happen when a test execution is sharded and not happen during a sequential all.bash test run.
All this is to say that more work needs to be done to get closer to finding and resolving the root issue(s) here. I think we have confirmation that this problem is intermittent and skipping a deterministically slow test is not an available option.
Is anyone still observing this issue?
Is something preventing these buildlets from spawning?
A new idea by now is that it might've been hitting the GCE VM quota. Some of those quotas are visible in the first line of https://farmer.golang.org/#pools.
If there are no frequent timeouts happening by now, perhaps we should close this.
Sounds good, closing since I haven't seen reports of this in a long while, can reopen or file another issue if this comes up again.
From looking at the initial report, there's also a chance that the same problem as in issue #55947 (fixed as of Oct 2022) may have contributed here too.
|
gharchive/issue
| 2019-10-26T01:43:16 |
2025-04-01T04:34:23.602176
|
{
"authors": [
"agnivade",
"bcmills",
"bradfitz",
"dmitshur",
"neelance",
"rsc",
"thanm"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/35170",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
561011308
|
encoding/json: Encoder adds new line at the end of response which is different from how Marshal works
What version of Go are you using (go version)?
$ go version
1.13.5
Issue:
Encoder and Marshal behave differently. Refer
There is a newline added for Encoder. Refer to this code.
Where does this break:
In cases where the content length is asserted on. E.g., in gin, context.JSON uses Encoder, which adds the newline to the response, making the content length of the response one more than that of the actual payload. The convention is broken because many libraries don't read the newline, so the content length they read will be one less than the Content-Length sent in the response.
This fails in HTTP clients due to the difference between the actual content length and the Content-Length header.
Refer this issue
What did you expect to see?
Encoder and Marshal should result in the exact same JSON string.
What did you see instead?
An additional line added in response when encoder is used to convert object to json.
cc @dineshba @kaushikneelichetty @kishaningithub
This is working as documented.
https://golang.org/pkg/encoding/json/#Encoder.Encode
Encode writes the JSON encoding of v to the stream, followed by a newline character.
@AlexRouSg Yes, the issue is not that it's not documented, but rather that the difference breaks things in the scenario mentioned. I don't mind the newline if nothing breaks, but since it does, is it actually needed?
@cagedmantis can I raise a PR for this?
Please see https://golang.org/doc/go1compat for why this cannot be changed.
The most you can do is propose new API but the bar for that is very high, see https://golang.org/doc/faq#x_in_std
and https://github.com/golang/proposal
cc @rsc @dsnet @bradfitz @mvdan as per owners
To give some context, the newline isn't there for pretty output. If that were the case, it would be coupled with Indent, but it isn't.
The reason is so that you can write a stream of JSON objects delimited by lines, instead of having to write an entire list of objects all at once. This is a well known way to stream objects in JSON. For example, go test -json uses this, so that it can show you test results as they happen, instead of dumping all the output at the end of the program.
Similarly, the decoder supports this kind of streaming of objects.
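As an illustration of that streaming pattern, a minimal sketch using only the standard library:

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "log"
    "strings"
)

type event struct {
    N int `json:"n"`
}

func main() {
    // Encode a stream: each Encode call writes one object plus '\n'.
    var sb strings.Builder
    enc := json.NewEncoder(&sb)
    for i := 1; i <= 3; i++ {
        if err := enc.Encode(event{N: i}); err != nil {
            log.Fatal(err)
        }
    }
    fmt.Printf("%q\n", sb.String()) // "{\"n\":1}\n{\"n\":2}\n{\"n\":3}\n"

    // Decode the stream object by object, as the bytes arrive.
    dec := json.NewDecoder(strings.NewReader(sb.String()))
    for {
        var e event
        if err := dec.Decode(&e); err == io.EOF {
            break
        } else if err != nil {
            log.Fatal(err)
        }
        fmt.Println(e.N)
    }
}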
What is your suggestion here? Like @AlexRouSg said, we can't change the behavior, even if we wanted to - we would break too many existing programs.
|
gharchive/issue
| 2020-02-06T13:25:54 |
2025-04-01T04:34:23.611015
|
{
"authors": [
"AlexRouSg",
"jpninanjohn",
"mvdan"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/37083",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
675693218
|
gollvm, libbacktrace: CMake's checks on _alloca, __alloca
From my CMakeError.log file:
Determining if the function _alloca exists failed with the following output:
Change Dir: D:/workarea/test.rel/CMakeFiles/CMakeTmp
Run Build Command(s):D:/Python38-32/Scripts/ninja.exe cmTC_44264 && [1/2] Building C object CMakeFiles\cmTC_44264.dir\CheckFunctionExists.c.obj
FAILED: CMakeFiles/cmTC_44264.dir/CheckFunctionExists.c.obj
D:\LLVM\bin\clang-cl.exe /nologo /DWIN32 /D_WINDOWS /W3 -DCHECK_FUNCTION_EXISTS=_alloca -Werror=unguarded-availability-new /MDd /Zi /Ob0 /Od /RTC1 /showIncludes /FoCMakeFiles\cmTC_44264.dir\CheckFunctionExists.c.obj /FdCMakeFiles\cmTC_44264.dir\ -c D:\CMake\share\cmake-3.18\Modules\CheckFunctionExists.c
D:\CMake\share\cmake-3.18\Modules\CheckFunctionExists.c(7,3): error: conflicting types for '_alloca'
CHECK_FUNCTION_EXISTS(void);
^
(6,31): note: expanded from here
#define CHECK_FUNCTION_EXISTS _alloca
^
D:\CMake\share\cmake-3.18\Modules\CheckFunctionExists.c(7,3): note: '_alloca' is a builtin with type 'void (unsigned long long)'
(6,31): note: expanded from here
#define CHECK_FUNCTION_EXISTS _alloca
^
D:\CMake\share\cmake-3.18\Modules\CheckFunctionExists.c(17,25): error: too few arguments to function call, expected 1, have 0
CHECK_FUNCTION_EXISTS();
2 errors generated.
ninja: build stopped: subcommand failed.
I see that it is present within Microsoft's include directories/headers: https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/alloca?view=vs-2019
And the following function shouldn't be searched for on Windows, since it never existed there (in any header file):
Determining if the function __alloca exists failed with the following output:
Change Dir: D:/workarea/test.rel/CMakeFiles/CMakeTmp
Run Build Command(s):D:/Python38-32/Scripts/ninja.exe cmTC_8b6e0 && [1/2] Building C object CMakeFiles\cmTC_8b6e0.dir\CheckFunctionExists.c.obj
[2/2] Linking C executable cmTC_8b6e0.exe
FAILED: cmTC_8b6e0.exe
cmd.exe /C "cd . && D:\CMake\bin\cmake.exe -E vs_link_exe --intdir=CMakeFiles\cmTC_8b6e0.dir --rc="C:\Program Files (x86)\WINDOW~1\10\bin\100190~1.0\x64\rc.exe" --mt="C:\Program Files (x86)\WINDOW~1\10\bin\100190~1.0\x64\mt.exe" --manifests -- D:\LLVM\bin\lld-link.exe /nologo CMakeFiles\cmTC_8b6e0.dir\CheckFunctionExists.c.obj /out:cmTC_8b6e0.exe /implib:cmTC_8b6e0.lib /pdb:cmTC_8b6e0.pdb /version:0.0 /machine:x64 /debug /INCREMENTAL /subsystem:console kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib oleaut32.lib uuid.lib comdlg32.lib advapi32.lib && cd ."
LINK Pass 1: command "D:\LLVM\bin\lld-link.exe /nologo CMakeFiles\cmTC_8b6e0.dir\CheckFunctionExists.c.obj /out:cmTC_8b6e0.exe /implib:cmTC_8b6e0.lib /pdb:cmTC_8b6e0.pdb /version:0.0 /machine:x64 /debug /INCREMENTAL /subsystem:console kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib oleaut32.lib uuid.lib comdlg32.lib advapi32.lib /MANIFEST /MANIFESTFILE:CMakeFiles\cmTC_8b6e0.dir/intermediate.manifest CMakeFiles\cmTC_8b6e0.dir/manifest.res" failed (exit code 1) with the following output:
lld-link: error: undefined symbol: __alloca
referenced by D:\CMake\share\cmake-3.18\Modules\CheckFunctionExists.c:17
CMakeFiles\cmTC_8b6e0.dir\CheckFunctionExists.c.obj:(main)
ninja: build stopped: subcommand failed.
What did you expect to see?
-- Looking for _alloca
-- Looking for _alloca - found
-- Looking for __alloca
-- Looking for __alloca - not found
or, if being more precise:
-- Looking for _alloca
-- Looking for _alloca - found
What did you see instead?
-- Looking for _alloca
-- Looking for _alloca - not found
-- Looking for __alloca
-- Looking for __alloca - not found
Ivan
Currently gollvm does not support Windows, so it is expected not to work. If you want to work on it, that is great. Otherwise there is no need to file a bug report, as it is technically not a bug. Thanks.
|
gharchive/issue
| 2020-08-09T12:39:41 |
2025-04-01T04:34:23.623763
|
{
"authors": [
"advancedwebdeveloper",
"cherrymui"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/40658",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
740101745
|
net: expose LOWER_UP flag in NICs
What version of Go are you using (go version)?
$ go version 1.15.4
Does this issue reproduce with the latest release?
Yes
What operating system and processor architecture are you using (go env)?
go env Output
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/x1unix/.cache/go-build"
GOENV="/home/x1unix/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/home/x1unix/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/x1unix/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/lib/go"
GOSUMDB="off"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS="-w"
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build478690309=/tmp/go-build -gno-record-gcc-switches"
What did you do?
Hello. I want to be able to detect whether a network interface is physical or virtual.
In Linux, there is a special LOWER_UP flag that indicates that a NIC is physical, not virtual.
Currently, the net package exposes only a few flags, but not the LOWER_UP flag:
const (
    FlagUp Flags = 1 << iota // interface is up
    FlagBroadcast            // interface supports broadcast access capability
    FlagLoopback             // interface is a loopback interface
    FlagPointToPoint         // interface belongs to a point-to-point link
    FlagMulticast            // interface supports multicast access capability
)
LOWER_UP is described as IFF_LOWER_UP at include/uapi/linux/if.h
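As a stopgap, a hedged workaround sketch (Linux-only; it assumes the sysfs flags attribute reflects the kernel's dev_get_flags, which sets IFF_LOWER_UP when the carrier is up, and "eth0" is just an example):

package main

import (
    "fmt"
    "log"
    "os"
    "strconv"
    "strings"
)

const iffLowerUp = 1 << 16 // IFF_LOWER_UP from include/uapi/linux/if.h

// lowerUp reads the raw interface flags from sysfs. Note that the 16-bit
// ioctl path (SIOCGIFFLAGS) used by package net cannot carry this bit at
// all, since IFF_LOWER_UP does not fit in ifreq's short ifr_flags.
func lowerUp(iface string) (bool, error) {
    b, err := os.ReadFile("/sys/class/net/" + iface + "/flags")
    if err != nil {
        return false, err
    }
    flags, err := strconv.ParseUint(strings.TrimSpace(string(b)), 0, 64)
    if err != nil {
        return false, err
    }
    return flags&iffLowerUp != 0, nil
}

func main() {
    up, err := lowerUp("eth0")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(up)
}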
What did you expect to see?
Can you please add a LowerUp flag to the net package?
What did you see instead?
cc @odeke-em
/cc @bradfitz @ianlancetaylor
LOWER_UP is described as IFF_LOWER_UP at include/uapi/linux/if.h
Hi, the Uroot team also has an ip implementation at https://github.com/u-root/u-root/tree/master/cmds/core/ip which at this point uses net.Flags. This means we're restricted to displaying only the 5 flags currently supported. While we could read the raw flags directly and do all the parsing ourselves, it seems like it would be much nicer to have the official net package support it.
I'm not clear if this should be in the net package or the x/net package.
I don't see any difficulty to adding it to the x/net package.
The existing type is in the net package, so I think any additions would have to go there: https://cs.opensource.google/go/go/+/master:src/net/interface.go;l=30-46;drc=58e381b0b22352dda355f6d95fa101b773766c72?ss=go
Interface could also potentially have a RawFlags method that returns the underlying OS-specific flags, though I'm not sure if that would be useful to folks, or if having the OS-neutral flags is the main benefit here.
@prattmic is it possible to introduce a flag with an OS-neutral name but the same purpose?
Windows also allows checking whether a NIC is virtual or physical, so this solution could be cross-platform.
@x1unix I'm not really the best person to answer that question, as I'm not particularly involved in net. That sounds reasonable to me (net.Flags is in fact an OS-neutral type). I don't think it would work for @GanShun's use case, which IIUC, depends on net.Flags.String() returning the Linux name, but perhaps that is just out of scope.
|
gharchive/issue
| 2020-11-10T17:25:59 |
2025-04-01T04:34:23.637066
|
{
"authors": [
"GanShun",
"cagedmantis",
"ianlancetaylor",
"networkimprov",
"prattmic",
"x1unix"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/42488",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
741421441
|
net/http: bundled x/net/http2.responseWriter hangs forever
What version of Go are you using (go version)?
go version go1.15.4 linux/amd64
Does this issue reproduce with the latest release?
Yes
What operating system and processor architecture are you using (go env)?
go env Output
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/dionysius/.cache/go-build"
GOENV="/home/dionysius/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/home/dionysius/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/dionysius/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/lib/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build977072123=/tmp/go-build -gno-record-gcc-switches"
What did you do?
So, let's begin
High-Level Problem Description
I'm creating a reverse proxy with Caddy Server and my own plugin. I use HTTP/2 and Server Push. Sometimes requests hang forever. Here is a screenshot from Chrome DevTools:
Low-Level Problem Description
So, I started to debug this situation. I found that my code execution got stuck at (http.responseWriter).Write(), which is an instance of http2responseWriter.
With the help of pprof I found that the lockup happens in two functions: http2serverConn.writeHeaders and http2serverConn.writeDataFromHandler - endless waiting for data from the done channel.
Here is an illustration from pprof:
Next I built Go from source, added some debug messages, and started to dive deeper.
I found a problem with how frames are sent to the output. At this line N frames were pushed: https://github.com/golang/go/blob/go1.15.4/src/net/http/h2_bundle.go#L4692. After the push function, the scheduleFrameWrite function is called. I looked into it and found that it often exits here: https://github.com/golang/go/blob/go1.15.4/src/net/http/h2_bundle.go#L4817. And only M (M < N) frames were popped from the queue here: https://github.com/golang/go/blob/go1.15.4/src/net/http/h2_bundle.go#L4837
Pushed Frames
2020/11/12 12:12:35 push writeFrame for streamID=0 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=26 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=26 size=15169
2020/11/12 12:12:35 push writeFrame for streamID=28 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=26 size=0
2020/11/12 12:12:35 push writeFrame for streamID=28 size=5566
2020/11/12 12:12:35 push writeFrame for streamID=30 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=28 size=0
2020/11/12 12:12:35 push writeFrame for streamID=30 size=77162
2020/11/12 12:12:35 push writeFrame for streamID=32 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=32 size=354
2020/11/12 12:12:35 push writeFrame for streamID=30 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=34 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=34 size=483
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=38 size=0
2020/11/12 12:12:35 push writeFrame for streamID=36 size=0
2020/11/12 12:12:35 push writeFrame for streamID=38 size=27485
2020/11/12 12:12:35 push writeFrame for streamID=40 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=36 size=6726
2020/11/12 12:12:35 push writeFrame for streamID=40 size=6293
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=42 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=44 size=0
2020/11/12 12:12:35 push writeFrame for streamID=42 size=11249
2020/11/12 12:12:35 push writeFrame for streamID=46 size=0
2020/11/12 12:12:35 push writeFrame for streamID=40 size=0
2020/11/12 12:12:35 push writeFrame for streamID=48 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=44 size=9293
2020/11/12 12:12:35 push writeFrame for streamID=46 size=2626
2020/11/12 12:12:35 push writeFrame for streamID=48 size=346
2020/11/12 12:12:35 push writeFrame for streamID=119 size=6381
2020/11/12 12:12:35 push writeFrame for streamID=42 size=0
2020/11/12 12:12:35 push writeFrame for streamID=44 size=0
2020/11/12 12:12:35 push writeFrame for streamID=36 size=0
2020/11/12 12:12:35 push writeFrame for streamID=119 size=0
2020/11/12 12:12:35 push writeFrame for streamID=38 size=0
2020/11/12 12:12:36 push writeFrame for streamID=121 size=0
2020/11/12 12:12:36 push writeFrame for streamID=121 size=1021
2020/11/12 12:12:36 push writeFrame for streamID=123 size=0
2020/11/12 12:12:36 push writeFrame for streamID=125 size=0
2020/11/12 12:12:36 push writeFrame for streamID=127 size=0
2020/11/12 12:12:36 push writeFrame for streamID=129 size=0
2020/11/12 12:12:36 push writeFrame for streamID=131 size=0
2020/11/12 12:12:36 push writeFrame for streamID=133 size=0
2020/11/12 12:12:36 push writeFrame for streamID=135 size=0
2020/11/12 12:12:36 push writeFrame for streamID=137 size=0
2020/11/12 12:12:36 push writeFrame for streamID=139 size=0
2020/11/12 12:12:36 push writeFrame for streamID=141 size=0
2020/11/12 12:12:36 push writeFrame for streamID=143 size=0
2020/11/12 12:12:36 push writeFrame for streamID=145 size=0
2020/11/12 12:12:36 push writeFrame for streamID=147 size=0
2020/11/12 12:12:36 push writeFrame for streamID=149 size=0
2020/11/12 12:12:36 push writeFrame for streamID=151 size=0
2020/11/12 12:12:36 push writeFrame for streamID=153 size=0
2020/11/12 12:12:36 push writeFrame for streamID=155 size=0
2020/11/12 12:12:36 push writeFrame for streamID=157 size=0
2020/11/12 12:12:36 push writeFrame for streamID=159 size=0
2020/11/12 12:12:36 push writeFrame for streamID=161 size=0
2020/11/12 12:12:36 push writeFrame for streamID=163 size=0
2020/11/12 12:12:36 push writeFrame for streamID=165 size=0
2020/11/12 12:12:36 push writeFrame for streamID=167 size=0
2020/11/12 12:12:36 push writeFrame for streamID=169 size=0
2020/11/12 12:12:36 push writeFrame for streamID=171 size=0
2020/11/12 12:12:36 push writeFrame for streamID=173 size=0
2020/11/12 12:12:36 push writeFrame for streamID=175 size=0
2020/11/12 12:12:36 push writeFrame for streamID=201 size=0
2020/11/12 12:12:36 push writeFrame for streamID=179 size=0
2020/11/12 12:12:36 push writeFrame for streamID=181 size=0
2020/11/12 12:12:36 push writeFrame for streamID=183 size=0
2020/11/12 12:12:36 push writeFrame for streamID=185 size=0
2020/11/12 12:12:36 push writeFrame for streamID=187 size=0
2020/11/12 12:12:36 push writeFrame for streamID=207 size=0
2020/11/12 12:12:36 push writeFrame for streamID=203 size=0
2020/11/12 12:12:36 push writeFrame for streamID=209 size=0
2020/11/12 12:12:36 push writeFrame for streamID=205 size=0
2020/11/12 12:12:36 push writeFrame for streamID=189 size=0
2020/11/12 12:12:36 push writeFrame for streamID=213 size=0
2020/11/12 12:12:36 push writeFrame for streamID=211 size=0
2020/11/12 12:12:36 push writeFrame for streamID=221 size=0
2020/11/12 12:12:36 push writeFrame for streamID=193 size=0
2020/11/12 12:12:36 push writeFrame for streamID=191 size=0
2020/11/12 12:12:36 push writeFrame for streamID=197 size=0
2020/11/12 12:12:36 push writeFrame for streamID=199 size=0
2020/11/12 12:12:36 push writeFrame for streamID=195 size=0
2020/11/12 12:12:36 push writeFrame for streamID=215 size=0
2020/11/12 12:12:36 push writeFrame for streamID=217 size=0
2020/11/12 12:12:36 push writeFrame for streamID=223 size=0
2020/11/12 12:12:36 push writeFrame for streamID=219 size=0
2020/11/12 12:12:36 push writeFrame for streamID=225 size=0
2020/11/12 12:12:36 push writeFrame for streamID=227 size=0
2020/11/12 12:12:36 push writeFrame for streamID=123 size=3464
2020/11/12 12:12:36 push writeFrame for streamID=225 size=312351
2020/11/12 12:12:36 push writeFrame for streamID=225 size=0
2020/11/12 12:12:36 push writeFrame for streamID=229 size=0
2020/11/12 12:12:36 push writeFrame for streamID=231 size=0
2020/11/12 12:12:37 push writeFrame for streamID=177 size=0
Popped Frames
2020/11/12 12:12:35 sched pop writeFrame for streamID=0 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=26 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=26 size=15169
2020/11/12 12:12:35 sched pop writeFrame for streamID=28 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=26 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=28 size=5566
2020/11/12 12:12:35 sched pop writeFrame for streamID=30 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=28 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=30 size=16384
2020/11/12 12:12:35 sched pop writeFrame for streamID=30 size=16384
2020/11/12 12:12:35 sched pop writeFrame for streamID=30 size=16384
2020/11/12 12:12:35 sched pop writeFrame for streamID=30 size=16384
2020/11/12 12:12:35 sched pop writeFrame for streamID=30 size=11626
2020/11/12 12:12:35 sched pop writeFrame for streamID=32 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=32 size=354
2020/11/12 12:12:35 sched pop writeFrame for streamID=30 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=34 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=38 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=34 size=483
2020/11/12 12:12:35 sched pop writeFrame for streamID=36 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=40 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=38 size=16384
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=40 size=6293
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=42 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=44 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=42 size=11249
2020/11/12 12:12:35 sched pop writeFrame for streamID=46 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=48 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=44 size=9293
2020/11/12 12:12:35 sched pop writeFrame for streamID=48 size=346
2020/11/12 12:12:35 sched pop writeFrame for streamID=46 size=2626
2020/11/12 12:12:35 sched pop writeFrame for streamID=36 size=6726
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=6381
2020/11/12 12:12:35 sched pop writeFrame for streamID=38 size=11101
2020/11/12 12:12:35 sched pop writeFrame for streamID=40 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=42 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=44 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=36 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=119 size=0
2020/11/12 12:12:35 sched pop writeFrame for streamID=38 size=0
2020/11/12 12:12:36 sched pop writeFrame for streamID=121 size=0
2020/11/12 12:12:36 sched pop writeFrame for streamID=121 size=1021
2020/11/12 12:12:36 sched pop writeFrame for streamID=123 size=0
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=0
2020/11/12 12:12:36 sched pop writeFrame for streamID=123 size=3464
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=16384
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=1055
2020/11/12 12:12:36 sched pop writeFrame for streamID=225 size=0
What did you expect to see?
No lockups.
What did you see instead?
Random lockups.
Possible related issue https://github.com/golang/go/issues/23559
cc @fraenkel
@dtelyukh We are going to need something that can reproduce the issue. It would also help to enable http2 debug and a thread dump when it hangs.
Your title says the bundled version of http2 has this issue. Are you implying that if you use the latest x/net/http2 you don't?
@dtelyukh We are going to need something that can reproduce the issue. It would also help to enable http2 debug and a thread dump when it hangs.
It's not easy to prepare code that reproduces this problem with a 100% guarantee, but I think I can try.
Here is debug.log for GODEBUG=http2debug=2
http2.debug.log
and goroutines dump with pprof
goroutine.dump.log
Your title says the bundled version of http2 has this issue. Are you implying that if you use the latest x/net/http2 you don't?
h2_bundle.go is used by third-party code. I didn't try to use x/net/http2 directly. Do you mean that I should do that?
Don't worry. I am going to need something that can reproduce this issue.
I can see that nothing is making progress, but I don't know why.
The debug log is incomplete or slightly broken, but from what I do see there is an oddity.
2020/11/13 10:44:00 http2: Framer 0xc000af61c0: read HEADERS flags=END_STREAM|END_HEADERS|PRIORITY stream=1511 len=21
2020/11/13 10:44:00 http2: Framer 0xc000af61c0: wrote PUSH_PROMISE flags=END_HEADERS stream=1511 len=293
2020/11/13 10:44:00 http2: Framer 0xc000af61c0: read PRIORITY stream=314 len=5
2020/11/13 10:44:00 http2: Framer 0xc000af61c0: wrote PUSH_PROMISE flags=END_HEADERS stream=1511 len=33
2020/11/13 10:44:00 http2: Framer 0xc000af61c0: wrote HEADERS flags=END_HEADERS stream=314 len=120
2020/11/13 10:44:00 http2: Framer 0xc000af61c0: wrote PUSH_PROMISE flags=END_HEADERS stream=1511 len=44
2020/11/13 10:44:00 http2: Framer 0xc000af61c0: wrote HEADERS flags=END_HEADERS stream=316 len=115
Notice the stream for the PUSH is 1511, but the above is the first time I see that Framer. And the rest are in the 300s. I don't exactly see how this happened.
There are multiple PUSH_PROMISE frames, all with the same stream ID, which is also odd.
I truncated the log file after each successful request until it hung. Maybe that is why the log file was broken.
I attach here another log file, which was made when I first caught the problem. This log file was never truncated.
http2.debug.log
@fraenkel, we prepared a test application for reproducing the problem. My apologies for such a complicated app. We cannot extract a small piece of code, because we don't know where exactly the problem is.
I sent credentials to michael.fraenkel@gmail.com.
Point a Chromium-based browser to https://cardonecapital.hc04.dorofeev.me/.
Press F12 and check the "Disable cache" box.
Press F5 until a request hangs.
To have a better chance of catching the problem, remove the proxy cache:
Stop Caddy Server kill -SIGTERM <caddy process id>
rm -fR /home/user/caddy-cache
./caddy run&
To patch or debug the server:
Custom cache plugin code is here: /home/user/smart-cache
Caddy Server code is here: /home/user/go/src/github.com/caddyserver/caddy
To rebuild Caddy:
cd /home/user/go/src/github.com/caddyserver/caddy/cmd/caddy
CGO_ENABLED=0 go build
mv ./caddy ~/caddy
cd
sudo setcap 'cap_net_bind_service=+ep' ./caddy
kill -SIGTERM <caddy process id>
./caddy run&
@dtelyukh I did find a way to cause the hang locally; from my machine it would never happen.
for i in {1..1000}; do echo $i; nghttp https://cardonecapital.hc04.dorofeev.me/ -n; done would eventually hang.
Once I attempted to compile a new Go, I could never get caddy to rebuild. I always ended up with a qtls init failure. While attempting to fix that, I didn't realize at the time that the src tree was a bit special, so I can no longer make any progress, since I cannot download your smart-cache module.
If you could fix the tree, at least next time I know to make a copy of the entire tree before doing anything. I am a bit concerned that a simple go mod tidy prevented any further compilation of caddy.
Never mind, I got it working again....
So one thing I did verify is that using the latest golang/x/net/http2 code does not cause the hang I see with my simple testcase.
@fraenkel, how can I help?
You can see the one-line change I made to caddyserver, with a go mod tidy. See if this new version hangs for you as well.
@fraenkel, this new version never hangs for us. We also noticed that the app became faster. 👍 Thank you!
Full page load time (with all resources) is 2% lower than with the old HTTP/2, and the median absolute deviation is 1% lower too. So it's both faster and shows more stable performance.
@fraenkel, should I close this ticket?
Yes, given there is a solution; this should be fixed in 1.16, although one should verify that is true.
@fraenkel, this bug still exists. But we found a clearer way to reproduce it.
An issue in Caddy's repository: https://github.com/caddyserver/caddy/issues/3896
How to reproduce
It depends on the proxied website and caddy config, and some random factors, thus it occurs with different frequency on different hardware. The steps are:
Add this line to /etc/hosts 127.0.0.1 terem-pro.localhost
Clone Caddy repository git clone https://github.com/caddyserver/caddy.git
Build the binary from master (a patch with the latest version of x/net is already included) cd cmd/caddy && go build
sudo setcap 'cap_net_bind_service=+ep' ./caddy
Create Caddyfile with this content
https://terem-pro.localhost {
    handle {
        reverse_proxy https://www.terem-pro.ru {
            header_up host {http.reverse_proxy.upstream.host}
        }
        push / {
            /local/components/terem/catalog.list/templates/index.best.seller/style.css
            /local/components/terem/new_services.content/templates/home.banner.lots/style.css
            /local/components/terem/slider.blocks/templates/slider.useful/style.css
            /local/components/terem/standard.blocks/templates/call.action.white/style.css
            /local/components/terem/review.list/templates/carousel.home/style.css
            /local/components/terem/standard.blocks/templates/promo.red.home/style.css
            /local/components/terem/promotion.list/templates/home.slider/style.css
            /local/components/terem/form.form/templates/template.pdf/style.css
            /local/templates/terem/components/bitrix/menu/template.header.menu.top/desktop-menu.css
            /local/components/terem/form.form/templates/template.taxi/style.css
            /assets/resources/css/home.css
            /local/templates/terem/components/bitrix/menu/template.header.menu-mobile/style_menu.css
            /local/components/terem/catalog.type.list/templates/.default/style.css
            /assets/resources/css/styles.css
            /bitrix/cache/css/s1/terem/template_ad73b02503569e1113abf0b013fdbb28/template_ad73b02503569e1113abf0b013fdbb28_v1.css?16067202133580
            /bitrix/cache/css/s1/terem/page_074396ca6d41424fe878cb365c109aa1/page_074396ca6d41424fe878cb365c109aa1_v1.css?160672023225970
        }
    }
}
Run Caddy server ./caddy run
Open developer tools network tab, to visually see the hangs; checking the "disable cache" toggle will help reproduce the problem faster, but is not necessary
Navigate to / page in a browser (i.e., https://terem-pro.localhost)
Wait till it fully loads
If it didn't hang on step 4, hit F5, and again wait till it fully loads; repeat several times if needed
On our test server, it usually hangs after 2-3 reloads. On some devices, it might require 10-15 attempts but still hangs at some point.
I reproduced a hang, but it's using the bundled http2 stack.
goroutine 5252 [select, 3 minutes]:
net/http.(*http2serverConn).writeHeaders(0xc000582900, 0xc000feec60, 0xc0003c93b0, 0xa, 0x24a6800)
/snap/go/6745/src/net/http/h2_bundle.go:5753 +0x172
net/http.(*http2responseWriterState).writeChunk(0xc000be7200, 0xc000fc1000, 0xdf, 0x1000, 0xc000fcfdf0, 0x1, 0xc0009a8180)
/snap/go/6745/src/net/http/h2_bundle.go:6020 +0x3bf
To fix it, should a recent version of the http2 library be bundled? Or does the library itself need to be fixed?
Looks like with this hack it never hangs
http2.ConfigureServer(s, nil)
The solution is to explicitly use x/net/http2.
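For reference, a minimal sketch of that workaround (the address and certificate paths are placeholders):

package main

import (
    "log"
    "net/http"

    "golang.org/x/net/http2"
)

func main() {
    srv := &http.Server{Addr: ":8443", Handler: http.DefaultServeMux}
    // Registering the external x/net/http2 package on the server prevents
    // net/http from wiring up its bundled h2_bundle.go implementation.
    if err := http2.ConfigureServer(srv, &http2.Server{}); err != nil {
        log.Fatal(err)
    }
    log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}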
The problem is solved neither by 1.16rc1 nor by 1.15.8. Easy steps for reproducing are here: https://github.com/caddyserver/caddy/issues/3896
Without explicit usage of x/net/http2 via http2.ConfigureServer(s, nil), hangs still happen.
I was able to reproduce this with the reverse proxy Traefik, and can confirm that directly calling the x/net/http2 ConfigureServer method fixes this issue. I can also confirm that this is not fixed in the Go 1.16.0 stdlib. @fraenkel any idea when we can expect the fix to make its way there?
cc @toothrot @dmitshur re possible release issue
CC @bradfitz, @tombergan, @rsc, @empijei via owners. Also CC @neild.
I hope it will help: https://github.com/golang/go/issues/45435
I believe this is fixed with Go 1.17 considering that the bundled http2 library was updated to a snapshot of x/net/http2 from May of this year.
@dtelyukh Can you give 1.17 a try with your test case (without manually using x/net/http2)?
@ReillyBrogan, I tried 1.17.3 - the problem still exists
Potentially fixed with https://github.com/golang/go/issues/49921. It's part of the Go 1.17.6 release.
This is still an issue on Go 1.18
I had the same problem using Go 1.19.
Any progress now? I had the same problems with golang.org/x/net v0.0.0-20210813160813-60bc85c4be6d on branch internal-branch.go1.18-vendor
I had the same problem using Go 1.21.6
I had the same problem using Go 1.23.1
|
gharchive/issue
| 2020-11-12T09:13:02 |
2025-04-01T04:34:23.673875
|
{
"authors": [
"ReillyBrogan",
"ReillyTevera",
"Rohsichan",
"bakape",
"divanodestiny",
"dmitshur",
"dtelyukh",
"fraenkel",
"networkimprov",
"virtyaluk",
"whitewindmills"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/42534",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
993847792
|
x/sys/unix: add wireless iwreq support from Linux kernel
Currently x/sys/unix has type Ifreq and function IoctlIfreq, but it would be nice if somebody added wireless support (linux/wireless.h), since the kernel has a couple of useful functions for that.
For example, here is a sample C code to get SSID of a wireless connection:
...
#include <linux/wireless.h>
...
struct iwreq req;
strcpy(req.ifr_ifrn.ifrn_name, argv[1]);
int fd, status;
fd = socket(AF_INET, SOCK_DGRAM, 0);
char* buffer;
buffer = calloc(32, sizeof(char));
req.u.essid.pointer = buffer;
req.u.essid.length = 32;
if (ioctl(fd, SIOCGIWESSID, &req) == -1) {
    fprintf(stderr, "Failed ESSID get on interface %s: %s\n", argv[1], strerror(errno));
} else {
    printf("%s", (char*)req.u.essid.pointer);
}
free(buffer);
...
As of now, I don't know a way to reproduce the same thing in Go, apart from using import "C" in place. Library functions for that sort of thing would be great.
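For context, the same call can be approximated in Go with a raw ioctl and a hand-packed struct, which is exactly the boilerplate a library would hide; a hedged sketch (the struct layout assumes linux/amd64, and "wlan0" is just an example):

package main

import (
    "fmt"
    "os"
    "unsafe"

    "golang.org/x/sys/unix"
)

const siocgiwessid = 0x8B1B // SIOCGIWESSID from linux/wireless.h

// iwPoint mirrors struct iw_point; the trailing pad keeps the union-sized
// field at 16 bytes on 64-bit, so the whole iwreq is 32 bytes.
type iwPoint struct {
    pointer unsafe.Pointer
    length  uint16
    flags   uint16
    _       [4]byte
}

type iwreq struct {
    name [unix.IFNAMSIZ]byte
    u    iwPoint
}

func main() {
    fd, err := unix.Socket(unix.AF_INET, unix.SOCK_DGRAM, 0)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer unix.Close(fd)

    buf := make([]byte, 32) // IW_ESSID_MAX_SIZE
    var req iwreq
    copy(req.name[:], "wlan0")
    req.u = iwPoint{pointer: unsafe.Pointer(&buf[0]), length: uint16(len(buf))}

    if _, _, errno := unix.Syscall(unix.SYS_IOCTL, uintptr(fd),
        siocgiwessid, uintptr(unsafe.Pointer(&req))); errno != 0 {
        fmt.Fprintln(os.Stderr, errno)
        os.Exit(1)
    }
    fmt.Printf("%s\n", buf[:req.u.length])
}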
No objections to adding it from me, but you may also have some luck with https://github.com/mdlayher/wifi which uses the nl80211 netlink API rather than more ioctls.
@mdlayher just read your blog post; it turns out the ioctl wireless API is legacy? I suppose it would be more proper to use your netlink implementation in the future? If so, I don't actually see a reason anyone would want to develop anything for the legacy API, since iwd, NetworkManager, and even wpa_supplicant (with -Dnl80211) use the newer one. And yes, your implementation works for me, so I suppose this issue can be closed, if nobody objects.
I'm pretty far removed from the problem space but if all ioctl operations can be performed with nl80211, then yeah I'd recommend going that route instead. I'll leave closing the issue up to you since there's no harm in adding more API to x/sys/unix to mirror the kernel uapis.
|
gharchive/issue
| 2021-09-11T13:56:46 |
2025-04-01T04:34:23.679057
|
{
"authors": [
"aajonusonline",
"mdlayher"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/48338",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
996472254
|
dev.boringcrypto: 1.17 release
1.17.0 and 1.17.1 have been released. What is the ETA for the release with BoringCrypto?
Thanks again for maintaining this great branch!
@katiehockman @FiloSottile
Jeremy Faller has provided an update on a (tangentially) related thread on golang-dev:
We had a small snafu with boringcrypto, as we're transferring responsibilities for its maintenance around internally. This branch will be updated with the next point release for Golang. Following our published process, unless a security release causes rescheduling, that next point release will happen this week, and boringcrypto will be updated soon after, now that we've sorted out responsibilities.
I'm closing here since I don't think we need an issue to track the boringcrypto work (it's internal to Google I believe, and traditionally there never was a github issue).
|
gharchive/issue
| 2021-09-14T21:54:44 |
2025-04-01T04:34:23.681512
|
{
"authors": [
"ALTree",
"sodul"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/48391",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1250821885
|
affected/package: net/http => ParseMultipartForm
What version of Go are you using (go version)?
$ go version
go version go1.18.1 linux/amd64
Does this issue reproduce with the latest release?
yes
What operating system and processor architecture are you using (go env)?
go env Output
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/alex/.cache/go-build"
GOENV="/home/alex/.config/go/env"
GOEXE=""
GOEXPERIMENT=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/home/alex/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/alex/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GOVCS=""
GOVERSION="go1.18.1"
GCCGO="gccgo"
GOAMD64="v1"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/dev/null"
GOWORK=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build2728191052=/tmp/go-build -gno-record-gcc-switches"
What did you do?
I have created a small upload tool for the caddy v2 server
https://github.com/git001/caddyv2-upload
I set the MaxBytesReader ( https://github.com/git001/caddyv2-upload/blob/main/upload.go#L125 ) and ParseMultipartForm ( https://github.com/git001/caddyv2-upload/blob/main/upload.go#L126 ) to limit the memory usage.
What we have observed is that even when we limit the ParseMultipartForm() maxMemory parameter, the initial memory consumption is 7-8 times higher than the given maxMemory.
https://github.com/git001/caddyv2-upload/issues/2
What did you expect to see?
We expect that the memory usage in the initial upload phase is not 7-8 times higher than the one configured for ParseMultipartForm
What did you see instead?
We see that the memory usage is 7-8 times higher than the configured maxMemory in ParseMultipartForm
Please try profiling to identify where the memory might be used.
Note maxMemory refers to the max file size that will be stored in memory, not the maximum memory that will be used during the operation.
Note maxMemory refers to the max file size that will be stored in memory, not the maximum memory that will be used during the operation.
thank you for the info. I will profile the app.
I have now created a simple module which shows my observations.
https://github.com/git001/golang-multiparttest
What I have seen is that the maxMemory parameter is a very important "flag" between memory usage and filesystem usage.
My conclusion is that the multipart upload requires 2× the memory of maxMemory, but I was not able to use pprof in a way that reveals why this happens.
Finally, I now understand the sentence in the doc, which explains exactly this and could be the reason for the doubled memory usage.
The whole request body is parsed and up to a total of maxMemory bytes of its file parts are stored in memory, with the remainder stored on disk in temporary files.
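To make the two limits concrete, a minimal handler sketch (the limit values are arbitrary examples):

package main

import (
    "fmt"
    "net/http"
)

func upload(w http.ResponseWriter, r *http.Request) {
    // Hard cap on the total request body; reads beyond this fail.
    r.Body = http.MaxBytesReader(w, r.Body, 64<<20) // 64 MiB

    // Up to 10 MiB of file parts are kept in memory; the remainder spills
    // to temporary files on disk. This is not a cap on the total memory
    // the parse itself may allocate.
    if err := r.ParseMultipartForm(10 << 20); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    fmt.Fprintln(w, "ok")
}

func main() {
    http.HandleFunc("/upload", upload)
    http.ListenAndServe(":8080", nil)
}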
|
gharchive/issue
| 2022-05-27T14:15:42 |
2025-04-01T04:34:23.688748
|
{
"authors": [
"git001",
"seankhliao"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/53109",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1296621715
|
crypto/x509: ParseRevocationList does not populate Number and AuthorityKeyId fields
What version of Go are you using (go version)?
go1.19rc1
Does this issue reproduce with the latest release?
Yes
What did you do?
Use x509.ParseRevocationList() to parse a CRL which contains the crlNumber and authorityKeyIdentifier extensions (such as one produced using x509.CreateRevocationList()).
See https://go.dev/play/p/6gSi8pmdzBd?v=gotip for a demonstration.
What did you expect to see?
The RevocationList.Number and RevocationList.AuthorityKeyId fields should be populated from the values in their corresponding extensions.
What did you see instead?
The .Number and .AuthorityKeyId fields retain their zero-values.
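A condensed sketch along the lines of the linked playground demo (error handling collapsed into a helper):

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "fmt"
    "math/big"
    "time"
)

func check(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    check(err)
    tmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "test CA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(time.Hour),
        KeyUsage:              x509.KeyUsageCRLSign,
        BasicConstraintsValid: true,
        IsCA:                  true,
        SubjectKeyId:          []byte{1, 2, 3, 4},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    check(err)
    issuer, err := x509.ParseCertificate(der)
    check(err)

    crlDER, err := x509.CreateRevocationList(rand.Reader, &x509.RevocationList{
        Number:     big.NewInt(42),
        ThisUpdate: time.Now(),
        NextUpdate: time.Now().Add(time.Hour),
    }, issuer, key)
    check(err)

    rl, err := x509.ParseRevocationList(crlDER)
    check(err)
    // Expected: 42 and the issuer's SubjectKeyId; with the bug, both stay zero.
    fmt.Println(rl.Number, rl.AuthorityKeyId)
}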
@rolandshoemaker this might be worth a freeze exception since it's a new API.
Yup, agreed. cc @golang/release: this adds expected functionality in a new API that was missing, and the patch is minimal.
Note that fixing a bug or problem discovered in a new API thanks to pre-release testing is generally in scope of the freeze (within balance), so a freeze exception might not be needed if you think this fix is okay to accept at this stage.
Marking as tentative release-blocker since it's a change to a new API.
I think we should aim to make it clear whose input an issue in a NeedsDecision state is waiting on.
I think at this point this doesn't warrant a freeze exception (it's within scope), and so the decision is up to the crypto/x509 owners (i.e., we should remove the "[freeze exception]" suffix). Thoughts?
Works for me; done.
|
gharchive/issue
| 2022-07-07T00:14:55 |
2025-04-01T04:34:23.693981
|
{
"authors": [
"FiloSottile",
"aarongable",
"dmitshur",
"heschi",
"rolandshoemaker"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/53726",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1600863394
|
x/vuln: possible missed x/crypto vulnerability in govulncheck?
What version of Go are you using (go version)?
$ go version
go version go1.20.1 linux/amd64
Does this issue reproduce at the latest version of golang.org/x/vuln?
Yes.
What did you do?
GitHub dependabot reports that a project is affected by GHSA-8c26-wmh5-6g9v. I can see it is in the vulnerability database at https://pkg.go.dev/vuln/GO-2021-0356, yet the govulncheck tool does not report it when running govulncheck ./.... Not even as an informational entry. Unsure if GitHub dependabot could be wrong about this, though, as the report seems to concern <go1.18.
govulncheck is an experimental tool. Share feedback at https://go.dev/s/govulncheck-feedback.
Using go1.20.1 and govulncheck@v0.0.0 with
vulnerability data from https://vuln.go.dev (last modified 22 Feb 23 20:16 UTC).
Scanning your code and 463 packages across 64 dependent modules for known vulnerabilities...
No vulnerabilities found.
go.mod seems to contain the vulnerable indirect dependency:
go 1.18
//...
require (
//...
golang.org/x/crypto v0.0.0-20220214200702-86341886e292 // indirect
//...
)
What did you expect to see?
Scanning your code and 463 packages across 64 dependent modules for known vulnerabilities...
... information about the vulnerability, affected or informational.
What did you see instead?
Scanning your code and 463 packages across 64 dependent modules for known vulnerabilities...
No vulnerabilities found.
dependabot only looks at dependency versions
govulncheck will look at code that's imported and used to determine if a vulnerability is reachable / affects the code.
without further information, this is likely working as intended
dependabot only looks at dependency versions govulncheck will look at code that's imported and used to determine if a vulnerability is reachable / affects the code.
govulncheck usually reports the vulnerabilities that are not reachable under "Informational", such as in the example below. In this specific case, it doesn't report anything, which makes me a little worried that it might sometimes miss things, given that it is an experimental tool.
=== Informational ===
Found 3 vulnerabilities in packages that you import, but there are no call
stacks leading to the use of these vulnerabilities. You may not need to
take any action. See https://pkg.go.dev/golang.org/x/vuln/cmd/govulncheck
for details.
Vulnerability #1: GO-2022-0956
Parsing malicious or large YAML documents can consume excessive
amounts of CPU or memory.
More info: https://pkg.go.dev/vuln/GO-2022-0956
Found in: gopkg.in/yaml.v2@v2.2.2
Fixed in: gopkg.in/yaml.v2@v2.2.4
[...]
without further information, this is likely working as intended
What would be needed to help? It is sadly from a private client repo so I am unable to share much.
govulncheck usually reports the vulnerabilities that are not reachable under "informational"
Only if the vulnerable subpackage is imported.
This one does not report anything:
-- go.mod --
module demo

go 1.20

require golang.org/x/crypto v0.0.0-20220214200702-86341886e292
-- main.go --
package main

import "golang.org/x/crypto/blowfish"

func main() {
	_ = blowfish.BlockSize
}
However, this one does report the vulnerability in the "Informational" section:
package main

import "golang.org/x/crypto/ssh"

func main() {
	_ = ssh.UserCert
}
=== Informational ===
Found 1 vulnerability in packages that you import, but there are no call
stacks leading to the use of this vulnerability. You may not need to
take any action. See https://pkg.go.dev/golang.org/x/vuln/cmd/govulncheck
for details.
Vulnerability #1: GO-2021-0356
Attackers can cause a crash in SSH servers when the server has
been configured by passing a Signer to ServerConfig.AddHostKey
such that 1) the Signer passed to AddHostKey does not implement
AlgorithmSigner, and 2) the Signer passed to AddHostKey returns
a key of type “ssh-rsa” from its PublicKey method. Servers
that only use Signer implementations provided by the ssh package
are unaffected.
More info: https://pkg.go.dev/vuln/GO-2021-0356
Found in: golang.org/x/crypto@v0.0.0-20220214200702-86341886e292
Fixed in: golang.org/x/crypto@v0.0.0-20220314234659-1baeb1ce4c0b
see above
|
gharchive/issue
| 2023-02-27T10:05:24 |
2025-04-01T04:34:23.701552
|
{
"authors": [
"gophun",
"seankhliao",
"taisph"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/58752",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1827872947
|
x/exp/jsonrpc2: panic when sending request and JSON marshaling fails
What version of Go are you using (go version)?
playground 1.20 and "dev branch"
Does this issue reproduce with the latest release?
yes
What operating system and processor architecture are you using (go env)?
playground
What did you do?
https://go.dev/play/p/4jpZ0wBGCTd
What did you expect to see?
Not a panic.
What did you see instead?
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x526783]
goroutine 6 [running]:
testing.tRunner.func1.2({0x5500a0, 0x68fc50})
/usr/local/go-faketime/src/testing/testing.go:1545 +0x238
testing.tRunner.func1()
/usr/local/go-faketime/src/testing/testing.go:1548 +0x397
panic({0x5500a0?, 0x68fc50?})
/usr/local/go-faketime/src/runtime/panic.go:914 +0x21f
golang.org/x/exp/event.New({0x0, 0x0}, 0x4)
/tmp/gopath3654295953/pkg/mod/golang.org/x/exp/event@v0.0.0-20220217172124-1812c5b45e43/event.go:64 +0x43
golang.org/x/exp/event.End({0x0, 0x0}, {0xc000093d18, 0x1, 0xc000070d40?})
/tmp/gopath3654295953/pkg/mod/golang.org/x/exp/event@v0.0.0-20220217172124-1812c5b45e43/common.go:126 +0x39
golang.org/x/exp/jsonrpc2.(*AsyncCall).Await(0xc00009e090, {0x5af630, 0x6c7660}, {0x0, 0x0})
/tmp/gopath3654295953/pkg/mod/golang.org/x/exp/jsonrpc2@v0.0.0-20230728194245-b0cb94b80691/conn.go:219 +0x3ce
play.TestJRPCBug(0xc000007860)
/tmp/sandbox3486629624/prog_test.go:36 +0x1dc
testing.tRunner(0xc000007860, 0x584538)
/usr/local/go-faketime/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
/usr/local/go-faketime/src/testing/testing.go:1648 +0x3ad
Two fixes are possible:
1. Initialize the ctx field at https://cs.opensource.google/go/x/exp/+/master:jsonrpc2/conn.go;l=146:
     resultBox: make(chan asyncResult, 1),
   + ctx:       ctx,
   }
2. Move the https://cs.opensource.google/go/x/exp/+/master:jsonrpc2/conn.go;l=155-158 block before https://cs.opensource.google/go/x/exp/+/master:jsonrpc2/conn.go;l=148.
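For context, a minimal sketch of the failure mode; it assumes x/exp/event's End(ctx, labels...) signature as seen in the stack trace above, and is illustrative rather than the actual jsonrpc2 code:

package main

import (
	"context"

	"golang.org/x/exp/event"
)

func main() {
	// When JSON marshaling of the request fails, AsyncCall.ctx is left
	// nil; Await then calls event.End with that nil context, which is
	// dereferenced inside event.New and panics.
	var ctx context.Context
	event.End(ctx) // panic: invalid memory address or nil pointer dereference
}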
@jba @ianthehat
Option (2) seems reasonable to me. Want to send a CL?
Sure: https://go-review.googlesource.com/c/exp/+/516556
|
gharchive/issue
| 2023-07-30T09:45:43 |
2025-04-01T04:34:23.707303
|
{
"authors": [
"bcmills",
"dr2chase",
"maxatome"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/61654",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2473294578
|
cmd/go: any invocation creates read-only telemetry configuration file under GOMODCACHE
Go version
go version go1.23.0 linux/amd64
Output of go env in your module/workspace:
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/csn/please-go-rules/plz-out/tmp/test/root_test/root_test._test/run_1/.cache/go-build'
GOENV='/home/csn/please-go-rules/plz-out/tmp/test/root_test/root_test._test/run_1/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/csn/please-go-rules/plz-out/tmp/test/root_test/root_test._test/run_1/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/csn/please-go-rules/plz-out/tmp/test/root_test/root_test._test/run_1/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/csn/please-go-rules/plz-out/bin/third_party/go/toolchain'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/csn/please-go-rules/plz-out/bin/third_party/go/toolchain/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.0'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/csn/please-go-rules/plz-out/tmp/test/root_test/root_test._test/run_1/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/csn/please-go-rules/plz-out/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/home/csn/please-go-rules/plz-out/tmp/test/root_test/root_test._test/run_1/go-build1923171216=/tmp/go-build -gno-record-gcc-switches'
What did you do?
The Please build system performs build actions in an environment that is isolated from the host system as much as possible. In particular, the inputs for a build action are linked into a temporary directory, and the build action's environment is sanitised - you'll notice the impact this has on the Go environment in the output of go env above. The go-rules plugin contains build definitions for compiling Go code with a Go toolchain, similarly to Bazel's rules_go.
After bumping go-rules' Go toolchain to 1.23.0, we've run into a situation where Please can't clean up the temporary directory it creates after a target has been built:
28 test targets and 78 tests run; 75 passed, 1 errored, 2 skipped.
Total time: 2m18.18s real, 5m33.79s compute.
Messages:
19:36:15.239 WARNING: Failed to remove temporary directory for //tools/please_go/install/exec:exec: failed to remove plz-out/tmp/tools/please_go/install/exec/exec._build: exit status 1
Output: rm: cannot remove 'plz-out/tmp/tools/please_go/install/exec/exec._build/pkg/mod/cache/download/golang.org/x/telemetry/config/@v': Directory not empty
19:36:19.065 WARNING: Failed to remove test directory for //tools/please_go/install:install_test: failed to remove plz-out/tmp/tools/please_go/install/install_test._test/run_1: exit status 1
Output: rm: cannot remove 'plz-out/tmp/tools/please_go/install/install_test._test/run_1/go/pkg/mod/golang.org/x/telemetry/config@v0.29.0/doc.go': Permission denied
rm: cannot remove 'plz-out/tmp/tools/please_go/install/install_test._test/run_1/go/pkg/mod/golang.org/x/telemetry/config@v0.29.0/LICENSE': Permission denied
rm: cannot remove 'plz-out/tmp/tools/please_go/install/install_test._test/run_1/go/pkg/mod/golang.org/x/telemetry/config@v0.29.0/config.json':Permission denied
rm: cannot remove 'plz-out/tmp/tools/please_go/install/install_test._test/run_1/go/pkg/mod/golang.org/x/telemetry/config@v0.29.0/go.mod': Permission denied
The root cause of this is that, by default, any invocation of go causes x/telemetry/internal/configstore to pull the latest telemetry configuration from x/telemetry/config via go mod download without using the -modcacherw flag:
https://github.com/golang/telemetry/blob/0693e6240b9b888df93a2e280a64431c10d47a63/internal/configstore/download.go#L43
Because this writes the configuration beneath GOMODCACHE with 0444 permissions (and with 0555 intermediate directories), Please is prevented from unlinking the entire temporary directory tree.
I'm aware that telemetry can be disabled by writing off to $HOME/.config/go/telemetry before running go, which prevents this module from being pulled in the first place. However, the value of HOME is also set to the path to the temporary directory so this would have to be done before every invocation of go inside the build sandbox - while we could guarantee that this happens for the build definitions in the go-rules plugin, it's impossible to guarantee in the general case, where users write custom build rules that run whatever commands they want.
What did you see happen?
Please builds the target, but isn't able to clean up its own temporary directory afterwards owing to the Go toolchain writing a read-only file tree beneath GOMODCACHE.
What did you expect to see?
Please builds the target and is able to clean up its own temporary directory afterwards without having to resort to hacks such as running go clean -modcache or chmodding GOROOT to make every file within it writeable after every invocation of go.
Related Issues and Documentation
cmd/dist, cmd/go: Go tip leaving pkg/mod/golang.org/x/telemetry read-only files behind #67463 (closed)
cmd/go: GOTELEMETRY=off environment variable has no effect #68928 (closed)
buildlet: read-only directory in work area leads to cleanup failure when not root #34980 (closed)
#67463 seems relevant here, although some of the comments imply that this only happens when telemetry is uploaded, which I don't believe is the case - here, the default telemetry mode (local) is in use, and the config file is still being pulled.
perhaps please can set GOFLAGS=-modcacherw to have it apply to all go commands instead of passing it individually to each go invocation?
and/or clean up with go clean -modcache?
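For instance, a sketch of what that could look like inside a build rule (POSIX shell; the targets are illustrative):

# Make files written to the module cache user-writable so the sandbox
# can be deleted afterwards.
export GOFLAGS=-modcacherw
go build ./...

# Or explicitly drop the cache before tearing the sandbox down:
go clean -modcache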
cc @golang/telemetry @golang/tools-team
although some of the comments imply that this only happens when telemetry is uploaded,
I think we may download the config too eagerly during the check for work to upload. In any case, if this is breaking users it doesn't really matter if it is infrequent.
I think we had discussed having the upload use a temp (or even dedicated) GOMODCACHE and clean up after itself. However, in #67463 we were only thinking about cmd/dist, not general usage where -modcacherw is desirable.
We should try to fix this for 1.23.1, particularly if the fix is low risk. I will investigate.
I think we need to fix x/telemetry/internal/upload to avoid downloading the config unless the telemetry mode is on.
(we still want to run other uploader logic such as compacting the counter files in local mode and producing json files even when the telemetry mode is local).
I still wish we could avoid polluting the module cache even when telemetry is on. Previously in https://github.com/golang/go/issues/67463#issuecomment-2119623176 the idea of using a temporary module cache was rejected because we thought checksum db was still stored under GOPATH. But I don't think that's true.
$ GOMODCACHE=/tmp/gomodcache GOPATH=/tmp/gopath go mod download golang.org/x/telemetry/config@latest
$ tree /tmp/gomodcache
/tmp/gomodcache
├── cache
│ └── download
│ ├── golang.org
│ │ └── x
│ │ └── telemetry
│ │ └── config
│ │ └── @v
│ │ ├── list
│ │ ├── v0.29.0.info
│ │ ├── v0.29.0.lock
│ │ ├── v0.29.0.mod
│ │ ├── v0.29.0.zip
│ │ └── v0.29.0.ziphash
│ └── sumdb
│ └── sum.golang.org
│ ├── lookup
│ │ └── golang.org
│ │ └── x
│ │ └── telemetry
│ │ └── config@v0.29.0
│ └── tile
│ └── 8
│ ├── 0
│ │ └── x113
│ │ ├── 563
│ │ └── 749.p
│ │ └── 112
│ ├── 1
│ │ ├── 443
│ │ └── 444.p
│ │ └── 85
│ ├── 2
│ │ └── 001.p
│ │ └── 188
│ └── 3
│ └── 000.p
│ └── 1
└── golang.org
└── x
└── telemetry
└── config@v0.29.0
├── LICENSE
├── config.json
├── doc.go
└── go.mod
28 directories, 17 files
$ tree /tmp/gopath
/tmp/gopath
└── pkg
└── sumdb
└── sum.golang.org
└── latest
3 directories, 1 file
perhaps please can set GOFLAGS=-modcacherw to have it apply to all go commands instead of passing it individually to each go invocation?
We could indeed do that, although it's still suboptimal if it'll attempt to download the module for each invocation.
and/or clean up with go clean -modcache?
We would ideally not want to have to add that to the end of every command; it also wouldn't get run if something else fails first so we'd still get the same issue (and for a language-agnostic build system it isn't clear on which actions it should/shouldn't run this if we try to special-case it).
The ideal from my perspective would be some way of turning all telemetry off, preferably via an env var which is easy to set, which also prevents it from downloading any telemetry libraries.
The ideal from my perspective would be some way of turning all telemetry off, preferably via an env var which is easy to set, which also prevents it from downloading any telemetry libraries.
+1. The idea of a GOTELEMETRY environment variable whose value would only be honoured if it were set to off (i.e. telemetry could be opted out of, but not into, using this mechanism) was floated in https://github.com/golang/go/issues/65503#issuecomment-1925446871. We'd certainly make use of that in Please, although I think it's also worth addressing the over-eager downloading of the config by x/telemetry/internal/configstore as a separate issue.
@chrisnovakovic I filed #68960 last night for making GOTELEMETRY=off settable.
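For reference, the existing opt-out is a CLI command that persists the mode file under the user config directory, which is why it would currently have to run inside every sandbox:

$ go telemetry off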
isn't telemetry off by default and an opt-in? if not, this would violate European Union law.
@thediveo Yes, telemetry is off by default.
@thediveo the fix for this particular issue will be to not download the telemetry configuration if telemetry uploading is off (the default).
Moved to Go1.24 since this need to be fixed on the main branch first (for Go 1.24), before being considered for backporting. Please use the usual process (https://go.dev/wiki/MinorReleases) to create a separate backport tracking issue in the Go1.23.1 milestone.
Thanks @dmitshur.
@gopherbot please backport this issue to 1.23: it is a misbehavior of the telemetry integration with 1.23 that breaks existing CI workflows.
Ran into this while building yay from the Arch User Repository: https://github.com/archsink/x86_64/actions/runs/10484074392/job/29037778152
|
gharchive/issue
| 2024-08-19T12:54:44 |
2025-04-01T04:34:23.726950
|
{
"authors": [
"cherrymui",
"chrisnovakovic",
"dmitshur",
"findleyr",
"fwcd",
"gabyhelp",
"hyangah",
"ianlancetaylor",
"peterebden",
"seankhliao",
"thediveo"
],
"repo": "golang/go",
"url": "https://github.com/golang/go/issues/68946",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1903210020
|
How to append the user's package name into the generated go source.
Hi, I would like to reuse some of the proto files among different Golang projects. For a simple example, I have a project with such structure:
.
├── base
│ └── base.proto
└── user
├── service.proto
└── user.proto
The base.proto defines some basic messages used by all the other folders, so I use import "base/base.proto" in the other proto files. I was told to add the go_package option to give the package name of the generated Golang source files. But if I use ./user for user.proto and service.proto and ./base for base.proto, the generated user.pb.go will import ./base, which can't be resolved by the compiler.
I use the following command to generate the proto files.
protoc --proto_path=./pb --go_out=proto_gen --go-grpc_out=proto_gen \
--go_opt=paths=import --go-grpc_opt=paths=import \
user/service.proto user/user.proto base/base.proto
So the question is, how could I make the generated source file import the right package name. For example, importing <my_project_package_name>/pb/base rather than ./base in user.pb.go.
The go_package option should contain the full import path of the Go package. So, for your example:
option go_package = "<my_project_package_name>/pb/base";
But what if the other project wants to use the proto file? Change the name manually every time?
The simplest approach is to generate the Go package once and make it a dependency of whatever other packages require it.
If you do need to generate code in various Go packages from the same .proto source file (not recommended), you can set the Go package on the protoc command line with something like --go_opt=Mbase/base.proto=<my_project_package_name>/pb/base. See https://protobuf.dev/reference/go/go-generated/#package for details.
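Applied to the original command above, the full invocation could look like this (a sketch; <my_project_package_name> remains the reporter's placeholder, and the M mapping is repeated for the grpc plugin):

protoc --proto_path=./pb --go_out=proto_gen --go-grpc_out=proto_gen \
  --go_opt=paths=import --go-grpc_opt=paths=import \
  --go_opt=Mbase/base.proto=<my_project_package_name>/pb/base \
  --go-grpc_opt=Mbase/base.proto=<my_project_package_name>/pb/base \
  user/service.proto user/user.proto base/base.proto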
Thanks, that's helpful.
|
gharchive/issue
| 2023-09-19T15:11:04 |
2025-04-01T04:34:23.739653
|
{
"authors": [
"Kidsunbo",
"neild"
],
"repo": "golang/protobuf",
"url": "https://github.com/golang/protobuf/issues/1565",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2676755178
|
google.golang.org/protobuf/types/known/structpb: TestToStruct fails at Go tip after the runtime map change in Go CL 627716
The TestToStruct test in google.golang.org/protobuf/types/known/structpb package has started to fail at Go tip, as of go.dev/cl/627716 (CC @randall77):
=== RUN TestToStruct
--- FAIL: TestToStruct (0.00s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x402751]
goroutine 18 [running]:
testing.tRunner.func1.2({0x72aa40, 0xa171b0})
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:1706 +0x21c
testing.tRunner.func1()
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:1709 +0x35e
panic({0x72aa40?, 0xa171b0?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:787 +0x132
github.com/google/go-cmp/cmp/internal/value.isLess({0x712c60?, 0xc00011bcd0?, 0xc000137a00?}, {0x712c60?, 0xc00011bca0?, 0xc000126c18?})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/internal/value/sort.go:59 +0xb1e
github.com/google/go-cmp/cmp/internal/value.SortKeys.func1(0xc00012cea0?, 0xc000110ed8?)
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/internal/value/sort.go:22 +0x4f
sort.insertionSort_func({0xc000110f80?, 0xc000202ea0?}, 0x14, 0x26)
/home/swarming/.swarming/w/ir/x/w/goroot/src/sort/zsortfunc.go:12 +0xa7
sort.stable_func({0xc000110f80?, 0xc000202ea0?}, 0x26)
/home/swarming/.swarming/w/ir/x/w/goroot/src/sort/zsortfunc.go:343 +0x75
sort.SliceStable({0x70d260?, 0xc000126c00?}, 0xc000110f80)
/home/swarming/.swarming/w/ir/x/w/goroot/src/sort/slice.go:44 +0xb0
github.com/google/go-cmp/cmp/internal/value.SortKeys({0xc000137808, 0x26, 0x2a})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/internal/value/sort.go:22 +0x9e
github.com/google/go-cmp/cmp.(*state).compareMap(0xc0001308c0, {0x7ff880, 0xc0001be930}, {0xc0001be930?, 0xc0001f6780?, 0x0?}, {0xc0001be930?, 0xc0001f7d70?, 0x0?})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/compare.go:539 +0x3de
github.com/google/go-cmp/cmp.(*state).compareAny(0xc0001308c0, {0x7f8b70, 0xc0001f9200})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/compare.go:290 +0xb29
github.com/google/go-cmp/cmp.(*state).compareInterface(0xc0001308c0, {0x7ff880?, 0x7240a0?}, {0x7240a0?, 0xc00011bb00?, 0x3?}, {0x7240a0?, 0xc00011bb10?, 0xc000202de0?})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/compare.go:592 +0x305
github.com/google/go-cmp/cmp.(*state).compareAny(0xc0001308c0, {0x7f8b10, 0xc00013e960})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/compare.go:294 +0xadd
github.com/google/go-cmp/cmp.(*state).compareMap(0xc0001308c0, {0x7ff880, 0x763ba0}, {0x763ba0?, 0xc0001f65d0?, 0x10002?}, {0x763ba0?, 0xc0001f7d10?, 0xc00016c0f0?})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/compare.go:561 +0x566
github.com/google/go-cmp/cmp.(*state).compareAny(0xc0001308c0, {0x7f8ba0, 0xc00016e5a0})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/compare.go:290 +0xb29
github.com/google/go-cmp/cmp.(*state).statelessCompare(0xc0001308c0, {0x7f8ba0?, 0xc00016e5a0?})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/compare.go:229 +0x7a
github.com/google/go-cmp/cmp.(*state).callTRFunc(0xc0001308c0, {0x71fd40?, 0xc00011aff0?, 0x75f840?}, {0x75f840?, 0xc000129410?, 0x411254?}, {0x418c96?})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/compare.go:334 +0x2ba
github.com/google/go-cmp/cmp.(*transformer).apply(0xc0001f80c0, 0xc0001308c0, {0x75f840?, 0xc000129410?, 0xc000111a78?}, {0x75f840?, 0xc0001f64e0?, 0x7ff880?})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/options.go:320 +0x139
github.com/google/go-cmp/cmp.(*state).tryOptions(0xc0001308c0, {0x7ff880?, 0x75f840?}, {0x75f840?, 0xc000129410?, 0x7e7aac21e668?}, {0x75f840?, 0xc0001f64e0?, 0xa28260?})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/compare.go:303 +0xf3
github.com/google/go-cmp/cmp.(*state).compareAny(0xc0001308c0, {0x7f8bd0, 0xc0001f8100})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/compare.go:258 +0x4bf
github.com/google/go-cmp/cmp.Diff({0x75f840, 0xc000129410}, {0x75f840, 0xc0001f64e0}, {0xc00011b020?, 0xa08c80?, 0xc00005a600?})
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/github.com/google/go-cmp@v0.5.5/cmp/compare.go:119 +0x75
google.golang.org/protobuf/types/known/structpb_test.TestToStruct(0xc000102700)
/home/swarming/.swarming/w/ir/x/w/targetrepo495428555/types/known/structpb/struct_test.go:101 +0x1c25
testing.tRunner(0xc000102700, 0x796c60)
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:1764 +0xf4
created by testing.(*T).Run in goroutine 1
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:1823 +0x409
See LUCI build 8730991062707988305.
CC @chressie.
This failure no longer happens at Go tip as of https://go.dev/cl/630279.
|
gharchive/issue
| 2024-11-20T18:17:35 |
2025-04-01T04:34:23.743585
|
{
"authors": [
"dmitshur"
],
"repo": "golang/protobuf",
"url": "https://github.com/golang/protobuf/issues/1656",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
496038386
|
all: vet usages of cached pre-computation and weak fields
In many places in our code-base we perform some expensive computation once and cache the result. At the time that the computation is run, we assume that all information about the protobuf type system is known. However, the deprecated weak feature violates this assumption because it is possible that a weak reference is registered later on that we didn't know about at the time that the computation logic was run.
I'm not marking this as v2-blocking since the v1 code has pretty much the same set of bugs.
A simple cache invalidation trigger is to record protoregistry.GlobalFiles.NumMessages with the computed results. At any point when we are about to use the cached results, we check whether the current protoregistry.GlobalFiles.NumMessages differs from the number of messages known at the time of the previous computation. If it differs, then it is possible that new weak fields are known and our cached result is invalid.
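A minimal sketch of that trigger (a hypothetical wrapper; the released registry API exposes the counter as protoregistry.GlobalTypes.NumMessages(), which is used here):

package protocache

import (
	"sync"

	"google.golang.org/protobuf/reflect/protoregistry"
)

// cachedValue pairs a computed result with the number of messages
// registered at computation time, so that late (e.g. weak) registrations
// invalidate the cache.
type cachedValue struct {
	mu          sync.Mutex
	valid       bool
	numMessages int
	result      interface{}
}

func (c *cachedValue) get(compute func() interface{}) interface{} {
	c.mu.Lock()
	defer c.mu.Unlock()
	if n := protoregistry.GlobalTypes.NumMessages(); !c.valid || n != c.numMessages {
		c.result, c.numMessages, c.valid = compute(), n, true
	}
	return c.result
}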
Marking as blocks-v2. My perception is that v2 does more pre-computation up-front, where initialization races are a stronger possibility.
Weak fields are going away. It is not worth the effort to go searching for potential bugs in the weak field implementation. We can always address weak field bugs if they show up.
|
gharchive/issue
| 2019-09-19T21:36:57 |
2025-04-01T04:34:23.746949
|
{
"authors": [
"dsnet"
],
"repo": "golang/protobuf",
"url": "https://github.com/golang/protobuf/issues/951",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
227828536
|
ner_duckling component training issue
rasa NLU version: 0.8.4
Used backend / pipeline: [ "nlp_mitie", "tokenizer_mitie", "ner_mitie", "ner_synonyms", "ner_duckling", "intent_featurizer_mitie", "intent_classifier_sklearn" ]
Issue:
When trying to train a dataset using the above pipeline, the following error is generated
INFO:root:Finished training component.
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/vagrant/devFolder/rasa-nlu-env/src/rasa_nlu/rasa_nlu/train.py", line 83, in <module>
do_train(config)
File "/vagrant/devFolder/rasa-nlu-env/src/rasa_nlu/rasa_nlu/train.py", line 74, in do_train
persisted_path = trainer.persist(config['path'], persistor, model_name=config['name'])
File "rasa_nlu/model.py", line 183, in persist
update = component.persist(dir_name)
File "rasa_nlu/extractors/duckling_extractor.py", line 104, in persist
f.write(json.dumps({"dimensions": self.dimensions}))
TypeError: must be unicode, not str
The strings of the python duckling wrapper aren't unicode, so we might need to wrap that when reading the available dimensions from there.
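A sketch of such a wrap (Python 2 semantics, mirroring the persist() write shown in the traceback; the helper name is hypothetical):

import io
import json

def persist_dimensions(path, dimensions):
    # json.dumps returns a byte str on Python 2, while a file opened via
    # io.open(..., encoding="utf-8") only accepts unicode, so wrap it.
    with io.open(path, "w", encoding="utf-8") as f:
        f.write(unicode(json.dumps({"dimensions": dimensions})))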
i get this error in 0.8.5
#365
|
gharchive/issue
| 2017-05-10T22:29:09 |
2025-04-01T04:34:23.770010
|
{
"authors": [
"ecomaven",
"oziee",
"tmbo"
],
"repo": "golastmile/rasa_nlu",
"url": "https://github.com/golastmile/rasa_nlu/issues/358",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
71458408
|
Feature Request: Increased Mining Speed
For a Drill and its cost, it would be nice if it drilled Obsidian much faster. It would be nice to have a config option for drill speeds. Or possibly have it drill in a 3x3, but at the cost of 9x more power. I use Capacitors to recharge the tools I use and they hold up to 50mil RF.
The tools are already configurable. 3x3 mining probably won't happen because I'm not planning on making upgrades/mining modes soon.
Dammit >_< Sorry, I'm still new to this issue tracker.
Actually, the 1x3x1 mode is a thing, so... Closing!
I thought you wanted to make it different than redstone arsenal not just change the appearance of their tools.
|
gharchive/issue
| 2015-04-28T02:27:58 |
2025-04-01T04:34:23.772306
|
{
"authors": [
"Badcholo",
"goldenapple3"
],
"repo": "goldenapple3/RFDrills",
"url": "https://github.com/goldenapple3/RFDrills/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
50839358
|
Security Contact
Hi,
this is kind of a "meta" issue, but I was wondering where I can
disclose a security issue in gollum? Due to the severity of the issue
I don't want to just put it in this public issue tracker.
thx,
joernchen
@dometto
I was curious so I had a look around but couldn't really find a nice option to create a free, closed and trustworthy mailing list. Do you know of any?
If not, a quick alternative might be a special Google Group (GG) and a GPG key? Simply put, by registering a GG, you get a global email address that redirects all traffic (within the group or from outside) to the members (except for the sender). Now, if the traffic was to be encrypted by a GPG key for the group address, Google itself will only see the encrypted message, both at the group and potentially at member addresses. Members can then use email clients to locally decipher the messages.
Not perfect but I think it's definitely an option. There are several problems that I can see atm:
The use of a GPG-enabled email client is a must, since we want the messages to be decrypted only on machines owned by the members.
Distribution of the private key and its associated passphrase, changing which is not a very comfortable process (the key would probably have to be redistributed again, and history potentially lost over time).
To avoid spam messages, emails to the group should probably have a specific subject format.
All member communication should be encrypted again, with the GG's public key.
With this setup, it's impossible for multiple members to have an encrypted conversation with the issue-reporter so only a single person should probably handle that.
All in all, it seems a very viable option, with 1 or 2 members. It is probably also a better alternative to having your personal address used for this purpose since you most probably will not be handing over access to it anytime in the future :).
There is actually a gollum-dev group already (although not used). I think as an alternative to a shared private key, which I take it you are suggesting and which just opens up the question of how to safely share the key and ensure it is passed on to future developers, it would be fine to have a personal key fingerprint for as many people in the group in the readme.
With a little effort, the distribution can be done quite safely, as long as the members treat the key with care. To ensure it is passed on would probably mean at least 2 members in the group. And if, for some reason, that fails, creating a new group and key is always a possibility, however inconvenient :).
But you're right that your suggestion is also an alternative. To me, they are more or less equal since they both have advantages and disadvantages. Yet, it's maybe wiser not to over-kill this indeed :).
|
gharchive/issue
| 2014-12-03T13:46:41 |
2025-04-01T04:34:23.815912
|
{
"authors": [
"SkyCrawl",
"dometto",
"joernchen"
],
"repo": "gollum/gollum-lib",
"url": "https://github.com/gollum/gollum-lib/issues/120",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
233964208
|
gollum-lib linked to vulnerable nokogiri version
Hi,
The nokogiri dependency is tightened to ~> 1.6.4; however, 1.6.x includes vulnerable bundled libs:
Name: nokogiri
Version: 1.6.8.1
Advisory: CVE-2016-4658
Criticality: Unknown
URL: https://github.com/sparklemotion/nokogiri/issues/1615
Title: Nokogiri gem contains several vulnerabilities in libxml2 and libxslt
Solution: upgrade to >= 1.7.1
Vulnerabilities found!
@dometto @bartkamphorst can we relax nokogiri dependency?
I think so, and I'd like to, but I don't have the time to run tests at the moment. @josacar Can you confirm that nokogiri 1.7 doesn't (seem to) break anything in gollum?
@bartkamphorst It does pass green. I've opened a PR https://github.com/gollum/gollum-lib/pull/279
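The relaxation itself is a one-line gemspec change along these lines (a sketch; the actual constraint in the PR may differ):

# gollum-lib.gemspec
s.add_dependency 'nokogiri', '>= 1.6.4'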
|
gharchive/issue
| 2017-06-06T17:18:20 |
2025-04-01T04:34:23.818694
|
{
"authors": [
"bartkamphorst",
"d2bit",
"josacar"
],
"repo": "gollum/gollum-lib",
"url": "https://github.com/gollum/gollum-lib/issues/278",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1050871035
|
Build options such as --build-arg fails
Hi there,
Thanks for your Github Actions and work.
I tried to use it on our project and it seems it fails when we pass --build-arg VALUE="key" in options.
I didn't try it myself, but from having a look, the source seems to be fine. Will you be able to verify this?
Here is the failure we get:
Run gonuit/heroku-docker-deploy@v1.3.3
Logging into the Heroku docker registry...
Building docker container...
"docker build" requires exactly 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
Error: Building container failed.
Error: undefined
here is a sample of what we're using and it fails:
deploy-integration:
  runs-on: ubuntu-latest
  name: Deploy
  steps:
    - name: Checkout
      uses: actions/checkout@v2
    - name: Deploy (Build, Push & Release)
      uses: gonuit/heroku-docker-deploy@v1.3.3
      with:
        email: ${{ secrets.HEROKU_LOGIN }}
        heroku_api_key: ${{ secrets.HEROKU_API_KEY }}
        heroku_app_name: deno-fireplace
        dockerfile_directory: .
        docker_options: '--build-arg key="${{ secrets.SECRET_KEY }}" --build-arg commit_hash="${{ github.sha }}"'
Never mind it's working! Don't know how I missed the version
Hello, it is not working for me, what did you change from that so that it would work?
@matias-gonz just use a pure GitHub Actions run step and it should work. I didn't know that the runner even supports the Heroku CLI. This is what I'm doing right now and it works fine:
name: Test & Deploy

on:
  workflow_dispatch:
  push:
    branches:
      - master

jobs:
  deploy-development:
    runs-on: ubuntu-latest
    name: Deploy to Development
    needs: test
    env:
      HEROKU_APP_NAME: aminpaks
      HEROKU_API_KEY: ${{ secrets.HEROKU_API_KEY }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Build Image
        run: |
          echo $HEROKU_API_KEY | docker login -u ${{ secrets.HEROKU_LOGIN }} registry.heroku.com --password-stdin
          docker build --build-arg commit_hash=${{ github.sha }} --tag registry.heroku.com/${HEROKU_APP_NAME}/web .
          docker push registry.heroku.com/${HEROKU_APP_NAME}/web
          heroku container:release web --app ${HEROKU_APP_NAME}
|
gharchive/issue
| 2021-11-11T11:19:13 |
2025-04-01T04:34:23.830472
|
{
"authors": [
"aminpaks",
"matias-gonz"
],
"repo": "gonuit/heroku-docker-deploy",
"url": "https://github.com/gonuit/heroku-docker-deploy/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1557891970
|
null is not an object (evaluating 'RNAudioRecord.init')
Hi there,
I am getting this error:
[Unhandled promise rejection: TypeError: null is not an object (evaluating 'RNAudioRecord.init')]
at node_modules/react-native-audio-record/index.js:7:43 in AudioRecord.init
at components/AudioTest.tsx:38:20 in recordSound
In AudioTest.tsx I just have a button that calls the following method (basically copied from the README):
import AudioRecord from "react-native-audio-record";
...
async function recordSound() {
  await PermissionsAndroid.requestMultiple([
    PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
  ]);

  const options = {
    sampleRate: 16000, // default 44100
    channels: 1, // 1 or 2, default 1
    bitsPerSample: 16, // 8 or 16, default 16
    audioSource: 6, // android only (see below)
    wavFile: 'test.wav' // default 'audio.wav'
  };

  AudioRecord.init(options); // <--- error happens here

  AudioRecord.start();
  AudioRecord.stop();
  // or to get the wav file path
  let audioFile = await AudioRecord.stop();

  AudioRecord.on('data', data => {
    console.log('data');
    // base64-encoded audio data chunks
  });
}
Can you tell me what I am doing wrong?
I am using expo
on Android
versions "react-native-audio-record": "^0.2.2"
Turns out it was expo. I ejected to use react-native-cli and it works.
|
gharchive/issue
| 2023-01-26T09:58:53 |
2025-04-01T04:34:23.846637
|
{
"authors": [
"tGrothmannFluffy"
],
"repo": "goodatlas/react-native-audio-record",
"url": "https://github.com/goodatlas/react-native-audio-record/issues/81",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
356544977
|
WebARonARCore on Android 9
The app says device not supported and quits. Used to work fine before the Android upgrade on my Pixel 2. What can I do to get it to work again?
I have the same problem, and when I go to the Play Store, I can't find WebARonARCore. Why?
WebARonARCore is not in the play store; the focus has been experimenting with these AR prototypes, and not to be a production-ready app in the play store. That being said, we now have a working version in Chrome Canary: Augmented reality for the web
I just reinstalled it and it's working again! maybe give it a try too
Please, follow the steps described in the readme carefully. WebARonARCore is deprecated (we are working on a replacement) and only works with ARCore Developer Preview. Thank you for your interest. We are working on something we hope will make a great replacement for this project.
Please, follow the steps described in the readme carefully. WebARonARCore is deprecated (we are working on a replacement) and only works with ARCore Developer Preview. Thank you for your interest. We are working on something we hope will make a great replacement for this project.
Are you, by any chance, referring to WebXR https://codelabs.developers.google.com/codelabs/ar-with-webxr/#0 ? After developing multiple AR applications I've been looking into making them more accessible and WebAR seemed to be the way to go, especially knowing how powerful ARCore and ARKit are. However I can't see any information about support of WebXR supporting iOS devices and anchors.
Would it be wise right now to attempt to make a +- commercial solution and teach collegues to use it?
As I have mentioned, we are working on a replacement for these projects and yes, the idea is to base them on WebXR as it is where the AR based standardization efforts are going at the moment. Cannot provide ETA on the release (if ever) though.
hi @judax !
any update on a working WebXr / arcore augmented reality sample web app ? thx
Still WIP I am afraid. Sorry.
|
gharchive/issue
| 2018-09-03T15:26:32 |
2025-04-01T04:34:23.955379
|
{
"authors": [
"BarsikTheCaT",
"PerspectivesLab",
"YanneYoshi",
"jsantell",
"judax",
"michaelnebeling"
],
"repo": "google-ar/WebARonARCore",
"url": "https://github.com/google-ar/WebARonARCore/issues/81",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1406524392
|
GStreamer issue when running bodypix.py
Description
Hi
I ran into an issue related to the GStreamer and Gi modules when running the bodypix.py file. The following is the output:
$ python bodypix.py
Traceback (most recent call last):
  File "C:\Users\zalta\Desktop\project-bodypix\project-bodypix\bodypix.py", line 28, in <module>
    import gstreamer
  File "C:\Users\zalta\Desktop\project-bodypix\project-bodypix\gstreamer.py", line 20, in <module>
    import gi
ModuleNotFoundError: No module named 'gi'
I tried looking on Stack Overflow for quite some time for a solution and wasn't successful in getting PyGObject working at all. I didn't want to rewrite my entire post, so I'll just link my own Stack Overflow post relating to this issue. Here's a link to my Stack Overflow post on this issue. I was hoping to get help on this issue as soon as possible. Just as a reference, I am on a Windows 11 OS. Thanks for all the help!
Issue Type
Build/Install
Operating System
Windows 10
Coral Device
Dev Board Mini
Other Devices
No response
Programming Language
Python 3.9
Relevant Log Output
No response
Hello @zain-altaf this project has not been implemented for Windows platforms. It only works on Linux machines and Coral Dev Board. Thanks!
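For anyone hitting the same import error on a supported Linux machine: the gi bindings come from system packages rather than pip; on Debian/Ubuntu something like the following usually provides them (package names are an assumption and vary by distribution):

sudo apt-get install python3-gi python3-gst-1.0 \
  gir1.2-gstreamer-1.0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good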
|
gharchive/issue
| 2022-10-12T16:54:02 |
2025-04-01T04:34:23.960078
|
{
"authors": [
"hjonnala",
"zain-altaf"
],
"repo": "google-coral/project-bodypix",
"url": "https://github.com/google-coral/project-bodypix/issues/33",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1602133439
|
Is this a memory error?
A large number (half?) of my AF2 runs, running on a standard Docker install on AWS and looking at complexes/multimers, aren't working anymore, when it used to be only 1 in 10 that failed. They aren't massive in size (a total of 900 aa), but it seems maybe there is a memory problem with the sequence alignments. The instance simply shuts down, and the log file isn't super helpful (although I only speak 'basic' computer talk). The errors vary, but it is always at the MSA step. For example, see below. Does this make sense to anyone?
I0227 06:20:39.545079 140374311003968 run_docker.py:255]
I0227 06:20:39.545193 140374311003968 run_docker.py:255] - 06:18:32.515 INFO: Realigning 14726 HMM-HMM alignments using Maximum Accuracy algorithm
I0227 06:20:39.545316 140374311003968 run_docker.py:255]
I0227 06:20:39.545434 140374311003968 run_docker.py:255] - 06:20:38.284 ERROR: Error in /tmp/hh-suite/src/hhalignment.cpp:3539: MergeMasterSlave:
I0227 06:20:39.545548 140374311003968 run_docker.py:255]
I0227 06:20:39.545661 140374311003968 run_docker.py:255] - 06:20:38.284 ERROR: did not find 511 match states in sequence 1 of UniRef100_Q0F0S0. Sequence:
I0227 06:20:39.545774 140374311003968 run_docker.py:255]
I encounter a similar error. Did you find anything to circumvent this error?
This could be related to https://github.com/soedinglab/hh-suite/issues/277
Any news on this front? I am running into the same issue with my predictions. Using alphafold from singularity. v 2.3.2
|
gharchive/issue
| 2023-02-27T23:54:03 |
2025-04-01T04:34:23.964896
|
{
"authors": [
"Sparklibug",
"jdmontenegro",
"pur80a"
],
"repo": "google-deepmind/alphafold",
"url": "https://github.com/google-deepmind/alphafold/issues/705",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2281442745
|
Clarify google-ai-generativelanguage version
Hi,
I'm in the process of adding google-generativeai to conda-forge here: https://github.com/conda-forge/staged-recipes/pull/26259
One of the dependencies, google-ai-generativelanguage, is not yet on conda-forge, so I am adding that as well, but it is not clear to me from the setup.py which version I need to specify.
Ref: https://github.com/google-gemini/generative-ai-python/blob/a96feda3b5dfe0709bde023b8025ac0f7595f5b3/setup.py#L45
Conda-forge cannot use prebuilt and downloadable files, as we build everything ourselves on our infrastructure, so would it be possible to clarify whether that version is downloadable as a source distribution from somewhere, and what version it has?
Currently, I have it specified as ==0.6.2 but I would assume that is not correct.
Thank you!
You stumbled on a temporary value that allows us to install from HEAD while we work on unreleased features. Please do not ship copies of the SDK using code built from HEAD to package managers - you should be able to find what you need in the versioned tags.
However, I'm not 100% sure you can or should maintain our packages on another distribution channel. I'll consult with some people internally and see if it's OK, but in the meantime please pause! I'm excited about having another distribution channel but I would hate for you do to this work only to have our security or legal team ask you to take it all down.
Hi everyone, thanks for the response!
Just to clarify:
I am building off of GitHub and PyPI releases and not any branches or downloads so only stable code should be included. Conda forge works based on versions so that is the only way I can distribute anything anyways. Also, we don't ship WIP code.
The license of both packages is Apache 2.0 so distribution is legal but of course, I'll hold off on these until you have confirmed it. In case distribution is not allowed, it would probably be good though to change the licenses to something proprietary that prohibits distribution or others will do the same.
Yes, I have specified 0.6.2 as a placeholder for now and once 0.6.3 is available, I would upgrade and distribute the second package. Conda-forge cannot redistribute binaries or any other compiled code so that wouldn't work either way :)
Is there a public timeline for when new releases are published?
Thanks! :)
0.6.3 came out yesterday: https://pypi.org/project/google-ai-generativelanguage/#history
Apache 2.0 so distribution is legal but of course
If there's any issue I expect that it would be around people thinking that those packages are provided by google, when it's actually you. But we'll see what answer @markmcd gets back.
Is there a public timeline for when new releases are published?
No, it's just whenever we think it's necessary.
Ahh perfect, thank you!
Conda-forge is a known redistribution platform and most of the packages are not submitted by the original package distributor so that shouldn't be an issue but let's wait. I'm also happy to add anyone as a co-maintainer, if there is interest?
Ok makes sense thanks.
@markmcd
|
gharchive/issue
| 2024-05-06T18:06:02 |
2025-04-01T04:34:23.978146
|
{
"authors": [
"BastianZim",
"MarkDaoust",
"markmcd"
],
"repo": "google-gemini/generative-ai-python",
"url": "https://github.com/google-gemini/generative-ai-python/issues/320",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1292282578
|
Wrong example for cloudrun-docker.yml
TL;DR
I had to make two change to make it work:
${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.SERVICE }}:${{ github.sha }}
->
${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.SERVICE }}/${{ github.sha }}
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'' (one ' too many at the end :) )
->
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
Also it would be nice to include docker auth using json credentials:
- name: Docker Auth
id: docker-auth
uses: docker/login-action@v2
with:
registry: ${{ env.GAR_LOCATION }}-docker.pkg.dev
username: _json_key
password: ${{ secrets.GCP_CREDENTIALS }}
Expected behavior
No response
Observed behavior
No response
Action YAML
# This workflow builds and pushes a Docker container to Google Artifact Registry and deploys it on Cloud Run when a commit is pushed to the $default-branch branch
#
# Overview:
#
# 1. Authenticate to Google Cloud
# 2. Authenticate Docker to Artifact Registry
# 3. Build a docker container
# 4. Publish it to Google Artifact Registry
# 5. Deploy it to Cloud Run
#
# To configure this workflow:
#
# 1. Ensure the required Google Cloud APIs are enabled:
#
#    Cloud Run            run.googleapis.com
#    Artifact Registry    artifactregistry.googleapis.com
#
# 2. Create and configure Workload Identity Federation for GitHub (https://github.com/google-github-actions/auth#setting-up-workload-identity-federation)
#
# 3. Ensure the required IAM permissions are granted
#
#    Cloud Run
#      roles/run.admin
#      roles/iam.serviceAccountUser (to act as the Cloud Run runtime service account)
#
#    Artifact Registry
#      roles/artifactregistry.admin (project or repository level)
#
#    NOTE: You should always follow the principle of least privilege when assigning IAM roles
#
# 4. Create GitHub secrets for WIF_PROVIDER and WIF_SERVICE_ACCOUNT
#
# 5. Change the values for the GAR_LOCATION, SERVICE and REGION environment variables (below).
#
# NOTE: To use Google Container Registry instead, replace ${{ env.GAR_LOCATION }}-docker.pkg.dev with gcr.io
#
# For more support on how to run this workflow, please visit https://github.com/marketplace/actions/deploy-to-cloud-run
#
# Further reading:
#   Cloud Run IAM permissions - https://cloud.google.com/run/docs/deploying
#   Artifact Registry IAM permissions - https://cloud.google.com/artifact-registry/docs/access-control#roles
#   Container Registry vs Artifact Registry - https://cloud.google.com/blog/products/application-development/understanding-artifact-registry-vs-container-registry
#   Principle of least privilege - https://cloud.google.com/blog/products/identity-security/dont-get-pwned-practicing-the-principle-of-least-privilege

name: Build and Deploy to Cloud Run

on:
  push:
    branches:
      - release/**

env:
  PROJECT_ID: YOUR_PROJECT_ID # TODO: update with your Google Cloud project id
  GAR_LOCATION: YOUR_GAR_LOCATION # TODO: update with your Artifact Registry location
  SERVICE: YOUR_SERVICE_NAME # TODO: update with your Cloud Run service name
  REGION: YOUR_REGION # TODO: update with your Cloud Run region

jobs:
  deploy:
    # Add 'id-token' with the intended permissions for workload identity federation
    permissions:
      contents: 'read'
      id-token: 'write'

    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Google Auth
        id: auth
        uses: 'google-github-actions/auth@v0'
        with:
          credentials_json: '${{ secrets.GCP_CREDENTIALS }}'

      # BEGIN - Docker auth and build (NOTE: If you already have a container image, these Docker steps can be omitted)

      # Authenticate Docker to Google Cloud Artifact Registry
      - name: Docker Auth
        id: docker-auth
        uses: docker/login-action@v2
        with:
          registry: ${{ env.GAR_LOCATION }}-docker.pkg.dev
          username: _json_key
          password: ${{ secrets.GCP_CREDENTIALS }}

      - name: Build and Push Container
        run: |-
          docker build -f Dockerfile.users -t "${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.SERVICE }}/${{ github.sha }}" .
          docker push "${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.SERVICE }}/${{ github.sha }}"

      # END - Docker auth and build

      - name: Deploy to Cloud Run
        id: deploy
        uses: google-github-actions/deploy-cloudrun@v0
        with:
          service: ${{ env.SERVICE }}
          region: ${{ env.REGION }}
          # NOTE: If using a pre-built image, update the image name here
          image: ${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.SERVICE }}/${{ github.sha }}

      # If required, use the Cloud Run url output in later steps
      - name: Show Output
        run: echo ${{ steps.deploy.outputs.url }}
Log output
No response
Additional information
No response
Fixed via PR #5
Thanks @mpiorowski!
|
gharchive/issue
| 2022-07-03T11:38:15 |
2025-04-01T04:34:23.983370
|
{
"authors": [
"mpiorowski",
"verbanicm"
],
"repo": "google-github-actions/example-workflows",
"url": "https://github.com/google-github-actions/example-workflows/issues/4",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1587195135
|
release-please-action not tagging releases after manifest-pr is merged
TL;DR
Here is an implementation of creating a release PR: https://github.com/ipfs/ipfs-companion/blob/main/.github/workflows/ci.yml#L96-L104
This creates the release PR as expected. On merging this PR, this action runs again, but does nothing. The expectation is not to create a release, because the need is to add more assets to the created release. The release step is separate and is executed when a new tag gets pushed. However release-please-action does not push the tag hence this step never gets executed.
Expected behavior
Since release-please-action is only being used to create a collector PR and not the release itself, the expectation as per the documentation was:
When you're ready to tag a release, simply merge the release PR.
This does not happen if the release step is not happening.
Observed behavior
The release PR merges without creating a release tag, breaking the workflow.
Action YAML
release-pr:
  runs-on: ubuntu-latest
  needs: [test]
  if: github.ref == 'refs/heads/main' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch')
  steps:
    - uses: google-github-actions/release-please-action@v3.7.3
      with:
        command: manifest-pr
        changelog-notes-type: github
changelog-notes-type: github
Log output
No response
Additional information
No response
If you want it to tag, use the manifest command. The manifest-pr command only creates the PR (manifest does PR creation and tagging)
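In other words, the reporter's step would become (a sketch with only the command changed):

- uses: google-github-actions/release-please-action@v3.7.3
  with:
    command: manifest
    changelog-notes-type: github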
|
gharchive/issue
| 2023-02-16T07:48:46 |
2025-04-01T04:34:23.988664
|
{
"authors": [
"chingor13",
"whizzzkid"
],
"repo": "google-github-actions/release-please-action",
"url": "https://github.com/google-github-actions/release-please-action/issues/719",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2577959538
|
Not able to render shorter variant for a vertical creative
Hi Team,
The solution is not able to render a shorter version of a vertical base video. Please look into it and help in resolving the issue.
Thanks in advance!
Please provide more information. Have you checked the Cloud Function logs, are there errors? Or is the application failing within the UI stage already?
Hi Mohab,
Thanks for the prompt response. The application is failing within the UI. After clicking on the render button, the processing isn't ending. It is working fine for a landscape video.
Thanks,
Sahil
Thanks for the additional information, that was quite helpful. We identified an issue which will be fixed very soon.
Thanks Mohab. Looking forward to it!
|
gharchive/issue
| 2024-10-10T07:46:44 |
2025-04-01T04:34:23.996348
|
{
"authors": [
"Sahil45745",
"mohabfekry"
],
"repo": "google-marketing-solutions/vigenair",
"url": "https://github.com/google-marketing-solutions/vigenair/issues/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
250187700
|
Improved floating point for layout constraint and added functional test. (#594)
Fixes #594
@khandpur Made changes.
|
gharchive/pull-request
| 2017-08-15T00:08:05 |
2025-04-01T04:34:23.999648
|
{
"authors": [
"saurabhj80"
],
"repo": "google/EarlGrey",
"url": "https://github.com/google/EarlGrey/pull/601",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
732959640
|
remove jit from backends
We should switch to using the jit decorator.
Like the tn.jit thing I wrote some time ago or jax.jit? The former calls the backend jit
i noticed, closing this
|
gharchive/issue
| 2020-10-30T07:50:02 |
2025-04-01T04:34:24.035718
|
{
"authors": [
"alewis",
"mganahl"
],
"repo": "google/TensorNetwork",
"url": "https://github.com/google/TensorNetwork/issues/864",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
175616220
|
[EN] Feedback for: /web/fundamentals/getting-started/your-first-progressive-web-app/step-02?hl=en
the URL is the only place that shows my current step: step 2
This'll be updated in our DevSite relaunch coming up shortly.
|
gharchive/issue
| 2016-09-07T21:55:18 |
2025-04-01T04:34:24.036669
|
{
"authors": [
"clauderc4e",
"petele"
],
"repo": "google/WebFundamentals",
"url": "https://github.com/google/WebFundamentals/issues/3347",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
130428173
|
Adds line highlighting support to {% highlight blocks %}
In a {% highlight %} block, this will make certain lines bold so that they are more apparent.
Use: {% highlight javascript hl_lines="3 4 5 6" %}
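A minimal sketch of the tag in use (assuming the standard Liquid {% endhighlight %} closer; the inner snippet is illustrative):
{% highlight python hl_lines="2" %}
def add(a, b):
    return a + b  # this line is rendered bold
{% endhighlight %}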
@petele
Is there a way to implement this that relies on wrapping the actual lines instead of naming them by number? Granting that we're not a wiki, MDN uses a naming approach and it has a horrible tendency to go wrong. Food for thought.
No, sadly not. The feature is provided by Pygments, all I've done is change the highlight from nothing to bold. :(
|
gharchive/pull-request
| 2016-02-01T17:32:00 |
2025-04-01T04:34:24.038352
|
{
"authors": [
"jpmedley",
"petele"
],
"repo": "google/WebFundamentals",
"url": "https://github.com/google/WebFundamentals/pull/2488",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
569100500
|
Removes & redirect PWA content to web.dev
What's changed, or what was fixed?
Removes and redirects PWA content to web.dev
See https://github.com/GoogleChrome/web.dev/pull/2200
Target Live Date: Once https://github.com/GoogleChrome/web.dev/pull/2200 goes live
Whoops!
There were 16 warnings that will prevent this PR from being merged. Please take a look, and either fix, or provide a justification for why they can't be fixed.
WARNINGS
src/content/en/fundamentals/app-install-banners/index.md - Unable to read file, was it deleted?
src/content/en/fundamentals/app-install-banners/promoting-install-mobile.md - Unable to read file, was it deleted?
src/content/en/fundamentals/web-app-manifest/index.md - Unable to read file, was it deleted?
src/content/en/progressive-web-apps/_book.yaml - Unable to read file, was it deleted?
src/content/en/progressive-web-apps/_index.yaml - Unable to read file, was it deleted?
src/content/en/progressive-web-apps/checklist.md - Unable to read file, was it deleted?
src/content/en/progressive-web-apps/desktop.md - Unable to read file, was it deleted?
src/content/es/fundamentals/web-app-manifest/index.md - Unable to read file, was it deleted?
src/content/id/fundamentals/web-app-manifest/index.md - Unable to read file, was it deleted?
src/content/it/progressive-web-apps/desktop.md - Unable to read file, was it deleted?
src/content/en/fundamentals/web-app-manifest/images/background-color.gif - Unable to read file stats: ENOENT: no such file or directory, stat 'src/content/en/fundamentals/web-app-manifest/images/background-color.gif'
src/content/en/fundamentals/web-app-manifest/images/devtools-manifest.png - Unable to read file stats: ENOENT: no such file or directory, stat 'src/content/en/fundamentals/web-app-manifest/images/devtools-manifest.png'
src/content/en/fundamentals/web-app-manifest/images/homescreen-icon.png - Unable to read file stats: ENOENT: no such file or directory, stat 'src/content/en/fundamentals/web-app-manifest/images/homescreen-icon.png'
src/content/en/fundamentals/web-app-manifest/images/manifest-display-options.png - Unable to read file stats: ENOENT: no such file or directory, stat 'src/content/en/fundamentals/web-app-manifest/images/manifest-display-options.png'
src/content/en/fundamentals/web-app-manifest/images/manifest-orientation-options.png - Unable to read file stats: ENOENT: no such file or directory, stat 'src/content/en/fundamentals/web-app-manifest/images/manifest-orientation-options.png'
src/content/en/fundamentals/web-app-manifest/images/theme-color.png - Unable to read file stats: ENOENT: no such file or directory, stat 'src/content/en/fundamentals/web-app-manifest/images/theme-color.png'
|
gharchive/pull-request
| 2020-02-21T17:58:41 |
2025-04-01T04:34:24.048697
|
{
"authors": [
"WebFundBot",
"petele"
],
"repo": "google/WebFundamentals",
"url": "https://github.com/google/WebFundamentals/pull/8508",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1579486382
|
[Pager] Horizontal Pager compose page twice when has exact 2 pages
Reproduce:
run Horizontal Pager: Looping with indicators
com/google/accompanist/sample/pager/HorizontalPagerLoopingIndicatorSample.kt
val pageCount = 2
HorizontalPager(
    // Set the raw page count to a really large number
    count = loopingCount,
    state = pagerState,
    // Add 32.dp horizontal padding to 'center' the pages
    contentPadding = PaddingValues(horizontal = 32.dp),
    // Add some horizontal spacing between items
    itemSpacing = 4.dp,
    modifier = Modifier
        .weight(1f)
        .fillMaxWidth()
) { index ->
    // We calculate the page from the given index
    val page = pageMapper(index)
    Log.d("pager", "compose index: $index for page $page at ${System.currentTimeMillis()} ")
    PagerSampleItem(
        page = page,
        modifier = Modifier
            .fillMaxWidth()
            .aspectRatio(1f)
    )
}
The log shows each page being composed twice (log screenshot omitted).
Note: it only happens when the pager has exactly 2 pages.
Accompanist Pager has now been deprecated as we have upstreamed it to the main Compose library and so I am closing this bug.
Please retest your issue using Compose Foundation Pager in the March release. You can see our migration guide for help and if this is still an issue, please file a bug at goo.gle/compose-feedback
|
gharchive/issue
| 2023-02-10T11:07:45 |
2025-04-01T04:34:24.052357
|
{
"authors": [
"bentrengrove",
"debbiefu"
],
"repo": "google/accompanist",
"url": "https://github.com/google/accompanist/issues/1511",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
564692386
|
Compare two faces in AIY vision kit
Hello,
Is it possible to compare two faces in the AIY Vision Kit by tweaking the existing models, or are there existing features available for this? If yes, how can it be done?
As of now, only face detections demos are available. face_comparison or face_recognition demos haven't been documented for AIY kits yet.
|
gharchive/issue
| 2020-02-13T13:44:27 |
2025-04-01T04:34:24.053645
|
{
"authors": [
"manoj7410",
"techguyzz"
],
"repo": "google/aiyprojects-raspbian",
"url": "https://github.com/google/aiyprojects-raspbian/issues/671",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1032341365
|
Move UnitConverter to common module.
IMPORTANT: All PRs must be linked to an issue (except for extremely trivial and straightforward changes).
Fixes #[issue number]
Description
UnitConverter code changes are required in the data capture module to support the Quantity data type in the inequality operator implementation.
Therefore, to avoid duplicating the UnitConverter code in data capture, it has been moved out into a separate module, common.
Now the engine and datacapture modules will both have a dependency on the 'common' module.
Alternative(s) considered
No
Type
Choose one: Code health
Screenshots (if applicable)
Checklist
[x] I have read and acknowledged the Code of conduct
[x] I have read How to Contribute
[x] I have read the Developer's guide
[x] I have signed the Google Individual CLA, or I am covered by my company's Corporate CLA
[x] I have discussed my proposed solution with code owners in the linked issue(s) and we have agreed upon the general approach
[x] I have run ./gradlew spotlessApply and ./gradlew spotlessCheck to check my code follows the style guide of this project
[x] I have run ./gradlew check and ./gradlew connectedCheck to test my changes locally
[x] I have built and run the reference app(s) to verify my change fixes the issue and/or does not break the reference app(s)
Codecov Report
Merging #847 (eb0d5df) into master (1d7f2cc) will decrease coverage by 0.06%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #847 +/- ##
============================================
- Coverage 18.33% 18.26% -0.07%
+ Complexity 267 265 -2
============================================
Files 118 117 -1
Lines 9203 9190 -13
Branches 572 572
============================================
- Hits 1687 1679 -8
+ Misses 7268 7263 -5
Partials 248 248
Impacted Files                                          Coverage Δ
...va/com/google/android/fhir/db/impl/DatabaseImpl.kt  74.07% <0.00%> (+1.85%) :arrow_up:
|
gharchive/pull-request
| 2021-10-21T10:48:04 |
2025-04-01T04:34:24.066269
|
{
"authors": [
"codecov-commenter",
"santosh-pingle"
],
"repo": "google/android-fhir",
"url": "https://github.com/google/android-fhir/pull/847",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
55277827
|
Only participate in the election when caught up
This is the first half of this, also planning to add a ConsistentStore wrapper which only allows calls to modify cluster-wide state if the caller is currently master.
Needs a bit of sorting out, I'll look again when I'm at the office. :smiley:
Yah, remnants of the other PRs which I separated out, soz. :/
LGTM
|
gharchive/pull-request
| 2015-01-23T12:47:13 |
2025-04-01T04:34:24.102171
|
{
"authors": [
"AlCutter",
"pphaneuf"
],
"repo": "google/certificate-transparency",
"url": "https://github.com/google/certificate-transparency/pull/439",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
263178875
|
Unresolved reference when parsing arguments
--startup references a variable logStartup that doesn't seem to be defined. Maybe you forgot to change this to True?
Thanks!
|
gharchive/issue
| 2017-10-05T15:58:14 |
2025-04-01T04:34:24.103075
|
{
"authors": [
"aaxu",
"dschuyler"
],
"repo": "google/ci_edit",
"url": "https://github.com/google/ci_edit/issues/74",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
84790315
|
Warn about placing @nocollapse on a prototype property
Folks at Google are starting to use nocollapse in their projects, and they are putting it in all sorts of places. The following should be a warning, that the annotation does nothing, since collapse-properties doesn't look at prototype properties anyway.
/** @constructor */
function Foo() {}
/** @nocollapse */
Foo.prototype.dprop = 234;
var y = Foo.prototype.dprop;
This isn't as straightforward as I had hoped. Since the CollapseProperties pass uses the GlobalNamespace class, prototype properties are ignored.
I can try to shoehorn this into GlobalNamespace with a flag to indicate whether warnings should be emitted or not (since GlobalNamespace is used in multiple passes), or I can try to find another pass to issue the warning from.
Thoughts?
I'd prefer not to put more warnings into the optimizations phase. Especially not something like this that can just be checked syntactically regardless of whether the properties are actually being collapsed.
Maybe CheckJSDoc.java ?
I was gonna suggest parsing/IRFactory.java, but I very much like the idea of CheckJSDoc.java.
It's good to have a pass that detects misplaced jsdocs. It is quite common that the passes who need a jsdoc annotation only look at the nodes that might have it, so they don't warn about the annotation placed on irrelevant nodes. It can be the first check in DefaultPassConfig#getChecks, always on (without a compiler option).
That's about where I had landed as well. I'll start work on the CheckJSDoc.java pass then.
Closed by #1006
|
gharchive/issue
| 2015-06-03T22:00:43 |
2025-04-01T04:34:24.106201
|
{
"authors": [
"ChadKillingsworth",
"blickly",
"dimvar"
],
"repo": "google/closure-compiler",
"url": "https://github.com/google/closure-compiler/issues/984",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1918840863
|
add example of issue 1243 to detect changes in clspv
Ref #1243
I would worry that we did not figure out the right IR now, thus missing a potential fix for it.
|
gharchive/pull-request
| 2023-09-29T08:32:35 |
2025-04-01T04:34:24.107140
|
{
"authors": [
"rjodinchr"
],
"repo": "google/clspv",
"url": "https://github.com/google/clspv/pull/1244",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
456888726
|
User-reported unhelpful error message
https://code.world/#PfDSvNaEK01xEr6lc4O5cCA
program.hs:7:18-27: error:
• Missing punctuation before this expression.
Perhaps you forgot a comma, an operator, or a bracket.
• To multiply, please use the * operator.
For example: drawingOf * ourPicture
• To apply a function, add parentheses around the argument.
For example: drawingOf(ourPicture)
This looks like someone who intended to use https://code.world/haskell, and ended up at the educational variant instead. Given that they are where they are, though, the error message seems clear, and even explicitly suggests adding the parentheses to apply a function. I don't think there's any obvious improvement here.
|
gharchive/issue
| 2019-06-17T11:42:20 |
2025-04-01T04:34:24.109609
|
{
"authors": [
"cdsmith"
],
"repo": "google/codeworld",
"url": "https://github.com/google/codeworld/issues/988",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
203369680
|
Add method addDefaultShareMenuItem for Android
Some versions of Android did not display the 'Share the URL' action button. I added a call to the addDefaultShareMenuItem() method and it works.
Hello?
|
gharchive/pull-request
| 2017-01-26T13:20:19 |
2025-04-01T04:34:24.114647
|
{
"authors": [
"gabfiocchi"
],
"repo": "google/cordova-plugin-browsertab",
"url": "https://github.com/google/cordova-plugin-browsertab/pull/16",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
93640891
|
e2e library for handling PGP/MIME messages
The original library was built by Yan Zhu from yahoo.
I've ported the library to e2e and made a number of changes to the original code.
The library will require some more work before it's finished, but I'd appreciate any advice/feedback at this point.
As of now, the library only supports the building of an outgoing PGP/MIME message.
I'll soon add the parsing of incoming messages.
@diracdeltas
lgtm other than comments above. BTW, I should be covered under the Google corporate CLA (for Yahoo).
LGTM modulo comments I've given.
|
gharchive/pull-request
| 2015-07-07T22:10:13 |
2025-04-01T04:34:24.121209
|
{
"authors": [
"diracdeltas",
"koto",
"yonigoogle"
],
"repo": "google/end-to-end",
"url": "https://github.com/google/end-to-end/pull/323",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2091993825
|
mvn package for dspace error
error-prone version: 2.10.0
BugPattern: ReferenceEquality
Stack Trace:
com.google.common.util.concurrent.ExecutionError: java.lang.NoSuchMethodError: 'com.sun.tools.javac.tree.JCTree$JCExpression com.sun.tools.javac.tree.TreeMaker.Select(com.sun.tools.javac.tree.JCTree$JCExpression, com.sun.tools.javac.code.Symbol)'
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2049)
at com.google.common.cache.LocalCache.get(LocalCache.java:3951)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4935)
Hey @glarbi! This issue looks like a duplicate of other reported issues related to JDK 21 compatibility. Version 2.10.0 is quite old; please upgrade to the latest version.
|
gharchive/issue
| 2024-01-20T10:03:08 |
2025-04-01T04:34:24.124118
|
{
"authors": [
"Stephan202",
"glarbi"
],
"repo": "google/error-prone",
"url": "https://github.com/google/error-prone/issues/4255",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2071952689
|
Remove incorrect statement from BugPattern index doc
#4248
@cushon I'm unfamiliar with CI/CD of google opensource - do I need to something else for this to be merged or is copybara going to take care of it?
There's an internal review process before it's merged. You don't need to do anything else, thanks!
|
gharchive/pull-request
| 2024-01-09T09:20:50 |
2025-04-01T04:34:24.125637
|
{
"authors": [
"cushon",
"elkkhan"
],
"repo": "google/error-prone",
"url": "https://github.com/google/error-prone/pull/4249",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1143656302
|
Evosax - Augmented Random Search
Hi there 🤗 - I am opening a first PR, which adds Augmented Random Search (Mania et al., 2018) to evojax. The implementation wraps evosax's.
Reference: Mania et al. (2018)
evosax Source Code: https://github.com/RobertTLange/evosax/blob/main/evosax/strategies/ars.py
There are a couple general considerations:
evosax strategies by default minimize the objective, while all evojax tasks maximize. Hence, the FitnessShaper has to transform the "raw" fitness (see the sketch after this list).
In order to easier access the individual strategies, I added a Strategies dictionary. This comes in handy when trying to benchmark multiple strategies.
evosax is now a required dependency, which in turn requires newer versions of jax and jaxlib. I am not sure if this breaks anything. So far when running the benchmarks it did not.
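A minimal sketch of that wrapping (assuming the strategy is exposed as ARS and that FitnessShaper takes a maximize flag, as in current evosax; the toy fitness is illustrative):
import jax
from evosax import ARS, FitnessShaper

strategy = ARS(popsize=64, num_dims=10)
fit_shaper = FitnessShaper(maximize=True)  # evojax tasks maximize

rng = jax.random.PRNGKey(0)
es_params = strategy.default_params
state = strategy.initialize(rng, es_params)
x, state = strategy.ask(rng, state, es_params)
raw_fitness = -(x ** 2).sum(axis=1)         # stand-in for a task score
fitness = fit_shaper.apply(x, raw_fitness)  # flips sign for the minimizer
state = strategy.tell(x, fitness, state, es_params)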
I have also created a mini-repository for running the benchmarks, storing logs and configuration files. Maybe this can be of general interest? I am planning to add a mle-hyperopt parameter search pipeline. You can find the ARS logs here and this is the benchmarking summary for ARS:
Benchmarks        Parameters                   Results
CartPole (easy)   900 (max_iter=1000), Link    902.107
CartPole (hard)   600 (max_iter=1000), Link    666.6442
Waterworld        6 (max_iter=500), Link       6.1300
Waterworld (MA)   2 (max_iter=2000), Link      1.4831
Brax Ant          3000 (max_iter=300), Link    3298.9746
MNIST             90.0 (max_iter=2000), Link   0.9610
Update: I added hyperparameter search utilities and coarsely grid-searched the initial learning rate and standard deviation. Here are some results for the cartpole and mnist tasks (result plots for Cartpole-Easy, Cartpole-Hard, and MNIST omitted).
Hey @RobertTLange , Thanks for the detailed PR!
I have merged it with some quick adds:
quick tests, and
putting evosax as the optional ([extra]) dependency of EvoJAX.
BTW, It seems that evosax may have issues under Python 3.6, which you may want to have a look at (although it does not affect the rest of EvoJAX for now.) Please refer to
The output of CI workflow https://github.com/google/evojax/runs/5268178005 , and
The output of my local smoke testing:
Python 3.6.13 |Anaconda, Inc.| (default, Feb 23 2021, 12:58:59)
[GCC Clang 10.0.0 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import evosax
>>> from evosax import Augmented_RS, FitnessShaper
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'Augmented_RS'
>>>
|
gharchive/pull-request
| 2022-02-18T19:41:31 |
2025-04-01T04:34:24.138529
|
{
"authors": [
"RobertTLange",
"alantian"
],
"repo": "google/evojax",
"url": "https://github.com/google/evojax/pull/9",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
630231721
|
[master] Fix spelling errors
Produced via:
github.com/client9/misspell
/assign helmick
/cc helmick
/assign mikehelmick
/ok-to-test
/lgtm
/approve
|
gharchive/pull-request
| 2020-06-03T18:39:17 |
2025-04-01T04:34:24.140761
|
{
"authors": [
"mattmoor",
"mikehelmick"
],
"repo": "google/exposure-notifications-server",
"url": "https://github.com/google/exposure-notifications-server/pull/539",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
565541483
|
Reduce size of gltfio for Android.
The net effect of these 3 changes on arm64 reduces the uncompressed libgltfio.so from 1.5 MB to 850 KB. Over 55% of the newly optimized binary is its rodata section (i.e. materials).
As a bonus, this also reduces libfilament.so by 8 KB because it fixes a typo in our CMake file wrt --gc-sections.
Later today, we would like to create a "gltfio-lite" which would only support lit opaque materials. This would have a size of ~470 KB.
Are there variants we can strip out of the materials? matc lets you do that.
|
gharchive/pull-request
| 2020-02-14T20:29:12 |
2025-04-01T04:34:24.146162
|
{
"authors": [
"prideout",
"romainguy"
],
"repo": "google/filament",
"url": "https://github.com/google/filament/pull/2135",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
762646746
|
Feature request: early stopping for model training
Opening this feature request based on #711.
Description of the feature to be implemented
Object to stop training when a monitored metric (e.g. validation loss, etc.) has stopped improving.
Specific points to consider
Whether this should live in flax.training
Should this stop the training loop early or should it stop checkpointing early (but continue the train loop)? If so, how should it do that?
Reference implementations in other frameworks
https://github.com/tensorflow/tensorflow/blob/v2.3.1/tensorflow/python/keras/callbacks.py#L1559-L1690
https://pytorch-lightning.readthedocs.io/en/latest/_modules/pytorch_lightning/callbacks/early_stopping.html#EarlyStopping
Curious how this relates to CLU library (cc @andsteing @Marvin182)
Copying @jheek comments from PR #711 for reference:
"My suggestion would be to consider the following though:
We generally try to avoid "inversion of control" APIs (a somewhat vague term). In this case the EarlyStopping class takes control of checkpointing even though it only adds the logic to decide whether we should stop or not.
I think a proposal that decouples checkpointing from the early stopping criteria (I imagine there are many criteria like this) is more likely to be accepted."
My main question is whether we need EarlyStopping to itself call save_checkpoint. Is there a simple way to devise an API that simply returns true/false as to whether training should stop? Then adding it into an existing training loop is just adding two lines regardless of how that training loop was handling checkpointing.
What do you think @gmittal? Would you be OK trying this out and showing a sample diff of how you'd also add this to one of our existing training loops (doesn't really matter which)?
It would be useful to shows how your code works somewhere, perhaps in an example, or just with a test.
@avital: One thing that might be worth exploring is having EarlyStopping as an iterator that we wrap around the training loop and have that terminate early when the validation metric does not improve:
epoch_iterator = EarlyStopping(range(epochs))
for step in epoch_iterator:
    optimizer = train_step(...)
    validation_metric = ...
    # It would be nice to not save checkpoints that do not improve
    ckpt = flax.training.save_checkpoint(...)
    epoch_iterator.update(validation_metric)
I'll put together a couple of examples and post them here soon.
@avital @marcvanzee I've rewritten EarlyStopping as an iterator and have pushed some simple tests as well: https://github.com/google/flax/pull/711/files
What is the benefit of writing this as an iterator?
I would expect a simple api that can do 3 things:
did_improve = early_stop.update(...) - incorporate eval metric and return True if it's the best one so far.
early_stop.should_stop -- tells me if the early stop criteria is met.
EarlyStop itself is a flax.struct.dataclass which can be part of a checkpoint.
The train loop would look something like this:
early_stop = EarlyStop(...)
for step in range(epochs):
    optimizer = train_step(...)
    validation_metric = ...
    did_improve = early_stop.update(validation_metric)
    if did_improve:
        state = (optimizer, early_stop, ...)
        save_checkpoint(state, prefix='best_')
    if early_stop.should_stop:
        logging.info('early stop....')
        # cleanup
        break
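A minimal sketch of such a criterion as a flax.struct.dataclass (illustrative only: the field and method names are assumptions, and because the dataclass is immutable, update here returns the new state alongside the flag):
from flax import struct

@struct.dataclass
class EarlyStop:
    min_delta: float = 0.0
    patience: int = 0
    best_metric: float = float("inf")
    patience_count: int = 0
    should_stop: bool = False

    def update(self, metric):
        # Immutable state: return (did_improve, new EarlyStop).
        if metric < self.best_metric - self.min_delta:
            return True, self.replace(best_metric=metric, patience_count=0)
        count = self.patience_count + 1
        return False, self.replace(
            patience_count=count, should_stop=count > self.patience
        )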
Thanks for the feedback. I've updated the object definition and tests accordingly.
|
gharchive/issue
| 2020-12-11T17:16:36 |
2025-04-01T04:34:24.162503
|
{
"authors": [
"avital",
"gmittal",
"jheek",
"marcvanzee"
],
"repo": "google/flax",
"url": "https://github.com/google/flax/issues/728",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
867640727
|
Add proper cache behavior for lift.jit
Fixes cache misses in lift.jit
NOTE: how to cache jit in linen transforms still needs additional changes
A clever hack! What exactly does your note in the comment above mean? Is this unsound when nested inside other transforms?
Just like jit the cache doesn't trigger if you call lift.jit multiple times which is what transforms.py ends up doing.
|
gharchive/pull-request
| 2021-04-26T12:18:38 |
2025-04-01T04:34:24.164517
|
{
"authors": [
"jheek"
],
"repo": "google/flax",
"url": "https://github.com/google/flax/pull/1275",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
935492497
|
Use Optax optimizer for ppo example
Use Optax optimizer for ppo example:
Use TrainState
use apply_fn inline with train_state pattern
Part of #1053
Codecov Report
Merging #1404 (934a818) into master (34eab18) will decrease coverage by 0.01%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #1404 +/- ##
==========================================
- Coverage 82.28% 82.27% -0.02%
==========================================
Files 65 65
Lines 5324 5332 +8
==========================================
+ Hits 4381 4387 +6
- Misses 943 945 +2
Impacted Files            Coverage Δ
flax/core/lift.py         96.15% <0.00%> (-0.26%) :arrow_down:
flax/linen/module.py      94.21% <0.00%> (-0.16%) :arrow_down:
flax/linen/transforms.py  91.45% <0.00%> (ø)
Did you rerun the code to verify we get the same results with Optax?
Yes I did a re-run and it is consistent with the original run
|
gharchive/pull-request
| 2021-07-02T07:18:42 |
2025-04-01T04:34:24.174922
|
{
"authors": [
"codecov-commenter",
"jheek"
],
"repo": "google/flax",
"url": "https://github.com/google/flax/pull/1404",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
150577219
|
make --dev option valid in Gruntfile.js
grunt.option('dev', false) cannot be true, probably.
Thanks for fixing this, Ken! I'm going to merge this, but can you please register your Github account internally at Google to avoid any issues with the CLA bot?
Thanks Phil. I've registered my account Google internally.
|
gharchive/pull-request
| 2016-04-23T17:23:55 |
2025-04-01T04:34:24.190873
|
{
"authors": [
"miuraken",
"pames"
],
"repo": "google/gae-secure-scaffold-python",
"url": "https://github.com/google/gae-secure-scaffold-python/pull/19",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
136447826
|
Issuing a PR fails with a 422
I'm attempting to issue a PR as follows:
title := ""
head := "my-id:my-branch"
base := "master"
body := "This is an automated PR"
pr := &github.NewPullRequest{
    Title: &title,
    Head:  &head,
    Base:  &base,
    Body:  &body,
}
fmt.Printf("PR: %+v issued~\n", pr)
prResult, _, err := client.PullRequests.Create("my-org", "my-repo", pr)
And when I execute it I get:
PR: &{Title:0xc82000ad50 Head:0xc82000ad60 Base:0xc82000ad70 Body:<nil> Issue:<nil>} issued~
422 Validation Failed [{Resource:Issue Field:title Code:missing_field}]
Am I doing something wrong or is the PR object not getting marshalled correctly?
I realized that it was USER error; it typically helps to actually set a title.
|
gharchive/issue
| 2016-02-25T17:03:05 |
2025-04-01T04:34:24.257352
|
{
"authors": [
"rdifrango"
],
"repo": "google/go-github",
"url": "https://github.com/google/go-github/issues/296",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
205103595
|
Generate accessors for structs with pointer fields
Fixes #45.
Change-Id: Ib2b6cc5d713e2eb833ee3c7fcfbd804bfe8fa313
Thanks for working on this @gmlewis. Before I look at the details of the implementation, I'd like to discuss this change at a higher level. I have some concerns about the general direction and I'd like to see where others stand. Let's maybe do that in #45 itself, and leave this PR to discuss implementation details. I'll ping you there.
As I posted in https://github.com/google/go-github/issues/45#issuecomment-279566195, my potential concerns are resolved and I have no objections to doing this. I'll leave some review comments on the implementation now.
First, some high level questions/thoughts for @gmlewis, looking only at the final API after this PR.
The "Get" prefix is slightly unfortunate, given Go style suggests to avoid "Get" prefix from getters (source: https://golang.org/doc/effective_go.html#Getters).
However, I understand this is unavoidable because otherwise field name and method name would have same name, right? Also, this is the same solution that protobuf uses and is therefore consistent, right?
Just mentioning this, I don't think we need to change it. Also, one way to justify the "Get" prefix is that it's not a simple getter, it has logic for returning zero value if the pointer is nil.
The documentation text for methods seems to prioritize mentioning the nil case:
// GetMessage returns the zero value if nil, otherwise the actual value.
Is that the way to go? I'd expect the nil case to be the less common path, and the actual value being the primary path. So would it better to reflect that in the docs by swapping the order? Something that mentions the actual value first, nil case second, like:
// GetMessage returns the Message field if its value is non-nil, and zero value otherwise.
(That phrasing doesn't sound better, but perhaps it can be tweaked. Just want to get your thoughts.)
This is actually somewhat implementation-related. Currently, this PR generates 56 "-accessors.go" files. Given they're all generated and most users/developers won't need to look at/modify the detailed contents of those files, maybe it'd be better to consolidate them all in a single generated .go file?
That way, when working on this package, there are less files get in your way that you don't want to modify. Also smaller risk of accidentally modifying a generated file, because there are fewer of them.
Compare the file list before and after this PR.
Please don't change this yet, I just want to get your thoughts on this first. We should consider the trade-offs of both approaches.
@shurcooL - addressing your questions/thoughts:
Yes, "Get" is consistent with the proto2 implementation.
I went through a few iterations on the comment, and am fine with changing it. Both versions seem clear to me, but am happy to emphasize as suggested.
Sure, I'm happy to consolidate to a single file... I was originally thinking it would be nice to easily jump to just the accessors for a particular file, but with all the editor tooling we have these days, maybe a single file is better after all.
+1 to single file
I believe I've addressed the review comments from @shurcooL.
PTAL.
Thank you for the detailed discussion regarding generators, @shurcooL!
I was not aware of this common usage, and it makes sense, so I've moved gen-accessors.go into the github directory.
Merging.
Integrated in https://github.com/google/go-github/commit/cd756c0dfc435cf14d9c696e3b44012f6a4fb5aa
|
gharchive/pull-request
| 2017-02-03T09:02:24 |
2025-04-01T04:34:24.270061
|
{
"authors": [
"gmlewis",
"shurcooL",
"willnorris"
],
"repo": "google/go-github",
"url": "https://github.com/google/go-github/pull/543",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
249916991
|
Make it easier to specify calendar event times in target timezone
Repro
using Google.Apis.Auth.OAuth2;
using Google.Apis.Calendar.v3;
using Google.Apis.Calendar.v3.Data;
using Google.Apis.Services;
using System;
using System.IO;
class Program
{
    static CalendarService Service;

    static void Main()
    {
        var key = File.ReadAllText("service-account-key.json");
        var credential = GoogleCredential.FromJson(key).CreateScoped(CalendarService.Scope.Calendar)
            .CreateWithUser("me@mydomain.com");
        Service = new CalendarService(new BaseClientService.Initializer() {
            HttpClientInitializer = credential,
            ApplicationName = "TimeDemo"
        });

        DateTime start, end;

        start = new DateTime(2017, 8, 14, 9, 0, 0, DateTimeKind.Local); // 9am local
        end = start.AddHours(1);
        AddEvent("Local", start, end);

        start = new DateTime(2017, 8, 14, 9, 0, 0, DateTimeKind.Utc); // 9am UTC
        end = start.AddHours(1);
        AddEvent("Utc", start, end);

        start = new DateTime(2017, 8, 14, 9, 0, 0, DateTimeKind.Unspecified); // 9am unspecified
        end = start.AddHours(1);
        AddEvent("Unspecified", start, end);
    }

    static void AddEvent(string title, DateTime start, DateTime end)
    {
        var ev = new Event {
            Summary = title,
            Start = new EventDateTime { DateTime = start, TimeZone = "Europe/London" },
            End = new EventDateTime { DateTime = end, TimeZone = "Europe/London" }
        };
        Service.Events.Insert(ev, "primary").Execute();
    }
}
Expected result
Create the events at 9am London time, taking into account daylight savings, because the TimeZone was specified in the API as "Europe/London". Or at least do this for the "Unspecified" event.
Actual result
Creates a "Utc" event at 9am UTC (10am London time).
Creates "Local" and "Unspecified" events at 9am in the timezone of the client machine. If I change my computer clock or deploy to a server in a different timezone, these events end up at a different time - not good!
The selected TimeZone of "(GMT+01:00) London" is displayed when I open any of the events in Google Calendar.
Question
I understand that this is probably by design, but it gave me a real headache! I wonder if there's a way the API could make it easier to say "Create an event at 9am in Europe/London"?
At the moment I'm doing this; there's probably a more extensible way but it works for now.
static TimeZoneInfo tzi = TimeZoneInfo.FindSystemTimeZoneById("GMT Standard Time");

static string ConvertToDateTimeRaw(DateTime date)
{
    var suffix = tzi.IsDaylightSavingTime(date) ? "+01:00" : "Z";
    // Zero-padded month/day so the string is valid RFC 3339.
    return date.ToString("yyyy-MM-ddTHH:mm:ss") + suffix;
}

static void AddEvent(string title, DateTime start, DateTime end)
{
    var ev = new Event {
        Summary = title,
        Start = new EventDateTime { DateTimeRaw = ConvertToDateTimeRaw(start) },
        End = new EventDateTime { DateTimeRaw = ConvertToDateTimeRaw(end) }
    };
    Service.Events.Insert(ev, "primary").Execute();
}
Suggestion
How about a ZonedDateTime property, which creates an event at the specified DateTime (ignoring its DateTimeKind) in the specified timezone?
var ev = new Event {
Summary = title,
Start = new EventDateTime { ZonedDateTime = new ZonedDateTime(start, "Europe/London") },
End = new EventDateTime { ZonedDateTime = new ZonedDateTime(end, "Europe/London") }
};
The Google Calendar API allows, but does not require, you as the developer to send the time including a timezone. If you don't, the timezone will either default to UTC (like a calendar inserted without a timezone) or to the time zone set in the calendar itself. This is by design of the API itself and has nothing to do with this client library.
The client library just lets you access the API; what you send is up to you as a developer, and what the API accepts is up to the API teams.
I don't think this is an issue with the client library.
I don't think the behavior of Local is a bug. DateTimeKind.Local explicitly means "local to the machine the code is running on".
The behavior of Unspecified is a bug in the client library IMO. We've looked at this before in #853. This isn't a matter of what the API accepts - it's what the client library does with the values in order to convert them to strings - and with DateTimeKind.Unspecified, it's converting it into UTC before sending it, instead of passing it along as-is. The problem is in this code.
That code is used by Utilities.GetDateTimeFromString which is called in many places directly by client code (e.g. in the Created and Updated properties of Event) as well as from the JSON converter.
We can't change the behavior in general now without it being massively backwardly incompatible - and all the client library code is automatically generated, so adding an extra property certainly isn't an easy option. We could potentially do it through partial classes, but that would mean changes to quite a lot of the code generation.
A possible alternative would be to add a per-service-instance setting that controls how DateTimeKind.Unspecified is handled. We'd probably want to be consistent across the service, so we'd need JSON support as well as different ways of calling Utilities.GetDateTimeFromString directly.
This is quite a lot of work, so I certainly wouldn't expect it to happen soon.
For the moment, the best workaround is to specify the time in UTC, even though you want the local time. I know that's annoying, and it's not what I think you should have to do, but it's what's available at the moment.
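The workaround amounts to converting the intended wall-clock time to UTC yourself before handing it to the library. A minimal sketch of that conversion (shown in Python purely to illustrate the idea; it is not this library's API):
from datetime import datetime
from zoneinfo import ZoneInfo

# 9am London wall-clock time, converted to UTC before it is sent,
# so the client library's UTC handling cannot shift the event.
local = datetime(2017, 8, 14, 9, 0, tzinfo=ZoneInfo("Europe/London"))
utc = local.astimezone(ZoneInfo("UTC"))
print(utc.isoformat())  # 2017-08-14T08:00:00+00:00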
I stand corrected. This looked like an issue that came up on StackOverflow recently.
The thing is, for most DateTime properties it's probably not a particularly bad thing to do. It's this one class (EventDateTime) where a time zone can be specified separately that it really makes sense to be able to use DateTimeKind.Unspecified to provide a local-to-that-time-zone date/time. (Admittedly that could be ambiguous due to DST transitions. The joys of date/time work.)
At some point I'd really like to build a Calendar client based on Noda Time but don't expect that any time soon!
If the issue is only with the JSON parsing couldn't we just create a second method for that to use?
It's formatting more than parsing. Creating a method to do the work is easy. The tricky bits are:
Handling compatibility - we don't want to break any users already relying on this behavior, so it has to be opt-in
Hooking any new code into the generated code, just at the right place
Ideas welcome - I'll chat about this with @chrisdunelm with a few possibilities. If we changed the codegen to generate partial classes, and had a way of writing client-specific partials, that would definitely open up possibilities.
You may also want to consider how many people may be relying on the "feature" that would be affected by an easy change. IMO a lot has already been done to this library that was not 100% backwards compatible because no one thought it would affect anyone.
100% backwards compatible is not always the best way forward.
Okay, here's a proposal for how we could do this:
Change the code generator to make everything partial
If there's any build infrastructure that completely wipes out directories, be more selective so that any manually written code is preserved
Add a Discovery doc patch entry to remove the start/end times so we can write our own code there
Add the property back into EventDateTime, but also another property for "PreserveUnspecifiedDateTime" or something similar
We'll need to check that we can get handle unspecified values coming back from the API as well.
Main drawback:
Users will have to specify PreserveUnspecifiedDateTime for each event. Unless we can autodetect, they'll need to set it on received events too
Alternative:
Have a static property to determine the behavior, and either modify the global behavior of GetDateTimeFromString etc, or use the partial technique above to use it just for EventDateTime. (It could be a static property in that class.)
I'm not sure what Json.NET does when parsing this - whether it populated DateTime or DateTimeRaw. More experimentation would be required.
Given that:
This repo is in maintenance mode.
There is a workaround for this problem.
A solution to this is moderately complex.
I think we will not do any work on this issue; so closing.
Please re-open if you disagree.
|
gharchive/issue
| 2017-08-14T02:13:06 |
2025-04-01T04:34:24.306166
|
{
"authors": [
"LindaLawton",
"chrisdunelm",
"jamesgurung",
"jskeet"
],
"repo": "google/google-api-dotnet-client",
"url": "https://github.com/google/google-api-dotnet-client/issues/1075",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
133923065
|
Fix typo and improve comment for MediaApiErrorHandling
(I spotted the typo when reviewing a previous change...)
LGTM
|
gharchive/pull-request
| 2016-02-16T09:06:00 |
2025-04-01T04:34:24.307664
|
{
"authors": [
"jskeet",
"peleyal"
],
"repo": "google/google-api-dotnet-client",
"url": "https://github.com/google/google-api-dotnet-client/pull/680",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
116262006
|
Segfault when calling insert_calendar with String-key hash
Not sure if this is an issue with the gem itself, but I can consistently reproduce this issue:
https://bugs.ruby-lang.org/issues/11675
If the hash keys are symbols, it throws ArgumentError (unknown keyword: summary), which is a relevant error here.
The segfault itself is a ruby issue. That said, the README is clear in saying hash keys must be symbols, not strings.
|
gharchive/issue
| 2015-11-11T04:27:13 |
2025-04-01T04:34:24.309436
|
{
"authors": [
"feifanzhou",
"sqrrrl"
],
"repo": "google/google-api-ruby-client",
"url": "https://github.com/google/google-api-ruby-client/issues/307",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
145230481
|
Check that pem exists before verifying it.
Should fix #46.
Coverage increased (+0.08%) to 92.331% when pulling 421106d7432d6d0fab080c09aacedf8e53256570 on murgatroid99:pem_existence_check into 229b0ca66b48ee32292c55b922a12aba3232aa00 on google:master.
ping
LGTM
Good to see this fixed! 👍
|
gharchive/pull-request
| 2016-04-01T16:56:12 |
2025-04-01T04:34:24.311680
|
{
"authors": [
"brunolazzaro",
"coveralls",
"murgatroid99",
"tbetbetbe"
],
"repo": "google/google-auth-library-nodejs",
"url": "https://github.com/google/google-auth-library-nodejs/pull/80",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
188210848
|
Compilation of (part of) libpam fails on SunOS 5.11 oi_151a7 (OpenIndiana)
From @ThomasHabets on October 10, 2014 8:6
Original issue 234 created by olaf.lists on 2012-12-19T09:50:55.000Z:
-> What steps will reproduce the problem?
1. Download either the current repository or the released package
2a. CC=/opt/gcc/4.4.4/bin/gcc make (to use gcc-illumos 4.4.4)
2b. Modify the Makefile to remove the -fvisibility=hidden option, then "make" (to use gcc 3.4.3)
-> What is the expected output? What do you see instead?
Compilation complete expected, but instead:
olaf@openindiana:~/tools/libpam-google-authenticator-1.0$ CC=/opt/gcc/4.4.4/bin/gcc make
/opt/gcc/4.4.4/bin/gcc --std=gnu99 -Wall -O2 -g -fPIC -c -D_POSIX_PTHREAD_SEMANTICS -D_REENTRANT -fvisibility=hidden -o google-authenticator.o google-authenticator.c
/opt/gcc/4.4.4/bin/gcc --std=gnu99 -Wall -O2 -g -fPIC -c -D_POSIX_PTHREAD_SEMANTICS -D_REENTRANT -fvisibility=hidden -o base32.o base32.c
/opt/gcc/4.4.4/bin/gcc --std=gnu99 -Wall -O2 -g -fPIC -c -D_POSIX_PTHREAD_SEMANTICS -D_REENTRANT -fvisibility=hidden -o hmac.o hmac.c
/opt/gcc/4.4.4/bin/gcc --std=gnu99 -Wall -O2 -g -fPIC -c -D_POSIX_PTHREAD_SEMANTICS -D_REENTRANT -fvisibility=hidden -o sha1.o sha1.c
/opt/gcc/4.4.4/bin/gcc -g -mimpure-text -o google-authenticator google-authenticator.o base32.o hmac.o sha1.o -ldl
/opt/gcc/4.4.4/bin/gcc --std=gnu99 -Wall -O2 -g -fPIC -c -D_POSIX_PTHREAD_SEMANTICS -D_REENTRANT -fvisibility=hidden -o pam_google_authenticator.o pam_google_authenticator.c
pam_google_authenticator.c: In function ‘converse’:
pam_google_authenticator.c:121: warning: passing argument 2 of ‘conv->conv’ from incompatible pointer type
pam_google_authenticator.c:121: note: expected ‘struct pam_message *’ but argument is of type ‘const struct pam_message *’
pam_google_authenticator.c: In function ‘get_first_pass’:
pam_google_authenticator.c:765: warning: passing argument 3 of ‘pam_get_item’ from incompatible pointer type
/usr/include/security/pam_appl.h:186: note: expected ‘void *’ but argument is of type ‘const void *’
pam_google_authenticator.c: In function ‘request_pass’:
pam_google_authenticator.c:776: warning: initialization discards qualifiers from pointer target type
/opt/gcc/4.4.4/bin/gcc -shared -g -mimpure-text -o pam_google_authenticator.so pam_google_authenticator.o base32.o hmac.o sha1.o -lpam
/opt/gcc/4.4.4/bin/gcc --std=gnu99 -Wall -O2 -g -fPIC -c -D_POSIX_PTHREAD_SEMANTICS -D_REENTRANT -fvisibility=hidden -o demo.o demo.c
demo.c: In function ‘pam_get_item’:
demo.c:97: warning: initialization from incompatible pointer type
demo.c: At top level:
demo.c:106: error: conflicting types for ‘pam_set_item’
/usr/include/security/pam_appl.h:175: note: previous declaration of ‘pam_set_item’ was here
make: *** [demo.o] Error 1
That means apparently important warnings in the pam_google_authenticator.c file and a real error in demo.c.
In software that concerns security, I would feel safer if all warnings were taken care of.
-> What version of the product are you using? On what operating system?
Released version or current development version on SunOS 5.11 oi_151a7 (OpenIndiana)
Copied from original issue: google/google-authenticator#233
Comment #2 originally posted by olaf.lists on 2012-12-19T19:54:12.000Z:
If I try further, I get:
olaf@openindiana:~/tools/google-authenticator-a096a628455a/libpam$ make pam_google_authenticator_unittest
gcc -DTESTING --std=gnu99 -Wall -O2 -g -fPIC -c -D_POSIX_PTHREAD_SEMANTICS -D_REENTRANT
-o pam_google_authenticator_testing.o pam_google_authenticator.c
pam_google_authenticator.c: In function `converse':
pam_google_authenticator.c:121: warning: passing arg 2 of pointer to function from incompatible pointer type
pam_google_authenticator.c: In function `get_first_pass':
pam_google_authenticator.c:765: warning: passing arg 3 of `pam_get_item' from incompatible pointer type
pam_google_authenticator.c: In function `request_pass':
pam_google_authenticator.c:776: warning: initialization discards qualifiers from pointer target type
gcc -shared -g -mimpure-text -o pam_google_authenticator_testing.so pam_google_authenticator_testing.o base32.o hmac.o sha1.o -lpam
gcc --std=gnu99 -Wall -O2 -g -fPIC -c -D_POSIX_PTHREAD_SEMANTICS -D_REENTRANT -o pam_google_authenticator_unittest.o pam_google_authenticator_unittest.c
pam_google_authenticator_unittest.c: In function `pam_get_item':
pam_google_authenticator_unittest.c:88: warning: initialization from incompatible pointer type
pam_google_authenticator_unittest.c:98: warning: assignment discards qualifiers from pointer target type
pam_google_authenticator_unittest.c: At top level:
pam_google_authenticator_unittest.c:109: error: conflicting types for 'pam_set_item'
/usr/include/security/pam_appl.h:180: error: previous declaration of 'pam_set_item' was here
pam_google_authenticator_unittest.c:109: error: conflicting types for 'pam_set_item'
/usr/include/security/pam_appl.h:180: error: previous declaration of 'pam_set_item' was here
make: *** [pam_google_authenticator_unittest.o] Error 1
Comment #3 originally posted by olaf.lists on 2013-01-05T19:17:59.000Z:
I checked the lines of the source producing this error and it seems they are the same ones affected by issue 76 http://code.google.com/p/google-authenticator/issues/detail?id=76
Could this be of any help to solve the problem?
Thank you
|
gharchive/issue
| 2016-11-09T10:18:39 |
2025-04-01T04:34:24.333727
|
{
"authors": [
"ThomasHabets"
],
"repo": "google/google-authenticator-libpam",
"url": "https://github.com/google/google-authenticator-libpam/issues/31",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
180465186
|
iPhone 6s Plus restore lost all my authenticator codes
I am facing an issue. I was using Google Authenticator for Dropbox and also for Bitbucket.
After I purchased a new iPhone I restored it from my iTunes backup, but on opening the Authenticator app I don't see any of my accounts there. I was able to set it up again with Dropbox because I still have access to it from my Mac. But I haven't logged in to Bitbucket recently and it's asking for an authenticator code. What do I do now?
Same issue as what? Should this be a comment on another issue?
https://github.com/google/google-authenticator/issues/459
Yes, the same issue. It won't sync with my backup?
No it will not. See other bug and the related android bug for more info.
tl;dr: It's not clear that this is possible to do without compromising security.
And please don't open a new 'issue' for the same issue.
|
gharchive/issue
| 2016-10-01T16:20:26 |
2025-04-01T04:34:24.336485
|
{
"authors": [
"ThomasHabets",
"banna360"
],
"repo": "google/google-authenticator",
"url": "https://github.com/google/google-authenticator/issues/572",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
716524939
|
Support for importing/exporting semaphores
Vulkan enables applications to import external semaphores or export semaphores via VK_KHR_external_semaphore_fd. This enables low-cost synchronization with external entities (either other APIs or other devices). Supporting this would greatly improve the overall performance when IREE is integrated into a multi-device pipeline.
Thanks @kpet! Yes, this is certainly something on our roadmap. Being able to interact with the embedding application in a non-intrusive way is one of the key goals IREE tries to hit. Basically the application should be able to bring its own device/queue/whatever and IREE will use those when instructed. We have APIs like iree_hal_vulkan_driver_wrap_device right now and the rest needs to be built out, including using external semaphores.
Yep! Same with import/export of memory; these features will likely start to land early next year as we need to continue building out the compiler support for async behaviors and can co-design the cross-platform APIs with their usage in IREE-generated modules.
One bit of prework would be understanding how external semaphores work with timeline semaphores - that's still unclear to me. Would be worth prototyping something to ensure we can smoothly integrate things.
The concrete work here is now adding APIs that use iree_wait_handle_t to import/export fds/HANDLEs/etc.
|
gharchive/issue
| 2020-10-07T13:20:27 |
2025-04-01T04:34:24.402707
|
{
"authors": [
"antiagainst",
"benvanik",
"kpet"
],
"repo": "google/iree",
"url": "https://github.com/google/iree/issues/3378",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
664916754
|
Closes: #1
Updated validations.py python script.
Fixed the behavior of validate_user function in validations.py.
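A sketch of the kind of fix this course exercise expects (the signature and rules below are assumptions based on the course material, not taken from this PR):
import re

def validate_user(username, minlen):
    """Checks whether the username matches the required conditions."""
    if type(username) != str:
        raise TypeError("username must be a string")
    if minlen < 1:
        raise ValueError("minlen must be at least 1")
    # Usernames may use lowercase letters, numbers, dots and
    # underscores, and must start with a letter.
    if not re.match(r"^[a-z][a-z0-9._]*$", username):
        return False
    if len(username) < minlen:
        return False
    return True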
Fixes #<issue_number_goes_here>
It's a good idea to open an issue first for discussion.
[ ] Tests pass
[ ] Appropriate changes to README are included in PR
@googlebot I signed it
|
gharchive/pull-request
| 2020-07-24T05:00:56 |
2025-04-01T04:34:24.405284
|
{
"authors": [
"BakhtiarH"
],
"repo": "google/it-cert-automation-practice",
"url": "https://github.com/google/it-cert-automation-practice/pull/5197",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
690360294
|
Updated validations.py python script.
Updated validations.py python script.
Fixed the behavior of validate_user function in validations.py.
Fixes #<issue_number_goes_here>
It's a good idea to open an issue first for discussion.
[ ] Tests pass
[ ] Appropriate changes to README are included in PR
Updated validations.py python script.
|
gharchive/pull-request
| 2020-09-01T18:38:03 |
2025-04-01T04:34:24.408465
|
{
"authors": [
"Ve-Va"
],
"repo": "google/it-cert-automation-practice",
"url": "https://github.com/google/it-cert-automation-practice/pull/6976",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
724180288
|
Closes: #1
Updated validations.py python script.
Fixed the behavior of validate_user function in validations.py.
Fixes #<issue_number_goes_here>
It's a good idea to open an issue first for discussion.
[ ] Tests pass
[ ] Appropriate changes to README are included in PR
@googlebot I signed it!
|
gharchive/pull-request
| 2020-10-19T00:37:14 |
2025-04-01T04:34:24.410932
|
{
"authors": [
"hassanali-khan"
],
"repo": "google/it-cert-automation-practice",
"url": "https://github.com/google/it-cert-automation-practice/pull/8822",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1423276339
|
Would it make more sense to rename custom_vjp's nondiff_argnums to static_argnums?
Custom VJP has a great error now, which definitely eliminates any confusion once you see it:
jax._src.errors.UnexpectedTracerError: Found a JAX Tracer object passed as an argument to a custom_vjp function in a position indicated by nondiff_argnums as non-differentiable. Tracers cannot be passed as non-differentiable arguments to custom_vjp functions; instead, nondiff_argnums should only be used for arguments that can't be or contain JAX tracers, e.g. function-valued arguments. In particular, array-valued arguments should typically not be indicated as nondiff_argnums.
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.UnexpectedTracerError
However, it does seem a bit confusing for it to be called nondiff_argnums since it isn't a way to mark things you don't want to differentiate. This is unlike custom_jvp, which does (as far as I can tell?) mark things you don't want to differentiate using nondiff_argnums.
Perhaps it would make sense to:
rename the parameter static_argnums since that's really what's being indicated, and
maybe pointing out in the documentation and error message that arguments that have no derivative are indicated by returning None in the corresponding position of the tuple returned by the backwards pass (a sketch follows below).
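A minimal sketch of both points together (names here are illustrative, not from JAX's docs): op is function-valued, so it goes in nondiff_argnums, while scale is array-valued and is marked as having no derivative by returning None from the bwd rule.
from functools import partial
import jax
import jax.numpy as jnp

@partial(jax.custom_vjp, nondiff_argnums=(0,))
def apply_op(op, x, scale):
    return op(x) * scale

def apply_op_fwd(op, x, scale):
    return apply_op(op, x, scale), (x, scale)

def apply_op_bwd(op, res, g):
    x, scale = res
    _, vjp_fn = jax.vjp(op, x)
    (dx,) = vjp_fn(g * scale)
    return dx, None  # None in scale's slot: no gradient flows to it

apply_op.defvjp(apply_op_fwd, apply_op_bwd)

grad_x = jax.grad(lambda x: apply_op(jnp.sin, x, 2.0).sum())(jnp.ones(3))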
I think we'd be open to renaming the parameter, but we'd have to keep the old one for backwards compatibility. Perhaps for some time we should just allow both, but hide the old name from the docs? After all, it's not inaccurate. It's just not as precise as static_argnums is.
Since this is low priority, we welcome contributions, but might not take it up ourselves in the short term.
|
gharchive/issue
| 2022-10-26T00:49:19 |
2025-04-01T04:34:24.413591
|
{
"authors": [
"NeilGirdhar",
"apaszke"
],
"repo": "google/jax",
"url": "https://github.com/google/jax/issues/12982",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1881564816
|
XLA Runtime Error: Cannot remove instruction %all-reduce while sharding with convolutions
Description
Hi team, I have been using jax with equinox for some time, and was excited by the new AutoParallelism feature using sharding. Unfortunately, I have run into a bug when combining sharding with convolutions: XLA with sharding and convolutions just breaks.
Sharding works with all other operations (FC layers and so on), but fails with convolutions.
The error is the one in the title: XLA fails with "Cannot remove instruction %all-reduce".
An MWE reproducing the issue is below:
import equinox as eqx
import jax
import jax.experimental.mesh_utils as mesh_utils
import jax.numpy as jnp
import jax.random as jr
import jax.sharding as sharding
import numpy as np
import optax  # https://github.com/deepmind/optax

# Hyperparameters
dataset_size = 64
channel_size = 4
hidden_size = 32
depth = 1
learning_rate = 3e-4
num_steps = 10
batch_size = 16  # must be a multiple of our number of devices.

# Generate some synthetic data
xs = np.random.normal(size=(dataset_size, channel_size))
ys = np.sin(xs)

num_samples = 100
image_height = 64
image_width = 64
num_channels = 3

# Generate random image data with values between 0 and 255
images = np.random.randint(0, 256, size=(num_samples, num_channels, image_height, image_width), dtype=np.uint8)
# Generate corresponding random labels from 0 to 5
labels = np.random.randint(0, 6, size=num_samples)

class SimpleConv(eqx.Module):
    conv_layer: eqx.nn.Conv2d
    linear: eqx.nn.Linear

    def __init__(self, key):
        key1, key2 = jr.split(key, 2)
        self.conv_layer = eqx.nn.Conv2d(3, 5, 5, 1, padding=2, key=key1)
        self.linear = eqx.nn.Linear(20480, 1, key=key2)

    def __call__(self, x):
        x = self.conv_layer(x)
        return self.linear(x.flatten())

model = SimpleConv(key=jr.PRNGKey(6789))
optim = optax.adam(learning_rate)
opt_state = optim.init(eqx.filter(model, eqx.is_inexact_array))

def compute_loss(model, x, y):
    pred_y = jax.vmap(model)(x)
    return jnp.mean((y - pred_y) ** 2)

@eqx.filter_jit
def make_step(model, opt_state, x, y):
    grads = eqx.filter_grad(compute_loss)(model, x, y)
    updates, opt_state = optim.update(grads, opt_state)
    model = eqx.apply_updates(model, updates)
    return model, opt_state

def dataloader(arrays, batch_size):
    dataset_size = arrays[0].shape[0]
    assert all(array.shape[0] == dataset_size for array in arrays)
    indices = np.arange(dataset_size)
    while True:
        perm = np.random.permutation(indices)
        start = 0
        end = batch_size
        while end != dataset_size:
            batch_perm = perm[start:end]
            yield tuple(array[batch_perm] for array in arrays)
            start = end
            end = start + batch_size

num_devices = len(jax.devices())
devices_x = mesh_utils.create_device_mesh((num_devices, 1, 1, 1))
devices_y = mesh_utils.create_device_mesh((num_devices,))
shard_x = sharding.PositionalSharding(devices_x)
shard_y = sharding.PositionalSharding(devices_y)
print(f"Devices being used {num_devices}")

for step, (x, y) in zip(range(num_steps), dataloader((images, labels), batch_size)):
    x = jnp.asarray(x, dtype="float32")
    x, y = jax.device_put(x, shard_x), jax.device_put(y, shard_y)
    model, opt_state = make_step(model, opt_state, x, y)
What jax/jaxlib version are you using?
jax==0.4.13, jaxlib==0.4.13+cuda12.cudnn89
Which accelerator(s) are you using?
GPU
Additional system info
Linux
NVIDIA GPU info
NVIDIA-SMI 530.41.03 Driver Version: 530.41.03 CUDA Version: 12.1
I can't reproduce this at head on 2 GPUs. Can you say more about your GPU setup? How many GPUs are you using and what kinds?
If you're so inclined: does the problem reproduce if you build jaxlib from source at head?
Closing because no further details were provided. I'd be happy to look into this if you can still reproduce this with the latest jax and jaxlib!
|
gharchive/issue
| 2023-09-05T09:19:57 |
2025-04-01T04:34:24.418401
|
{
"authors": [
"anuragithub",
"hawkinsp"
],
"repo": "google/jax",
"url": "https://github.com/google/jax/issues/17431",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
395070671
|
Does LogSoftmax work correctly?
Testing in examples/mnist_classifier.py:
def accuracy(params, batch):
    inputs, targets = batch
    target_class = np.argmax(targets, axis=1)
    x = predict(params, inputs)
    print(x[0])
The final layer is LogSoftmax, but the output does not seem correct:
Starting training...
[-2.3113673 -2.6440005 -2.4797316 -1.79847 -1.6207608 -2.931935
-3.303906 -2.7275395 -2.2099946 -2.1403143]
Can you say a bit more about what might be incorrect here? The sum of the elementwise-exp of those numbers is 1, which is intended.
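A quick numerical check of that claim (a short sketch using the values printed above):

import numpy as np

log_probs = np.array([-2.3113673, -2.6440005, -2.4797316, -1.79847, -1.6207608,
                      -2.931935, -3.303906, -2.7275395, -2.2099946, -2.1403143])
print(np.exp(log_probs).sum())  # ~1.0, as expected for log-softmax output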
I mean the sum of those numbers should be 1, not their elementwise-exp's.
Sorry, I misunderstood log-softmax. Maybe it would be more convenient if loss functions were provided.
Got it! I'll add that to my todo list.
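For reference, a cross-entropy loss on top of log-softmax outputs could be as simple as this (a sketch, not code from the examples; the names are illustrative):

import jax.numpy as jnp

def cross_entropy(log_probs, targets):
    # `targets` is one-hot; `log_probs` comes from a LogSoftmax final layer.
    return -jnp.mean(jnp.sum(targets * log_probs, axis=1))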
|
gharchive/issue
| 2019-01-01T10:07:28 |
2025-04-01T04:34:24.420909
|
{
"authors": [
"cookfish",
"mattjj"
],
"repo": "google/jax",
"url": "https://github.com/google/jax/issues/182",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|