Record fields: id, text, source, created, added, metadata
257327271
Transits depending on the cumuls of another dimension in python Are there any plans to expose an API to create a dimension with transits depending on the cumuls of another dimension in SWIG? I.e., AddDimensionDependentDimensionWithVehicleCapacity and the corresponding functions in routing.h are currently available only in C++. As I understand from the description, this API provides the ability to solve VRP with traffic conditions. Would be great to have this API in SWIG, in particular in Python. I am interested in this also. Haven't been able to find a workaround that performs the specific task of this method. Seems a duplicate of #339. Not available in non-C++ languages. Still experimental.
gharchive/issue
2017-09-13T10:11:23
2025-04-01T06:38:50.510472
{ "authors": [ "Mizux", "TheZepto", "emakarov", "lperron" ], "repo": "google/or-tools", "url": "https://github.com/google/or-tools/issues/480", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
335669799
Fix typo in Readme.md Just a small typo. I signed it!
gharchive/pull-request
2018-06-26T05:59:16
2025-04-01T06:38:50.511580
{ "authors": [ "dirkschumacher" ], "repo": "google/or-tools", "url": "https://github.com/google/or-tools/pull/734", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
836024014
Combine Gpu debug marker and submission tracks into one. Prior to this, we had two GPU tracks per queue (when the Vulkan layer was enabled). However, those belong tightly together. This change makes GpuTrack a composed track of GpuDebugMarkerTrack and GpuSubmissionTrack. It refactors out the logic of each kind of prior GpuTrack. Test: Take a capture with the Vulkan layer. Bug: http://b/182751932. It collapses the complete subtracks to just displaying the "hw_execution" timers, which is actually the default. See the compute queue in the first screenshot, and in contrast to this the graphics queue and the DMA queue on the same screenshot. I think it makes sense to have both, as the default, which is not showing a lot, is much cleaner. Ok, makes sense! I've set the font size of the subtrack labels to 90% of the parent's size: Made this a draft request, as the accessibility interface is not implemented correctly. Added intent-based behaviour:
gharchive/pull-request
2021-03-19T13:57:23
2025-04-01T06:38:50.516983
{ "authors": [ "florian-kuebler", "ronaldfw" ], "repo": "google/orbit", "url": "https://github.com/google/orbit/pull/2099", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
2562958515
fix(sourcerepo-sync): improve debuggability Problem: + poetry run python source_sync.py --kind SourceRepository --project oss-vdb-test --file ../../source_test.yaml --no-dry-run --verbose Loaded 26 local source repositories Validated 26 source repositories Retrieved 26 source repositories from datastore Traceback (most recent call last): File "/workspace/tools/sourcerepo-sync/source_sync.py", line 162, in <module> main() File "/workspace/tools/sourcerepo-sync/source_sync.py", line 134, in main validate_repository(ds_repos, False) File "/workspace/tools/sourcerepo-sync/source_sync.py", line 69, in validate_repository if 'link' in repo and repo['link'][-1] != '/': ~~~~~~~~~~~~^^^^ TypeError: 'NoneType' object is not subscriptable happening on https://github.com/google/osv.dev/pull/2699 was harder to get to the bottom of than I'd have liked... Capture some of the knowledge gained as part of debugging this in some basic service documentation to help the next person. I'm having some feelings about all of the functions doing the things being nested within main()... It doesn't feel like it's adhering to the spirit of the Python style guide on this? +1 - it looks like most of the nested functions are relatively straightforward to extract. Yeah, I was trying to time-box the amount of time I spent on this, as I burned close to 2 hours between debugging the failure and making this PR, so I wanted to avoid a large-scale refactor (and attendant breakage) so I can unwind my stack and get back to my other task... Was this fixed? It seems like that if 'link' in repo check isn't accomplishing what it intended to do. Yeah, good point... I fixed it directly in Datastore to get the code to stop crashing... Looking over https://github.com/google/osv.dev/commits/master/source_test.yaml, it looks like a bunch of recent commits haven't been applying successfully 😞 Actually it's not quite so dire, it was just this most recent PR that fell over 🤔 The problem seemed to be that the cve-osv source in Datastore was quite different to the YAML reality for it...
gharchive/pull-request
2024-10-03T02:00:40
2025-04-01T06:38:50.523850
{ "authors": [ "andrewpollock", "michaelkedar" ], "repo": "google/osv.dev", "url": "https://github.com/google/osv.dev/pull/2700", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
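The TypeError in the thread above comes from a common Python pitfall: `'link' in repo` only proves the key exists, not that its value is usable. A minimal standalone sketch of the failure and a safer guard (the dict literal stands in for the bad Datastore record):

```python
repo = {"link": None}  # key present, value missing, mirroring the bad entry

# Original check: membership passes, then None[-1] blows up.
try:
    if 'link' in repo and repo['link'][-1] != '/':
        print("link lacks a trailing slash")
except TypeError as e:
    print(f"crash: {e}")  # 'NoneType' object is not subscriptable

# Safer: dict.get() plus a truthiness check rejects both a missing key and a
# None/empty value before indexing.
if repo.get('link') and repo['link'][-1] != '/':
    print("link lacks a trailing slash")
```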
1107661902
trace config invalid Hello, I got this error when I turned on 'Scheduling details' under CPU Probes. Could you help me? -:69:13 error: No field named "symbolize_ksyms" in proto FtraceConfig symbolize_ksyms: true ^~~~~~~~~~~~~~~ [452.354] perfetto_cmd.cc:476 The trace config is invalid, bailing out. Please provide the Android version of the device you are collecting the trace from. @octaviantu - maybe you can take a look at this as part of your refactoring. This seems simply an issue of passing a config to an older device that doesn't support symbolize_ksyms. Actually, my device is quite new... Galaxy Z Fold 3, and the Android version is 11. Unfortunately symbolize_ksyms was added in Android S/12, so even Android 11 is too old for this flag. As Primiano said, we should just not enable this option if you specify/we detect an older version in the UI when generating the config. That's why there was a problem. Thank you! I think we'd still keep this issue open to track the change in the UI to stop specifying this flag. symbolize_ksyms is added when I turn on 'Scheduling details'. If so, is there any other way to get CPU scheduling details on Android version 11? If you mean collecting directly through the UI (i.e. without using the command line), then this is not possible until we fix this bug. However, you can just remove that line from the generated config and run the adb command the UI gives. Scheduling info will still be collected in that case (the only thing you'll miss is some information like the kernel blocked function in sched_blocked_reason). Fixed in https://github.com/google/perfetto/commit/327ad962a59d149ad6662fcbd18e38d808f91a3f For now the fix is only available on the Autopush branch. (You can get to Autopush by writing localStorage.perfettoUiChannel = 'autopush' in the dev console.) It will be rolled out to canary and stable in the next few weeks.
gharchive/issue
2022-01-19T05:30:34
2025-04-01T06:38:50.529756
{ "authors": [ "LalitMaganti", "Sunghyeok93", "octaviantu", "primiano" ], "repo": "google/perfetto", "url": "https://github.com/google/perfetto/issues/230", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
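For anyone applying the command-line workaround above, this is roughly where the offending line sits in a UI-generated trace config; only symbolize_ksyms comes from the error message, the surrounding fields are illustrative:

```textproto
data_sources: {
    config {
        name: "linux.ftrace"
        ftrace_config {
            ftrace_events: "sched/sched_switch"
            ftrace_events: "sched/sched_wakeup"
            symbolize_ksyms: true  # delete this line on Android 11 and older
        }
    }
}
```

With that line removed, scheduling events are still collected; only kernel symbolization (e.g. for sched_blocked_reason) is lost.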
381821169
physic: decimal overflow fix. Fixes a bug in overflow detection in dtoi() where, when the exponent is large enough, the overflow check passes even though the value has overflowed and wrapped past the base type's maximum. Adds a failing test case for the overflow. gohci (Ignoring the as7262 failure)
gharchive/pull-request
2018-11-17T02:56:15
2025-04-01T06:38:50.531191
{ "authors": [ "NeuralSpaz", "maruel" ], "repo": "google/periph", "url": "https://github.com/google/periph/pull/340", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
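To illustrate the class of bug fixed above: checking for overflow after the multiply is fooled by wraparound, so the guard has to run before. A hedged Go sketch of the idiom, not periph's actual dtoi() code:

```go
package main

import (
	"errors"
	"fmt"
	"math"
)

var errOverflow = errors.New("decimal overflow")

// scale multiplies v by 10^exp, checking before each multiply so a wrapped
// value can never sneak past the comparison.
func scale(v int64, exp int) (int64, error) {
	for i := 0; i < exp; i++ {
		if v > math.MaxInt64/10 {
			return 0, errOverflow // v*10 would wrap past the type's max
		}
		v *= 10
	}
	return v, nil
}

func main() {
	fmt.Println(scale(922337203685477580, 1)) // fits exactly
	fmt.Println(scale(922337203685477581, 1)) // 0 decimal overflow
}
```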
1639086316
Output source image is too different from the input image Hi, thanks for your work! When I tried some real images, the null-text inversion output was fine, but the ptp editing output was totally different from the input, including both the source image and the edited image. Could you please help to explain why this happens? And any advice on how to solve it? In my understanding, the ptp-generated source image should be the same as the output image of the null-text inversion, is that correct? Anyone who has thoughts on this would be very appreciated! I had exactly the same problem: null-text inversion works perfectly well, but the first output of ptp editing isn't the same as the null-text inverted one (it's supposed to be the same). I managed to narrow down the problem: using x_t, uncond_embeddings, and prompts = [prompt_used_for_inversion] gives the correct inverted result. However, once prompts has a batch size > 1, for example if we use [prompt_used_for_inversion, new_prompt], it gives an entirely different result. And this is the case even with an empty controller... I dug into this as well. Seems like whenever the batch size is different from what was used during inversion (bs=1 during inversion, bs=2 during ptp), the unet outputs different results. The first divergence I found is after a pytorch linear layer. This explanation here might be why this is happening: https://github.com/pytorch/pytorch/issues/9146#issuecomment-409344822 It's saying something like: when the input shape is different, BLAS could use an entirely different operation ordering, and that could lead to small differences. These small differences might not matter much for neural network training and such, but in our case we need the first half of the ptp results to be exactly the same as the inversion results, so any small difference after going through many layers could end up a big difference. I think our best bet would be to execute the unet two times. The first time with the exact input we had for inversion, [base prompt uncond, base prompt cond], and the second time with [new prompt uncond, new prompt cond]. @jingweim Thanks for your suggestion! I found that directly executing the unet twice during the editing process leads to a bug, as the p2p script expects the batch size to be an even number. Did you modify the script and get it running? I'm not sure, but a prompt like this works... image_path = "./example_images/gnochi_mirror.jpeg" prompt = "a cat sitting next to a mirror, a cat sitting next to a mirror" (image_gt, image_enc), x_t, uncond_embeddings = null_inversion.invert(image_path, prompt, offsets=(0,0,200,0), verbose=True) print("Modify or remove offsets according to your image!") #prompts = ["a cat sitting next to a mirror", "a tiger sitting next to a mirror"] prompts = ["a cat sitting next to a mirror, a cat sitting next to a mirror","a tiger sitting next to a mirror, a tiger sitting next to a mirror"] It's not good; what should I revise? Do you mean 1 works but 2 does not work? Umm... I think I made a mistake; now it runs well!! That looks great. Could you share what part did you modify? Also what version of pytorch, transformers, and diffusers are you using? Thanks.
I just modified two things. 1. requirement.txt: diffusers==0.14.0, plus pytorch 1.13.1. 2. In ptp_utils.py, follow the issue below: replace the original def forward(x, context=None, mask=None) function in def register_attention_control(model, controller) of ptp_utils.py with the following code: https://github.com/google/prompt-to-prompt/issues/44#issuecomment-1593284782
gharchive/issue
2023-03-24T10:02:48
2025-04-01T06:38:50.544337
{ "authors": [ "MunchkinChen", "XinDing5", "failbetter77", "g-jing", "jingweim" ], "repo": "google/prompt-to-prompt", "url": "https://github.com/google/prompt-to-prompt/issues/47", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
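A minimal sketch of the two-pass workaround proposed in the thread above: keep each UNet call's batch shape identical to the batch-size-1 shape used during inversion, so BLAS operation ordering (and therefore the source trajectory) is reproduced exactly. The variable names are assumptions, not the repo's actual code:

```python
import torch

# `unet`, `latents`, `t`, and the embeddings are stand-ins for the diffusers
# objects produced during null-text inversion.
def denoise_per_prompt(unet, latents, t, src_embeddings, edit_embeddings):
    # Two separate forward passes instead of one batched [source, edit] call,
    # so the source branch reproduces the inverted trajectory exactly.
    noise_src = unet(latents, t, encoder_hidden_states=src_embeddings).sample
    noise_edit = unet(latents, t, encoder_hidden_states=edit_embeddings).sample
    return torch.cat([noise_src, noise_edit], dim=0)
```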
56626650
Update recaptchalib.php The array index in Line 133 needs quotes to prevent an error message. It should be ['error-codes'] instead of [error-codes]. Thanks! Duplicate of #1 (and many other PRs), can you close this PR? Fixed in v1.1
gharchive/pull-request
2015-02-05T04:37:53
2025-04-01T06:38:50.566215
{ "authors": [ "mrodrigueztech", "rowan-m", "svivian" ], "repo": "google/recaptcha", "url": "https://github.com/google/recaptcha/pull/23", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
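For reference, the one-character-class fix above in context; an unquoted key is parsed as constants, not a string:

```php
<?php
$response = ['success' => false, 'error-codes' => ['missing-input-response']];

// Bare error-codes parses as the undefined constant `error` minus the
// undefined constant `codes`, so it warns and never indexes by the string:
// $errors = $response[error-codes];

// Quoted string key, as shipped in v1.1:
$errors = $response['error-codes'];
```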
154452537
shaka-player and webGL with drm content Hello, I'm developing some drafts to study the viability of making video mosaics using webGL. I've thought to give DASH support through a shaka player, building a texture with the content of the html5 video element attached to the player and rendering it in a webGL context. I had success with non-DRM content. But with Widevine videos I've suffered some issues depending on the device: with Android mobiles the test works really well, but on Mac/Linux, despite the shaka player working as expected, the texture in the webGL context is black. I'm using Chrome on both Android and Mac/Linux. Does anyone know why this weird behavior happens? I am testing with the links provided in the documentation of shaka-player Non-DRM: //storage.googleapis.com/shaka-demo-assets/sintel/dash.mpd DRM: //storage.googleapis.com/shaka-demo-assets/sintel-widevine/dash.mpd Sorry for taking so long to reply. This slipped through the cracks. I don't know why you would get that behavior. We really don't have any expertise in webGL. I would guess that some platforms and DRM systems may prevent you from capturing content by design, but that is just my best guess. Since we don't have details on the internals of this, and since WebGL and capturing frames are both very much out of the scope of what Shaka is trying to provide, I'm going to go ahead and close this issue. I'm very sorry we weren't able to help you with this, but I hope that Shaka has at least been a useful tool for mosaics of non-DRM content. @eipporko were you able to make this work for widevine DRM videos in chrome? I'm planning to capture video frames of DRM-protected content using webGL and send them to a server in my application.
gharchive/issue
2016-05-12T10:47:12
2025-04-01T06:38:50.614999
{ "authors": [ "eipporko", "iamprem", "joeyparrish" ], "repo": "google/shaka-player", "url": "https://github.com/google/shaka-player/issues/379", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
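For context on the report above, uploading the video element into a WebGL texture is a single texImage2D call, which is why the DRM path is the main variable. A minimal sketch (element IDs are assumptions):

```js
// The <video> is assumed to be the one Shaka Player is attached to.
const video = document.getElementById('video');
const gl = document.getElementById('canvas').getContext('webgl');

const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

function uploadFrame() {
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // On platforms whose CDM decodes into protected memory, this upload can
  // silently yield black pixels for Widevine content.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  requestAnimationFrame(uploadFrame);
}
requestAnimationFrame(uploadFrame);
```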
391695319
Split the crate in 2: starlark and starlark-repl The main crate had a lot of transitive dependencies due to the REPL, so this change extracts the REPL into a separate crate. This way the main Starlark crate only has 9 non-build transitive dependencies: $ cargo tree starlark v0.2.0-pre (/usr/local/google/home/dmarting/git/skylark-rust/starlark) ├── codemap v0.1.1 ├── codemap-diagnostic v0.1.0 │ ├── codemap v0.1.1 (*) │ ├── isatty v0.1.9 │ │ ├── cfg-if v0.1.6 │ │ └── libc v0.2.45 │ └── term v0.4.6 ├── lalrpop-util v0.16.2 └── linked-hash-map v0.5.1 @jmillikin: would you mind taking a look at this PR? Maybe we can split the crate even further, but that won't decrease the number of dependencies. Getting rid of codemap-diagnostic could help but this is more work (we could use a custom diagnostic instead and have a converter as a separate crate). Thanks, sounds good. I'll wait for feedback and then push version 0.2. Early feedback: I've verified rustc 1.31 is able to build a WebAssembly module with starlark-rust at HEAD, and the compiler's tree-shaking is smart enough to drop ~all of the codemap-diagnostic transitive dependencies from the final output. A small fix to codemap-diagnostic is needed to fix a #[cfg] guard (https://github.com/kevinmehall/codemap-diagnostic/pull/3), but releasing starlark v0.2 doesn't need to block on that.
gharchive/pull-request
2018-12-17T12:42:53
2025-04-01T06:38:50.634746
{ "authors": [ "damienmg", "jmillikin" ], "repo": "google/starlark-rust", "url": "https://github.com/google/starlark-rust/pull/24", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
722211884
pkg/bisect: run multiple config bisect rounds A crash might depend on multiple config options, so run config bisect multiple times to identify all of them. Signed-off-by: Jouni Hogander jouni.hoegander@partner.bmw.de Before sending a pull request, please review Contribution Guidelines: https://github.com/google/syzkaller/blob/master/docs/contributing.md This can now be done directly during config minimization: https://github.com/google/syzkaller/blob/99c64d5c672700d6c0de63d11db25a0678e47a75/pkg/kconfig/minimize.go#L42-L69 It can find all pairs of configs.
gharchive/pull-request
2020-10-15T10:35:19
2025-04-01T06:38:50.637949
{ "authors": [ "dvyukov", "hogander-unikie" ], "repo": "google/syzkaller", "url": "https://github.com/google/syzkaller/pull/2193", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
348459615
sys/fuchsia: add fdio_service_connect description The current Fuchsia syscall descriptions do not have a way to bind an endpoint of a channel to a service, which makes subsequent zx_channel_{call,read,write} calls fail too early. This CL adds a syscall description for fdio_service_connect, which binds a given channel endpoint to a service as specified by its svcfs path. Once a connection has been established to a service, subsequent zx_channel_{call,read,write} calls on the other endpoint of the channel will be able to communicate with the service. I just noticed that you left comments on #668. Addressing those comments would make this PR obsolete, hence closing this one.
gharchive/pull-request
2018-08-07T19:39:40
2025-04-01T06:38:50.639936
{ "authors": [ "dokyungs" ], "repo": "google/syzkaller", "url": "https://github.com/google/syzkaller/pull/672", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
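For reference, the call being described looks roughly like this from C; the service path is an illustrative placeholder and the header location varies across Fuchsia SDK versions:

```c
#include <lib/fdio/directory.h>  // fdio_service_connect; path varies by SDK
#include <zircon/syscalls.h>

int connect_to_service(zx_handle_t *out_client) {
  zx_handle_t client, server;
  if (zx_channel_create(0, &client, &server) != ZX_OK)
    return -1;

  // Bind the server endpoint to a service in the /svc namespace; with this
  // done, zx_channel_write/zx_channel_call on `client` reach the service
  // instead of failing early.
  if (fdio_service_connect("/svc/fuchsia.example.Echo", server) != ZX_OK) {
    zx_handle_close(client);
    return -1;
  }

  *out_client = client;
  return 0;
}
```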
66026849
Support extended import syntax This adds support for an import clause pair: import def, {a, b} from 'mod'; import def2, * as m from 'mod'; The AST for this uses an ImportClausePair. To simplify the transformer, the above is first transformed into: import {default as def, a, b} from 'mod'; import {default as def2} from 'mod'; import * as m from 'mod'; then the existing transformer just works. This preprocessing is done in the ModuleNormalizeTransformer. Fixes #1509 @johnjbarton PTAL I don't understand "import clause pair". The list of imported bindings is exactly two items? I need an AST node that represents def, * as m and def, {...}. I could not come up with a good name. My question was about the language. Is it simply that it never makes sense to have three items in the clause? Is import * as m, * as p from 'foo.js'; illegal or just dumb? I tried really hard to make the grammar for this non-insane but the export default mafia didn't budge. Those are the only two allowed forms. You are supposed to remember that you can only do import def, {x} from 'mod'; import def, * as m from 'mod'; but none of: import {x}, def from 'mod'; import {x}, * as m from 'mod'; import * as m, def from 'mod'; import * as m, {x} from 'mod'; import def, * as m, {x} from 'mod'; import def, {x}, * as m from 'mod'; import * as m, def, {x} from 'mod'; import * as m, {x}, def from 'mod'; import {x}, * as m, def from 'mod'; import {x}, def, * as m from 'mod'; Another option for the AST would be to have the ImportDeclaration use an Array. I tried this too but the code ended up a bit uglier. The grammar would be much simpler and users would not have to remember the arbitrary order if this was just a list of ImportSpecifier | NameSpaceImport | ImportDefault. @johnjbarton Ping! The grammar would be much simpler and users would not have to remember the arbitrary order if this was just a list of ImportSpecifier | NameSpaceImport | ImportDefault. I agree. Did you bring this up with TC39 and/or the proposal author? I brought it up on Bugzilla, face to face with the module champions, and I even think I brought it up in the 20+ people meeting room. LGTM I renamed it to ImportSimplifyingTransformer.
gharchive/pull-request
2015-04-02T22:54:29
2025-04-01T06:38:50.646549
{ "authors": [ "UltCombo", "arv", "johnjbarton" ], "repo": "google/traceur-compiler", "url": "https://github.com/google/traceur-compiler/pull/1863", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
722252014
Error while installing usd_from_gltf: No rule to make target '$USDROOT/lib/libboost_python.so', needed by 'usd_from_gltf/usd_from_gltf' {USD} = USDROOT {UFG_SRC} = usd_from_gltf {UFG_BUILD} = UFG_BUILD `python usd_from_gltf/tools/ufginstall/ufginstall.py UFG_BUILD USDROOT Install Settings: Source Directory: /home/aman/usd_from_gltf Install Directory: /home/aman/UFG_BUILD USD Directory: /home/aman/USDROOT Download Directory: /home/aman/UFG_BUILD/src Build Directory: /home/aman/UFG_BUILD/build Build Config: Release CMake Generator: Default Already Installed: DRACO, GIF, JPG, JSON, ZLIB, PNG, STB_IMAGE, TCLAP Installing: USD_FROM_GLTF -------- Installing USD_FROM_GLTF -------- USD_FROM_GLTF: CWD: /home/aman/UFG_BUILD/build/usd_from_gltf USD_FROM_GLTF: Run: cmake /home/aman/usd_from_gltf -DCMAKE_INSTALL_PREFIX=/home/aman/UFG_BUILD -DCMAKE_PREFIX_PATH=/home/aman/UFG_BUILD -DUSD_DIR=/home/aman/USDROOT USD_FROM_GLTF: CWD: /home/aman/UFG_BUILD/build/usd_from_gltf USD_FROM_GLTF: Run: cmake --build . --config Release --target install -- [ 30%] Built target gltf [ 40%] Built target common [ 71%] Built target process [ 88%] Built target convert make[2]: *** No rule to make target '/home/aman/USDROOT/lib/libboost_python.so', needed by 'usd_from_gltf/usd_from_gltf'. Stop. CMakeFiles/Makefile2:331: recipe for target 'usd_from_gltf/CMakeFiles/usd_from_gltf.dir/all' failed make[1]: *** [usd_from_gltf/CMakeFiles/usd_from_gltf.dir/all] Error 2 Makefile:148: recipe for target 'all' failed make: *** [all] Error 2 ` I've had the same problem but found the solution to be quite simple. In the newer USD install it may not find the file 'libboost_python.so' because it is actually called 'libboost_python38.so'. Just look up the file in your USD install directory, duplicate it and take away the '38'. Then try the installation again. At least that was the solution to my installation failing. Hope that helps. Thanks Schlomoh! What I did instead was change the line https://github.com/google/usd_from_gltf/blob/2387ab7f6678ef74cf2aaebf1d29da106020d9eb/CMakeLists.txt#L87 to list(APPEND USD_LIBS "${USD_DIR}/lib/libboost_python36.so") and it worked!
gharchive/issue
2020-10-15T11:35:05
2025-04-01T06:38:50.670394
{ "authors": [ "AksAman", "Schlomoh" ], "repo": "google/usd_from_gltf", "url": "https://github.com/google/usd_from_gltf/issues/60", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
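Both fixes above amount to making the versioned Boost.Python library visible under the unversioned name the build expects; a symlink is the least invasive form of the duplicate-the-file approach (adjust the suffix to whatever libboost_python*.so your USD install actually ships):

```bash
# Run from the USD install's lib directory.
cd "$USD_DIR/lib"
ln -s libboost_python38.so libboost_python.so
```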
593329182
Image resizing not working when the origin point is in the top-left corner Bug Description When the origin of the resizing is the top-left corner AND the image is rotated 90 degrees, resizing fails and the image "disappears". Similar issues occur with text boxes and many different rotations (when rotated more than 90 degrees from 0 in either direction). Expected Behaviour Resizing should work regardless of the rotation / start position. Steps to Reproduce Add an image Assign rotation 90 degrees Position it in the top-left corner Resize from the corner that currently appears as the bottom-right, without considering rotation See the odd stuff happening. Screenshots Do not alter or remove anything below. The following sections will be managed by moderators only. Acceptance Criteria QA Instructions Add an image Set rotation to 90 degrees Now drag the image to the upper left corner Resize it from the corners Verify the resizing works as expected and that the image doesn't "jump around". After spending some time debugging and making clear under which conditions the resizing fails, I recreated the situation in CodeSandbox and added an issue to the Moveable repo instead: https://github.com/daybrush/moveable/issues/251 For information, the bug happens when keepRatio is true and snapping is enabled -- it also happens only when the element is in one of the snapping points, e.g. on the left edge of the Page (doesn't have to be a corner). Will wait for information from Moveable and then see how to proceed. @barklund FYI: I'm resizing this to 1 since once it's fixed in Moveable, all we need to do is update the module. Let me know if you have any objections. Verified in QA (Master) UAT - looks good. Closing ticket
gharchive/issue
2020-04-03T11:54:45
2025-04-01T06:38:50.685997
{ "authors": [ "csossi", "miina", "samitron7" ], "repo": "google/web-stories-wp", "url": "https://github.com/google/web-stories-wp/issues/1011", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
668935725
Update Preview and AMP Story generated animation code to support effects like "Fly In" In order for animation effects like "Fly In" to be supported by our Dashboard preview animations and in a generated AMP Story, we have to make sure that the page's elements are being passed into the appropriate animation providers. To Test Once things are configured, you can test it by creating a demo story and applying the "Fly In" animation effect to an element in the story. Then see if the effect plays properly in the Dashboard when you hover over the story's preview card, and also check that the animation plays properly when you view its generated AMP Story. Isn't this in review with this PR? https://github.com/google/web-stories-wp/pull/3545/files I've been referencing other tickets being blocked by #3421 but I think this might be a more descriptive ticket to reference. Going to update now. Oh no, this ticket refers to the fact that the StoryAnimation.Provider we use to display the Preview card animations and the generated AMP story animations does not have elements being passed into it. I'm only passing in elements in the storybook examples but not in our actual code.
gharchive/issue
2020-07-30T16:25:46
2025-04-01T06:38:50.689664
{ "authors": [ "littlemilkstudio", "mariano-formidable" ], "repo": "google/web-stories-wp", "url": "https://github.com/google/web-stories-wp/issues/3562", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
850561528
Default duration time overrules when using a shorter video Bug Description When auto-advance is selected, the default page duration setting is overridden by the video element and will auto-advance after the video has finished playing. However, if that video is shorter than the default page duration time, then the page does not advance until the default duration time has completed. If the video is shorter than the default page duration time, should the video duration still override and auto-advance? Expected Behaviour The default page duration is automatically overridden by video. Steps to Reproduce Add a short video to the story Set the default page duration to a time that is longer than the video duration Preview the story and notice the video has finished playing but the page has not advanced Screenshots This example story uses a 6-second video and has the default page duration set to 9 seconds. https://user-images.githubusercontent.com/66372350/113608959-3277e880-9619-11eb-86b0-93dbc85cf42a.mov Additional Context Related support topic: https://wordpress.org/support/topic/frame-duration/ Plugin Version: 1.5.0 WordPress Version: Operating System: Browser: Do not alter or remove anything below. The following sections will be managed by moderators only. Acceptance Criteria Implementation Brief @o-fernandez perhaps you could answer this product question about expected behavior? Per the spec, if there is non-looping video or audio, the page should auto-advance when the longest video/audio on the page finishes playing. I think the spec is wrong on animation, since we decided that if the animation length is shorter than the default duration we'd keep the duration; I'll update that. But, for this instance (non-looping video), it is correct to say that we expect auto-advance to kick in when the video plays once through. For reference, here's the code where we choose between default duration and video duration: https://github.com/google/web-stories-wp/blob/796e96c8cf40e295a6933be6f7bf4be0c14dd965/assets/src/edit-story/output/page.js#L41-L48 I think we can just remove the minDuration param from getLongestMediaElement so that the video always takes precedence (see the sketch after this thread). Would that mean that if the user adds a very short (e.g. a 1-second) video, the Page would auto-advance right after that? @miina yes, unless they set the video to loop. But if they have the video play once through, and there's no other longer video, then that would happen. @miina @barklund updated the priority label here. This has been reported a few times to me already, and hopefully we can fix it soon? Sure, moved it up in the list of P2s. Thanks for prioritising this. Verified in QA
gharchive/issue
2021-04-05T18:23:28
2025-04-01T06:38:50.699162
{ "authors": [ "LuckynaSan", "csossi", "miina", "o-fernandez", "rajpalsaurabh", "swissspidy" ], "repo": "google/web-stories-wp", "url": "https://github.com/google/web-stories-wp/issues/7053", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
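A hedged, runnable sketch of the change discussed above: dropping the minDuration floor so a non-looping video shorter than the default page duration still ends the page. The data shapes and helper are assumptions modeled on the thread's description of page.js, not the plugin's literal code:

```js
// Stand-ins for the editor's real data.
const defaultPageDuration = 9; // seconds, as in the example story
const elements = [{ id: 'abc123', type: 'video', duration: 6, loop: false }];

// Sketch of getLongestMediaElement without the old minDuration parameter.
const getLongestMediaElement = (els) =>
  els
    .filter((el) => el.type === 'video' && !el.loop)
    .reduce((a, el) => (!a || el.duration > a.duration ? el : a), null);

const longest = getLongestMediaElement(elements);
const autoAdvanceAfter = longest
  ? `el-${longest.id}-media` // advance when the 6s video ends
  : `${defaultPageDuration}s`; // no media: keep the default

console.log(autoAdvanceAfter); // "el-abc123-media"
```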
1460065736
Fix typo in README.md seperate -> separate Thank you for the fix!
gharchive/pull-request
2022-11-22T15:15:06
2025-04-01T06:38:50.701808
{ "authors": [ "eltociear", "rictic" ], "repo": "google/wireit", "url": "https://github.com/google/wireit/pull/562", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
209997683
Plugin Ecosystem Having tried Xi-editor on Mac and seen it starting up faster than anything else, I really hope this editor takes off. The preference for stream-based plugins over embedded extensions is also to my liking, dare I say UNIX-y, or along the spirit of Acme from Plan 9. Now there is a plugin ecosystem around every major editor. ELPA for Emacs, Github for Vim, APM for Atom, VS Marketplace for VS Code, Package Control for Sublime Text. What will Xi's look like? That's a really good question. There are a few aspects to this. Xi is different from most editors, in that plugins are just programs, and can be written in any language. In most editors, plugins are written in the editor's scripting language. In those cases, it makes sense for the editor to have its own package manager / installer / updater. I don't think that will work well in the case of xi. My rough thinking is to rely on a package manager native to the host operating system. For Mac and Windows, that probably means having a preferred package manager (brew and chocolatey, for example). Another way to make the experience smoother for users is to have the concept of a "distribution", very much analogous to Anaconda for Python. The distribution would contain, of course, both the core and the front-end, and also a collection of plugins. This would reduce the need for users to deal with installing and upgrading individual plugins, and also make upgrading more atomic (reducing the effect of version skew). If a lot of the plugins are written in Rust, there's a technical issue, which is binary size. Each binary has an overhead of at least a few hundred kilobytes. In a packaged distribution, a number of plugins (each written to have a library interface) could be bundled into a single binary, busybox style. Alternatively, the plugins could be compiled against a dynamic library, but that has its own set of challenges. (Plugins written in Go would have a similar issue, but those written in scripting languages wouldn't). There's a final point I want to touch on in discussing the ecosystem. It's entirely possible that xi plugins could be useful and even desirable to use from other editors. My vision for the "xi-lang" module is that it provides both significantly faster and higher quality syntax highlighting than regex-based approaches. Since it talks json over rpc, it seems to me it wouldn't be especially difficult to make, say, Atom use this module for highlighting. I wouldn't put a lot of time into that myself, but would encourage it, and would also be open to fine-tuning the plugin protocol to make it suitable for non-xi editors.
gharchive/issue
2017-02-24T09:22:59
2025-04-01T06:38:50.705452
{ "authors": [ "louy2", "raphlinus" ], "repo": "google/xi-editor", "url": "https://github.com/google/xi-editor/issues/158", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
265666935
Load some winapi functions at runtime Certain functions in winapi are only supported on newer versions of Windows. If we want to run on older versions, we have to load these functions at runtime. If loading fails, that means that the function is not available. In that case, we have to fall back to doing something else at runtime. I am running Windows 8.1, so I am unable to run the program without these changes. This also means that somebody who runs Windows 10 should test this, to confirm that it still works as expected. The interesting part of this PR is in src/util.rs. Here fn load_optional_functions tries to load functions using kernel32::LoadLibraryA and kernel32::GetProcAddress. The functions are stored in the OptionalFunctions struct, which is passed to wherever these functions are needed. If it is desirable, we might be able to store these functions statically somewhere, so we don't have to pass the struct around (I think the gl crate does this). The functions which are currently loaded are: GetDpiForSystem (https://msdn.microsoft.com/en-us/library/windows/desktop/mt748623(v=vs.85).aspx), Windows 10 or later GetDpiForMonitor (https://msdn.microsoft.com/en-us/library/windows/desktop/dn280510(v=vs.85).aspx), Windows 8.1 or later SetProcessDpiAwareness (https://msdn.microsoft.com/en-us/library/windows/desktop/dn302122(v=vs.85).aspx), Windows 10 or later Unfortunately, this is not working properly on Windows 10. It builds and runs correctly, but then it prints: Could not load `GetDpiForSystem`. Windows 10 or later is needed Could not load `SetProcessDpiAwareness`. Windows 10 or later is needed And then it doesn't display in hi-dpi at all. I could try to debug this, but maybe that's enough of a clue to figure out what's going wrong? Thanks for the effort, I appreciate it. Ack, accidentally merged, sorry about that. Ok, I think the issue is just that I made a silly mistake. GetDpiForSystem comes from user32, while SetProcessDpiAwareness comes from shcore, but I had it the other way around in code. Also, SetProcessDpiAwareness is available on Windows 8.1, so I can test that. I'll have a commit ready in a few minutes, just need to test stuff. Should I open another PR since this one is closed? Not quite sure how to do this without messing up git :/ Yes, this works fine and I'm happy to merge it. There's probably some fancy way to update a merged PR, but it's probably just easiest to open a new one. Thanks for seeing this through.
gharchive/pull-request
2017-10-16T07:10:16
2025-04-01T06:38:50.712039
{ "authors": [ "raphlinus", "seventh-chord" ], "repo": "google/xi-win", "url": "https://github.com/google/xi-win/pull/6", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
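A minimal sketch of the loading pattern the PR describes, probing user32 for GetDpiForSystem and falling back when it is absent. Crate paths follow winapi 0.3 conventions and are assumptions about the exact dependencies of the era:

```rust
use std::mem;

use winapi::um::libloaderapi::{GetProcAddress, LoadLibraryA};

// GetDpiForSystem() -> UINT, exported by user32.dll on Windows 10 and later.
type GetDpiForSystemFn = unsafe extern "system" fn() -> u32;

unsafe fn load_get_dpi_for_system() -> Option<GetDpiForSystemFn> {
    // Note: GetDpiForSystem lives in user32, not shcore, which is exactly
    // the mix-up that broke the first version of this PR.
    let module = LoadLibraryA(b"user32.dll\0".as_ptr() as *const i8);
    if module.is_null() {
        return None;
    }
    let proc = GetProcAddress(module, b"GetDpiForSystem\0".as_ptr() as *const i8);
    if proc.is_null() {
        None // pre-Windows 10: fall back to another DPI query
    } else {
        Some(mem::transmute(proc))
    }
}

fn system_dpi() -> u32 {
    unsafe {
        match load_get_dpi_for_system() {
            Some(get_dpi) => get_dpi(),
            None => 96, // the classic default DPI on older Windows
        }
    }
}

fn main() {
    println!("system dpi: {}", system_dpi());
}
```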
1031773412
Banner Ads not working correctly on Android, portions of content disappear and reappear below the ads when scrolling up and down Plugin Version Details Target Platform: Target OS version: Devices: Logs @Sunny-Aiub Can you provide flutter doctor -v and a complete minimal reproducible code sample that we can use to verify this behavior? Thanks.
gharchive/issue
2021-10-20T19:49:59
2025-04-01T06:38:50.723209
{ "authors": [ "Sunny-Aiub", "darshankawar" ], "repo": "googleads/googleads-mobile-flutter", "url": "https://github.com/googleads/googleads-mobile-flutter/issues/410", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
59780496
Add Network Extra Hi, I'm trying to add networkExtra to mediate Flurry, but I can't find a way to do this; can somebody help me? Hi there, Setting network extras on mediation networks is not directly supported by the API of the plugin. This isn't really something the plugin can feasibly implement either, since the network extras classes are developed by the third-party networks and it doesn't scale to wrap an API around each one of the third-party network extras. You can achieve this behavior yourself by modifying the plugin code to add in support for Flurry extras. You can see how the AdMob extras were added here. For Flurry, you could update this code to also create an AndroidJavaObject for the Flurry extras class and add it to the network extras. Thanks!
gharchive/issue
2015-03-04T10:46:58
2025-04-01T06:38:50.725501
{ "authors": [ "ericleich", "paradizIscool" ], "repo": "googleads/googleads-mobile-plugins", "url": "https://github.com/googleads/googleads-mobile-plugins/issues/80", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
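To make the suggested modification concrete, here is a hypothetical C# sketch of the Unity-side JNI plumbing. Both the Java class name and the builder method are placeholders, not the real Flurry adapter API; only AndroidJavaObject itself is standard Unity:

```csharp
using UnityEngine;

public static class FlurryMediationExtras
{
    // Hypothetical sketch: attach Flurry extras to the ad request builder
    // the plugin constructs internally, mirroring how the AdMob extras were
    // wired in. The names below are assumptions.
    public static void AddTo(AndroidJavaObject adRequestBuilder)
    {
        var flurryExtras = new AndroidJavaObject(
            "com.flurry.android.FlurryAdapterExtras"); // assumed class name
        adRequestBuilder.Call<AndroidJavaObject>(
            "addNetworkExtras", flurryExtras); // assumed builder method
    }
}
```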
170570825
I want to use video.js to play HLS with Google IMA. Can I use this version? As long as your browser supports HLS in the <video> element you should be good to go. Let me know if you run into any specific issues. You can test it out by modifying one of our samples to use your content and ad tag to see if it works.
gharchive/issue
2016-08-11T04:59:51
2025-04-01T06:38:50.735427
{ "authors": [ "lipxitutvl10", "shawnbuso" ], "repo": "googleads/videojs-ima", "url": "https://github.com/googleads/videojs-ima/issues/260", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
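Modifying one of the samples for an HLS test looks roughly like this; the stream and ad-tag URLs are placeholders, and option names may differ across videojs-ima versions:

```html
<video id="content_video" class="video-js vjs-default-skin" controls>
  <source src="https://example.com/stream.m3u8" type="application/x-mpegURL" />
</video>
<script>
  var player = videojs('content_video');
  player.ima({
    id: 'content_video',
    adTagUrl: 'https://example.com/vast-ad-tag.xml',
  });
</script>
```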
437789778
Documentation directory outside of lib? The generator currently creates a bunch of documentation files in the /lib/doc directory. This location should be fine, as we are not expecting anyone to require those file paths. However, it is still possible. Currently we have added a raise statement with an error message stating that these files are not intended to be loaded. That said, what about placing this directory outside of /lib, and configuring the .yardoc file to load the files in that directory as well? That would allow YARD to know about the documentation, but avoid the issues of accidentally requiring the files. Thoughts? Sounds like a good idea to me. I think we still need something in the file, either a raise or a comment or something, to explain why we have them, but I think they can live anywhere as long as they are complete and ship with the gem. How about a README file in the non-/lib directory? Sounds good. Proposed name of the new directory?
gharchive/issue
2019-04-26T18:18:11
2025-04-01T06:38:50.737713
{ "authors": [ "blowmage", "jbolinger", "quartzmo" ], "repo": "googleapis/gapic-generator-ruby", "url": "https://github.com/googleapis/gapic-generator-ruby/issues/123", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
423452313
How do we use the method_signatures annotation? The method_signatures annotations allow for nested arguments, but how does this work with Ruby? So far, we are not considering this annotation. We are accepting either the request object as a positional argument, the request values in a Hash as a positional argument, or the request values as named arguments. We might consider the annotation for documentation purposes. We probably aren't going to use method_signatures, and will stay with request and options positional arguments. It seems like we've made the decision at this point. Is there any reason to keep this open?
gharchive/issue
2019-03-20T20:16:10
2025-04-01T06:38:50.739793
{ "authors": [ "blowmage", "jbolinger" ], "repo": "googleapis/gapic-generator-ruby", "url": "https://github.com/googleapis/gapic-generator-ruby/issues/68", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
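In other words, the generated Ruby surface already covers the call shapes method_signatures would describe. A sketch of the three accepted forms; the client and RPC names here are illustrative, not a real API:

```ruby
# Hypothetical generated client; `create_session` stands in for any RPC.
client = Google::Cloud::Example::Client.new

# 1. A request object as a positional argument.
request = Google::Cloud::Example::CreateSessionRequest.new(name: "projects/p")
client.create_session request

# 2. The request values in a Hash as a positional argument.
client.create_session({ name: "projects/p" })

# 3. The request values as named arguments.
client.create_session name: "projects/p"
```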
361983304
Add Java showcase tests Add a showcase test suite for Java using the https://github.com/googleapis/gapic-showcase/releases/tag/v0.0.4 release (mostly ported from the Kotlin version). @landrito I generated the client and got all these tests passing locally. I could use your help getting them wired up in the CI config. I'm almost as scared of that thing as you are of gradle 😃 Codecov Report Merging #2327 into master will not change coverage. The diff coverage is n/a. @@ Coverage Diff @@ ## master #2327 +/- ## ======================================== Coverage 86.8% 86.8% Complexity 5168 5168 ======================================== Files 454 454 Lines 20511 20511 Branches 2209 2209 ======================================== Hits 17805 17805 Misses 1932 1932 Partials 774 774 Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update ef24784...45bd003. Read the comment docs. @landrito can I close this? You already merged these in another PR correct?
gharchive/pull-request
2018-09-20T01:22:25
2025-04-01T06:38:50.746317
{ "authors": [ "codecov-io", "jbolinger" ], "repo": "googleapis/gapic-generator", "url": "https://github.com/googleapis/gapic-generator/pull/2327", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
139680582
Fixed an issue in LocalServiceHelper: sendPostRequest() was not waiting for the response. PTAL. LGTM.
gharchive/pull-request
2016-03-09T19:43:59
2025-04-01T06:38:50.747710
{ "authors": [ "garrettjonesgoogle", "shinfan" ], "repo": "googleapis/gax-java", "url": "https://github.com/googleapis/gax-java/pull/24", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1204557645
"kid" invalid, unable to lookup correct key in /var/app/vendor/firebase/php-jwt/src/JWT.php:434 Environment details OS: Ubuntu PHP version: 8.1.2 Package name and version: google/apiclient v2.12.2 Steps to reproduce Composer update to latest release ( firebase/php-jwt:^6.0 ) Run example code https://github.com/googleapis/google-api-php-client/blob/main/examples/idtoken.php On verifyIdToken UnexpectedValueException of "kid" invalid, unable to lookup correct key Code example //Set the google client $client = new Google\Client(); $client->setClientId('XXXXXXXXX'); $client->setClientSecret('XXXXXXXXX'); $redirect_uri = 'https://' . $_SERVER['HTTP_HOST'] . str_replace('?' . $_SERVER['QUERY_STRING'], '', $_SERVER['REQUEST_URI']); //Add scope of user information $client->addScope(['email']); $client->setRedirectUri($redirect_uri); if (isset($_GET['code'])) { $token = $client->fetchAccessTokenWithAuthCode($_GET['code']); $client->setAccessToken($token); $token_data = $client->verifyIdToken(); // store in the session also print_r($token_data); } else { $authUrl = $client->createAuthUrl(); echo "<a class='login' href='" . $authUrl . "'>Connect Me!</a>"; } Issue Information https://github.com/firebase/php-jwt/releases/tag/v6.0.0 has breaking changes to the JWT::decode Need to implemente the key vertion to https://github.com/googleapis/google-api-php-client/blob/main/src/AccessToken/Verify.php#L102-L106 Having the same issue with PHP 7.2/Centos 7 Same issue on PHP 8.0.1/Linux. Was able to fix by downgrading to GoogleAPIClient version 2.11 in composer.json, which does require version 6 of firebase/php-jwt Same here, docker/php8.0.1, test and prod, downgrading to 6.0 I get some other error. This should now be fixed in v2.12.3. Please update to the latest version and let me know if this fixes the problem please! This should now be fixed in v2.12.3. Please update to the latest version and let me know if this fixes the problem please! yes, it fixed the issue, thank you
gharchive/issue
2022-04-14T13:51:57
2025-04-01T06:38:50.754338
{ "authors": [ "Baylie21", "bshaffer", "dany1980it", "gweller22", "megamk" ], "repo": "googleapis/google-api-php-client", "url": "https://github.com/googleapis/google-api-php-client/issues/2242", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
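For anyone pinned on the broken combination before v2.12.3: the breaking change is that firebase/php-jwt v6 requires the key to be wrapped in a Key object carrying its algorithm, which Verify.php had not yet adopted. The v6 call shape, with placeholder variables:

```php
<?php
use Firebase\JWT\JWT;
use Firebase\JWT\Key;

// php-jwt v5 accepted the key plus a list of allowed algorithms:
//   $payload = JWT::decode($idToken, $certPublicKey, ['RS256']);

// php-jwt v6 pairs the key material with its algorithm in a Key object;
// $certPublicKey would be the Google cert matching the token's "kid" header.
$payload = JWT::decode($idToken, new Key($certPublicKey, 'RS256'));
```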
423475138
Refactor MSVC runtime configuration. We had this in the gRPC config, but there is nothing gRPC-specific about it, and we will need it in a separate file for the googleapis submodule removal. Codecov Report Merging #2268 into master will not change coverage. The diff coverage is n/a. @@ Coverage Diff @@ ## master #2268 +/- ## ======================================= Coverage 92.46% 92.46% ======================================= Files 307 307 Lines 18895 18895 ======================================= Hits 17471 17471 Misses 1424 1424 Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update e3f55e6...b38dd41. Read the comment docs.
gharchive/pull-request
2019-03-20T21:13:16
2025-04-01T06:38:50.760851
{ "authors": [ "codecov-io", "coryan" ], "repo": "googleapis/google-cloud-cpp", "url": "https://github.com/googleapis/google-cloud-cpp/pull/2268", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
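For context, "MSVC runtime configuration" in CMake usually means the classic flag rewrite that forces the static CRT; a generic sketch of that pattern under that assumption, not the repository's literal file:

```cmake
# Rewrite /MD (DLL runtime) to /MT (static runtime) in every configuration
# so all targets link the same C runtime.
if (MSVC)
  foreach (flag_var
      CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE
      CMAKE_CXX_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_RELWITHDEBINFO)
    if (${flag_var} MATCHES "/MD")
      string(REGEX REPLACE "/MD" "/MT" ${flag_var} "${${flag_var}}")
    endif ()
  endforeach ()
endif ()
```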
469994624
BigQuery: Verify/implement list options for job collection Filing this as a backlog issue for all the client languages. It looks like the BigQuery API doesn't really expose the minCreationTime / maxCreationTime options available in jobs.list via tests. My understanding is that dotnet exports the discovery types directly so there's no need for special wrapped implementations for invoking options. Ex: https://github.com/shollyman/google-cloud-dotnet/blob/master/apis/Google.Cloud.BigQuery.V2/Google.Cloud.BigQuery.V2.Tests/ListJobsOptionsTest.cs doesn't include the options. https://github.com/shollyman/google-cloud-dotnet/blob/master/apis/Google.Cloud.BigQuery.V2/Google.Cloud.BigQuery.V2.IntegrationTests/JobsTest.cs#L57 appears to implement time-based filtration by relying on client-side filtration rather than server side filtration. I'm not sure what you mean by "expose [...] via tests" - did you mean we don't expose it in the API itself? If so, you're absolutely right - and I assume that's because it didn't exist when I first put together ListJobsOptions. Will take this as a feature request to add those - it shouldn't be hard. We can then use them in tests, of course.
gharchive/issue
2019-07-18T21:20:30
2025-04-01T06:38:50.763852
{ "authors": [ "jskeet", "shollyman" ], "repo": "googleapis/google-cloud-dotnet", "url": "https://github.com/googleapis/google-cloud-dotnet/issues/3244", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2294618939
Release Google.Cloud.SecurityCenter.V1 version 3.21.0 Changes in this release: New features Add IServiceCollection extension methods for client registration where an IServiceProvider is required. (commit 022fab2) Created release for Google.Cloud.SecurityCenter.V1-3.21.0
gharchive/pull-request
2024-05-14T07:14:44
2025-04-01T06:38:50.765873
{ "authors": [ "jskeet", "yoshi-automation" ], "repo": "googleapis/google-cloud-dotnet", "url": "https://github.com/googleapis/google-cloud-dotnet/pull/12918", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
417626779
Should Publisher be shutdown every time? I have multiple schedulers which synchronize different kinds of data to different topics every 1 min. Since my scheduler runs every 1 min, I either have to create a publisher on each invocation of the scheduler (and shut it down), or I can maintain the publisher as a Spring bean and reuse the same one. Can you suggest which is the better approach, because I do see DEADLINE_EXCEEDED errors intermittently? Although I know the primary cause of DEADLINE_EXCEEDED is clogging the Pub/Sub client with too many messages, could this also impact it? Also, I read in one of the issues (#1751) that if we do not call shutdown on the publisher it does not exit from the JVM and threads are still alive. Which threads are these? The threads used by RPC to publish each message? Basically, if I am publishing 100 messages, then are 100 threads alive? @ayushj158 It's better to create the client just once and keep reusing it. For DEADLINE_EXCEEDED, how many messages do you send every 1 min? Are you running from your local machine or from GCP? It can range from 100K-600K/min.... I am running on an on-premise Linux machine with enough computing power. Also tried to use custom retry settings but I am not able to restrict the failures to zero; intermittently I see 10-15 messages failing. Also I am using load shedding to minimize the failures #3867 BTW, what is the effect of the publisher being open in the JVM: how many and which threads are not being released? RetrySettings retrySettings = RetrySettings.newBuilder() .setInitialRetryDelay(Duration.ofMillis(5)) .setRetryDelayMultiplier(2) .setMaxRetryDelay(Duration.ofMillis(Long.MAX_VALUE)) .setTotalTimeout(Duration.ofSeconds(10)) .setInitialRpcTimeout(Duration.ofSeconds(10)) .setMaxRpcTimeout(Duration.ofSeconds(10)) .build(); @ajaaym Perfect, thanks for the quick tip Ajay, I am gonna try this. :+1: :+1:
gharchive/issue
2019-03-06T05:15:56
2025-04-01T06:38:50.769742
{ "authors": [ "ajaaym", "ayushj158" ], "repo": "googleapis/google-cloud-java", "url": "https://github.com/googleapis/google-cloud-java/issues/4628", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
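A minimal sketch of the create-once/reuse pattern recommended above; the topic name is a placeholder, and awaitTermination may not exist in older client versions:

```java
import com.google.cloud.pubsub.v1.Publisher;
import com.google.pubsub.v1.ProjectTopicName;
import java.util.concurrent.TimeUnit;

public final class PublisherHolder {
  // Created once and reused by every scheduler run, not per invocation.
  private static Publisher publisher;

  public static synchronized Publisher get() throws Exception {
    if (publisher == null) {
      publisher = Publisher.newBuilder(
          ProjectTopicName.of("my-project", "my-topic")).build();
    }
    return publisher;
  }

  // Call once at application shutdown, not after every publish.
  public static synchronized void close() throws Exception {
    if (publisher != null) {
      publisher.shutdown();
      publisher.awaitTermination(1, TimeUnit.MINUTES);
      publisher = null;
    }
  }
}
```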
375595993
Fix search folder for BQDT synth output Fixes #3877 The synth script was looking in the wrong directory. The test failure is in Bigtable and unrelated. Filed #3883 for that.
gharchive/pull-request
2018-10-30T17:01:34
2025-04-01T06:38:50.770846
{ "authors": [ "chingor13" ], "repo": "googleapis/google-cloud-java", "url": "https://github.com/googleapis/google-cloud-java/pull/3878", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
485743235
spanner-jdbc: Fix possible IllegalArgumentException in SingleUseTransaction The fireAndForgetRollbackAndCloseTxManager method could cause an IllegalArgumentException if it happened to execute after a different method had already closed the TransactionManager. The fireAndForgetRollback method has therefore been removed and replaced with a synchronous rollback call. This will make update statements in autocommit mode that return an error slightly slower than they were, as they will not return until the rollback method has been executed. The fireAndForgetRollback method is something that should rather be added to the TransactionManager itself. Fixes this flake: https://source.cloud.google.com/results/invocations/df3b7c34-a981-4dfa-9978-7fa34e290142/targets/cloud-devrel%2Fclient-libraries%2Fjava%2Fgoogle-cloud-java%2Fpresubmit%2Fjava7/log @skuruppu I had the same issue with a different PR yesterday and talked to @kolea2 about it. She confirmed that it is unrelated to this. @skuruppu @olavloite Yep, that is a temporary break while we wait for the linkage monitor build to be updated. Fine to merge.
gharchive/pull-request
2019-08-27T11:06:32
2025-04-01T06:38:50.773600
{ "authors": [ "kolea2", "olavloite" ], "repo": "googleapis/google-cloud-java", "url": "https://github.com/googleapis/google-cloud-java/pull/6175", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
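A simplified sketch of the shape of the fix: rolling back synchronously means no racing code path can close the TransactionManager out from under a pending background rollback. This is illustrative, not the library's literal code:

```java
import com.google.cloud.spanner.TransactionManager;

final class RollbackHelper {
  static void rollbackAndClose(TransactionManager manager) {
    try {
      // Synchronous: by the time the failed update statement returns to the
      // caller, the rollback has already completed.
      manager.rollback();
    } finally {
      manager.close();
    }
  }
}
```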
567134718
Split packages The package com.google.cloud.kms.v1 is in multiple jars. This is going to be a problem in Java 9 and later. This is a problem across every gapic-generated client library we publish. See https://github.com/googleapis/google-cloud-java/issues/5760 No update, but this issue needs to be addressed widely across gRPC and google-cloud-java libraries. Closing this one.
gharchive/issue
2020-02-18T20:19:51
2025-04-01T06:38:50.782005
{ "authors": [ "chingor13", "elharo", "suztomo" ], "repo": "googleapis/java-kms", "url": "https://github.com/googleapis/java-kms/issues/84", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
524627333
GA Release Package name: @google-cloud/asset Current release: beta Proposed release: GA Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. Required [ ] 28 days elapsed since last beta release with new API surface [x] Server API is GA [x] Package API is stable, and we can commit to backward compatibility [x] All dependencies are GA Optional [ ] Most common / important scenarios have descriptive samples [ ] Public manual methods have at least one usage sample each (excluding overloads) [ ] Per-API README includes a full description of the API [ ] Per-API README contains at least one “getting started” sample using the most common API scenario [ ] Manual code has been reviewed by API producer [ ] Manual code has been reviewed by a DPE responsible for samples [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site Currently blocked on bake time Fixed in #264
gharchive/issue
2019-11-18T21:22:42
2025-04-01T06:38:50.786760
{ "authors": [ "JustinBeckwith" ], "repo": "googleapis/nodejs-asset", "url": "https://github.com/googleapis/nodejs-asset/issues/223", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
656248660
feat: detect param type if not in provided types Adds the ability to detect a named parameter's type if types were provided for some parameters, but not all. [x] Ensure the tests and linter pass [x] Code coverage does not decrease (if any source code was changed) [x] Appropriate docs were updated (if necessary) Fixes #802 🦕 Was this actually applied? I'm having an issue right now where it's saying I'm not providing the type for an empty array even though I am, and I'm starting to wonder if it's because I'm not providing the type for all the other params too.
gharchive/pull-request
2020-07-14T01:05:02
2025-04-01T06:38:50.789046
{ "authors": [ "pdfabbro", "steffnay" ], "repo": "googleapis/nodejs-bigquery", "url": "https://github.com/googleapis/nodejs-bigquery/pull/813", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
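After this change, only genuinely ambiguous parameters (such as an empty array) need an explicit entry in types, and the rest are inferred. A sketch of the intended usage; the query and table are illustrative:

```js
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function run() {
  const [rows] = await bigquery.query({
    query:
      'SELECT name FROM `my-project.my_dataset.people` ' +
      'WHERE name = @name AND tag IN UNNEST(@tags)',
    params: {name: 'alice', tags: []},
    // Only the empty array is ambiguous; `name` is now inferred as STRING
    // without an explicit entry here.
    types: {tags: ['STRING']},
  });
  console.log(rows);
}

run();
```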
1129231767
samples: update things referencing names to be namesOrIds This is because the Node library lets you use either. Fixes https://github.com/googleapis/nodejs-pubsub/issues/1198 🦕 Adding owlbot:ignore due to the other blocked OwlBot PR.
gharchive/pull-request
2022-02-10T00:10:15
2025-04-01T06:38:50.790484
{ "authors": [ "feywind" ], "repo": "googleapis/nodejs-pubsub", "url": "https://github.com/googleapis/nodejs-pubsub/pull/1488", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
435502034
chore(docs): formatting updates This PR was generated using Autosynth. :rainbow: Here's the log from Synthtool: synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py. synthtool > Ensuring dependencies. synthtool > Pulling artman image. latest: Pulling from googleapis/artman Digest: sha256:314eae2a40f6f7822db77365cf5f45bd513d628ae17773fd0473f460e7c2a665 Status: Image is up to date for googleapis/artman:latest synthtool > Cloning googleapis. synthtool > Running generator for google/spanner/artman_spanner.yaml. synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/js/spanner-v1. synthtool > Running generator for google/spanner/admin/database/artman_spanner_admin_database.yaml. synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/js/spanner-admin-database-v1. synthtool > Running generator for google/spanner/admin/instance/artman_spanner_admin_instance.yaml. synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/js/spanner-admin-instance-v1. .eslintignore .eslintrc.yml .github/ISSUE_TEMPLATE/bug_report.md .github/ISSUE_TEMPLATE/feature_request.md .github/ISSUE_TEMPLATE/support_request.md .jsdoc.js .kokoro/common.cfg .kokoro/continuous/node10/common.cfg .kokoro/continuous/node10/docs.cfg .kokoro/continuous/node10/lint.cfg .kokoro/continuous/node10/samples-test.cfg .kokoro/continuous/node10/system-test-grpcjs.cfg .kokoro/continuous/node10/system-test.cfg .kokoro/continuous/node10/test.cfg .kokoro/continuous/node11/common.cfg .kokoro/continuous/node11/test.cfg .kokoro/continuous/node6/common.cfg .kokoro/continuous/node6/test.cfg .kokoro/continuous/node8/common.cfg .kokoro/continuous/node8/test.cfg .kokoro/docs.sh .kokoro/lint.sh .kokoro/presubmit/node10/common.cfg .kokoro/presubmit/node10/docs.cfg .kokoro/presubmit/node10/lint.cfg .kokoro/presubmit/node10/samples-test.cfg .kokoro/presubmit/node10/system-test-grpcjs.cfg .kokoro/presubmit/node10/system-test.cfg .kokoro/presubmit/node10/test.cfg .kokoro/presubmit/node11/common.cfg .kokoro/presubmit/node11/test.cfg .kokoro/presubmit/node6/common.cfg .kokoro/presubmit/node6/test.cfg .kokoro/presubmit/node8/common.cfg .kokoro/presubmit/node8/test.cfg .kokoro/presubmit/windows/common.cfg .kokoro/presubmit/windows/test.cfg .kokoro/publish.sh .kokoro/release/publish.cfg .kokoro/samples-test.sh .kokoro/system-test.sh .kokoro/test.bat .kokoro/test.sh .kokoro/trampoline.sh .nycrc .prettierignore .prettierrc CODE_OF_CONDUCT.md CONTRIBUTING.md LICENSE Skipping: README.md codecov.yaml renovate.json Skipping: samples/README.md synthtool > Replaced "(const SpannerClient = require\\('\\./spanner_client'\\);)" in src/v1/index.js. synthtool > Replaced '(module\\.exports\\.SpannerClient = SpannerClient;)' in src/v1/index.js. synthtool > Replaced '../../package.json' in src/v1/database_admin_client.js. synthtool > Replaced '../../package.json' in src/v1/instance_admin_client.js. synthtool > Replaced '../../package.json' in src/v1/spanner_client.js. synthtool > Replaced 'https:\\/\\/cloud\\.google\\.com[\\s\\*]*http:\\/\\/(.*)[\\s\\*]*\\)' in src/v1/doc/google/protobuf/doc_timestamp.js. synthtool > No replacements made in **/doc/google/protobuf/doc_timestamp.js for pattern toISOString\], maybe replacement is not longer needed? synthtool > Replaced '`\\[a-z\\]\\(https:\\/\\/cloud\\.google\\.com\\[-a-z0-9\\]\\*\\[a-z0-9\\]\\)\\?`' in src/v1/doc/google/spanner/v1/doc_spanner.js. 
synthtool > Replaced '`\\[a-z\\]\\(https:\\/\\/cloud\\.google\\.com\\[-a-z0-9\\]\\*\\[a-z0-9\\]\\)\\?`' in src/v1/doc/google/spanner/admin/instance/v1/doc_spanner_instance_admin.js. synthtool > Replaced '`\\(\\[a-z\\]\\(https:\\/\\/cloud\\.google\\.com\\[-a-z0-9\\]\\*\\[a-z0-9\\]\\)\\?\\)\\?`' in src/v1/doc/google/spanner/v1/doc_spanner.js. synthtool > Replaced '`\\(\\[a-z\\]\\(https:\\/\\/cloud\\.google\\.com\\[-a-z0-9\\]\\*\\[a-z0-9\\]\\)\\?\\)\\?`' in src/v1/doc/google/spanner/admin/instance/v1/doc_spanner_instance_admin.js. npm WARN deprecated @types/p-retry@3.0.1: This is a stub types definition. p-retry provides its own type definitions, so you do not need this installed. > grpc@1.20.0 install /tmpfs/src/git/autosynth/working_repo/node_modules/grpc > node-pre-gyp install --fallback-to-build --library=static_library node-pre-gyp WARN Using needle for node-pre-gyp https download [grpc] Success: "/tmpfs/src/git/autosynth/working_repo/node_modules/grpc/src/node/extension_binary/node-v57-linux-x64-glibc/grpc_node.node" is installed via remote > protobufjs@6.8.8 postinstall /tmpfs/src/git/autosynth/working_repo/node_modules/protobufjs > node scripts/postinstall > @google-cloud/spanner@3.1.0 prepare /tmpfs/src/git/autosynth/working_repo > npm run compile > @google-cloud/spanner@3.1.0 compile /tmpfs/src/git/autosynth/working_repo > tsc -p . && cp -r src/v1 build/src && cp -r protos build && cp test/*.js build/test src/codec.ts:512:18 - error TS2349: Cannot invoke an expression whose type lacks a call signature. Type '(<U>(callbackfn: (value: unknown, index: number, array: readonly unknown[]) => U, thisArg?: any) => U[]) | (<U>(callbackfn: (value: never, index: number, array: never[]) => U, thisArg?: any) => U[]) | (<U>(callbackfn: (value: string, index: number, array: string[]) => U, thisArg?: any) => U[]) | (<U>(callbackfn: (va...' has no compatible call signatures. 512 const values = arrify(value).map(codec.encode); ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Found 2 errors. npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! @google-cloud/spanner@3.1.0 compile: `tsc -p . && cp -r src/v1 build/src && cp -r protos build && cp test/*.js build/test` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google-cloud/spanner@3.1.0 compile script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/kbuilder/.npm/_logs/2019-04-21T11_52_05_362Z-debug.log npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! @google-cloud/spanner@3.1.0 prepare: `npm run compile` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google-cloud/spanner@3.1.0 prepare script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/kbuilder/.npm/_logs/2019-04-21T11_52_05_439Z-debug.log > @google-cloud/spanner@3.1.0 fix /tmpfs/src/git/autosynth/working_repo > eslint --fix '**/*.js' /tmpfs/src/git/autosynth/working_repo/benchmark/ycsb.js 25:27 error "../" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/scripts/cleanup.js 17:27 error "../" is not found node/no-missing-require ✖ 2 problems (2 errors, 0 warnings) npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! @google-cloud/spanner@3.1.0 fix: `eslint --fix '**/*.js'` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google-cloud/spanner@3.1.0 fix script. npm ERR! This is probably not a problem with npm. 
There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/kbuilder/.npm/_logs/2019-04-21T11_52_09_647Z-debug.log synthtool > Cleaned up 2 temporary directories. synthtool > Wrote metadata to synth.metadata. This is stale, gonna let autosynth submit again tomorrow.
gharchive/pull-request
2019-04-21T11:52:14
2025-04-01T06:38:50.794107
{ "authors": [ "JustinBeckwith", "yoshi-automation" ], "repo": "googleapis/nodejs-spanner", "url": "https://github.com/googleapis/nodejs-spanner/pull/582", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
934920993
chore: Removing unused region tag Really, just removing one unused tag. @m-strzelczyk , I'm going to close this PR due to inactivity but please feel free to re-open it.
gharchive/pull-request
2021-07-01T14:41:58
2025-04-01T06:38:50.797489
{ "authors": [ "m-strzelczyk", "parthea" ], "repo": "googleapis/python-compute", "url": "https://github.com/googleapis/python-compute/pull/73", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
315480565
Base ceres scan matcher on correspondence cost function

Base ceres scan matcher on correspondence cost function instead of probabilities. Step towards RFC 0019.

The evaluation results show no performance regression with the current PR. Evaluation results averaged over 5 runs (b2-2016-04-05-14-44-52.bag).

| Commit | | Wall Time [s] | Change in [%] | CPU Time [s] | Change in [%] | Memory [KiB] | Change in [%] |
| -- | -- | -- | -- | -- | -- | -- | -- |
| Introduce Grid2D as base class for 2D grids | https://github.com/googlecartographer/cartographer/commit/46d3a9443a47020ad408a2d3b0a61259ae67ffb4 | 48.31 | | 161.54 | | 356534.40 | |
| Correspondence cost based probability grid | https://github.com/googlecartographer/cartographer/commit/03d56871c1f3a6441f6404c99176206daf83f674 | 48.94 | 1.32 | 162.70 | 0.72 | 364671.20 | 2.28 |
| Base ceres scan matcher on correspondence cost function (Current) | https://github.com/googlecartographer/cartographer/pull/1085/commits/536a2bd977b0c2e63c37cf55dc6bf6bdad355d11 | 48.65 | -0.61 | 160.77 | -1.19 | 360744.80 | -1.08 |

@wally-the-cartographer merge

Merge requested by authorized user kdaun. Merge queue now has a length of 1.
gharchive/pull-request
2018-04-18T13:30:40
2025-04-01T06:38:50.806660
{ "authors": [ "kdaun", "wally-the-cartographer" ], "repo": "googlecartographer/cartographer", "url": "https://github.com/googlecartographer/cartographer/pull/1085", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
334166262
WIP: Monitoring bridge

https://github.com/googlecartographer/rfcs/pull/26 (code not ready for merge yet, prototyping)

The interface is now implemented. A simple test to view the unfiltered metrics message is to run rosservice call /collect_metrics | less while a node is running.

@MichaelGrupp @gaschler I also think we could get away with fewer messages:

CollectMetrics
  cartographer_ros_msgs/StatusResponse status
  cartographer_ros_msgs/MetricFamily[] metric_families

MetricFamily
  string name
  string description
  cartographer_ros_msgs/Metric[] metrics

Metric
  ...
  uint8 TYPE_COUNTER=0
  uint8 TYPE_GAUGE=1
  uint8 TYPE_HISTOGRAM=2
  uint8 type
  float64 counter_value
  float64 gauge_value
  cartographer_ros_msgs/HistogramBucket[] counts_by_bucket

Thanks for the feedback, I will have a look at how to refine this soon.

@gaschler I can still split it up; as the branch name says ("playground"), it involved lots of prototyping, which was easier with the whole code to see the bigger picture.
gharchive/pull-request
2018-06-20T16:52:17
2025-04-01T06:38:50.809965
{ "authors": [ "MichaelGrupp", "cschuet" ], "repo": "googlecartographer/cartographer_ros", "url": "https://github.com/googlecartographer/cartographer_ros/pull/906", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
217966638
Update dependencies (Fixes #43)

Update dependencies, update Gradle, add a Gradle wrapper task, use jcenter instead of mavenCentral. Fixes #43

Does anyone still maintain this repo?
gharchive/pull-request
2017-03-29T18:26:12
2025-04-01T06:38:50.810989
{ "authors": [ "barnhill" ], "repo": "googlecast/CastVideos-android", "url": "https://github.com/googlecast/CastVideos-android/pull/44", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1045237572
An error has occurred in the authorization process - after clicking the auth_init trigger URL and completing permission flow

I was able to deploy the functions, add all the env_vars.yml values, and deploy a second time having completed the permissions grants. However, at step 5 "Setup Gmail push notifications", after clicking "Allow" I get the error: "An error has occurred in the authorization process". GCP Logging is full of only one error, but this was present before I was able to get to the "Allow" step 5.

2021-11-04T18:08:05.121Z auth_callback Searching for secrets in: /workspace/node_modules/@google-cloud/client_secret.json
2021-11-04T18:08:05.122Z auth_callback Provided module can't be loaded.
2021-11-04T18:08:05.122Z auth_callback Is there a syntax error in your code?
2021-11-04T18:08:05.122Z auth_callback Detailed stack trace: Error: Missing required keys: GCP_PROJECT
2021-11-04T18:08:05.122Z auth_callback at exports.Provider.Provider.required (/workspace/node_modules/nconf/lib/nconf/provider.js:364:11)
2021-11-04T18:08:05.123Z auth_callback at Object.<anonymous> (/workspace/node_modules/@google-cloud/express-oauth2-handlers/config.js:53:7)
2021-11-04T18:08:05.123Z auth_callback at Module._compile (internal/modules/cjs/loader.js:1072:14)
2021-11-04T18:08:05.123Z auth_callback at Object.Module._extensions..js (internal/modules/cjs/loader.js:1101:10)
2021-11-04T18:08:05.123Z auth_callback at Module.load (internal/modules/cjs/loader.js:937:32)
2021-11-04T18:08:05.123Z auth_callback at Function.Module._load (internal/modules/cjs/loader.js:778:12)
2021-11-04T18:08:05.123Z auth_callback at Module.require (internal/modules/cjs/loader.js:961:19)
2021-11-04T18:08:05.123Z auth_callback at require (internal/modules/cjs/helpers.js:92:18)
2021-11-04T18:08:05.123Z auth_callback at Object.<anonymous> (/workspace/node_modules/@google-cloud/express-oauth2-handlers/tokenStorage.js:17:16)
2021-11-04T18:08:05.123Z auth_callback at Module._compile (internal/modules/cjs/loader.js:1072:14)
2021-11-04T18:08:05.123Z auth_callback Could not load the function, shutting down.

Note the "Error: Missing required keys: GCP_PROJECT". I tried to set GCP_PROJECT in the env vars, but got an error that it is a protected env var. I renamed the value in index.js to _GCP_PROJECT and did the same for the env vars, but the error persisted, making me think it is in another lib.

A note to the authors: the URLs of auth_callback and the pubsub only differ by GCP project. Several steps could be saved by creating the values in the file at the beginning.

Even I had the same issue, but it was solved by adding GCP_PROJECT to the runtime environment variables in Cloud Functions.

You got this error because you are running this on a version higher than node8; adding GCP_PROJECT (in the auth and watch functions) with the gcloud project as the value in the env variables (the env_vars.yaml file in this case) should fix it. For the next problem you will see, you will need this info: https://github.com/googlecodelabs/gcf-gmail-codelab/issues/17#issuecomment-1039137404

The GCP Node10 runtime introduced some environment variable changes which require modifications of the source code.
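For anyone hitting this, a minimal sketch of the workaround described in the comments above; the project id below is a placeholder, not a value from this codelab:

# env_vars.yaml: add this alongside the variables the codelab already defines
GCP_PROJECT: your-project-id  # placeholder: use your own GCP project id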
gharchive/issue
2021-11-04T21:43:14
2025-04-01T06:38:50.819741
{ "authors": [ "amurciasu", "kbroughton", "pavitra15", "weynhamz" ], "repo": "googlecodelabs/gcf-gmail-codelab", "url": "https://github.com/googlecodelabs/gcf-gmail-codelab/issues/18", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
348124523
Checks upstreams/projects for levels of Variable Font support

Observed behaviour
Earlier this year, @m4rc1e privately drafted a set of definitions for the GF team on how well a project supports generating Variable Fonts, and hand-made a spreadsheet table of which levels are supported by which upstreams. Yet the sheet is becoming stale, and I'd like to get away from hand-made sheets and towards relying on the fontbakery dashboard for this kind of status information.

Expected behaviour
Therefore, for fontbakery itself:
- [ ] https://github.com/googlefonts/fontbakery/blob/master/docs/variable-fonts-levels.md should provide a human-readable description of the levels
- [ ] There should be some checks for each level. If there are no clearly checkable concrete aspects of an upstream project with which to detect a level, there should be some simple metadata somewhere in the sources that defines the level by hand (like a "custom Custom Parameter"? if that exists? Or, worst case, a glyphs-source-filename.fontbakery.yml in the same directory as the glyphs-source-filename.glyphs, or simply a fontbakery.yml anywhere in the repo?)

Perhaps this means making a Lib/fontbakery/specifications/googlefonts-variablefonts.py file that has checks only for this; or perhaps this is just part of a Lib/fontbakery/specifications/googlefonts-upstream.py specification; or perhaps neither of those specifications is needed and this should all go into Lib/fontbakery/specifications/googlefonts.py... But, with those 2 things in place, we can replace the hand-made sheet with the fontbakery dashboard.

If there are no clearly checkable concrete aspects of an upstream project with which to detect a level, there should be some simple metadata somewhere in the sources that defines the level by hand. To work out if a family could be a VF, I wrote the following algo: if the font has 2+ masters and 3+ instances, it can be a VF. However, if the font has the same number of masters and instances, it is not. I dunno, that's just a bloated VF :)

Anything with more than 1 master or instance can be a VF.
gharchive/issue
2018-08-07T00:26:15
2025-04-01T06:38:50.850815
{ "authors": [ "davelab6", "m4rc1e" ], "repo": "googlefonts/fontbakery", "url": "https://github.com/googlefonts/fontbakery/issues/2005", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1663052199
cmap formats The default build of Oswald with fontmake has a <cmap_format_4 platformID="0" platEncID="3" language="0"> and a <cmap_format_4 platformID="3" platEncID="1" language="0">. fontmake-rs emits only the latter. We appear to agree on the content so it appears we just need to decide if we should emit the extra one. @behdad suggested we probably don't need the additional entry.
gharchive/issue
2023-04-11T19:02:02
2025-04-01T06:38:50.852447
{ "authors": [ "rsheeter" ], "repo": "googlefonts/fontmake-rs", "url": "https://github.com/googlefonts/fontmake-rs/issues/251", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
143515974
New Belize phone numbers not validating

This Belize phone number is a working phone number that you can call successfully, but libphonenumber is not validating it yet: +5016519885. This number is also reachable: +5016529885, so I believe the +501 65x prefix is valid. Can this prefix please be added to the library?

- Country/region affected: BZ (Belize)
- Example number(s) affected: +5016519885, +5016529885
- The phone number range(s) to which the issue applies: +501 65X XXXX
- The type of the number(s) ("fixed-line", "mobile", "short code", etc.): mobile
- The cost, if applicable ("toll-free", "premium rate", "shared cost"): n/a

@rzaryan Thank you for taking the time to improve libphonenumber! We've accepted this issue and will fix this in an upcoming release. Can you just confirm what carrier you expect this mobile range 65X to be for, and could you provide any evidence for it?

Fixed in release 7.3.0: http://libphonenumber.appspot.com/phonenumberparser?number=%2B5016519885
gharchive/issue
2016-03-25T14:47:22
2025-04-01T06:38:50.874562
{ "authors": [ "padmaksha", "rzaryan" ], "repo": "googlei18n/libphonenumber", "url": "https://github.com/googlei18n/libphonenumber/issues/1037", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
683410609
Frontend cleanup - phase one

Primarily renaming, more comments, and improved comment quality. Also, fix: <v-row>s should be in <v-container>s according to the Vuetify docs. Mobile view has two rows now; all larger ones have one to display the summary and 'apply' button. (file diff)

Fix: we shouldn't have been recalculating column values from stored API instances, as they are already stored (they didn't use to be, which is why it used to be like that). (commit)

Just to clarify, the file model.ts was split into recommendation_raw.ts and recommendation_extra.ts. The related added lines are not actually new.
gharchive/pull-request
2020-08-21T08:59:31
2025-04-01T06:38:50.880695
{ "authors": [ "s17k" ], "repo": "googleinterns/recomator", "url": "https://github.com/googleinterns/recomator/pull/113", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
659730333
Add card display to srp (updated)

New PR to consolidate front-end changes.

Is #27 outdated? No, there are still features unique to #27 that are needed in the future.
gharchive/pull-request
2020-07-17T23:24:39
2025-04-01T06:38:50.882128
{ "authors": [ "AaronLopes", "irisliu77" ], "repo": "googleinterns/step1-2020", "url": "https://github.com/googleinterns/step1-2020/pull/43", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2264145050
chore: add region tags to 6 samples Tag 6 top-priority samples for new samples pages on developers.google.com Merging with known issues (same comment as https://github.com/googlemaps-samples/android-samples/pull/1551#issuecomment-2076085392)
gharchive/pull-request
2024-04-25T17:38:56
2025-04-01T06:38:50.883730
{ "authors": [ "wangela" ], "repo": "googlemaps-samples/android-samples", "url": "https://github.com/googlemaps-samples/android-samples/pull/1552", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2586447210
How to stop navigation

So our flow is like this: you go to a task screen where you see some details. You then navigate to the navigation screen, where guidance automatically starts. You can go back if you need some more details; during this time navigation is still active. We don't call clear or stop guidance here, since we still want navigation to be active when the user returns to the navigation screen.

If you close and open the app again on Android, navigation starts giving you directions as well as showing notifications with directions. iOS is completely silent: no GPS is used, no notifications, no guidance. When you manually navigate back to the navigation screen, navigation is not active. (If you go to the navigation screen, wait for navigation to start, go back, and go to the navigation screen again, you can see that navigation is still active from before.)

Calling cleanup also returns this error:
Possible Unhandled Promise Rejection (id: 11): Error: Navigation session not initialized
so I guess navigation is not active even though I'm getting guidance? This behavior is very strange and differs between platforms. What is the recommended approach for this?

Scratch that. On Android, navigation keeps going even if you close the app. This is a bug. I can't stop navigation if the user force-closes the app.

This is a duplicate of #34
gharchive/issue
2024-10-14T15:53:28
2025-04-01T06:38:50.895426
{ "authors": [ "jokerttu", "ziga-hvalec" ], "repo": "googlemaps/react-native-navigation-sdk", "url": "https://github.com/googlemaps/react-native-navigation-sdk/issues/301", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
280461603
PagingWithNetworkSample - failed Hello guys, I would like to report ANR on your new sample. If you use DB + NETWORK case, and use pull to refresh app crashes. 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] Throwing new exception 'length=246; index=1434' with unexpected pending exception: java.lang.ArrayIndexOutOfBoundsException: length=246; index=1434 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at android.arch.lifecycle.LiveData com.android.example.paging.pagingwithnetwork.reddit.repository.inDb.DbRedditPostRepository.refresh(java.lang.String) (DbRedditPostRepository.kt:75) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at android.arch.lifecycle.LiveData com.android.example.paging.pagingwithnetwork.reddit.repository.inDb.DbRedditPostRepository.access$refresh(com.android.example.paging.pagingwithnetwork.reddit.repository.inDb.DbRedditPostRepository, java.lang.String) (DbRedditPostRepository.kt:39) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at android.arch.lifecycle.LiveData com.android.example.paging.pagingwithnetwork.reddit.repository.inDb.DbRedditPostRepository$postsOfSubreddit$refreshState$1.apply(kotlin.Unit) (DbRedditPostRepository.kt:122) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at java.lang.Object com.android.example.paging.pagingwithnetwork.reddit.repository.inDb.DbRedditPostRepository$postsOfSubreddit$refreshState$1.apply(java.lang.Object) (DbRedditPostRepository.kt:39) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.arch.lifecycle.Transformations$2.onChanged(java.lang.Object) (Transformations.java:133) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.arch.lifecycle.MediatorLiveData$Source.onChanged(java.lang.Object) (MediatorLiveData.java:152) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.arch.lifecycle.LiveData.considerNotify(android.arch.lifecycle.LiveData$LifecycleBoundObserver) (LiveData.java:131) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.arch.lifecycle.LiveData.dispatchingValue(android.arch.lifecycle.LiveData$LifecycleBoundObserver) (LiveData.java:148) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.arch.lifecycle.LiveData.setValue(java.lang.Object) (LiveData.java:294) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.arch.lifecycle.MutableLiveData.setValue(java.lang.Object) (MutableLiveData.java:33) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void com.android.example.paging.pagingwithnetwork.reddit.repository.inDb.DbRedditPostRepository$postsOfSubreddit$2.invoke() (DbRedditPostRepository.kt:132) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at java.lang.Object 
com.android.example.paging.pagingwithnetwork.reddit.repository.inDb.DbRedditPostRepository$postsOfSubreddit$2.invoke() (DbRedditPostRepository.kt:39) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void com.android.example.paging.pagingwithnetwork.reddit.ui.SubRedditViewModel.refresh() (SubRedditViewModel.kt:38) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void com.android.example.paging.pagingwithnetwork.reddit.ui.RedditActivity$initSwipeToRefresh$2.onRefresh() (RedditActivity.kt:100) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.support.v4.widget.SwipeRefreshLayout$1.onAnimationEnd(android.view.animation.Animation) (SwipeRefreshLayout.java:187) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.support.v4.widget.CircleImageView.onAnimationEnd() (CircleImageView.java:106) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewGroup.finishAnimatingView(android.view.View, android.view.animation.Animation) (ViewGroup.java:6125) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.View.draw(android.graphics.Canvas, android.view.ViewGroup, long) (View.java:16240) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.ViewGroup.drawChild(android.graphics.Canvas, android.view.View, long) (ViewGroup.java:3621) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewGroup.dispatchDraw(android.graphics.Canvas) (ViewGroup.java:3411) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.View.draw(android.graphics.Canvas) (View.java:16299) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at android.view.RenderNode android.view.View.updateDisplayListIfDirty() (View.java:15293) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.View.draw(android.graphics.Canvas, android.view.ViewGroup, long) (View.java:16066) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.ViewGroup.drawChild(android.graphics.Canvas, android.view.View, long) (ViewGroup.java:3621) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewGroup.dispatchDraw(android.graphics.Canvas) (ViewGroup.java:3411) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at android.view.RenderNode android.view.View.updateDisplayListIfDirty() (View.java:15288) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.View.draw(android.graphics.Canvas, android.view.ViewGroup, long) (View.java:16066) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean 
android.view.ViewGroup.drawChild(android.graphics.Canvas, android.view.View, long) (ViewGroup.java:3621) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewGroup.dispatchDraw(android.graphics.Canvas) (ViewGroup.java:3411) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at android.view.RenderNode android.view.View.updateDisplayListIfDirty() (View.java:15288) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.View.draw(android.graphics.Canvas, android.view.ViewGroup, long) (View.java:16066) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.ViewGroup.drawChild(android.graphics.Canvas, android.view.View, long) (ViewGroup.java:3621) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewGroup.dispatchDraw(android.graphics.Canvas) (ViewGroup.java:3411) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at android.view.RenderNode android.view.View.updateDisplayListIfDirty() (View.java:15288) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.View.draw(android.graphics.Canvas, android.view.ViewGroup, long) (View.java:16066) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.ViewGroup.drawChild(android.graphics.Canvas, android.view.View, long) (ViewGroup.java:3621) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewGroup.dispatchDraw(android.graphics.Canvas) (ViewGroup.java:3411) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at android.view.RenderNode android.view.View.updateDisplayListIfDirty() (View.java:15288) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.View.draw(android.graphics.Canvas, android.view.ViewGroup, long) (View.java:16066) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.ViewGroup.drawChild(android.graphics.Canvas, android.view.View, long) (ViewGroup.java:3621) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewGroup.dispatchDraw(android.graphics.Canvas) (ViewGroup.java:3411) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at android.view.RenderNode android.view.View.updateDisplayListIfDirty() (View.java:15288) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.View.draw(android.graphics.Canvas, android.view.ViewGroup, long) (View.java:16066) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at boolean android.view.ViewGroup.drawChild(android.graphics.Canvas, android.view.View, long) (ViewGroup.java:3621) 12-08 12:36:37.183 
31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewGroup.dispatchDraw(android.graphics.Canvas) (ViewGroup.java:3411) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.View.draw(android.graphics.Canvas) (View.java:16299) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void com.android.internal.policy.PhoneWindow$DecorView.draw(android.graphics.Canvas) (PhoneWindow.java:2740) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at android.view.RenderNode android.view.View.updateDisplayListIfDirty() (View.java:15293) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ThreadedRenderer.updateViewTreeDisplayList(android.view.View) (ThreadedRenderer.java:295) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ThreadedRenderer.updateRootDisplayList(android.view.View, android.view.HardwareRenderer$HardwareDrawCallbacks) (ThreadedRenderer.java:301) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ThreadedRenderer.draw(android.view.View, android.view.View$AttachInfo, android.view.HardwareRenderer$HardwareDrawCallbacks) (ThreadedRenderer.java:336) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewRootImpl.draw(boolean) (ViewRootImpl.java:2787) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewRootImpl.performDraw() (ViewRootImpl.java:2591) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewRootImpl.performTraversals() (ViewRootImpl.java:2191) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewRootImpl.doTraversal() (ViewRootImpl.java:1198) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.ViewRootImpl$TraversalRunnable.run() (ViewRootImpl.java:6268) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.Choreographer$CallbackRecord.run(long) (Choreographer.java:873) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.Choreographer.doCallbacks(int, long) (Choreographer.java:676) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.Choreographer.doFrame(long, int) (Choreographer.java:606) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.view.Choreographer$FrameDisplayEventReceiver.run() (Choreographer.java:859) 12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.os.Handler.handleCallback(android.os.Message) (Handler.java:739) 12-08 12:36:37.183 
31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.os.Handler.dispatchMessage(android.os.Message) (Handler.java:95)
12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.os.Looper.loop() (Looper.java:168)
12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void android.app.ActivityThread.main(java.lang.String[]) (ActivityThread.java:5885)
12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at java.lang.Object java.lang.reflect.Method.invoke!(java.lang.Object, java.lang.Object[]) (Method.java:-2)
12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run() (ZygoteInit.java:797)
12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349] at void com.android.internal.os.ZygoteInit.main(java.lang.String[]) (ZygoteInit.java:687)
12-08 12:36:37.183 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/thread.cc:1349]
12-08 12:36:37.293 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/barrier.cc:90] Check failed: count_ == 0 (count_=-1, 0=0) Attempted to destroy barrier with non zero count
12-08 12:36:37.293 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/runtime.cc:366] Runtime aborting --- recursively, so no thread-specific detail!
12-08 12:36:37.293 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/runtime.cc:366]
12-08 12:36:37.293 31074-31074/com.android.example.paging.pagingwithnetwork A/libc: Fatal signal 6 (SIGABRT), code -6 in tid 31074 (gingwithnetwork)

that looks like a weird state,
12-08 12:36:37.293 31074-31074/com.android.example.paging.pagingwithnetwork A/art: art/runtime/barrier.cc:90] Check failed: count_ == 0 (count_=-1, 0=0) Attempted to destroy barrier with non zero count
I also cannot reproduce this issue, can you add more details on how this happened? Thanks.

I haven't made any code changes. I have downloaded the code samples again and the issue persists. Just run the app and click on the first button, "DB+NETWORK"; search works perfectly, but when you try to pull-to-refresh the already loaded list I get that error. I tried 2 phones and 2 different laptops and the crash is still there.

That is so weird :/ I cannot reproduce. Here is a video of what I'm trying. paging.mp4.zip

Can't reproduce either. I cloned the repo from scratch. What AS are you using to run it?

@yigit yes, that's the case where I'm getting the error :( @JoseAlcerreca latest stable version. Btw I've tried on an emulator and it's working. But when I try on a device it fails again.

Ok guys, I fixed it by disabling Instant Run in AS. Any suggestion how to avoid disabling it? :D

Check this out, it's on 3.1 canary 5... still getting the error + these warnings. I'll report it on the issue tracker if you don't have any new idea.
gharchive/issue
2017-12-08T11:38:02
2025-04-01T06:38:50.938655
{ "authors": [ "JoseAlcerreca", "vladanSD", "yigit" ], "repo": "googlesamples/android-architecture-components", "url": "https://github.com/googlesamples/android-architecture-components/issues/245", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
471577708
Library files were not copied to distribution_Dir in project hello-libs

hello-libs: the function add_custom_command

add_custom_command(TARGET gperf POST_BUILD
    COMMAND "${CMAKE_COMMAND}" -E copy
        "${CMAKE_CURRENT_SOURCE_DIR}/src/gperf.h"
        "${distribution_DIR}/gperf/include/gperf.h"
    COMMENT "Copying gperf to output directory")

in the file hello-libs\gen-libs\src\main\cpp\gperf\CMakeLists.txt does not run with Android Studio on Windows.

I added the module gen-libs in settings.gradle:

include ':app'
// To generate libs used in this sample:
// 1) enable the gen-libs at end of this file
// 2) enable module build dependency in app/build.gradle
// 3) build the app's APK in Android Studio or on command line
// 4) undo step 1) and 2) above
include ':gen-libs'

and set the implementation in the app module's build.gradle:

implementation project(path: ':gen-libs')

Then I built the APK. I can't find the lib files that I built in the folder distribution_DIR; I think the files were not copied to that folder. But the module gen-libs was built OK, because I found .so and .a files in the folder gen-libs\build\intermediates\cmake\debug\obj\arm64-v8a.

Thank you for bringing it up! I did a try on Windows 10 for the project; it seems to be OK:
1) enable the project inside settings.gradle as you mentioned
2) enable the dependency inside app's build.gradle
3) gradlew assembleDebug (inside the Windows command prompt)
Your 2) above might be the cause for your issue; refer to the comment in the app's build.gradle file for some explanation (I did not check whether the latest Studio fixed the issue, though).

I ran gradlew.bat build in cmd and it worked OK.
gharchive/issue
2019-07-23T09:14:00
2025-04-01T06:38:50.944893
{ "authors": [ "ggfan", "zoozooll" ], "repo": "googlesamples/android-ndk", "url": "https://github.com/googlesamples/android-ndk/issues/645", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
157078596
removing comment about gradle.properties, no code changes

If we have sourceSets.main.jni.srcDirs = [], then gradle.properties is not needed.

LGTM
gharchive/pull-request
2016-05-26T21:02:53
2025-04-01T06:38:50.945907
{ "authors": [ "ggfan", "rschiu" ], "repo": "googlesamples/android-ndk", "url": "https://github.com/googlesamples/android-ndk/pull/216", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
225774180
Bundling the natives with the APK?

Hi! A feature for bundling the native library with the APK as part of the compilation process, or even by Google Play while uploading the APK, would be great. In some cases (where people have weird network conditions, or constraints set up by the IT department) the download of the libs may fail, and that's where all hell breaks loose.

I'd also love this functionality, for offline mode.
gharchive/issue
2017-05-02T18:02:58
2025-04-01T06:38:50.947268
{ "authors": [ "athornz", "danielgindi" ], "repo": "googlesamples/android-vision", "url": "https://github.com/googlesamples/android-vision/issues/224", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
124194724
Library not found?

I haven't been able to reproduce this issue. It was likely an outage in Apps Script. Please try again.
gharchive/issue
2015-12-29T10:57:40
2025-04-01T06:38:50.948494
{ "authors": [ "erickoledadevrel", "sakib118" ], "repo": "googlesamples/apps-script-oauth1", "url": "https://github.com/googlesamples/apps-script-oauth1/issues/21", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
118638393
VR-Mode disabled leads to blackscreen on some devices

Hi, our game "Sinister Edge" (https://play.google.com/store/apps/details?id=com.everbytestudio.sinisteredge) has been live on the Google Play Store for a few weeks. The player is able to choose (in the main menu) whether he wants to play in VR mode. We save this data and set the VR mode accordingly. By default, VR mode is disabled on the player prefab, since most players don't use it.

We received a lot of emails from players (VR mode disabled) addressing the same problem. Some of them use very powerful devices (e.g. Tegra 4 tablets). They tell us their screen simply stays black in-game. I read about problems switching from non-VR to VR mode, but this is not the case. Does somebody have an idea what could cause this problem? We support large tablets which are definitely not compatible with Cardboard, but we set the VR mode boolean to false by default. Are there any other Cardboard actions involved that could cause the black screens? Should we build a second player prefab for non-VR players, completely without Cardboard? Kind regards from a confused team :)

A couple of questions: Does this only happen on some tablets? Is it a black screen from launch, or only after a certain point in the game?

I have this problem as well, using a Samsung S3 for testing my game. No users have reported it to me yet. Been meaning to write about this but haven't been able to document it with a video.

Thanks for the fast answers! The black screen seems to appear from the moment the Cardboard SDK is involved (the main menu is no problem). The black screen is not exclusive to tablets; we got mails from S3 users as well. It's hard for us to track the problem since most users simply rate 1 star and uninstall the game.

I think we have an S3 in the office. I'll download the game and try it out.

Experienced this too: if VR mode is enabled by default, I can switch it on and off just fine. If it is disabled by default, my devices (Galaxy Note 2 & 4) will show a black screen if I try to enable it.

Yep, that is exactly the same problem here, Hans. But sometimes, and in the case of this screenshot below, VR was enabled on startup and this occurred when I switched VR off. Sadly got a 3-star review from someone that might be related to this bug: "Zuk Z1. The game looks amazing when in the non vr menus however when turning VR on with the ZukZ1 the whole image turns upsidedown and also zooms in 80% so I can not see anything, I am using a generic vr headset and my own preset for Google cardboard 30mm lens to screen 50mm apart and 50fov for top bottom left and right not sure why it isn't working compatibility issues maybe? I had decided to refund and I just wish it may get updated. WILL CHANGE REVIEW IF IT IS FIXED FOR THE ZUK Z1.." I think it might be related to this issue, however the user also complains of it zooming in? Sorry I can't ask the person for more info as it's just a review from the Play Store.

Side note: as a developer for Cardboard, it's hard to see my once 5-star rating fall because of bugs like this that I can't do much about. I hope bringing it to your attention helps you guys get the new update working tip top.

Wow, there's a lot of stuff going on in that review. I'll take it bit by bit... Does your app allow landscape right orientation? That will cause the upside-down symptom. (There may be other causes too.) The 80% zoom could be due to several factors combining: 1) the change in FOV from mono to whatever profile they had (sounds customized), 2) the phone reporting a wrong DPI.

I don't have one of these phones at hand, so I can't say if that is an issue. We are trying to resolve the issue of phone DPI inaccuracy in general, since we depend on it to compute the screen size, but it is a tough nut to crack.

My app only allows landscape left orientation, so I'm not sure how it would be upside down. Sorry I can't be more helpful, but if you want an .apk of my game I can make it available to you. Or you can find it at http://endspacevr.com. Let me know how I can help :)

I can try recreating the user's profile for one thing. The upside-down problem normally happens if you leave landscape right on as an option and they put the phone in the viewer that way too. (Bug in the SDK.) But there could be other causes. I've not heard of the user's phone, so I don't know what chipset it has. Could be an issue there. If that turns out to be the case, you may want to blacklist that phone till we can figure out what the compatibility issues are (or Unity does).

This should be fixed in the new v0.6 release of the SDK. There was an exception getting thrown which prevented the eye cameras' TargetTexture from getting set.

I'm having this problem in the iOS version of my app as well. If I disable VR mode to use the gyro tracking on a 2D camera, my scene is black. I'm not sure when this popped up, but it was working prior. I found that if I disable AddCardboardCamera() when not in VR mode, it seems to work. For now...
gharchive/issue
2015-11-24T15:49:41
2025-04-01T06:38:50.958427
{ "authors": [ "HansBernd", "ckrin", "ddutchie", "justinwasilenko", "smdol" ], "repo": "googlesamples/cardboard-unity", "url": "https://github.com/googlesamples/cardboard-unity/issues/121", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
102988359
Can not register token with GCM

I copied the example code into my project and ran it, and I catch an error like in the picture. How do I fix it?

@smallg have you installed the pods? Yes, I used pod install to add Google/CloudMessaging. @silvolu I think I already fixed it: I modified the Other Linker Flags value to "$(inherited)". @smallg great! I tried to reproduce it but I couldn't, so I was trying to figure out what to do next :)
gharchive/issue
2015-08-25T09:40:08
2025-04-01T06:38:50.961580
{ "authors": [ "silvolu", "smallg" ], "repo": "googlesamples/google-services", "url": "https://github.com/googlesamples/google-services/issues/49", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2085799631
🛑 goormEDU is down In 7096fe2, goormEDU (https://edu.goorm.io) was down: HTTP code: 403 Response time: 606 ms Resolved: goormEDU is back up in 949bc9c after 5 minutes.
gharchive/issue
2024-01-17T09:45:17
2025-04-01T06:38:50.985578
{ "authors": [ "rlatjdwn4926" ], "repo": "goorm-dev/goorm-status", "url": "https://github.com/goorm-dev/goorm-status/issues/2039", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
703554523
Attempting to assign any value to LinkedList.front directly results in an error

Goost and Godot version: gd3 @ 455ebc02a2268aeefce4e0ad066071a8f2f30560, 3.2.
OS/device including version: Windows 10.

Issue description:
Attempting to assign any value to LinkedList.front directly results in an error: 'Invalid set index 'front' (on base: 'LinkedList') with value of type 'ListNode'.'

Steps to reproduce:

func test_list_assign_via_front():
    list.push_back("A")
    assert_not_null(list.front)
    assert_eq(list.front.value, "A")
    # FIXME: This doesn't work, throws an error:
    list.front.value = "B"
    assert_eq(list.front.value, "B")
    # But this works:
    var n = list.front
    n.value = "B"
    assert_eq(list.front.value, "B")
    # This also works:
    list.find("A").value = "B"
    assert_eq(list.front.value, "B")

That's really strange because:
- we never assign any kind of ListNode in the snippet above;
- the reported base is wrong, it should be ListNode.

I've actually stumbled upon this while implementing the linked list in #12, but never got to resolve this issue:
https://github.com/goostengine/goost/blob/455ebc02a2268aeefce4e0ad066071a8f2f30560/tests/project/goost/core/types/test_list.gd#L597-L616
So, I'm not sure if that's caused by a particular implementation in Goost, or this may actually be a GDScript bug in 3.2; because other workarounds work, I don't understand why it wouldn't work in this case.

Minimal reproduction project: list_assign_front.zip

Seems like a GDScript bug in 3.2 according to godotengine/godot#41319.
gharchive/issue
2020-09-17T12:46:03
2025-04-01T06:38:50.992215
{ "authors": [ "Xrayez" ], "repo": "goostengine/goost", "url": "https://github.com/goostengine/goost/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
911798969
gopass insert [name] [key] fails to handle multiline YAML entries

Summary
With a secret of the format:

pass
key1: |
 text
 moretext

inserting a new key will yield:

pass
key1: |
key2: content
 text
 moretext

gopass insert also fails to properly handle inserting multiline YAML strings. Even manually adding the entries, gopass show fails to output the content:

gopass show secret key1
key1: |

Steps To Reproduce

echo "|\n text\n moretext" | gopass insert secret key1
echo "content" | gopass insert secret key2

### Expected behavior
The secret should respect the YAML:

pass
key1: |
 text
 moretext
key2: content

### Environment

uname -a
Linux manjaro 5.11.19-1-MANJARO #1 SMP PREEMPT Fri May 7 17:34:25 UTC 2021 x86_64 GNU/Linux

gopass --version
gopass 1.12.6 (2021-05-11 15:27:04) go1.16.4 linux amd64

- Installation method: pacman

### Additional context

Hmm, I don't think I've ever tried inserting multi-line keys. If you want to help fix this, contributing a test case would be appreciated.

I dug into the code and it appears to be a documentation issue:
https://github.com/gopasspw/gopass/blob/1629395a8269a822c93d5f8892201bbc9ae96f1d/pkg/gopass/secrets/yaml.go#L21-L32
https://github.com/gopasspw/gopass/blob/1629395a8269a822c93d5f8892201bbc9ae96f1d/pkg/gopass/secrets/kv.go#L38-L72
The YAML format explicitly requires the "---" or it will be parsed as KV.
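To make that last point concrete, here is a minimal sketch of the two on-disk formats implied by the linked doc comments; the password line and exact indentation are assumptions, the authoritative format lives in yaml.go and kv.go. A secret parsed as YAML needs the "---" header after the password line:

s3cret
---
key1: |
  text
  moretext
key2: content

Without the "---" line, the same content is treated as a simple KV secret instead:

s3cret
key2: content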
gharchive/issue
2021-06-04T20:01:53
2025-04-01T06:38:50.997446
{ "authors": [ "dominikschulz", "innovate-invent" ], "repo": "gopasspw/gopass", "url": "https://github.com/gopasspw/gopass/issues/1940", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1883078597
[cleanup] Math/rand.Seed deprecated

Summary
Since the Go 1.20 release, math/rand.Seed is marked "Deprecated". Looks like five instances to consider updating:

find . -name "*.go" -type f -print | xargs grep .Seed
./pkg/pwgen/pwgen_test.go: mrand.Seed(1789)
./pkg/pwgen/rand.go: rand.Seed(time.Now().Unix() + int64(os.Getpid()+os.Getppid()))
./pkg/fsutil/fsutil.go: rand.Seed(time.Now().UnixNano())
./pkg/gitconfig/config_test.go: rand.Seed(time.Now().Unix())
./internal/action/binary_test.go: rand.Seed(42)

Usually "Deprecated" just means it's finally stable ;) But I'll check what the recommendation is these days.

pkg/pwgen/rand.go: this one can go away once we don't need to support Go 1.20 anymore.
pkg/fsutil/fsutil.go: same here.
For the test files we actually rely on deterministic output, so we will need some more changes. We need to introduce a global variable with a local generator and seed it like rand.New(rand.NewSource(seed)). See https://pkg.go.dev/math/rand#Seed

@dominikschulz Hi, I'd like to take on this issue if that's okay. @orangekame3 Feel free to do so. @dominikschulz Thank you. I've created a PR. Please check it out: https://github.com/gopasspw/gopass/pull/2675

Hello @dominikschulz! I see the last pull request on this issue is from last year, but I see the good-first-issue tag. Do you mind if I open my own PR? @lhardt Please go ahead. It should be as simple as taking the previous PR, fixing the merge conflicts, and then doing the changes that were requested in there, if you want. Or you can try to do it your way too if you have another idea.

@dominikschulz Can this be marked as closed following #2953? I guess #2873 can be closed following #2954 too.

Fixed by #2953
gharchive/issue
2023-09-06T02:52:51
2025-04-01T06:38:51.003908
{ "authors": [ "AnomalRoil", "dominikschulz", "lhardt", "n4x2", "orangekame3" ], "repo": "gopasspw/gopass", "url": "https://github.com/gopasspw/gopass/issues/2650", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
787539239
UX update

Streamline command output, add unicode icons and ask for passphrase during onboarding. Fixes #1698

Signed-off-by: Dominik Schulz dominik.schulz@gauner.org

Codecov Report
Merging #1715 (add5760) into master (774eb1d) will increase coverage by 0.02%. The diff coverage is 36.21%.

@@ Coverage Diff @@
##           master    #1715      +/-   ##
==========================================
+ Coverage   58.54%   58.56%   +0.02%
==========================================
  Files         190      191       +1
  Lines        9120     9113       -7
==========================================
- Hits         5339     5337       -2
+ Misses       3168     3162       -6
- Partials      613      614       +1

Impacted Files | Coverage Δ
internal/action/delete.go | 61.29% <0.00%> (ø)
internal/action/edit.go | 46.51% <0.00%> (ø)
internal/action/generate.go | 47.26% <0.00%> (ø)
internal/action/repl.go | 0.00% <0.00%> (ø)
internal/out/print.go | 83.33% <0.00%> (ø)
internal/store/leaf/recipients.go | 39.23% <0.00%> (ø)
internal/store/leaf/reencrypt.go | 37.31% <0.00%> (ø)
internal/store/leaf/write.go | 30.61% <0.00%> (ø)
internal/store/root/move.go | 33.33% <0.00%> (ø)
main.go | 48.78% <0.00%> (ø)
... and 13 more

Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 774eb1d...add5760. Read the comment docs.
gharchive/pull-request
2021-01-16T19:46:17
2025-04-01T06:38:51.030041
{ "authors": [ "codecov-io", "dominikschulz" ], "repo": "gopasspw/gopass", "url": "https://github.com/gopasspw/gopass/pull/1715", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
812261670
git: disable automatic line ending conversion

The main motivation for this PR is to enable using bash scripts on Windows. This disables automatic conversion of line endings from \n to \r\n for text files. We cannot use Windows-style line endings in bash scripts; they specifically have to be Unix-style line endings.

Hi @jmoguillansky-gpsw, I understand the motivation well... Now, I discussed this with @cboesch-gpsw & @mbouron (one month ago, I think), and I believe you also had this discussion with them... The conclusion and best approach was, for now, to let the user manage it in their own git configuration (using the autocrlf git property...).

By default, the CI server was converting the line endings to Windows style, which is not acceptable for bash scripts. With this small change, we disable line ending conversion for bash scripts.
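For reference, a minimal sketch of the conventional way a repo pins bash scripts to Unix line endings via .gitattributes — illustrative only; the PR's actual change may be scoped differently:

# .gitattributes
# Keep LF line endings for shell scripts regardless of core.autocrlf,
# so they stay runnable by bash on Windows checkouts.
*.sh text eol=lf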
gharchive/pull-request
2021-02-19T18:59:35
2025-04-01T06:38:51.113405
{ "authors": [ "jmoguillansky-gpsw" ], "repo": "gopro/gopro-lib-node.gl", "url": "https://github.com/gopro/gopro-lib-node.gl/pull/204", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
171073310
Add support for playlist

Can you add support for multiple songs, like a playlist?

@cbfranca Thanks for the suggestion! This is somewhat planned. Hoping to get back to work next week or so.

I'm going to start working on this as I have a need for a non-jQuery player. Any suggestions/requests for the implementation? Thinking the input for the playlist will be an array of JSON (see the sketch below). And then clicking an entry in the playlist will change the src for the main player object. But I'm not 100% certain how that will work in practice.

Design-wise I'll be sticking doggedly to the Material-UI spec. But the real question becomes, which bit of the spec do we want to base the main playlist off of? We can stick to the basic spec outlined by the contacts design, which is how Google Play implements the playlists as well. And we'll definitely start with that. But if people want to pack information more densely into the UI, we could look at using the notification spec to do a compliant implementation where by default it's just the track name, artist, and duration, but can be expanded out to list more information, e.g. album title, year, subtitle, rating, genre, publisher, and even custom data if we add a mechanism for it. But that's just an idea at this point. Let me know what you think and any wish lists.
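A sketch of what that JSON playlist input might look like — the field names here (title, artist, src) are guesses for illustration, not the element's actual API:

[
  { "title": "Track One", "artist": "Artist A", "src": "audio/track-one.mp3" },
  { "title": "Track Two", "artist": "Artist B", "src": "audio/track-two.mp3" }
]

Selecting an entry would then swap the player's src to that entry's file, per the approach described above.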
gharchive/issue
2016-08-14T19:07:18
2025-04-01T06:38:51.209473
{ "authors": [ "azariah001", "cbfranca", "gorork" ], "repo": "gorork/paper-audio-player", "url": "https://github.com/gorork/paper-audio-player/issues/22", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
81395806
Streamtime module

Adds a !streamtime command to Wololobot that displays a countdown to the next stream. Times for the streams are fetched from a Schedule panel in the stream description. It is identified by either having the title 'Schedule' or an image (identified by the schedImage option passed to the module).

The output of !streamtime can also be overwritten by mods:

!streamtime overwrite <msg>: The output is overwritten with <msg>. If <msg> contains the string $iftime{...}, the part ... is output if there is a stream scheduled. If <msg> contains $time, that is replaced with a countdown to the next stream.
!streamtime overwrite_time YYYY-MM-DD hh:mm [AM|PM] [timezone]: Overwrites the time of the next stream.
!streamtime overwrite_discard: Discards any overwrites (messages and times)

By default, the streamtimes are updated every 5 minutes. An update can be forced with !streamtime update.

Sweet :eyes:
gharchive/pull-request
2015-05-27T10:46:49
2025-04-01T06:38:51.213572
{ "authors": [ "goto-bus-stop", "jazzpi" ], "repo": "goto-bus-stop/wololobot", "url": "https://github.com/goto-bus-stop/wololobot/pull/3", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
244560228
Added compatibility with knock 2 & multiple entities authentication

Hi, I fixed some errors with the new version of knock.
great! thanks a lot!
gharchive/pull-request
2017-07-21T04:18:07
2025-04-01T06:38:51.214570
{ "authors": [ "gottfrois", "max-konin" ], "repo": "gottfrois/grape-knock", "url": "https://github.com/gottfrois/grape-knock/pull/4", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2412919015
Updating SLO module and amending codeowners

Description: Updating SLO dashboard module, amending codeowners and updating service health dashboard
Ticket number:
Checklist:
[ ] Is my change backwards compatible? Please include evidence
[ ] I have tested this and added output to Jira Comment:
[ ] Documentation added (link) Comment:

I'm going to close this PR and encompass the changes into a wider PR to include additional dashboard changes requested by TSD.
gharchive/pull-request
2024-07-17T07:50:41
2025-04-01T06:38:51.227599
{ "authors": [ "chrisdodd93" ], "repo": "govuk-one-login/observability-configuration", "url": "https://github.com/govuk-one-login/observability-configuration/pull/257", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1176616352
MetaLinks won't import

Describe the bug
The MetaLinks component is not part of the exported components.

To Reproduce
Steps to reproduce the behavior:
1. Go to 'Storybook govuk-react'
2. Copy the footer snippet 'Footer with Meta Links' into your code.
3. Import the necessary components
4. See error

Expected behavior
The MetaLinks component should just import and work, just like the other components from the govuk-react package.

Screenshots

Desktop (please complete the following information):
OS: Windows 10 Enterprise
Browser: Chrome

Thanks for reporting this! You should be able to use <Footer.MetaLinks> instead of <MetaLinks>.
We can fix the documentation by setting the displayName on MetaLinks to Footer.MetaLinks.
https://github.com/govuk-react/govuk-react/blob/64df8c00ce6c5f78ca9269742c2feacdeb6afcba/components/footer/src/molecules/meta-links/index.tsx
Would you like to raise a PR for this?
@penx Thank you so much. That works. Yes, please... Updating the documentation will help future developers working on the govuk-react design system.
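A rough sketch of the suggested fix in meta-links/index.tsx — the component body here is an illustrative stand-in, not the file's actual contents:

import * as React from 'react';

// Illustrative placeholder for the real MetaLinks implementation.
const MetaLinks: React.FC<{ children?: React.ReactNode }> = ({ children }) => (
  <div>{children}</div>
);

// The suggested documentation fix: with this displayName, generated docs
// show the component the way it is actually consumed, as <Footer.MetaLinks>.
MetaLinks.displayName = 'Footer.MetaLinks';

export default MetaLinks;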
gharchive/issue
2022-03-22T11:02:04
2025-04-01T06:38:51.233293
{ "authors": [ "cherrelleM1", "penx" ], "repo": "govuk-react/govuk-react", "url": "https://github.com/govuk-react/govuk-react/issues/1063", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
394990934
v0.2

Checklist:
[x] Adapt to PHP 7.3 https://github.com/gowork/values/pull/7
[ ] new methods (groupBy, chunk, splice, find, findLast, any, every) https://github.com/gowork/values/pull/8
[ ] IterableValue https://github.com/gowork/values/pull/5
[ ] new methods for IterableValue (groupBy, chunk, splice, find, any)
[ ] documentation for IterableValue

Anything else to add?
gharchive/issue
2018-12-31T14:52:51
2025-04-01T06:38:51.236319
{ "authors": [ "bronek89" ], "repo": "gowork/values", "url": "https://github.com/gowork/values/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
251148114
does it support webpack2?

I just installed it on a project. It works well on webpack 2. You just need to install font-awesome first: npm install font-awesome
@KiGniark no, it does not work with webpack 2: https://github.com/gowravshekar/font-awesome-webpack/issues/33
gharchive/issue
2017-08-18T06:05:41
2025-04-01T06:38:51.238006
{ "authors": [ "AndrewRayCode", "KiGniark", "luqingxuan" ], "repo": "gowravshekar/font-awesome-webpack", "url": "https://github.com/gowravshekar/font-awesome-webpack/issues/37", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
95172178
CocoaPods support

Please add support for CocoaPods :)
It's planned! :-) I need to add some tests first...
Added! http://cocoadocs.org/docsets/SwiftChart/0.2.0/ 🎉
gharchive/issue
2015-07-15T11:48:51
2025-04-01T06:38:51.240087
{ "authors": [ "bphenriques", "gpbl" ], "repo": "gpbl/SwiftChart", "url": "https://github.com/gpbl/SwiftChart/issues/11", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
941514044
use padding instead of offsets in the display

It might be better to use margins instead of offsets for each font.
this has been fixed
gharchive/issue
2021-07-11T19:01:04
2025-04-01T06:38:51.241387
{ "authors": [ "gpend" ], "repo": "gpend/calc-app", "url": "https://github.com/gpend/calc-app/issues/2", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
2524632140
gradio web report error after recording #69

85|omni_gradio_web | File "/home/ops/anaconda3/envs/omni/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 2185, in <lambda>
85|omni_gradio_web | lambda source, handler: handler(source)
85|omni_gradio_web | File "/home/ops/anaconda3/envs/omni/lib/python3.10/site-packages/pydantic/_internal/_schema_generation_shared.py", line 83, in __call__
85|omni_gradio_web | schema = self._handler(source_type)
85|omni_gradio_web | File "/home/ops/anaconda3/envs/omni/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 2088, in inner_handler
85|omni_gradio_web | schema = self._generate_schema_inner(obj)
85|omni_gradio_web | File "/home/ops/anaconda3/envs/omni/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 929, in _generate_schema_inner
85|omni_gradio_web | return self.match_type(obj)
85|omni_gradio_web | File "/home/ops/anaconda3/envs/omni/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 1029, in match_type
85|omni_gradio_web | return self._match_generic_type(obj, origin)
85|omni_gradio_web | File "/home/ops/anaconda3/envs/omni/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 1058, in _match_generic_type
85|omni_gradio_web | return self._union_schema(obj)
85|omni_gradio_web | File "/home/ops/anaconda3/envs/omni/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 1378, in _union_schema
85|omni_gradio_web | choices.append(self.generate_schema(arg))
85|omni_gradio_web | File "/home/ops/anaconda3/envs/omni/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 655, in generate_schema
85|omni_gradio_web | schema = self._generate_schema_inner(obj)
85|omni_gradio_web | File "/home/ops/anaconda3/envs/omni/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 929, in _generate_schema_inner
85|omni_gradio_web | return self.match_type(obj)
85|omni_gradio_web | File "/home/ops/anaconda3/envs/omni/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 1038, in match_type
85|omni_gradio_web | return self._unknown_type_schema(obj)
85|omni_gradio_web | File "/home/ops/anaconda3/envs/omni/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py", line 558, in _unknown_type_schema
85|omni_gradio_web | raise PydanticSchemaGenerationError(
85|omni_gradio_web | pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <class 'starlette.requests.Request'>. Set arbitrary_types_allowed=True in the model_config to ignore this error or implement __get_pydantic_core_schema__ on your type to fully support it.
85|omni_gradio_web | If you got this error by calling handler(<some type>) within __get_pydantic_core_schema__ then you likely need to call handler.generate_schema(<some type>) since we do not call __get_pydantic_core_schema__ on <some type> otherwise to avoid infinite recursion.
85|omni_gradio_web | For further information visit https://errors.pydantic.dev/2.9/u/schema-for-unknown-type

which gradio version did you use?
gradio version: 4.42.0
please try the following versions:
pydantic 2.8.2
pydantic_core 2.20.1
downgrade to fastapi==0.112.4 as mentioned in https://github.com/jhj0517/Whisper-WebUI/issues/258 fixed it
I will close it for now, please feel free to re-open.
gharchive/issue
2024-09-13T11:43:40
2025-04-01T06:38:51.260764
{ "authors": [ "Emotibot5", "mincomp", "mini-omni" ], "repo": "gpt-omni/mini-omni", "url": "https://github.com/gpt-omni/mini-omni/issues/70", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
369848223
[WHLSL] Local variables should be statically allocated

Migrated from https://bugs.webkit.org/show_bug.cgi?id=188402:

At 2018-08-08T03:03:55Z, tdenney@apple.com wrote:
The interpreter conforms to the spec; the Metal compiler doesn't (necessarily). For example, a call to foo() should return 1:

thread int* bar(bool flag) {
    int x = 0;
    if (flag)
        x = 1;
    return &x;
}
int foo() {
    thread int* x = bar(false);
    thread int* y = bar(true);
    return (*x) * (*y);
}

The interpreter gets this right; when a VariableDecl is visited in Evaluator.js it only allocates a buffer for that variable if there wasn't one already. The compiler doesn't get it right; local variables are emitted in local scope and therefore the Metal compiler can choose to store them however it likes. They could be independent, or another function bar2() could alias its local variables with bar() if bar and bar2 are never both called.

Metal Shading Language only permits constant variables to be static if they are declared in global scope. It doesn't permit statically declared variables in local scope. When functions are inlined this isn't a problem; all variables can be "statically" allocated by having them as local variables declared at the top of a shader function. However, not all functions can be (efficiently) inlined in the MSL output if they don't have reducible control flow. An inefficient solution would "statically" allocate all variables in the main shading functions and pass references to them to non-inlined functions. This awful approach could be mitigated by conservatively finding local variables that references are never created for, but it is far from ideal.

It is worth noting that successive executions of the same function cannot rely on previous values stored at the same local variable because local variables are always zero-initialized when they are declared. The compiler and the interpreter behave correctly on the following program:

thread int* bar(bool flag) {
    int x; // x is zero initialized twice
    if (flag)
        x = 1;
    return &x;
}
int foo() {
    thread int* x = bar(true);
    thread int* y = bar(false);
    return (*x) * (*y);
}

The result of foo() should be 0. Given that local variable values do not persist between calls, the largest benefit of statically allocating local variables is that references to local variables can be returned from a function, or passed out via "out" parameters. Many programming languages do not permit this, so a possible mitigation would be to disallow it.

At 2018-09-06T00:50:56Z, tdenney@apple.com wrote:
(In reply to Thomas Denney from comment #1)
Created attachment 348987 [details] WIP
This patch doesn't yet support function arguments having their address taken, and I haven't done any work on array references yet. It also contains my modified version of the standard library for faster parsing, so this is very much WIP.

At 2018-09-06T00:49:45Z, tdenney@apple.com wrote:
Created attachment 348987 WIP

At 2018-09-11T03:16:11Z, tdenney@apple.com wrote:
Created attachment 349372 Patch

At 2018-09-11T00:10:03Z, tdenney@apple.com wrote:
Created attachment 349358 WIP

At 2018-09-11T00:11:10Z, tdenney@apple.com wrote:
(In reply to Thomas Denney from comment #3)
Created attachment 349358 [details] WIP
This most recent patch is basically complete, but I'm going to wait on the two dependencies of this bug to be resolved before I put this up for review.
At 2018-09-19T21:25:39Z, mmaxfield@apple.com wrote: Comment on attachment 350096 Patch View in context: https://bugs.webkit.org/attachment.cgi?id=350096&action=review Cool patch. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:29 const entryPoints = []; This is a fairly self-contained block of code. I'd recommend moving it to its own function. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:30 class OnlyVisitFuncDefsThatAreEntryPoints extends Visitor { How about "gatherEntryPointDefs" since visitors visit everything? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:41 const allVariablesAndFunctionParameters = new Set(); const functionsThatAreCalledByEntryPoints = new Set(); class FindAllVariablesAndFunctionParameters extends Visitor { This is a fairly self-contained block of code. I'd recommend moving it to its own function. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:53 node.func.visit(new FindAllVariablesAndFunctionParameters(node.func)); Doesn't this have exponential runtime because it doesn't dedup functions? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:59 node._func = this._currentFunc; A more descriptive name, please Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:68 node._func = this._currentFunc; if (!this._currentFunc.isEntryPoint) allVariablesAndFunctionParameters.add(node); Why does visitVariableDecl() have a super call but visitFuncParameter not? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:83 program.add(ptrToGlobalStructType); This doesn't seem right, because the parser will never put a PtrType at the global level, so we probably shouldn't do that either. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:100 let counter = 0; const varToFieldMap = new Map(); for (let varOrParam of allVariablesAndFunctionParameters) { const fieldName = `field${counter++}_${varOrParam._func.name}_${varOrParam.name}`; globalStructType.add(new Field(varOrParam.origin, fieldName, varOrParam.type)); varToFieldMap.set(varOrParam, fieldName); } for (let func of functionsThatAreCalledByEntryPoints) { if (func.returnType.name !== "void") { const fieldName = `field${counter++}_return_${func.name}`; globalStructType.add(new Field(func.origin, fieldName, func.returnType)); func.returnFieldName = fieldName; } } This is a fairly self-contained block of code. I'd recommend moving it to its own function. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:139 get func() { return this._func; } Not sure if this is necessary if all callers are local to the class. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:157 const possibleAndOverloads = program.globalNameContext.get(Func, functionName); const callExpressionResolution = CallExpression.resolve(node.origin, possibleAndOverloads, functionName, [ this.globalStructVariableRef ], [ ptrToGlobalStructTypeRef ]); It's kind of unfortunate we have to reverse-engineer what would have happened earlier in the compiler. Can we run this stage earlier so we don't have to do this? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:171 return super.visitVariableRef(node); Isn't this an error? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:180 return new Assignment(node.origin, this._dereferencedCallExpressionForFieldName(node, node.type, varToFieldMap.get(node)), node.initializer.visit(this), node.type); Nodes need to get assigned (zero-filled) even if they don't have an initializer. We should add a test for this. 
Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:183 else if (node == this.variableDecl) return node; I'd move this first Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:199 const anonymousVariable = new AnonymousVariable(node.origin, type); What is the purpose of the anonymous variables? Why not assign directly into the global struct? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:208 exprs.push(this._dereferencedCallExpressionForFieldName(node.func, node.func.returnType, node.func.returnFieldName)); Neat. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:210 node.argumentList = [ this.globalStructVariableRef ]; Are you sure it's wise for them all to be using the exact same VariableRef? Seems like we should store a creation lambda instead of the raw variable itself. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:220 if (node.value && this._func.returnFieldName) If these don't match, seems like this should be an error. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:223 return new CommaExpression(node.origin, [ new Assignment(node.origin, this._dereferencedCallExpressionForFieldName(this._func, this._func.returnType, this._func.returnFieldName), node.value, this._func.returnType), new Return(node.origin) ]); Indentation Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:240 if (node._newParameters) This pollutes the FuncDef nodes. I'd prefer a side-table. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:250 if (node.func.returnFieldName) node._returnType = node.resultType = TypeRef.wrap(program.types.get("void")); Cool. Tools/WebGPUShadingLanguageRI/EBufferBuilder.js:-33 constructor(program) { super(); this._program = program; } What? What's the point of this class if you can never construct it? I don't see any other constructors or static functions. Tools/WebGPUShadingLanguageRI/Func.js:57 set parameters(newValue) Not a great variable name. Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js:51 function createFieldType() { return field.type.visit(new Rewriter()); } function createTypeRef() { return TypeRef.wrap(type); } Do we need these? Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js:101 nativeFunc = new NativeFunc( field.origin, "operator." + field.name + "=", createTypeRef(), field.origin, "operator&." + field.name, new PtrType(field.origin, addressSpace, createFieldType()), [ new FuncParameter(field.origin, null, createTypeRef()), new FuncParameter(field.origin, null, createFieldType()) new FuncParameter( field.origin, null, new PtrType(field.origin, addressSpace, createTypeRef())) ], isCast, shaderType); setupImplementationData(nativeFunc, ([base, value], offset, structSize, fieldSize) => { let result = new EPtr(new EBuffer(structSize), 0); result.copyFrom(base, structSize); result.plus(offset).copyFrom(value, fieldSize); return result; setupImplementationData(nativeFunc, ([base], offset, structSize, fieldSize) => { base = base.loadValue(); if (!base) throw new WTrapError(field.origin.originString, "Null dereference"); return EPtr.box(base.plus(offset)); }); program.add(nativeFunc); diff really made a mess of things, didn't it At 2018-09-13T01:51:52Z, webkit-bug-importer@group.apple.com wrote: rdar://problem/44403028 At 2018-09-19T07:24:12Z, tdenney@apple.com wrote: Created attachment 350096 Patch At 2018-09-21T02:10:04Z, commit-queue@webkit.org wrote: Comment on attachment 350175 Patch Rejecting attachment 350175 from commit-queue. 
Failed to run "['/Volumes/Data/EWS/WebKit/Tools/Scripts/webkit-patch', '--status-host=webkit-queues.webkit.org', '--bot-id=webkit-cq-02', 'land-attachment', '--force-clean', '--non-interactive', '--parent-command=commit-queue', 350175, '--port=mac']" exit_code: 2 cwd: /Volumes/Data/EWS/WebKit Logging in as commit-queue@webkit.org... Fetching: https://bugs.webkit.org/attachment.cgi?id=350175&action=edit Fetching: https://bugs.webkit.org/show_bug.cgi?id=188402&ctype=xml&excludefield=attachmentdata Processing 1 patch from 1 bug. Updating working directory Processing patch 350175 from bug 188402. Fetching: https://bugs.webkit.org/attachment.cgi?id=350175 Failed to run "[u'/Volumes/Data/EWS/WebKit/Tools/Scripts/svn-apply', '--force', '--reviewer', u'Myles C. Maxfield']" exit_code: 1 cwd: /Volumes/Data/EWS/WebKit Parsed 14 diffs from patch file(s). patching file Tools/ChangeLog Hunk #1 succeeded at 1 with fuzz 3. patching file Tools/WebGPUShadingLanguageRI/All.js patching file Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js patching file Tools/WebGPUShadingLanguageRI/CallExpression.js patching file Tools/WebGPUShadingLanguageRI/EBufferBuilder.js patching file Tools/WebGPUShadingLanguageRI/Func.js patching file Tools/WebGPUShadingLanguageRI/FuncDef.js patching file Tools/WebGPUShadingLanguageRI/Prepare.js patching file Tools/WebGPUShadingLanguageRI/Rewriter.js patching file Tools/WebGPUShadingLanguageRI/SPIRV.html patching file Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js Hunk #1 FAILED at 20. 1 out of 1 hunk FAILED -- saving rejects to file Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js.rej patching file Tools/WebGPUShadingLanguageRI/Test.html patching file Tools/WebGPUShadingLanguageRI/Test.js Hunk #1 succeeded at 8081 (offset 164 lines). patching file Tools/WebGPUShadingLanguageRI/index.html Failed to run "[u'/Volumes/Data/EWS/WebKit/Tools/Scripts/svn-apply', '--force', '--reviewer', u'Myles C. Maxfield']" exit_code: 1 cwd: /Volumes/Data/EWS/WebKit Parsed 14 diffs from patch file(s). patching file Tools/ChangeLog Hunk #1 succeeded at 1 with fuzz 3. patching file Tools/WebGPUShadingLanguageRI/All.js patching file Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js patching file Tools/WebGPUShadingLanguageRI/CallExpression.js patching file Tools/WebGPUShadingLanguageRI/EBufferBuilder.js patching file Tools/WebGPUShadingLanguageRI/Func.js patching file Tools/WebGPUShadingLanguageRI/FuncDef.js patching file Tools/WebGPUShadingLanguageRI/Prepare.js patching file Tools/WebGPUShadingLanguageRI/Rewriter.js patching file Tools/WebGPUShadingLanguageRI/SPIRV.html patching file Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js Hunk #1 FAILED at 20. 1 out of 1 hunk FAILED -- saving rejects to file Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js.rej patching file Tools/WebGPUShadingLanguageRI/Test.html patching file Tools/WebGPUShadingLanguageRI/Test.js Hunk #1 succeeded at 8081 (offset 164 lines). patching file Tools/WebGPUShadingLanguageRI/index.html Failed to run "[u'/Volumes/Data/EWS/WebKit/Tools/Scripts/svn-apply', '--force', '--reviewer', u'Myles C. Maxfield']" exit_code: 1 cwd: /Volumes/Data/EWS/WebKit Updating OpenSource From https://git.webkit.org/git/WebKit 2a2836e6631..dee36913aef master -> origin/master Partial-rebuilding .git/svn/refs/remotes/origin/master/.rev_map.268f45cc-cd09-0410-ab3c-d52691b4dbfc ... 
Currently at 236296 = 2a2836e6631fd50250fbec7774a49ee368daa97b r236297 = 560dda40a46c0fea73db1cb6365debaee9273c3a r236298 = 9ff9defcd0e9c7e3712bd2f28cc498e9e99f7902 r236299 = dee36913aefb932eb3d82e2b3510193dac0212ff Done rebuilding .git/svn/refs/remotes/origin/master/.rev_map.268f45cc-cd09-0410-ab3c-d52691b4dbfc First, rewinding head to replay your work on top of it... Fast-forwarded master to refs/remotes/origin/master. Full output: https://webkit-queues.webkit.org/results/9290392 At 2018-09-20T06:33:31Z, tdenney@apple.com wrote: Created attachment 350175 Patch At 2018-09-20T02:49:41Z, tdenney@apple.com wrote: (In reply to Myles C. Maxfield from comment #8) Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:53 node.func.visit(new FindAllVariablesAndFunctionParameters(node.func)); Doesn't this have exponential runtime because it doesn't dedup functions? Damn, good catch. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:157 const possibleAndOverloads = program.globalNameContext.get(Func, functionName); const callExpressionResolution = CallExpression.resolve(node.origin, possibleAndOverloads, functionName, [ this.globalStructVariableRef ], [ ptrToGlobalStructTypeRef ]); It's kind of unfortunate we have to reverse-engineer what would have happened earlier in the compiler. Can we run this stage earlier so we don't have to do this? Annoyingly we need types for this stage, which are only fully annotated in the Checker stage. An earlier version of this patch tried doing this allocation but I couldn’t get it working reliably. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:171 return super.visitVariableRef(node); Isn't this an error? No, anonymous variables can be wrapped in VariableRefs. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:199 const anonymousVariable = new AnonymousVariable(node.origin, type); What is the purpose of the anonymous variables? Why not assign directly into the global struct? I’m going to add a comment into the code explaining why not, because it has now caught me out several times. Consider the case foo(foo(a, b), c). We initially evaluate c, and then evaluate foo(a, b) per the RTL calling convention. To evaluate foo(a, b) we evaluate b, then a, and place them in the global struct for the call to foo. However, this would mean that if c had previously been placed in the global struct then the outer foo wouldn’t see the value of evaluating c, but the value of evaluating b. Therefore all the arguments have to be evaluated into anonymous variables, then copied into the global struct, and then the function has to be called. The existing Metal code generator (MSLStatementEmitter.visitCallExpression) and interpreter (Evaluator._evaluateArguments) both respect this behavior and there are tests that catch this. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:210 node.argumentList = [ this.globalStructVariableRef ]; Are you sure it's wise for them all to be using the exact same VariableRef? Seems like we should store a creation lambda instead of the raw variable itself. There’s nothing in the compiler/interpreter at the moment that memoizes the evaluation or compilation of a node, so it would always be re-evaluated/re-compiled wherever it occurs, but this change seems harmless. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:240 if (node._newParameters) This pollutes the FuncDef nodes. I'd prefer a side-table. Cool, will do. Tools/WebGPUShadingLanguageRI/EBufferBuilder.js:-33 constructor(program) { super(); this._program = program; } What? 
What's the point of this class if you can never construct it? I don't see any other constructors or static functions. There is a default constructor (equivalent to constructor () { super(); }), and there are no construction sites that actually passed in the program object any more, nor is this._program used anywhere in the class. Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js:51 function createFieldType() { return field.type.visit(new Rewriter()); } function createTypeRef() { return TypeRef.wrap(type); } Do we need these? createFieldType() isn’t necessary (it works fine to continue using field.type) and createTypeRef is literally just used as a utility function 3 times (as you noticed, diff had a bad time with this file — I didn’t write these functions). I’ll get rid of them. At 2018-09-21T06:55:22Z, ews-feeder@webkit.org wrote: Comment on attachment 350175 Patch Rejecting attachment 350175 from commit-queue. tdenney@apple.com does not have committer permissions according to https://trac.webkit.org/browser/trunk/Tools/Scripts/webkitpy/common/config/contributors.json. If you do not have committer rights please read http://webkit.org/coding/contributing.html for instructions on how to use bugzilla flags. If you have committer rights please correct the error in Tools/Scripts/webkitpy/common/config/contributors.json by adding yourself to the file (no review needed). The commit-queue restarts itself every 2 hours. After restart the commit-queue will correctly respect your committer rights. At 2018-09-21T21:02:29Z, tdenney@apple.com wrote: Created attachment 350419 Patch At 2018-09-21T21:46:10Z, commit-queue@webkit.org wrote: Comment on attachment 350421 Patch Clearing flags on attachment: 350421 Committed r236361: https://trac.webkit.org/changeset/236361 At 2018-09-21T21:03:16Z, ews-feeder@webkit.org wrote: Comment on attachment 350419 Patch Rejecting attachment 350419 from commit-queue. tdenney@apple.com does not have committer permissions according to https://trac.webkit.org/browser/trunk/Tools/Scripts/webkitpy/common/config/contributors.json. If you do not have committer rights please read http://webkit.org/coding/contributing.html for instructions on how to use bugzilla flags. If you have committer rights please correct the error in Tools/Scripts/webkitpy/common/config/contributors.json by adding yourself to the file (no review needed). The commit-queue restarts itself every 2 hours. After restart the commit-queue will correctly respect your committer rights. At 2018-09-21T21:07:04Z, tdenney@apple.com wrote: Created attachment 350421 Patch At 2018-09-22T10:03:00Z, mmaxfield@apple.com wrote: *** Bug 189107 has been marked as a duplicate of this bug. ***
gharchive/issue
2018-10-13T22:39:54
2025-04-01T06:38:51.335495
{ "authors": [ "litherum" ], "repo": "gpuweb/WHLSL", "url": "https://github.com/gpuweb/WHLSL/issues/105", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1484435168
chore: upgrade dev dependencies

Description
Upgrades dev dependencies.

Checklist
[x] My code follows the style guidelines of this project
[x] I have performed a self-review of my own changes
[x] I have run yarn lint to make sure my changes pass all linters
[x] I have run yarn test to make sure my changes pass all tests
[x] I have pulled the latest changes from the upstream main branch
[x] I have tested both the react and the CDN versions on local and integration environments
[x] I have added the necessary labels to this PR in case a new release needs to be published after merging into main (e.g. release and patch)

Contribution guidelines
For contribution guidelines, styleguide, and other helpful information please see the CONTRIBUTING.md file in the root of this project.

:rocket: PR was released in v2.15.0 :rocket:
gharchive/pull-request
2022-12-08T11:33:26
2025-04-01T06:38:51.347428
{ "authors": [ "douglaseggleton", "gr4vy-code" ], "repo": "gr4vy/gr4vy-embed", "url": "https://github.com/gr4vy/gr4vy-embed/pull/121", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
257223085
Add test targets for the newer android plugins and gradle versions

ugh https://travis-ci.org/gradle-fury/gradle-fury/builds/274866548
So I did make progress with this, but ran into a whole lot of test failures. Unreal how difficult it is to get everything working consistently across versions.
Several issues, starting with Gradle 3.4:
1. Gradle API change which affects the dependency-check plugin (updated version) https://github.com/jeremylong/dependency-check-gradle/issues/31
2. Gradle API change which affected the maven-support script; due to the API change, dependencies declared in the pom are now no longer listed, causing the validation tests to fail.
Did as much as I can. Gradle API changes break everything.
gharchive/issue
2017-09-13T00:28:03
2025-04-01T06:38:51.370594
{ "authors": [ "spyhunter99" ], "repo": "gradle-fury/gradle-fury", "url": "https://github.com/gradle-fury/gradle-fury/issues/51", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2130243321
Improve documentation

Improve grammar
Improve clarity
Fix small mistakes and word duplications

Thanks very much for the documentation improvements!
@bigdaz you're welcome, thanks for the quick response!
gharchive/pull-request
2024-02-12T14:21:30
2025-04-01T06:38:51.372081
{ "authors": [ "bigdaz", "martinfrancois" ], "repo": "gradle/actions", "url": "https://github.com/gradle/actions/pull/41", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
492249990
Source dependencies from local (file:...) git repo not allowed in --offline mode

Git repositories can be defined as a url or a path.

As a URL:
sourceControl {
    gitRepository("https://github.com/${repoGroup}/${repoName}.git") {
        producesModule("${module}")
    }
}

As a path:
sourceControl {
    gitRepository("../${repoPath}") {
        producesModule("${module}")
    }
}

Expected Behavior
When the --offline parameter is given, I expect URL-based repos to fail, but path-based repos to succeed, since they do not require online functionality.

Current Behavior
When using a path-based repo, I still receive the following:
Could not resolve all artifacts for configuration ':classpath'.
Cannot resolve ${module}:1.0 (branch: dev) from Git repository at file:/Users/${repoPath}/ in offline mode.

Context
Some git repositories are not published online, but should still be usable as a source dependency. More commonly, a machine may not be connected to the internet, but may still have all of the required repositories cloned locally. This is my situation.

Steps to Reproduce (for bugs)
Attempt to assemble a project with a local, path-based sourceDependency with the --offline flag.

See the linked issue. I'd very much like to be able to do gradle offline build for nix packaging purposes. Any chance this could be reopened?
gharchive/issue
2018-10-26T16:48:54
2025-04-01T06:38:51.375672
{ "authors": [ "TLATER", "tculp" ], "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/issues/10588", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1235563215
IdentityTransform fails with FileNotFound after updating to 7.5-rc1

AndroidX Github Build started failing after updating Gradle to 7.5-rc-1 from 7.5-20220421031748+0000.
First failure: https://github.com/androidx/androidx/commit/dc4af6559aeea3e9eb285857b06078bb152a56cf

Expected Behavior
compileReleaseJavaWithJavac task should wait for its dependencies.

Current Behavior
compileReleaseJavaWithJavac fails in IdentityTransform step with a FileNotFound exception. Unfortunately, the file is there after the build so my guess is that it is not waiting for its dependencies properly.

Execution failed for task ':lifecycle:lifecycle-livedata-core:compileReleaseJavaWithJavac'.
2022-05-13T11:22:48.790-0700 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Could not resolve all files for configuration ':lifecycle:lifecycle-livedata-core:releaseCompileClasspath'.
2022-05-13T11:22:48.790-0700 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Failed to transform lifecycle-common.jar (project :lifecycle:lifecycle-common) to match attributes {artifactType=android-classes-jar, org.gradle.category=library, org.gradle.dependency.bundling=external, org.gradle.jvm.version=8, org.gradle.libraryelements=jar, org.gradle.usage=java-api}.
2022-05-13T11:22:48.790-0700 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Execution failed for IdentityTransform: /Users/yboyar/src/androidx/out/activity-playground/activity-playground/lifecycle/lifecycle-common/build/libs/lifecycle-common-2.6.0-alpha01.jar.
2022-05-13T11:22:48.790-0700 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > File/directory does not exist: /Users/yboyar/src/androidx/out/activity-playground/activity-playground/lifecycle/lifecycle-common/build/libs/lifecycle-common-2.6.0-alpha01.jar

Context
Worked in 7.5-20220421031748+0000 (and before), started failing with 7.5-rc-1.

Steps to Reproduce
1. checkout AndroidX Github Repo (might also need the one time setup instructions)
2. cd <checkout-root>/activity
3. ./gradlew --stop
4. rm -rf ~/.gradle // important to reproduce
5. ./gradlew buildOnServer --no-build-cache --no-configuration-cache

If you re-run bOS it will succeed. You can also validate that the missing file is there after the first failure.

Your Environment
Build scan URL: https://ge.androidx.dev/s/sg2yfkiqoxpti/failure#1

also tried 7.5-20220511195339+0000. didn't work.
Thanks @yigit. We'll try to reproduce this. Could you give 7.5-20220501001223+0000 a try?
I'll try it. meanwhile, I triggered a bunch of versions on CI. Looks like 20220427150934 (build) is the last good one and 20220428002320 is the first failure (build)
@big-guy, triggered a new build for 7.5-20220501001223+0000: https://github.com/androidx/androidx/actions/runs/2321565000
That one failed too with the same error: https://github.com/androidx/androidx/runs/6429070066?check_suite_focus=true
All the things seem to point to https://github.com/gradle/gradle/commit/0654460d8de07edb1358bfb774c760e47a55cf71
I was able to reproduce. Instead of purging the whole Gradle user home (~/.gradle), I just did:
$ rm -rf ../out/activity-playground/activity-playground/lifecycle/lifecycle-common/build/libs
from the activity subproject. This was enough to raise the error. I'll continue investigating.
Hi @yigit and @liutikas, sorry for the delay in responding. We've been looking at the execution graph optimizations which exposed this issue.
The reality is the behavior which analyzed the inputs to transformations has been incorrect since at least Gradle 7.4, but something about the androidx setup combined with our recent optimizations has teased out this bug. Unfortunately the proper long-term fix is too disruptive to add to 7.5-rc-2. Instead I've provided a suggested temporary workaround here. This should allow you to test with Gradle 7.5-rc-1, and we will make changes in either 7.5.1 or 7.6 which will remove the need for the workaround.
@bamboo I think Adam is looking/working on this. Do you want us to reassign it to him?
This issue is blocked by the root cause, which is being investigated as #20975.
This is fixed now via https://github.com/gradle/gradle/pull/21292
Unfortunately, we are still seeing this problem with gradle 8: https://ge.androidx.dev/s/jpkurj73xajqw
I can confirm we are getting exactly the same problem with Gradle 8.2 and Gradle 8.2.1, but only if we enable the configuration cache. We detected it while executing sonarqube, but the error happens during kaptGenerateStubsDebugKotlin. Interestingly enough, if we run kaptGenerateStubsDebugKotlin first and then sonarqube in a separate execution, it finishes successfully. This happens for us in our CI (on Linux AMIs, with 16 cores) but not locally (on macOS, with 8-10 cores); not sure if the extra available workers or the OS might be connected. This is a project of around 140 modules and it always fails in one of the first ones (required by another 14 modules).
Hey @rolgalan, we are facing exactly the same issue. Were you able to fix this?
Hi @nuhkoca, is that still happening with Gradle 8.9?
Hi @bamboo, yes, even with Gradle 8.10. I am getting this exception:
Execution failed for task ':app:kaptGenerateStubsDebugKotlin'.
Error while evaluating property 'friendPathsSet$kotlin_gradle_plugin_common' of task ':app:kaptGenerateStubsDebugKotlin'.
Could not resolve all files for configuration ':app:debugCompileClasspath'.
Failed to transform annotation.jar (project :core:annotation) to match attributes {artifactType=android-classes-jar, org.gradle.category=library, org.gradle.dependency.bundling=external, org.gradle.jvm.environment=standard-jvm, org.gradle.jvm.version=17, org.gradle.libraryelements=jar, org.gradle.usage=java-api, org.jetbrains.kotlin.platform.type=jvm}.
Execution failed for IdentityTransform: /home/runner/work/path/to/project/core/annotation/build/libs/annotation.jar.
File/directory does not exist: /home/runner/work/path/to/project/core/annotation/build/libs/annotation.jar
We have the same scenario as @rolgalan: this exception only occurs when executing the sonarqube task, regardless of configuration cache. We have only one JVM module in our Android project and this started after adding that module, though.
Hey @bamboo, we figured out that this issue is actually originating from the Sonar task. Nothing to do with Gradle. For now, we converted the JVM library to a Kotlin one and the issue got resolved. Thanks for helping though!
gharchive/issue
2022-05-13T18:35:12
2025-04-01T06:38:51.395984
{ "authors": [ "DPUkyle", "adammurdoch", "bamboo", "big-guy", "liutikas", "nuhkoca", "rolgalan", "yigit" ], "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/issues/20778", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
362959628
Renaming unrelated task makes test task not up-to-date

Expected Behavior
Running tests is expensive, so they should be up-to-date when unrelated changes are done in build scripts.

Current Behavior
Tests task is marked as not up-to-date.

Context
I am running tests using the Pact library, which produces contract files. I need to clean the output directory ($buildDir/foo in the simplified project below) before running tests, as the library has an unpredictable merging mechanism.

Steps to Reproduce (for bugs)
1. Use https://github.com/pkubowicz/gradle-tests-uptodate
2. Run ./gradlew barJar --console=verbose - tests are run
3. Repeat ./gradlew barJar --console=verbose - everything up-to-date
4. Edit build.gradle, changing the baz123 task name to baz1234 - note that barJar is not related to this task in any way
5. Run ./gradlew barJar --console=verbose - tests are re-run, although nothing related to barJar has changed

Your Environment
Gradle 4.10.2, happens also on 4.8
Build scan URL: https://scans.gradle.com/s/em3pkt2jw7usg

This is working as designed, but I understand it's a little confusing. Using doFirst or doLast in a build script adds the build script as an input to the task because that's the implementation of the closure. This means any change to the build script affects the doFirst or doLast actions and that affects up-to-dateness. That's the source of "Task ':test' has additional actions that have changed". e.g., if you were doing this:

def outputToDelete = file("$buildDir/foo")
test {
    doFirst {
        delete outputToDelete
    }
}

It's not enough to track the plain text contents of the doFirst block, we need to track more. Like a task's class implementation, we track the classloader, which tracks all of the classes/jars that are part of it. For build scripts, this also includes the build script file itself.

I think you have a few options:
1. If you're using buildSrc, move the bit of logic that configures the test task into a plugin in buildSrc. This moves/limits the problem to just changes to buildSrc, which may be less frequent than changes to the build script (a sketch of this follows below).
2. You could create a pact plugin and publish it somewhere. This could be an evolution from 1 above. This would behave like you would expect and the test task would only be out of date when the pact plugin changed (or any of the usual Gradle things).
3. Delete these files in some other way. Maybe there's a way to configure pact to delete these files automagically?

We had talked about deleting outputs automatically on the Gradle side if we knew the task wasn't incremental, but I don't think we ever did that, did we @wolfs?
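A rough sketch of option 1 under today's Gradle API — moving the cleanup into a buildSrc plugin so the build script body is no longer an input to the test task. The class name and the "foo" directory are illustrative, taken from the simplified project above rather than from a real plugin:

// buildSrc/src/main/kotlin/PactCleanupPlugin.kt (hypothetical)
import org.gradle.api.Plugin
import org.gradle.api.Project

class PactCleanupPlugin : Plugin<Project> {
    override fun apply(project: Project) {
        val outputToDelete = project.layout.buildDirectory.dir("foo")
        project.tasks.named("test") { task ->
            // The action's implementation now lives in buildSrc, so the test
            // task only becomes out of date when buildSrc itself changes.
            task.doFirst {
                project.delete(outputToDelete)
            }
        }
    }
}

The build script then only needs to apply the plugin (given a matching plugin registration in buildSrc), leaving no closure body in the script itself.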
gharchive/issue
2018-09-23T17:12:42
2025-04-01T06:38:51.404687
{ "authors": [ "big-guy", "pkubowicz" ], "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/issues/6864", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
193526414
Extending the NativeComponentSpec to build partially linked objects

Gradle by default only allows building executables or libraries (shared and static) for C source files. For my build system, I would like to build a partially linked object from the C source set. For this, I would like to extend the NativeComponentSpec.

Expected Behavior
So, I would like to have two types of behaviours:
1. To build a partially linked object from the C sources. The generated artifact should have a .o ending irrespective of OSes.
2. To build a NativeExecutableSpec from the partially linked objects. So, it should just have a link step where the inputs to the linker will be partially linked objects.

Current Behavior
The current behavior does not support partial linking or providing an "object source set" to the linker. Hence, there is a need to extend the model.

Context
I have a main project with multiple sub-projects. My idea is to build the main project by linking the partially linked objects of the various sub-projects. The reason for this is that we do generic embedded software development, and one or more of our sub-projects would be re-used in other main projects. So, we would just like to create partially linked objects which can later on be linked with other projects in a second link phase.

Steps to Reproduce (for bugs)

Your Environment
Build scan URL:
Git is the VCS. We use sparc-rtems-gcc and sparc-rtems-ld for our compilation and linking (version is 3.x). The sub-projects are mostly decoupled. And I am currently using a Gradle multi-project build structure.

I am just a beginner with Gradle. I have just heard of the concept of build scans, so I will try to update my issue as soon as I have learnt to implement it in my current project.

Thanks @ImGanesh for this feature request. For more information about build scans, refer to the getting started documentation here.
gharchive/issue
2016-12-05T15:22:42
2025-04-01T06:38:51.409731
{ "authors": [ "ImGanesh", "lacasseio" ], "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/issues/969", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
750002565
Use a named object container for the catalogs

Context
Suggestion from @big-guy. For consistency with other DSLs in Gradle, use a named object container to declare catalogs. For the Groovy DSL it's actually a bit nicer. For the Kotlin DSL unfortunately it makes things a bit more verbose.

@bot-gradle test this
OK, I've already triggered ReadyForMerge build for you.
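For reference, a sketch of the named-container style in a settings file as it looks under today's version-catalog API — the alias and coordinates are made up:

// settings.gradle.kts - illustrative catalog declaration
dependencyResolutionManagement {
    versionCatalogs {
        // Each catalog is an entry in a named object container,
        // consistent with other Gradle DSL containers.
        create("libs") {
            library("groovy-core", "org.codehaus.groovy:groovy:3.0.5")
        }
    }
}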
gharchive/pull-request
2020-11-24T20:04:27
2025-04-01T06:38:51.412159
{ "authors": [ "bot-gradle", "melix" ], "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/15301", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
960481054
Backport: Don't lose task dependencies when zipping against provider with no dependencies Backport #17930 Your PR is queued. See the queue page for details. OK, I've already triggered a build for you.
gharchive/pull-request
2021-08-04T13:47:33
2025-04-01T06:38:51.413866
{ "authors": [ "bamboo", "bot-gradle" ], "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/17944", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1090017650
Ignore always failing tests This either indicates a bug or a behavior change introduced in Gradle 7.3 @bot-gradle test and merge OK, I've already triggered a build for you.
gharchive/pull-request
2021-12-28T16:55:24
2025-04-01T06:38:51.415124
{ "authors": [ "bot-gradle", "ljacomet" ], "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/19443", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1110794583
Introduce Task.doNotCacheConfigurationIf

For optimistic incremental adoption of the configuration cache. The new API supports the scenario where one would like to use the configuration cache whenever it just works via the configuration-cache-problems=warn setting and have it automatically disabled whenever tasks for which configuration caching has been proven problematic are scheduled.

@bot-gradle test ACC
Sorry some internal error occurs, please contact the administrator @blindpirate
@bot-gradle test this
OK, I've already triggered the following builds for you: PullRequestFeedback build
@bot-gradle test ACC
Sorry some internal error occurs, please contact the administrator @blindpirate
@bot-gradle help
Currently, the following commands are supported:

@bot-gradle test <BuildTrigger1> <BuildTrigger2> ... <BuildTriggerN>
A trigger is a special build for this PR on TeamCity, common triggers are:
SanityCheck/CompileAll/QuickFeedbackLinux/QuickFeedback/PullRequestFeedback/ReadyForNightly/ReadyForRelease
Shortcuts: SC/CA/QFL/QF/PRF/RFN/RFR
Specific builds:
PT: PerformanceTest, all performance tests for Ready For Nightly stage.
APT: AllPerformanceTest, all performance tests, including slow performance tests.
AST: AllSmokeTestsPullRequestFeedback
AFT: AllFunctionalTestsPullRequestFeedback
ASB: AllSpecificBuildsPullRequestFeedback
ACC: AllConfigCacheTestsPullRequestFeedback
ACT: AllCrossVersionTestsReadyForNightly
AFTN: AllFunctionalTestsReadyForNightly
ACTR: AllCrossVersionTestsReadyForRelease
AFTR: AllFunctionalTestsReadyForRelease

@bot-gradle test and merge queues this PR for testing and merges if all tests pass by:
1. Creating a merge commit from your PR branch HEAD and the target branch
2. Running a ReadyForNightly build against the merge commit
3. When it passes, fast-forward the target branch to this merge commit (i.e. merge the PR)
The merge commit is called a pre-tested commit, which means that it fully tests the integration of your branch HEAD and latest master, instead of only testing your branch HEAD.

@bot-gradle cancel: cancel a running pre-tested commit build or remove it from queue
@bot-gradle clean: clear the conversation history
@bot-gradle help: display this message

To run a command, simply submit a comment. For detailed instructions see here.

@bot-gradle test AllConfigCacheTestsPullRequestFeedback
@bot-gradle clean
@bot-gradle test ACC
Sorry some internal error occurs, please contact the administrator @blindpirate
Sorry some internal error occurs, please contact the administrator @blindpirate
We might consider this again in the future but it's more likely we introduce something at the project level.
@bot-gradle test ACC
OK, I've already triggered the following builds for you: AllConfigCacheTestsPullRequestFeedback build
gharchive/pull-request
2022-01-21T19:03:18
2025-04-01T06:38:51.430070
{ "authors": [ "bamboo", "bot-gradle" ], "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/19661", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1154531753
Merge daemon defaults with user-supplied jvmargs

Fixes #19750

Context

Previously, setting any value for org.gradle.jvmargs caused all default settings to be lost. This often resulted in important defaults like -XX:MaxMetaspaceSize being omitted when a user attempted to provide more memory to a build process.

With this change, default jvmargs will be retained unless specifically overridden by a user-supplied argument. One exception is that setting either -Xmx or -Xms will cause the default heap size settings to be omitted, preventing user-supplied values from conflicting with default values (like having a min heap larger than max heap).

Contributor Checklist
[ ] Review Contribution Guidelines
[ ] Make sure that all commits are signed off to indicate that you agree to the terms of Developer Certificate of Origin.
[ ] Make sure all contributed code can be distributed under the terms of the Apache License 2.0, e.g. the code was written by yourself or the original code is licensed under a license compatible to Apache License 2.0.
[ ] Check "Allow edit from maintainers" option in pull request so that additional changes can be pushed by Gradle team
[ ] Provide integration tests (under <subproject>/src/integTest) to verify changes from a user perspective
[ ] Provide unit tests (under <subproject>/src/test) to verify logic
[ ] Update User Guide, DSL Reference, and Javadoc for public-facing changes
[ ] Ensure that tests pass sanity check: ./gradlew sanityCheck
[ ] Ensure that tests pass locally: ./gradlew <changed-subproject>:quickTest

Gradle Core Team Checklist
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation
[ ] Recognize contributor in release notes

@bot-gradle test this
OK, I've already triggered the following builds for you: PullRequestFeedback build
@bot-gradle test this
OK, I've already triggered the following builds for you: PullRequestFeedback build
@bot-gradle test this
OK, I've already triggered the following builds for you: PullRequestFeedback build
@bot-gradle test this
@bot-gradle test this
@bot-gradle test this
OK, I've already triggered the following builds for you: PullRequestFeedback build
OK, I've already triggered the following builds for you: PullRequestFeedback build

@octylFractal Thanks for the feedback. Unfortunately this PR is hitting some hard-to-understand test failures that will require further investigation before this change could be merged. I'm not sure if/when I'll find time to do that. In the meantime, would it be helpful if I close this PR (or convert to draft)?

I'm looking at the test failures now, so don't worry about that.
@bot-gradle test this
OK, I've already triggered the following builds for you: PullRequestFeedback build

@bigdaz I looked into the "hard-to-understand test failures", and the underlying issue is that we used to let anything that set org.gradle.jvmargs run with unlimited Metaspace if not specified, and now we limit it to 256m. After some research, @DPUkyle and I came to the conclusion that it would be a good idea to increase this to perhaps 1G by default, or remove it entirely. What do you think?

We should not remove the setting altogether. The entire reason for this PR is that users are setting org.gradle.jvmargs without specifying MaxMetaspaceSize, and the daemon process is consuming more and more memory until the process dies. See #19750 for details. Providing a higher default value might make sense.
Or since these failures seem specific to the Kotlin compiler daemon (I think), perhaps the best fix is to ensure an appropriate MaxMetaspaceSize for this process.

@eskatos I see you assigned this back to me, but I'm not sure what action can/should be taken. The current behaviour is that if you don't set any jvm args, then we set MaxMetaspaceSize=256m. If a user sets a completely unrelated jvm arg, then we don't provide any default value for MaxMetaspaceSize. Perhaps this makes sense: we provide a default set of jvm args and you can choose to override the entire set, but not one value in the set. But I've seen a number of users struggle with this: they give their build more memory (say -Xmx1024m), and their build starts to run out of memory! It's a slightly different out-of-memory error, but users don't really know how to fix this.

There are a few options to address this:
1. Do nothing, but perhaps improve the documentation so it's clear that setting -Xmx does more than just giving more memory.
2. When a user sets just one of the values that we provide a default for, we only change that value, and leave the others in place. (That's what my PR tries to do.)
3. Provide a different (or higher) default value for MaxMetaspaceSize. We should probably document this.
4. Provide a better model for specifying the memory settings for a build, allowing users to set values independently.
5. ???

I could help out with any of 1-3. I think 4 would require more work and should be tackled by the BT team. WDYT?

I assigned it back to you while triaging unassigned PRs and adding assignees to all team members/authors. I think this PR is a reasonable improvement and will address the current confusion. But as shown by the failing tests, this may break builds. I'm not sure if raising the default metaspace size to 1g would be a good move. Lots of builds won't need that much. @octylFractal, @big-guy, would the approaching 8.0 be a good time to address this? Can you take over making a decision on this?

@bot-gradle test ReadyForNightly
OK, I've already triggered the following builds for you: ReadyForNightly build
@bot-gradle test this
OK, I've already triggered the following builds for you: PullRequestFeedback build
@bot-gradle test and merge
OK, I've already triggered a build for you.
Pre-tested commit build failed.

The performance test asserts there's only one daemon involved in multiple iterations. However, this PR seems to change something in daemon compatibility, so there are multiple daemons started. The performance test failure is reproducible. We're not able to look at this until 2025
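Below is a minimal Python sketch of the merge semantics this PR describes. It is illustrative only: the option names, the 256m metaspace default, and the merge_jvm_args helper are assumptions for the example, not Gradle's actual implementation.

```python
# Illustrative Python sketch only; not Gradle's implementation. The defaults
# and option names below are assumptions for the sake of the example.
DAEMON_DEFAULTS = {
    '-Xmx': '-Xmx512m',
    '-Xms': '-Xms256m',
    '-XX:MaxMetaspaceSize': '-XX:MaxMetaspaceSize=256m',
}

def merge_jvm_args(user_args):
    """Keep daemon defaults unless the user overrides them; drop both heap
    defaults as soon as the user sets either -Xmx or -Xms."""
    merged = dict(DAEMON_DEFAULTS)
    if any(a.startswith(('-Xmx', '-Xms')) for a in user_args):
        merged.pop('-Xmx', None)
        merged.pop('-Xms', None)
    for arg in user_args:
        for prefix in list(merged):
            if arg.startswith(prefix):
                del merged[prefix]  # the user-supplied value wins
    return list(merged.values()) + user_args

# merge_jvm_args(['-Xmx1024m']) keeps -XX:MaxMetaspaceSize=256m but drops
# both default heap settings, which is the behaviour requested in #19750.
```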
gharchive/pull-request
2022-02-28T20:45:09
2025-04-01T06:38:51.454740
{ "authors": [ "big-guy", "bigdaz", "blindpirate", "bot-gradle", "eskatos", "jvandort", "octylFractal" ], "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/20054", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1180832322
Publish 7.4.2-20220325135218+0000

@bot-gradle test and merge
gharchive/pull-request
2022-03-25T14:02:02
2025-04-01T06:38:51.456360
{ "authors": [ "bot-gradle" ], "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/20285", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2081682628
Run native related builds on Intel Macs only

Since 2024 there will be only 2 Intel Mac build agents. Because only native builds are architecture-dependent, this PR only executes native subprojects on Intel Macs.

I have squashed this PR as 1f49fed23a4ec031d66d628e833546019d84c605
Can't cherry-pick due to merge conflict. You have to cherry-pick by yourself.
Sorry, some internal error occurred, please contact the administrator @blindpirate
gharchive/pull-request
2024-01-15T10:07:39
2025-04-01T06:38:51.458363
{ "authors": [ "blindpirate", "bot-gradle" ], "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/27692", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1567574930
Fix port name not taking effect and nodePort changes failing on update

When a Grafana instance is created, the port name configured for the service does not take effect. Changing the Grafana nodePort is also not effective.

Description

```yaml
apiVersion: integreatly.org/v1alpha1
kind: Grafana
metadata:
  name: grafana
  namespace: monitoring
spec:
  client:
    preferService: true
  ingress:
    enabled: False
  config:
    log:
      mode: "console"
      level: "error"
    security:
      admin_user: "root"
      admin_password: "12345"
    log.frontend:
      enabled: true
    auth:
      disable_login_form: False
      disable_signout_menu: False
    auth.anonymous:
      enabled: True
  service:
    name: "grafana-service"
    labels:
      app: "grafana"
      type: "grafana-service"
    ports:
      - { nodePort: 30004, port: 3000, protocol: TCP, name: web }
    type: NodePort
  dashboardLabelSelector:
    - matchExpressions:
        - { key: app, operator: In, values: [grafana] }
  resources:
    # Optionally specify container resources
    limits:
      cpu: 800m
      memory: 800Mi
    requests:
      cpu: 100m
      memory: 100Mi
```

Bug 1: When the spec.service.ports name is web, the deployed port name is still grafana. It's restricted in the program.
Bug 2: If the first grafana deployment nodePort is 30001, changing the CR nodePort to 30002 didn't take effect.

Relevant issues/tickets

Type of change
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)

Checklist
[ ] This change requires a documentation update
[x] I have added tests that prove my fix is effective or that my feature works
[ ] I have added a test case that will be used to verify my changes
[ ] Verified independently on a cluster by reviewer

Verification steps

@gitgaoxiang can you please rebase?

Rebase already done.

@pb82 Done.

Don't know why the e2e fails. For some reason it has returned

```json
[
  { "id": 1, "uid": "He5NIuAVk", "title": "grafana-operator-system", "uri": "db/grafana-operator-system", "url": "/dashboards/f/He5NIuAVk/grafana-operator-system", "slug": "", "type": "dash-folder", "tags": [], "isStarred": false, "sortMeta": 0 },
  { "id": 3, "uid": "0ed390cdb20229700c1741b72138163ce2214445", "title": "Node' Exporter 'Full", "uri": "db/node-exporter-full", "url": "/d/0ed390cdb20229700c1741b72138163ce2214445/node-exporter-full", "slug": "", "type": "dash-db", "tags": [ "linux" ], "isStarred": false, "folderId": 1, "folderUid": "He5NIuAVk", "folderTitle": "grafana-operator-system", "folderUrl": "/dashboards/f/He5NIuAVk/grafana-operator-system", "sortMeta": 0 },
  { "id": 2, "uid": "2150edaf610ab34b8f1050e5bcd5d4ca5903e1c2", "title": "Simple' 'Dashboard", "uri": "db/simple-dashboard", "url": "/d/2150edaf610ab34b8f1050e5bcd5d4ca5903e1c2/simple-dashboard", "slug": "", "type": "dash-db", "tags": [], "isStarred": false, "folderId": 1, "folderUid": "He5NIuAVk", "folderTitle": "grafana-operator-system", "folderUrl": "/dashboards/f/He5NIuAVk/grafana-operator-system", "sortMeta": 0 }
]
```

This hasn't happened before and it didn't on another PR that I just ran, so I think this should be okay. I will merge the PR and if there is some issue after I will look into it.
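A rough sketch of the comparison logic such a fix needs. The real operator is written in Go, so this Python is purely illustrative, and the field set is an assumption about which port attributes must be reconciled:

```python
# Illustrative only: the grafana-operator is written in Go. The idea is that
# the desired ports from the CR must be compared field by field (including
# name and nodePort) with the live Service, and the Service updated whenever
# any field differs. The field set below is an assumption.
def service_ports_differ(desired_ports, actual_ports):
    fields = ('name', 'port', 'protocol', 'nodePort')
    if len(desired_ports) != len(actual_ports):
        return True
    return any(d.get(f) != a.get(f)
               for d, a in zip(desired_ports, actual_ports)
               for f in fields)

# e.g. changing nodePort 30001 -> 30002 in the CR should make this return
# True and trigger an update of the Service, fixing bug 2 above.
```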
gharchive/pull-request
2023-02-02T08:37:10
2025-04-01T06:38:51.467416
{ "authors": [ "NissesSenap", "gitgaoxiang", "pb82" ], "repo": "grafana-operator/grafana-operator", "url": "https://github.com/grafana-operator/grafana-operator/pull/887", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1765967075
Property Value Aggregate Query Has Double the Expected Number of Data Points

The following description was taken from 2 comments in https://github.com/grafana/iot-sitewise-datasource/issues/160

I tested the new version (1.9.2) with Grafana version 10.0.0. With my current setup, I get the following response: by downloading the data and looking at it, there are duplicates of most of the values. Without knowing, I suspect it is the paginated response that is duplicated. Using Boto3 - get_asset_property_aggregates I get the expected value of 600 data points.

As the screenshot shows above, the expression fails when using the value aggregates (top right corner), only showing 1 data point. By simply taking the expression query away, the result would be the same as the top left corner (as expected). Similarly, the expression does not get all the values for value history either. Again, I suspect that this is something with pagination/next token, not being able to show all the results. Please let me know if anything is unclear or you require more information.

Originally posted by @egheie in https://github.com/grafana/iot-sitewise-datasource/issues/160#issuecomment-1592591464

Hi @kevinwcyu, no problem. Here are the screenshots, starting bottom left, going clockwise. Note: every quadrant will have the same query, just the panel is different (either Time series, or Stat with Calculation = Count). When doing this, I saw that the bottom left Stat was set to Get property value aggregates and not Get property value history. See the new screenshot at the bottom, sorry for that.

Screenshots of query editor: bottom left, top left, top right, bottom right. Updated screenshot.

Originally posted by @egheie in https://github.com/grafana/iot-sitewise-datasource/issues/160#issuecomment-1594699586

Here's a Python script to fetch a count of the data points to compare with the results from the dashboard.
```python
import boto3
from datetime import datetime

client = boto3.client('iotsitewise')

# get asset property values history
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iotsitewise/client/get_asset_property_value_history.html
# may need a paginator if there are more than 20000 data points
response = client.get_asset_property_value_history(
    assetId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',  # <--- replace with actual asset id
    propertyId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',  # <--- replace with actual property id
    startDate=datetime.fromisoformat('2023-06-13T07:00:00Z'),  # <--- set to corresponding from time from the query
    endDate=datetime.fromisoformat('2023-06-21T06:59:59Z'),  # <--- set to corresponding to time from the query
    timeOrdering='ASCENDING',
    maxResults=20000
)
print(len(response['assetPropertyValueHistory']))

# get asset property aggregates
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iotsitewise/client/get_asset_property_aggregates.html
agg_paginator = client.get_paginator('get_asset_property_aggregates')
agg_iterator = agg_paginator.paginate(
    assetId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',  # <--- replace with actual asset id
    propertyId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',  # <--- replace with actual property id
    aggregateTypes=['AVERAGE'],
    resolution='1m',
    startDate=datetime.fromisoformat('2023-06-13T07:00:00Z'),  # <--- set to corresponding from time from the query
    endDate=datetime.fromisoformat('2023-06-21T06:59:59Z'),  # <--- set to corresponding to time from the query
    timeOrdering='ASCENDING',
    maxResults=250,
)

agg_count = 0
for p in agg_iterator:
    agg_count += len(p['aggregatedValues'])
print(agg_count)
```
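As a hypothetical follow-up to the script above, one could also collect the aggregated values and count distinct timestamps, to check whether the extra points are exact repeats, as a pagination/next-token bug would produce. This sketch reuses the same boto3 paginator; the asset/property IDs are the placeholders from above, and parsing the trailing 'Z' in fromisoformat assumes Python 3.11+.

```python
# Hypothetical follow-up check, not from the issue itself: count distinct
# timestamps among the aggregated values to confirm duplicates.
values = []
for page in agg_paginator.paginate(
        assetId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',
        propertyId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',
        aggregateTypes=['AVERAGE'],
        resolution='1m',
        startDate=datetime.fromisoformat('2023-06-13T07:00:00Z'),
        endDate=datetime.fromisoformat('2023-06-21T06:59:59Z'),
        timeOrdering='ASCENDING',
        maxResults=250):
    values.extend(page['aggregatedValues'])

unique_timestamps = {v['timestamp'] for v in values}
print(len(values), 'values,', len(unique_timestamps), 'unique timestamps')
```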
gharchive/issue
2023-06-20T18:47:39
2025-04-01T06:38:52.335593
{ "authors": [ "kevinwcyu" ], "repo": "grafana/iot-sitewise-datasource", "url": "https://github.com/grafana/iot-sitewise-datasource/issues/200", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2572420634
Allow tuning of .Values.configValidator.pullPolicy

It would be really nice to be able to tune the pullPolicy of the configValidator pod. Just adding this to the values, {{- with .Values.configValidator.nodeSelector }}, and tweaking the template validate-configuration.yaml.

This feature will be going away in v2, so I'd say either turn it off with configValidator.enabled=false. Otherwise, if you want to put together a PR, I'd be happy to take a look and merge it in.

It does not seem worth it if it's going away. Is there an ETA for v2? Thanks

Aiming for a release probably near the end of November, but subject to change.
gharchive/issue
2024-10-08T08:04:15
2025-04-01T06:38:52.342138
{ "authors": [ "josemrs", "petewall" ], "repo": "grafana/k8s-monitoring-helm", "url": "https://github.com/grafana/k8s-monitoring-helm/issues/773", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2432198828
pprof panic on recent toolchains

This simple program panics on recent toolchains:

```rust
use pyroscope::PyroscopeAgent;
use pyroscope_pprofrs::{pprof_backend, PprofConfig};

fn main() {
    let pprof_config = PprofConfig::new().sample_rate(100);
    let pprof_backend = pprof_backend(pprof_config);
    let pprof_agent = PyroscopeAgent::builder("https://asd.net", "qwe")
        .basic_auth("xxx", "xxxx")
        .backend(pprof_backend)
        .build().unwrap();
    let running_agent = pprof_agent.start().unwrap();
    let (add_tag, _) = running_agent.tag_wrapper();
    let _ = add_tag("connections".to_string(), 10.to_string());
    let _ = add_tag("watchers".to_string(), 10.to_string());
    running_agent.stop().unwrap().shutdown();
}
```

panic:

```
/home/korniltsev/.cargo/bin/cargo run --color=always --package pg --bin pg
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.06s
     Running `target/debug/pg`
thread 'main' panicked at library/core/src/panicking.rs:221:5:
unsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed `isize::MAX`
stack backtrace:
   0: rust_begin_unwind at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/std/src/panicking.rs:661:5
   1: core::panicking::panic_nounwind_fmt::runtime at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/panicking.rs:112:18
   2: core::panicking::panic_nounwind_fmt at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/panicking.rs:122:5
   3: core::panicking::panic_nounwind at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/panicking.rs:221:5
   4: core::slice::raw::from_raw_parts::precondition_check at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/ub_checks.rs:68:21
   5: core::slice::raw::from_raw_parts at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/ub_checks.rs:75:17
   6: <pprof::collector::TempFdArrayIterator<T> as core::iter::traits::iterator::Iterator>::next at /home/korniltsev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pprof-0.12.1/src/collector.rs:225:26
   7: core::iter::traits::iterator::Iterator::fold at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/iter/traits/iterator.rs:2587:29
   8: <core::iter::adapters::chain::Chain<A,B> as core::iter::traits::iterator::Iterator>::fold at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/iter/adapters/chain.rs:126:19
   9: core::iter::traits::iterator::Iterator::for_each at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/iter/traits/iterator.rs:818:9
  10: pprof::report::ReportBuilder::build at /home/korniltsev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pprof-0.12.1/src/report.rs:110:17
  11: pyroscope_pprofrs::Pprof::dump_report at /home/korniltsev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyroscope_pprofrs-0.2.7/src/lib.rs:202:22
  12: <pyroscope_pprofrs::Pprof as pyroscope::backend::backend::Backend>::add_rule at /home/korniltsev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyroscope_pprofrs-0.2.7/src/lib.rs:180:13
  13: pyroscope::pyroscope::PyroscopeAgent<pyroscope::pyroscope::PyroscopeAgentRunning>::tag_wrapper::{{closure}} at /home/korniltsev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyroscope-0.5.7/src/pyroscope.rs:776:17
  14: pg::main at ./src/main.rs:18:13
  15: core::ops::function::FnOnce::call_once at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread caused non-unwinding panic. aborting.

Process finished with exit code 134 (interrupted by signal 6:SIGABRT)
```

Looks like it's a problem in the pprof-rs crate, which won't be fixed anytime soon as it looks abandoned (the last update was almost a year ago). As I understood from googling, the problem has always been there, but started panicking only recently (at Rust 1.78.0). So, downgrading may be a workaround, maybe? https://github.com/tikv/pprof-rs/issues/232

I receive the same error on M1 with pyroscope for Rust on macOS M1:

```
thread '<unnamed>' panicked at library/core/src/panicking.rs:219:5:
unsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed `isize::MAX`
stack backtrace:
   0: rust_begin_unwind at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:652:5
   1: core::panicking::panic_nounwind_fmt::runtime at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/panicking.rs:110:18
   2: core::panicking::panic_nounwind_fmt at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/panicking.rs:120:5
   3: core::panicking::panic_nounwind at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/panicking.rs:219:5
   4: core::slice::raw::from_raw_parts::precondition_check at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/ub_checks.rs:68:21
   5: core::slice::raw::from_raw_parts at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/ub_checks.rs:75:17
   6: pprof::addr_validate::validate at /Users/deanhunter/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pprof-0.12.1/src/addr_validate.rs:93:28
   7: <pprof::backtrace::frame_pointer::Trace as pprof::backtrace::Trace>::trace at /Users/deanhunter/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pprof-0.12.1/src/backtrace/frame_pointer.rs:114:17
   8: perf_signal_handler at /Users/deanhunter/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pprof-0.12.1/src/profiler.rs:354:13
   9: ___simple_bprintf
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread caused non-unwinding panic. aborting.
```
gharchive/issue
2024-07-26T13:03:38
2025-04-01T06:38:52.399852
{ "authors": [ "DeanHnter", "Yerkwell", "korniltsev" ], "repo": "grafana/pyroscope-rs", "url": "https://github.com/grafana/pyroscope-rs/issues/174", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
527909912
Error loading: myorgid-simple-panel

I read the Grafana documentation and started with this example, then worked my way towards integrating the funnel part, but I can't seem to get it working. I restarted the server; when I tried opening this panel I got an error saying "Error loading: myorgid-simple-panel". My Grafana version is 6.4.4, Node.js version is 12.13.1. How do I solve this? My development environment is Windows x64.

Hello, try npm install before yarn build.

I have the same error. This is what I found in the browser console:

```
backend.js:6 Error loading panel plugin: myorgid-simple-panel TypeError: r.PanelPlugin is not a constructor
    at Module.eval (module.js:1)
    at n (module.js:1)
    at eval (module.js:1)
    at eval (module.js:1)
    at i (system.js:4)
    at system.js:4
    at system.js:4
    at O (system.js:4)
    at k (system.js:4)
    at system.js:4
```

Ok, I think you have a problem because you don't have the latest version. Therefore git pull and try again.

Hi Samuel, I've pulled the latest version and it's still not working. Still the same Grafana error, and the same console error.

Have you tried building and running it on Grafana?

Yes, but I'm on Linux. You have the same mistake here: https://github.com/grafana/grafana/issues/20338 And the last simple-react-panel pull repairs that: https://github.com/grafana/simple-react-panel/commit/ca7f48c685aa94c40eeabf63efbdf5eebe6baa14 (in src/SimplePanel.tsx and src/SimpleEditor.tsx). Did you install Grafana from the sources?

I'm on macOS. All my files look exactly like how it is in that repair. The "fixes" that the person in the thread you posted made were already implemented by me in my files. I installed Grafana through Homebrew.

I just got it working. PanelPlugin should be imported from @grafana/ui and not @grafana/data. My old module.ts looked like:

```ts
import { PanelPlugin } from '@grafana/data';
import { SimpleOptions, defaults } from './types';
import { SimplePanel } from './SimplePanel';
import { SimpleEditor } from './SimpleEditor';

export const plugin = new PanelPlugin<SimpleOptions>(SimplePanel).setDefaults(defaults).setEditor(SimpleEditor);
```

My new module.ts looks the same, but with the first line replaced with import { PanelPlugin } from '@grafana/ui'.

Module '"../node_modules/@grafana/ui"' has no exported member 'PanelPlugin'. ??

I modified the module.ts according to your way; it's still not working. When I input 'yarn dev', I got an error: "@grafana/ui has no exported member 'PanelPlugin'."

Sorry guys, I forgot to share my full module.ts file. You need to add // @ts-ignore on the first line of your module.ts file. So the final version looks like:

```ts
// @ts-ignore
import { PanelPlugin } from '@grafana/ui';
import { SimpleOptions, defaults } from './types';
import { SimplePanel } from './SimplePanel';
import { SimpleEditor } from './SimpleEditor';

export const plugin = new PanelPlugin<SimpleOptions>(SimplePanel).setDefaults(defaults).setEditor(SimpleEditor);
```

That should solve your problem. Don't ask me why that works though.

I also can't get this to work. The plugin shows up in the panel list, but clicking on it gives the error loading: myorgid-simple-panel. The console complains:

```
keybindingSrv.ts:20 Error loading panel plugin: myorygid-simple-panel SyntaxError: Unexpected token '<'
    at eval (<anonymous>)
    at st (system.js:4)
    at system.js:4
    at system.js:4
    at O (system.js:4)
    at k (system.js:4)
    at system.js:4
```

I solved the problem by upgrading Grafana to 6.5.x. In any case, there are a lot of mismatches between alpha plugins (i.e. the piechart alpha panel) in Grafana core and this template. I think it should be reviewed and a unified usage of the @grafana libraries proposed.

If this still exists... post again -- my guess is the plugin was not built.
gharchive/issue
2019-11-25T07:47:59
2025-04-01T06:38:52.420062
{ "authors": [ "efvhi", "gretamosa", "karlie93", "manu2194", "ryantxu", "sbelondr", "speg" ], "repo": "grafana/simple-react-panel", "url": "https://github.com/grafana/simple-react-panel/issues/8", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
275220435
Add func (seg MediaSegment) String()

Related: #98

Coverage decreased (-5.05%) to 66.278% when pulling 178325b3c3cfb74ad192bc583db21a09792757f0 on keizo042:media_segment_string into d137fcd412b91fee5939a56a9d0d5a83c81d39d3 on grafov:master.

I'm concerned that we'd now have two different methods to write a segment. Can we not move the following to an unexported function, which takes a *m3u8.MediaSegment and *bytes.Buffer and writes to the provided buffer? https://github.com/grafov/m3u8/blob/master/writer.go#L483-L600 As long as performance isn't drastically affected.

Coverage decreased (-4.7%) to 66.589% when pulling 9177d906dd8eefa5c7852b4d3725e4091a93af94 on keizo042:media_segment_string into d137fcd412b91fee5939a56a9d0d5a83c81d39d3 on grafov:master.

Coverage decreased (-0.5%) to 70.83% when pulling dde4c977e438e088aad4f7f4c85d5c0623ef3e24 on keizo042:media_segment_string into d137fcd412b91fee5939a56a9d0d5a83c81d39d3 on grafov:master.

@bradleyfalzon Thank you for your review. I've fixed it so that the functions writing each tag use the provided buffer, and replaced the duplicated code with an unexported function.

Thanks @keizo042, that change is better, but there's still some duplication of the if statements. We've created a few new unexported functions, which is good, but I'd think there should just be one function that both MediaSegment.String() and *MediaPlaylist.Encode() call.

@bradleyfalzon

> I'd think there should just be one function that both MediaSegment.String() and *MediaPlaylist.Encode() call. Unless there's a specific reason we can't do that?

I agree, but there are two differences. In (p *MediaPlaylist) String, writing m3u8.Map and m3u8.Key depends on previous media segments, but in the function you request, I think it is good to write them to the buffer if they exist. In addition, (p *MediaPlaylist) String() has a previous-durations cache, but (seg MediaSegment) String() is stateless. I'd like to remove the caching procedure in (seg MediaSegment) String(), so it needs writeKey and writeMap; if I should not break the order of segment attributes, writeSCTE also. I feel the need to create an unexported function for the rest of writing a media segment in order to remove the duplicated code. I don't have a good idea how it manages caching yet.

> depends on previous media segments.

Ah yes, I see. This is unfortunate.

> have previous durations cache

Yeah, so this is so for very large playlists, we don't continue to call the intensive strconv.FormatFloat. I then understand why you've chosen this method, but I do prefer if we could remove the duplication completely. I don't mind duplicate code, but there's a lot of logic here that's being duplicated and that's what concerns me. What if there was a function like:

```go
// writeSegment writes a string representation of seg to buf.
//
// durationCache is required to reduce the number of calls to repetitive strconv formats.
// If playlist is non-nil, additional context is derived from the playlist.
func writeSegment(seg MediaSegment, buf bytes.Buffer, durationCache map[float]string, playlist *MediaPlaylist)
```

Then the same if statements that used information from the playlist would first check if playlist is non-nil:

```diff
-if p.Map == nil && seg.Map != nil {
+if p != nil && p.Map == nil && seg.Map != nil {
```

Or similar? I'm not 100% for my suggestion, just asking your thoughts.

That's a good idea. I'd like to show all information when the context is not provided. I think something like this is better:

```go
func (seg MediaSegment) write(buf bytes.Buffer, p *MediaPlaylist, durationCache map[string]float) {
    // ...
    if p != nil {
        if p.Map == nil && seg.Map != nil {
            writeMap(buf, seg.Map)
        }
    } else {
        writeMap(buf, seg.Map)
    }
    // ...
    if p != nil {
        // original caching and conversion
    } else {
        buf.WriteString(strconv.FormatFloat(seg.Duration, 'f', 3, 32))
    }
    // ...
}
```

Coverage increased (+0.3%) to 71.644% when pulling 625523432ce599f5375de7ca5912c44bd8c72db0 on keizo042:media_segment_string into d137fcd412b91fee5939a56a9d0d5a83c81d39d3 on grafov:master.

This is looking good to me, I think. Could you run the benchmarks before and after to check for performance regressions?

benchmark results

Env: CentOS 7.4, go version go1.8.3 linux/amd64

```
[m3u8]$ go test -bench=.
BenchmarkDecodeMasterPlaylist-12     50000      29740 ns/op
BenchmarkDecodeMediaPlaylist-12        100   21899250 ns/op
BenchmarkEncodeMasterPlaylist-12   1000000       1426 ns/op
BenchmarkEncodeMediaPlaylist-12        200    6463614 ns/op
PASS
ok  github.com/grafov/m3u8 7.455s
[m3u8]$ git checkout media_segment_string
Switched to branch 'media_segment_string'
[m3u8]$ go test -bench=.
BenchmarkDecodeMasterPlaylist-12     50000      29644 ns/op
BenchmarkDecodeMediaPlaylist-12        100   21823718 ns/op
BenchmarkEncodeMasterPlaylist-12   1000000       1420 ns/op
BenchmarkEncodeMediaPlaylist-12        200    6974774 ns/op
PASS
ok  github.com/grafov/m3u8 7.569s
```

Only the Encode MediaPlaylist benchmark:

```
[m3u8]$ git checkout master
Already on 'master'
[m3u8]$ go test -bench=BenchmarkEncodeMediaPlaylist
BenchmarkEncodeMediaPlaylist-12        200    6456721 ns/op
PASS
ok  github.com/grafov/m3u8 2.000s
[m3u8]$ git checkout media_segment_string
Switched to branch 'media_segment_string'
[m3u8]$ go test -bench=BenchmarkEncodeMediaPlaylist
BenchmarkEncodeMediaPlaylist-12        200    6951293 ns/op
PASS
ok  github.com/grafov/m3u8 2.131s
```

Well... I think we prefer that the performance regression is under 0.1 sec. I'll try inlining writeSCTE and investigate in detail.
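For illustration, the duration-cache idea being discussed boils down to memoizing the expensive float-to-string formatting. A Python stand-in follows; the library itself is Go, so the names and format here are assumptions:

```python
# Python stand-in for the durations cache discussed above; f"{d:.3f}"
# mirrors Go's strconv.FormatFloat(d, 'f', 3, 32). The point is to memoize
# the expensive float-to-string conversion across segments.
duration_cache = {}

def format_duration(d):
    s = duration_cache.get(d)
    if s is None:
        s = f"{d:.3f}"
        duration_cache[d] = s
    return s

# A stateless MediaSegment.String() has no playlist to keep this cache on,
# which is why the proposal passes the cache (or nil) into writeSegment.
```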
gharchive/pull-request
2017-11-20T02:33:44
2025-04-01T06:38:52.438676
{ "authors": [ "bradleyfalzon", "coveralls", "keizo042" ], "repo": "grafov/m3u8", "url": "https://github.com/grafov/m3u8/pull/100", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
128103281
Question: would it be possible to use extensions

Having an annotation is quite a small code change. But would it be possible to have an extension which accepts an object and saves/loads it from a parcel? At the least, the current API could be improved with extensions to avoid using util classes.

What do you mean by extensions? The next feature I'm thinking of adding is a way for apps to customize how a type should be parceled. E.g. by default a date is added to a parcel via serialization, but a more efficient way would be to read/write a long value directly to the parcel. Would this cover what you are after?

I was thinking about a better API. Instead of:

```kotlin
val example = Example(42)
val parcel = ExampleParcel.wrap(example)

// e.g. use in a bundle
someBundle.putParcelable("example", parcel)
```

something like:

```kotlin
val example = Example(42)
example.putToBundle(someBundle)
```

Oh, extension methods. I'm not sure how you can do this since I'm generating Java code. Do you know if it's possible? I've posted a question on the Kotlin forums: https://discuss.kotlinlang.org/t/annotating-static-java-methods-so-kotlin-can-pick-them-up-as-extension-functions-of-a-type/1431

Unfortunately this is not supported by Kotlin yet (see linked thread).

Actually, I may have dismissed this too early. I'll investigate generating a Kotlin class with the extension methods alongside the other generated classes. Not sure it's possible yet though. Could also do this with a paperparcel-kotlin library that is just a thin shim on top of paperparcel that adds the extension methods.

That is a good idea. I wanted to try it again after this pull request is released and see if the problem is fixed first, because it sounds like it should fix what I was seeing. :+1:

Closing due to new APIs replacing the need for extensions.
gharchive/issue
2016-01-22T07:55:44
2025-04-01T06:38:52.477164
{ "authors": [ "edenman", "emartynov", "grandstaish" ], "repo": "grandstaish/DataParcel", "url": "https://github.com/grandstaish/DataParcel/issues/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
310533981
All get lost on updates In every update all I had saved its lost. Is there I way I can recover it all? or don't overwrite by updates, its a nightmare!!!!! Thanks @kikoseijo ! We did put a warning into the release notes about exactly this. Did this happen to you through auto update? You can install an older release (<1.5) and all the data should still be there! Hey @timsuchanek, Yes its the auto update app for Mac, been happening on the last 3-4 updates, yes. Lets hope it will settle eventually... thanks for support,
gharchive/issue
2018-04-02T16:36:17
2025-04-01T06:38:52.484541
{ "authors": [ "kikoseijo", "timsuchanek" ], "repo": "graphcool/graphql-playground", "url": "https://github.com/graphcool/graphql-playground/issues/629", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
308852210
custom scalar

Hi all, this is more of a question than an issue. How do I create a custom scalar with graphql-yoga? I have attempted to use GraphQLScalarType from 'graphql' along the following lines:

```js
import { GraphQLScalarType } from 'graphql'
//...
const MyScalar = new GraphQLScalarType({
  //...
})
//...
const resolvers = {
  Query: {
    //...
  },
  MyScalar: MyScalar
}
```

But the scalar value returns as null in all queries.

I made it work with something along the lines of this:

```js
const NLString = new GraphQLScalarType({
  name: 'NLString',
  description: 'New line terminated string',
  // invoked to parse client input that was passed through variables.
  // takes a plain JS object.
  parseValue(variable) {
    return variable.replace(/\n$/, "")
  },
  // invoked to parse client input that was passed inline in the query.
  // takes a value AST.
  parseLiteral(literal) {
    return literal.value.replace(/\n$/, "")
  },
  // invoked when serializing the result to send it back to a client.
  serialize: function(value) {
    return value + "\n"
  }
})
//...
const resolvers = {
  Query: {
    //...
  },
  NLString
}
```

Also, for completeness, here are some example GraphQL queries:

```graphql
query {
  post(title: "example\n") {
    title
    body
  }
}
```

triggers parseLiteral(literal) with literal.value = "example\n".

```graphql
query Posts($title: NLString) {
  post(title: $title) {
    title
    body
  }
}
```

with:

```json
{ "title": "example\n" }
```

triggers parseValue(variable) with variable = "example\n".
gharchive/issue
2018-03-27T07:31:06
2025-04-01T06:38:52.487982
{ "authors": [ "aogriffiths" ], "repo": "graphcool/graphql-yoga", "url": "https://github.com/graphcool/graphql-yoga/issues/229", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
358351026
carbon-cache memory leak when put under LB

Recently we put our Graphite service behind a load balancer, and then started seeing lots of log lines like the ones below; the IP addresses here are from LB connections. The carbon-cache process memory increases slowly and it finally OOMs; we suspect the LB's short-lived connections are making carbon-cache leak memory.

```
09/09/2018 05:06:34 :: [listener] MetricLineReceiver connection with 192.117.87.3:41731 lost: Connection to the other side was lost in a non-clean fashion.
09/09/2018 05:06:34 :: [listener] MetricLineReceiver connection with 192.117.87.131:8879 lost: Connection to the other side was lost in a non-clean fashion.
09/09/2018 05:06:34 :: [listener] MetricLineReceiver connection with 192.97.72.130:42744 lost: Connection to the other side was lost in a non-clean fashion.
```

@rmrf: It can be non-related; carbon memory can grow above the limit if not configured properly. What's your metric flow? How many carbon-caches are you running? Could you please share the carbon config? Are you sure it is caused by the lost connections? Can you reproduce this on a carbon behind an LB without any actual metrics? Which Twisted and Python versions are you using?
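To help answer the "can you reproduce this without any actual metrics" question, here is a hypothetical Python helper that mimics an LB health check by opening and abruptly dropping connections to carbon's line receiver. The host name, port 2003, and iteration counts are assumptions for a default carbon install:

```python
# Hypothetical reproduction helper, not from the issue itself. SO_LINGER with
# a zero timeout makes close() send a RST, which carbon logs as a connection
# "lost in a non-clean fashion". Watch carbon-cache RSS while this runs.
import socket
import struct
import time

for i in range(100_000):
    s = socket.create_connection(('carbon-host', 2003), timeout=1)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
    s.close()
    if i % 1000 == 0:
        time.sleep(0.1)  # pace the loop a little
```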
gharchive/issue
2018-09-09T05:18:53
2025-04-01T06:38:52.514757
{ "authors": [ "deniszh", "piotr1212", "rmrf" ], "repo": "graphite-project/carbon", "url": "https://github.com/graphite-project/carbon/issues/810", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
396890432
Remove /opt/graphite prefix and use setuptools

See https://github.com/graphite-project/carbon/pull/835 for the reasoning.

I'm not finished yet, but I made some changes to the docs which I would like to get some feedback on. I've tried to simplify the docs; IMO there were too many separate pages which you had to jump back and forth between. I've changed the default install (settings.py) so that running collectstatic is not needed. The static files can be served directly from the app with WhiteNoise. Serving from WhiteNoise should be fast enough for most installations; this eliminates the need for configuring the static dir in the webserver (simplifies installation). From what I've read, the whole purpose of collectstatic is for organisations which run multiple Django apps and have separated their static files from code (in the repo), so that they can update static files without having to deploy code and vice versa. As Graphite's static files haven't changed in years and they are in the code repo, I don't see a point in requiring collectstatic in the default install. Users can still run collectstatic if they want/need.

Please ignore the GRAPHITE_ROOT commit, I'll remove it later.

I think a rebase went a bit wrong; you ended up with a copy of the commit "fix dashboard graph metric list icon paths with URL_PREFIX" from the master branch. In master: 0a037db4b2d864734e14dd6302bc71194f53e8d3; in this branch: 1ba4da55c08035cccfcdaae2220f8d384dbd1929.

I think I merged instead of rebased. Anyway, cleaned up now.

All looks good to me. I had another thought about storage dirs... I think the original idea behind using /opt/graphite is those storage and log dirs, which would be awkward in the Python site-packages directory. Maybe the thing to do is divorce them from the graphite application root, and default to /opt/graphite/storage and /opt/graphite/log regardless of the install prefix? And not mention them in setup.py at all? I suppose the downside is they would not be created by install. Just an idea - I suppose I always customize these dirs anyway.

Good point. I'll have a look. But I'm busy at the moment with higher-priority stuff.

This is a big painful change, but I think it's still relatively important. Just sayin' to appease the stale bot :)

If this will fix the pip install, can we get it merged?

Unfortunately, it's not that easy. That would break backward compatibility, and would need more changes in carbon. Also, not sure if that would fix the issue either :/
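For reference, a minimal sketch of the kind of settings.py change described, assuming WhiteNoise's documented Django integration; it is not the actual patch from this PR:

```python
# Minimal sketch of the settings.py approach described above; not the actual
# patch from this PR. The middleware list shown here is abbreviated.
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    # WhiteNoise sits right after SecurityMiddleware and serves static files
    # from the application itself, so no web-server static config is needed.
    'whitenoise.middleware.WhiteNoiseMiddleware',
    # ... the rest of Graphite's middleware ...
]

STATIC_URL = '/static/'
# Serving via Django's static-file finders means `collectstatic` is not
# required for a default install; users can still run it if they want.
WHITENOISE_USE_FINDERS = True
```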
gharchive/pull-request
2019-01-08T12:43:02
2025-04-01T06:38:52.523957
{ "authors": [ "adamboutcher", "deniszh", "piotr1212", "ploxiln" ], "repo": "graphite-project/graphite-web", "url": "https://github.com/graphite-project/graphite-web/pull/2409", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }