id (string, 4 to 10 chars) | text (string, 4 chars to 2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
1886888771
|
🛑 ojolink is down
In 8a3676b, ojolink (ojolink.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ojolink is back up in 69cd2e6 after 8 minutes.
|
gharchive/issue
| 2023-09-08T03:50:34 |
2025-04-01T04:55:08.190591
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/69879",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2010503865
|
🛑 higou is down
In ff5f21d, higou (higou.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: higou is back up in 3147b2a after 23 minutes.
|
gharchive/issue
| 2023-11-25T06:55:14 |
2025-04-01T04:55:08.192874
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/73483",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2047192267
|
🛑 youjizz is down
In 3566a85, youjizz (youjizz.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: youjizz is back up in 513ed09 after 1 hour, 13 minutes.
|
gharchive/issue
| 2023-12-18T18:16:00 |
2025-04-01T04:55:08.195152
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/74643",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2225433700
|
🛑 rapishare is down
In 30f59bb, rapishare (rapishare.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: rapishare is back up in 14ecc17 after 37 minutes.
|
gharchive/issue
| 2024-04-04T12:57:31 |
2025-04-01T04:55:08.197625
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/78058",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1007083114
|
🛑 orkut is down
In e23bd31, orkut (orkut.com.br) was down:
HTTP code: 0
Response time: 0 ms
Resolved: orkut is back up in 073258b.
|
gharchive/issue
| 2021-09-25T13:12:38 |
2025-04-01T04:55:08.200128
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/7983",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2315090926
|
🛑 hcr is down
In a24a680, hcr (hcr.co.uk) was down:
HTTP code: 403
Response time: 892 ms
Resolved: hcr is back up in f19f57a.
|
gharchive/issue
| 2024-05-24T10:50:28 |
2025-04-01T04:55:08.202511
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/84067",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2320340152
|
🛑 uzboys is down
In ab103c5, uzboys (uzboys.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: uzboys is back up in 9f772f3 after 3 hours, 19 minutes.
|
gharchive/issue
| 2024-05-28T07:31:14 |
2025-04-01T04:55:08.204737
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/84548",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1015665129
|
🛑 orkut is down
In a2508f5, orkut (orkut.com.br) was down:
HTTP code: 0
Response time: 0 ms
Resolved: orkut is back up in 52c7e7a.
|
gharchive/issue
| 2021-10-04T21:46:26 |
2025-04-01T04:55:08.207006
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/8648",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
920231331
|
🛑 orkut is down
In 638a190, orkut (orkut.com.br) was down:
HTTP code: 0
Response time: 0 ms
Resolved: orkut is back up in 70bbf37.
|
gharchive/issue
| 2021-06-14T09:39:10 |
2025-04-01T04:55:08.209683
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/871",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2363043076
|
🛑 ojolink is down
In 23559b8, ojolink (ojolink.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ojolink is back up in 9506444 after 9 minutes.
|
gharchive/issue
| 2024-06-19T19:55:33 |
2025-04-01T04:55:08.211982
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/87561",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2367392386
|
🛑 ojolink is down
In 15f627b, ojolink (ojolink.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ojolink is back up in 8b3f76d after 1 hour, 6 minutes.
|
gharchive/issue
| 2024-06-22T00:43:02 |
2025-04-01T04:55:08.214296
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/87976",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2385187731
|
🛑 ojolink is down
In b73499a, ojolink (ojolink.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ojolink is back up in 81d7e8b after 13 minutes.
|
gharchive/issue
| 2024-07-02T04:29:06 |
2025-04-01T04:55:08.216587
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/90043",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2405926754
|
🛑 torrentzap is down
In 9907891, torrentzap (torrentzap.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: torrentzap is back up in 64cd8e4 after 45 minutes.
|
gharchive/issue
| 2024-07-12T16:07:54 |
2025-04-01T04:55:08.219183
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/92207",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2436062237
|
🛑 orkut is down
In 5eccd1e, orkut (orkut.co.in) was down:
HTTP code: 0
Response time: 0 ms
Resolved: orkut is back up in cf40cab after 17 minutes.
|
gharchive/issue
| 2024-07-29T18:22:58 |
2025-04-01T04:55:08.221700
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/95214",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2459543173
|
🛑 orkut is down
In 1b895ce, orkut (orkut.co.in) was down:
HTTP code: 0
Response time: 0 ms
Resolved: orkut is back up in 20b325e after 35 minutes.
|
gharchive/issue
| 2024-08-11T11:09:18 |
2025-04-01T04:55:08.223978
|
{
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/96836",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
186123939
|
Error when using atom-beautify on Linux
I get an error when using atom-beautify on Linux (Zorin OS 10). How do I fix this error?
Could not find 'php-cs-fixer'. The program may not be installed.
See https://github.com/FriendsOfPHP/PHP-CS-Fixer for program installation instructions.
Your program is properly installed if running 'which php-cs-fixer' in your Terminal returns an absolute path to the executable. If this does not work then you have not installed the program correctly and so Atom Beautify will not find the program. Atom Beautify requires that the program be found in your PATH environment variable.
Note that this is not an Atom Beautify issue if beautification does not work and the above command also does not work: this is expected behaviour, since you have not properly installed your program. Please properly setup the program and search through existing Atom Beautify issues before creating a new issue. See https://github.com/Glavin001/atom-beautify/search?q=php-cs-fixer&type=Issues for related Issues and https://github.com/Glavin001/atom-beautify/tree/master/docs for documentation. If you are still unable to resolve this issue on your own then please create a new issue and ask for help.
Hide Stack Trace
Error: Could not find 'php-cs-fixer'. The program may not be installed.
at PHPCSFixer.module.exports.Beautifier.commandNotFoundError (/home/kang-cahya/.atom/packages/atom-beautify/src/beautifiers/beautifier.coffee:204:14)
at /home/kang-cahya/.atom/packages/atom-beautify/src/beautifiers/beautifier.coffee:304:22
at tryCatcher (/home/kang-cahya/.atom/packages/atom-beautify/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/home/kang-cahya/.atom/packages/atom-beautify/node_modules/bluebird/js/release/promise.js:510:31)
at Promise._settlePromise (/home/kang-cahya/.atom/packages/atom-beautify/node_modules/bluebird/js/release/promise.js:567:18)
at Promise._settlePromise0 (/home/kang-cahya/.atom/packages/atom-beautify/node_modules/bluebird/js/release/promise.js:612:10)
at Promise._settlePromises (/home/kang-cahya/.atom/packages/atom-beautify/node_modules/bluebird/js/release/promise.js:687:18)
at Async._drainQueue (/home/kang-cahya/.atom/packages/atom-beautify/node_modules/bluebird/js/release/async.js:138:16)
at Async._drainQueues (/home/kang-cahya/.atom/packages/atom-beautify/node_modules/bluebird/js/release/async.js:148:10)
at Async.drainQueues (/home/kang-cahya/.atom/packages/atom-beautify/node_modules/bluebird/js/release/async.js:17:14)
at process._tickCallback (internal/process/next_tick.js:103:7)
I really want to focus on improving the installation experience for users. I have created a new Issue, #1687, to target this problem. Please provide your feedback! Thanks in advance.
|
gharchive/issue
| 2016-10-30T08:34:30 |
2025-04-01T04:55:08.255853
|
{
"authors": [
"Glavin001",
"dyazincahya"
],
"repo": "Glavin001/atom-beautify",
"url": "https://github.com/Glavin001/atom-beautify/issues/1309",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
576738208
|
[Proposal]
Description
We could add gzip compression for the transferred content. On one of my pages, a resource originally over 1.8 MB transferred as only 567 KB after compression, which noticeably reduces the network load and improves page load speed and user experience.
Solution
Example for the gin framework:
examples/gin/main.go
package main
import (
...
"github.com/gin-contrib/gzip"
...
)
func main() {
...
app := gin.Default()
...
app.Use(gzip.Gzip(gzip.DefaultCompression))
...
}
(Title edited.)
|
gharchive/issue
| 2020-03-06T07:02:40 |
2025-04-01T04:55:08.303311
|
{
"authors": [
"laijinhang"
],
"repo": "GoAdminGroup/go-admin",
"url": "https://github.com/GoAdminGroup/go-admin/issues/181",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2300876501
|
Mac M1 / Quest 3 - OpenXR was requested but failed to start - Godot will start in normal mode
Hi all,
I'm getting back into VR and thought I'd give this one a try.
I'm able to push a simple "hello world" style XR app (with Xrorigin / xrcamera) to my Quest 3, however, when it loads, it loads within a window and says OpenXR was requested but failed to start.
Does anyone know if there are specific settings on the device that need to be set?
I don't get the USB debugging popup anymore - not certain if this has anything to do with it or not.
So, to be clear, it does build, and push just fine.
It just has a runtime issue, but I'm not sure where to look for more detailed info - does Quest keep more detailed logs anywhere?
log details below - looks like it can't find libopenxr_loader.so
I'm going to dive into that to see what I can find but thought I'd pass through here in case anyone just knows already.
Any thoughts?
Thanks,
Jeff
12:47:32.686 godot USER ERROR: Can't open dynamic library: libopenxr_loader.so. Error: dlopen failed: library "libopenxr_loader.so" not found.
12:47:32.686 godot at: open_dynamic_library (platform/android/os_android.cpp:200)
12:47:32.686 godot USER ERROR: OpenXR loader not found.
12:47:32.686 godot at: openxr_loader_init (modules/openxr/openxr_api.cpp:1216)
12:47:32.686 godot USER WARNING: OpenXR was requested but failed to start.
12:47:32.686 godot Please check if your HMD is connected.
12:47:32.686 godot When using Windows MR please note that WMR only has DirectX support, make sure SteamVR is your default OpenXR runtime.
12:47:32.686 godot Godot will start in normal mode.
12:47:32.686 godot at: initialize_openxr_module (modules/openxr/register_types.cpp:141)
I hadn't even thought of Mac's overzealous download protection marking files as quarantined. Did you download the vendor's plugin from the asset library, or did you download it from the GitHub page?
|
gharchive/issue
| 2024-05-16T16:41:58 |
2025-04-01T04:55:08.324145
|
{
"authors": [
"BastiaanOlij",
"JeffGillin"
],
"repo": "GodotVR/godot-xr-tools",
"url": "https://github.com/GodotVR/godot-xr-tools/issues/636",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1317116671
|
Godot 3.4.4 Native Vive Focus 3 Support
Hi.
I am attempting to run Godot natively on the Vive Focus 3 with the OpenXR demo.
Godot 3.4.4
godot_openxr_1.3.0
Godot XR Tools 2.5.0
Following the document to setup as if building for Quest, only changing (following the Vive Unity OpenXR guide):
Android Min SDK 25
Android Target SDK 29
The application starts but as a 2D window, then shortly after dies.
Debugger says:
E 0:00:01.130 xr_result: OpenXR Failed to enumerate number of extension properties []
<C++ Source> ../../../../../src/openxr/OpenXRApi.h:351 @ xr_result()
first_person_controller_vr.gd:68 @ initialise()
first_person_controller_vr.gd:57 @ _ready()
Looking in OpenXRApi.cpp I can see the function call to xrEnumerateInstanceExtensionProperties and subsequent call to xr_result, but its call to xrResultToString doesn't seem to produce an error value inside the square brackets. Which the Khronos document seems to indicate is impossible...
Unless anyone has any suggestions as to what's wrong, I will set up a buildable copy of the library to properly debug.
Hi @Psychophylaxis2 ,
At the moment, the plugin is currently compiled with Meta's OpenXR loader which only works on the Quest.
Hi. I'm still pursuing this. However, I'm not all that familiar with building on Android and Gradle; learning as I go. I have the ovr version building from source, and am now figuring out how the build is constructed. The Focus 3 uses the native Khronos loader as I understand it. Any pointers on the Android build appreciated. Thanks.
Hi @Psychophylaxis2,
Sorry for the late response, I've been fully pre-occupied with Godot 4 in recent months.
As the Khronos loader is already part of our repo it should be possible to change the build scripts to switch over to it though the binaries are missing.
That said, I've heard that in theory you could swap out the libopenxr_loader.so file in the meta SDK for the Khronos one and recompile the plugin as is and it should work. Lacking a device to test on I don't know if this is true.
It wouldn't be a full solution however as you're missing out on the Java parts of the Khronos setup and the manifest file would still contain all the Meta related settings and a bunch of those are hardcoded in the Godot 3 core.
We're working on a much more flexible system for Godot 4, which hopefully will have the first parts merged into master soon.
Hi, I am interested to follow up this thread, it has been quite a long time. I think my case is similar to here. I am new to programming in general so apologies in adv.
I am looking to compile an openxr android application that works on quest but bring it over to the Focus 3, the headset I have.
Can you let me know some steps I might need to consider to get it working?
|
gharchive/issue
| 2022-07-25T17:00:48 |
2025-04-01T04:55:08.331249
|
{
"authors": [
"BastiaanOlij",
"Psychophylaxis2",
"latsfro"
],
"repo": "GodotVR/godot_openxr",
"url": "https://github.com/GodotVR/godot_openxr/issues/226",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2645923549
|
Allow custom servers when using DisableSsl=false
Fixes a regression introduced during the migration to ENet that broke PC BeatTogether clients connecting to BeatUpServer instances
Interesting that this went unnoticed for so long.
PR looks fine. I'll merge and see that I can prepare a new release soon.
|
gharchive/pull-request
| 2024-11-09T10:47:27 |
2025-04-01T04:55:08.389376
|
{
"authors": [
"michael-r-elp",
"rcelyte"
],
"repo": "Goobwabber/MultiplayerCore",
"url": "https://github.com/Goobwabber/MultiplayerCore/pull/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
96792917
|
Loaded images added to TextureCache with url
I'm currently moving from Phaser to Pixi and I've noticed a difference in the loaders. In Phaser you are able to load an image from cache using the name you give to the asset during load - in Pixi the Texture is stored in TextureCache with url as the key.
Is there a reason this is not the name? Looking at the code, it seems name defaults to url if it is not supplied - I think this would give users the best of both worlds.
I might have missed something, but I think it's a matter of changing: https://github.com/GoodBoyDigital/pixi.js/blob/master/src/loaders/textureParser.js#L12
to:
core.utils.TextureCache[resource.name] = resource.texture;
The texture cache is an internal mechanism to power the convenience methods of .fromImage().
The loader "cache" is the resources object on the loader. It is keyed by the name you pass in.
PIXI.loader.add('derp', 'file.png').load(function (loader, resources) {
// the resources object is the loader's "cache", keyed by the loaded name.
// resources === loader.resources === PIXI.loader.resources
});
@englercj so to use these resources from PIXI.Sprite.fromImage I need to pass the url? Wouldn't it make more sense to key the TextureCache by name, falling back to url if name is not present?
That isn't what .fromImage() is:
http://pixijs.github.io/docs/PIXI.Texture.html#.fromImage
The first parameter is a URL, it is for creating a texture from a url. If you don't want to use a url then don't use that API.
In fact, if you are using the loader it already creates a texture for you; just use it:
PIXI.loader.add('derp', 'file.png').load(function (loader, resources) {
console.log(resources.derp.texture);
});
Ok thanks, I think this is probably just me getting confused when switching from Phaser's API.
I expected to be able to:
PIXI.loader.add('derp', 'file.png').load(function () {
var mySprite = PIXI.Sprite.fromImage('derp');
});
@ahumphreys87 Nope! :)
That isn't what that API was created for, or should be used for. Just use the loader resources, you will create less objects that way anyway.
|
gharchive/issue
| 2015-07-23T11:37:33 |
2025-04-01T04:55:08.395104
|
{
"authors": [
"ahumphreys87",
"englercj"
],
"repo": "GoodBoyDigital/pixi.js",
"url": "https://github.com/GoodBoyDigital/pixi.js/issues/1994",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
516837247
|
bonus popup should have icon at the top
same icon as receive
Looks as expected.
The bonus income has the same icon as the daily income.
|
gharchive/issue
| 2019-11-03T14:39:02 |
2025-04-01T04:55:08.396471
|
{
"authors": [
"AnastasiiaOdnoshevna",
"sirpy"
],
"repo": "GoodDollar/GoodDAPP",
"url": "https://github.com/GoodDollar/GoodDAPP/issues/803",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
145274886
|
Throw errors for incorrectly encoded keys
We should make sure to throw an error if the keys are provided in the wrong encoding or type, e.g. a string instead of a buffer, non-URL-safe Base64, etc.
@owencm was there a particular example where you hit this?
We have some tests on bad input set up, but sounds like we are missing something.
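For illustration, the kind of checks intended might look like this (a minimal sketch in Python for brevity; the library itself is Node.js, and coerce_key is a hypothetical name):
import base64
import re

_URL_SAFE_B64 = re.compile(r'^[A-Za-z0-9_-]+={0,2}$')

def coerce_key(key):
    # Raw bytes pass through; anything else must be URL-safe Base64 text.
    if isinstance(key, (bytes, bytearray)):
        return bytes(key)
    if isinstance(key, str):
        if not _URL_SAFE_B64.match(key):
            raise ValueError('key must be URL-safe Base64 (no "+" or "/")')
        # Re-pad before decoding; URL-safe Base64 is often sent unpadded.
        return base64.urlsafe_b64decode(key + '=' * (-len(key) % 4))
    raise TypeError('key must be a buffer or a URL-safe Base64 string')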
|
gharchive/issue
| 2016-04-01T20:11:13 |
2025-04-01T04:55:08.444507
|
{
"authors": [
"gauntface",
"owencm"
],
"repo": "GoogleChrome/push-encryption-node",
"url": "https://github.com/GoogleChrome/push-encryption-node/issues/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
747401862
|
Extend analytics endpoint example with web-vitals-reporter
It's often useful to batch /analytics calls and send them as one request at the end of a session.
I added an example of how to use web-vitals-reporter to simplify metrics collection. (I'm the author of the library.)
Thanks for the suggestion. I would like to add this library to the README so folks are aware of it. I haven't had a chance to take a look yet, but I wanted to let you know I'm planning to do so next week after the holidays.
Thanks, @philipwalton, for your reply. I'd be happy to hear feedback about the web-vitals-reporter from you.
Have a great holiday :)
Hey @philipwalton, just ping me if you have any question.
import { getCLS, getFID, getLCP } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({ [metric.name]: metric.value });
  // Use navigator.sendBeacon() if available, falling back to fetch().
  (navigator.sendBeacon && navigator.sendBeacon('/analytics', body)) ||
    fetch('/analytics', { body, method: 'POST', keepalive: true });
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);

You can batch the callback reports and send the metrics in a single request using web-vitals-reporter:

import { getCLS, getFID, getLCP } from 'web-vitals';
import { createApiReporter } from 'web-vitals-reporter'; // 800 bytes

const sendToApi = createApiReporter('/analytics');

getLCP(sendToApi);
getFID(sendToApi);
getCLS(sendToApi);

Send the results to Google Analytics
Google Analytics does not support reporting metric distributions in any of its built-in reports; however, if you set a unique dimension value (in this case, the metric id) on each metric instance you send to Google Analytics, including that dimension in a custom report will let you build a distribution manually.
Sorry to leave this hanging for so long. I ended up adding an integrations section to the README to highlight third-party projects that are not directly maintained by anyone on the Chrome team, and I've included this library in that list.
Hi @philipwalton, no worries, I think the integrations section is an excellent idea!
It separates the core from the ecosystem. And I am sure more integrations will come.
|
gharchive/pull-request
| 2020-11-20T11:09:20 |
2025-04-01T04:55:08.451431
|
{
"authors": [
"alekseykulikov",
"philipwalton",
"tonirmv"
],
"repo": "GoogleChrome/web-vitals",
"url": "https://github.com/GoogleChrome/web-vitals/pull/94",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
440294096
|
Mobile
Doesn't work in mobile (iOS, anyway).
fineuploader handles this well, but I don't know how well that library would fit in with your implementation.
Can you describe how it doesn't work?
Apologies - your question made me go and search for drag and drop on iPad, and I found it was possible. I thought it wasn't.
It's worth mentioning what fine-uploader does on iPad, which I think is more intuitive:
touch the upload area
a popup offers: Take Photo or Video / Photo Library / Browse
touching Photo Library shows the usual iOS panel from which one or more images can be selected, and pressing 'Done' loads them into the upload area for processing.
iPad drag and drop works with this as well.
It is just not usable. If you have an iPhone, just try it.
Can you make an app of Squoosh?
Can you describe how it isn't usable?
Closing until further details are provided
|
gharchive/issue
| 2019-05-04T07:16:39 |
2025-04-01T04:55:08.456142
|
{
"authors": [
"jakearchibald",
"kasusa",
"roygrubb"
],
"repo": "GoogleChromeLabs/squoosh",
"url": "https://github.com/GoogleChromeLabs/squoosh/issues/577",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
310404090
|
Some files are ignored
Here is a simple gulp task:
const gulp = require('gulp');
const swPrecache = require('sw-precache');

gulp.task('bundle-service-worker', function (callback) {
  swPrecache.write('sw.js', {
    staticFileGlobs: [
      'dist/*.min.{css,html,js}',
      'dist/*.{ico,json,png,svg,wasm,woff2}',
      'index.html'
    ]
  }, callback);
});
For some reason, all files are included except dist/*.min.html, dist/*.wasm and dist/*.woff2.
dist/*.json files are fine and index.html is fine too. It is clearly not a size issue, since all of the files combined are ~600 KiB and some of the included JS files are bigger than these problematic file types. Order doesn't seem to have any effect either.
I don't see anything in documentation regarding these file types and can't find such extensions in sw-precache's source code.
Was a race condition
|
gharchive/issue
| 2018-04-02T06:07:31 |
2025-04-01T04:55:08.458255
|
{
"authors": [
"nazar-pc"
],
"repo": "GoogleChromeLabs/sw-precache",
"url": "https://github.com/GoogleChromeLabs/sw-precache/issues/352",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
160733092
|
openstack: Provider is not compatible with Keystone v3
The OpenStack provider implementation included in v1.5.0 is not compatible with Keystone v3 API. I created this issue for the sake of helping others and documenting this problem.
If v1.5.0 or earlier is used against Keystone v3 it will report the following error:
File "/opt/PerfKitBenchmarker/v1.5.0/local/lib/python2.7/site-packages/keystoneclient/auth/identity/v2.py", line 88, in get_auth_ref
authenticated=False, log=False)
File "/opt/PerfKitBenchmarker/v1.5.0/local/lib/python2.7/site-packages/keystoneclient/session.py", line 505, in post
return self.request(url, 'POST', **kwargs)
File "/opt/PerfKitBenchmarker/v1.5.0/local/lib/python2.7/site-packages/keystoneclient/utils.py", line 337, in inner
return func(*args, **kwargs)
File "/opt/PerfKitBenchmarker/v1.5.0/local/lib/python2.7/site-packages/keystoneclient/session.py", line 405, in request
raise exceptions.from_response(resp, method, url)
NotFound: The resource could not be found. (HTTP 404) (Request-ID: req-d63f966f-5907-40bf-adee-f84fc247e2a2)
Workaround:
If your OpenStack deployment still supports Keystone v2 and you are using PKB v1.5.0 or earlier, simply set the OS_AUTH_URL to use Keystone v2.0.
Example:
export OS_AUTH_URL=http://172.29.236.10:5000/v2.0
Resolution:
PR #942, which moves to the OpenStack CLI, addresses this issue, since the OpenStack CLI supports both Keystone v2 and v3. Once merged, this issue can be closed.
PR #924 has been merged. This can be closed now.
|
gharchive/issue
| 2016-06-16T18:50:11 |
2025-04-01T04:55:08.466765
|
{
"authors": [
"meteorfox"
],
"repo": "GoogleCloudPlatform/PerfKitBenchmarker",
"url": "https://github.com/GoogleCloudPlatform/PerfKitBenchmarker/issues/1017",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
60153480
|
epel-release not available for RHEL images
The epel-release package for RHEL images on GCP is not currently available and prevents bonnie++ benchmark and sysbench oltp benchmark from running on RHEL images. The Centos images have the extras repository enabled and can yum install epel-release without issue. Either the image on GCP should be fixed or the framework must be robust enough to determine that the epel may need to be installed via url as it was prior to PR #99. If a static url is used, the package is different between RHEL 6 & 7.
What do you think is the best course here?
If we go the links route what links do you recommend?
One more q... What is the best way to determine the image is RedHat?
https://github.com/GoogleCloudPlatform/PerfKitBenchmarker/pull/154 fixes this - I tried running bonnie++ (which requires EPEL) on RHEL 6/7 and it successfully installed.
@voellm I'll ask if there is a static url for the latest epel-release package per el6 and el7.
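For illustration, a rough sketch of the URL-based fallback (a hypothetical helper, not PerfKitBenchmarker's actual code; the URLs are Fedora's standard EPEL locations, and vm.RemoteCommand is assumed to return (stdout, stderr)):
EPEL_URLS = {
    '6': 'https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm',
    '7': 'https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm',
}

def install_epel(vm):
    # 'rpm -E %{rhel}' expands to the major release number on RHEL/CentOS.
    stdout, _ = vm.RemoteCommand('rpm -E %{rhel}')
    version = stdout.strip()
    # CentOS can 'yum install epel-release' from the extras repo, but RHEL
    # images need the rpm installed directly from its URL.
    vm.RemoteCommand('sudo yum install -y ' + EPEL_URLS[version])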
@ehankland Looks great, thanks for the fast PR; hopefully it gets merged quickly. :+1:
On another note, the iperf only attempts to install the el7 version so it fails on RHEL 6. The package that is built for el6 is at: http://pkgs.repoforge.org/iperf/iperf-2.0.4-1.el6.rf.x86_64.rpm
Do I need to open a second issue for iperf to be fixed?
EPEL6 contains an iperf package - now that EPEL6 gets installed correctly on RHEL 6, iperf will also work. EPEL7 only contains iperf3, which is why we have to directly get the rpm. I verified this by running iperf (with the fix) on RHEL 6.
Merged the fix into dev
|
gharchive/issue
| 2015-03-06T20:21:23 |
2025-04-01T04:55:08.473357
|
{
"authors": [
"akrzos",
"ehankland",
"voellm"
],
"repo": "GoogleCloudPlatform/PerfKitBenchmarker",
"url": "https://github.com/GoogleCloudPlatform/PerfKitBenchmarker/issues/153",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
58981264
|
[WIP] Provides a flag to execute UnixBench with the > 16 cores patch
UnixBench by default is designed to only execute up to 16 jobs
concurrently. If your system has more than 16 CPUs available, UnixBench
will not execute enough jobs to fully utilize all the CPUs in the machine,
which will result in a lower score than what the system is capable of.
A patch was submitted to the UnixBench issue tracker that addresses this problem.
This commit allows the user to optionally use this patch by setting the
following flag:
--unixbench_all_cores
Official link to issue:
https://code.google.com/p/byte-unixbench/issues/detail?id=4
DO NOT MERGE YET!
This is WIP, I just want to get early feedback.
No worries... we'll just look at the code.
Connor, why does the bot keep pinging?
ok to test.
Sorry for the noise from the bot - I'll look at turning it down.
Looks pretty good so far. The only comment is to make the flag more descriptive like "Setting this flag changes the default behavior of Unix bench. It will now scale to the number of processors on the machine."
Are the 2D and 3D lines in the patch file adding extra tests?
Ok, I will add that to the flag description.
It doesn't add extra tests, I think the GitHub diff viewer is kind of confusing. If you look at the file directly here you'll see it doesn't add them, they are already there.
https://github.com/meteorfox/PerfKitBenchmarker/blob/unixbench-16-cores-patch/perfkitbenchmarker/data/unixbench-16core-limitation.patch
I'm having an issue with the regex for the score for some reason. I'm debugging it right now, will commit patch soon.
This is the error I'm getting:
NoMatchError: No match for pattern "\n([A-Z][\w\-\(\) ]+)\s+([-+]?[0-9]*\.?[0-9]+) (\w+)\s+\(([-+]?[0-9]*\.?[0-9]+) (\w+), (\d+) samples\)" in "Benchmark Run: Wed Feb 25 2015 23:16:02 - 23:16:02
32 CPUs in system; running 32 parallel copies of tests
"
Here's the full UnixBench output where is failing on:
https://gist.github.com/meteorfox/558408e6138fac5d90ed
I think because the output was longer due to the number of processors in the system, it didn't flush all of it or something; the output stops abruptly at the "Benchmark Run: Wed Feb 25 2015 23:16:02 - 23:16:02 32 CPUs in system; running 32 parallel copies of tests" line and there's nothing else.
It will be good to add some tests for the output parser :)
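As a starting point, a defensive parse might look like this (a minimal sketch assuming Python's re module, using the exact pattern quoted in the NoMatchError above; parse_unixbench is a hypothetical name):
import re

RESULT_RE = re.compile(
    r'\n([A-Z][\w\-\(\) ]+)\s+([-+]?[0-9]*\.?[0-9]+) (\w+)\s+'
    r'\(([-+]?[0-9]*\.?[0-9]+) (\w+), (\d+) samples\)')

def parse_unixbench(output):
    results = RESULT_RE.findall(output)
    if not results:
        # Vanilla UnixBench silently refuses to run more than 16 copies, so
        # the output can end right after the "running N parallel copies of
        # tests" banner with no results section at all.
        raise ValueError('UnixBench output contains no results section')
    return results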
Ok, I figured out the problem. It wasn't the size of the output: with the default UnixBench you cannot simply run more than 16 "copies", which is exactly the problem this pull request is trying to address. It's easy to replicate: either run UnixBench as is, with no patch, on a box with more than 16 CPUs, or execute it with this flag:
$ ./Run -c 17
-c is the number of copies, which it will not execute beyond 16. If -c is <= 16, it works as expected. Executing it this way skips directly to an incomplete output just like the one above, and produces an exception in PerfKit.
Can someone confirm and reproduce the steps I mentioned in the previous comment?
1. Set up UnixBench (default, with no patch).
2. Execute with ./Run -c N where N > 16. (When no flags are specified, UnixBench does a run with -c 1, and then another with -c M, where M is the number of CPUs in the system.)
Expected outcome: it runs N parallel copies and produces the full results report.
Actual outcome: only the following message is printed and the benchmarks do not run, hence no data is reported.
"M CPUs in system; running N parallel copies of tests"
Regarding the issue of not getting data from the parallel-copies test with vanilla UnixBench on a box with more than 16 CPUs: it's actually a limitation of UnixBench, not of PerfKit. This bug is already in master, and it is not introduced by this pull request.
This pull request enables a user to optionally overcome this limitation by adding the --unixbench_all_cores=true flag to the pkb command. It doesn't address the problem of PerfKit throwing the NoMatchError exception when running vanilla UnixBench on >16 CPU systems; I believe another issue and pull request should be opened to address that problem.
We will need to decide how PerfKit should behave when executing UnixBench on a >16 CPU system without this flag.
As of the last commit, I consider this change to be Ready for Review.
Connor,
Can you please help Carlos with this commit.
Thanks,
Tony
Sure, I'm looking now.
I get the same behavior. I opened #138 to track cleaning up the behavior when running on a machine with >16 cores and without --unixbench_all_cores.
I'm running UnixBench with the patch to verify, otherwise looks great.
@cmccoy Thanks for reviewing this commit.
Thanks, @meteorfox.
|
gharchive/pull-request
| 2015-02-25T22:34:38 |
2025-04-01T04:55:08.496173
|
{
"authors": [
"cmccoy",
"meteorfox",
"voellm"
],
"repo": "GoogleCloudPlatform/PerfKitBenchmarker",
"url": "https://github.com/GoogleCloudPlatform/PerfKitBenchmarker/pull/135",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2221422924
|
hide existing cluster from MP
Hide the options to choose existing GKE clusters and always create a new cluster.
/gcbrun
|
gharchive/pull-request
| 2024-04-02T20:50:48 |
2025-04-01T04:55:08.497751
|
{
"authors": [
"umeshkumhar"
],
"repo": "GoogleCloudPlatform/ai-on-gke",
"url": "https://github.com/GoogleCloudPlatform/ai-on-gke/pull/538",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
323018429
|
messaging/invalid-argument NodeJSAdmin
Hi, I'm working with Node.js and the Firebase SDK.
I'm trying to send an FCM notification as the documentation says: https://firebase.google.com/docs/cloud-messaging/admin/send-messages?hl=es-419
My code:
const admin = require('firebase-admin');
admin.initializeApp(); // assumes GOOGLE_APPLICATION_CREDENTIALS points at a service account
function testMessage(){
let token="f8mNlJq4VOY:APA91bF5_8ldOZEm34ajCfNx7hZ9_LhjUBQFDwZbtSnCNEzb1bEtMsXKlM8upyicvmnJ92xELZzDSxTMaeZrCrrau"
let message={
notification: {
title: "Account Deposit",
body: "A deposit to your savings account has just cleared."
},
data: {
score: '850',
time: '2:45'
},
token:token
};
admin.messaging().send(message)
.then((response) => {
// Response is a message ID string.
console.log('Successfully sent message:', response);
})
.catch((error) => {
console.log('Error sending message:', error);
});
}
But when I execute it, it shows this error:
Error sending message: { Error: Request contains an invalid argument. at FirebaseMessagingError.FirebaseError [as constructor] (C:\Users\Daniel\Documents\Monitora\Demos\BackEnd\APIMonitora\node_modules\firebase-admin\lib\utils\error.js:39:28) at FirebaseMessagingError.PrefixedFirebaseError [as constructor] (C:\Users\Daniel\Documents\Monitora\Demos\BackEnd\APIMonitora\node_modules\firebase-admin\lib\utils\error.js:85:28) at new FirebaseMessagingError (C:\Users\Daniel\Documents\Monitora\Demos\BackEnd\APIMonitora\node_modules\firebase-admin\lib\utils\error.js:241:16) at Function.FirebaseMessagingError.fromServerError (C:\Users\Daniel\Documents\Monitora\Demos\BackEnd\APIMonitora\node_modules\firebase-admin\lib\utils\error.js:271:16) at C:\Users\Daniel\Documents\Monitora\Demos\BackEnd\APIMonitora\node_modules\firebase-admin\lib\messaging\messaging-api-request.js:149:50 at at process._tickCallback (internal/process/next_tick.js:188:7) errorInfo: { code: 'messaging/invalid-argument', message: 'Request contains an invalid argument.' }, codePrefix: 'messaging' }
Please file this issue at the appropriate repository. Judging from the stack trace that seems to be firebase-admin. Thanks!
|
gharchive/issue
| 2018-05-14T23:41:57 |
2025-04-01T04:55:08.515300
|
{
"authors": [
"ALAxHxC",
"kjin"
],
"repo": "GoogleCloudPlatform/cloud-trace-nodejs",
"url": "https://github.com/GoogleCloudPlatform/cloud-trace-nodejs/issues/751",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1129049395
|
[Spanner]: Some tests are flaky
When run concurrently: https://source.cloud.google.com/results/invocations/23568d2c-60a0-4adc-8a51-cd3d7ec0f493/log
https://source.cloud.google.com/results/invocations/63f4d186-c056-4fdb-af71-f10ba8df9b57/log
https://github.com/googleapis/google-cloud-dotnet/pull/8363 Should fix this.
Failed CopyBackupTest.CopyBackup [51 ms]
Error Message:
Grpc.Core.RpcException : Status(StatusCode="FailedPrecondition", Detail="Cannot copy source backup projects/dotnet-docs-samples-tests/instances/my-instance/backups/my-test-database-backup because the backup is still being created. Please retry the operation once the pending backup is complete.")
Stack Trace:
at Grpc.Net.Client.Internal.HttpClientCallInvoker.BlockingUnaryCall[TRequest,TResponse](Method`2 method, String host, CallOptions options, TRequest request)
at Grpc.Core.Interceptors.InterceptingCallInvoker.<BlockingUnaryCall>b__3_0[TRequest,TResponse](TRequest req, ClientInterceptorContext`2 ctx)
at Grpc.Core.ClientBase.ClientBaseConfiguration.ClientBaseConfigurationInterceptor.BlockingUnaryCall[TRequest,TResponse](TRequest request, ClientInterceptorContext`2 context, BlockingUnaryCallContinuation`2 continuation)
at Grpc.Core.Interceptors.InterceptingCallInvoker.BlockingUnaryCall[TRequest,TResponse](Method`2 method, String host, CallOptions options, TRequest request)
at Google.Cloud.Spanner.Admin.Database.V1.DatabaseAdmin.DatabaseAdminClient.CopyBackup(CopyBackupRequest request, CallOptions options)
at Google.Api.Gax.Grpc.ApiCall.GrpcCallAdapter`2.CallSync(TRequest request, CallSettings callSettings)
at Google.Api.Gax.Grpc.ApiCallRetryExtensions.<>c__DisplayClass1_0`2.<WithRetry>b__0(TRequest request, CallSettings callSettings)
at Google.Api.Gax.Grpc.ApiCall`2.<>c__DisplayClass12_0.<WithCallSettingsOverlay>b__1(TRequest req, CallSettings cs)
at Google.Api.Gax.Grpc.ApiCall`2.Sync(TRequest request, CallSettings perCallCallSettings)
at Google.Cloud.Spanner.Admin.Database.V1.DatabaseAdminClientImpl.CopyBackup(CopyBackupRequest request, CallSettings callSettings)
at CopyBackupSample.CopyBackup(String sourceInstanceId, String sourceProjectId, String sourceBackupId, String targetInstanceId, String targetProjectId, String targetBackupId, DateTimeOffset expireTime) in C:\tmpfs\src\github\dotnet-docs-samples\spanner\api\Spanner.Samples\CopyBackup.cs:line 39
at CopyBackupTest.CopyBackup() in C:\tmpfs\src\github\dotnet-docs-samples\spanner\api\Spanner.Samples.Tests\CopyBackupTest.cs:line 41
and
Failed CreateBackupTest.TestCreateBackup [174 ms]
Error Message:
Assert.Equal() Failure
Expected: AlreadyExists
Actual: FailedPrecondition
Stack Trace:
at CreateBackupTest.TestCreateBackup() in C:\tmpfs\src\github\dotnet-docs-samples\spanner\api\Spanner.Samples.Tests\CreateBackupTest.cs:line 37
https://source.cloud.google.com/results/invocations/861571ba-2084-44cd-87fc-dbd92c9eb030/log
|
gharchive/issue
| 2022-02-09T21:41:37 |
2025-04-01T04:55:08.520961
|
{
"authors": [
"amanda-tarafa"
],
"repo": "GoogleCloudPlatform/dotnet-docs-samples",
"url": "https://github.com/GoogleCloudPlatform/dotnet-docs-samples/issues/1623",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1857026268
|
Update PaLM Docs to reflect Vertex AI default
[READ] Step 1: Are you in the right place?
Issues filed here should be about a feature request for a specific extension in this repository. To file a feature request that affects multiple extensions or the Firebase Extensions platform, please reach out to
Firebase support directly.
[REQUIRED] Step 2: Extension name
This feature request is for extension: palm-firestore-chatbot, palm-firestore-gen-text, palm-firestore-summarize-text, firestore-semantic-search (storage-label-videos, etc)
What feature would you like to see?
The docs still include this line:
⚠️ The PaLM API is currently in public preview. For details and limitations, see the [PaLM API documentation](https://developers.generativeai.google/guide/preview_faq).
Please ensure that you have already signed up for the [waitlist](https://makersuite.google.com/waitlist) and have been approved before installing the extension.
We should remove this from the top because Vertex AI is already GA and doesn't require a waitlist.
Possibly let's add a smaller warning saying that if you're using the Developer PaLM API, you still need to sign up for the waitlist, and to use Vertex AI for production use cases.
https://github.com/GoogleCloudPlatform/firebase-extensions/pull/158/files
I merged this in, which addresses the issue, but feel free to review the wording/formatting etc. to check it's OK.
|
gharchive/issue
| 2023-08-18T17:04:34 |
2025-04-01T04:55:08.524587
|
{
"authors": [
"cabljac",
"huangjeff5"
],
"repo": "GoogleCloudPlatform/firebase-extensions",
"url": "https://github.com/GoogleCloudPlatform/firebase-extensions/issues/162",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
358226416
|
Improve Naming Quality of the Docker Setup and Install Scripts
From the update in #1993, we need to make the scripts' naming align with the documentation: "install" is easily confused with "setup", and the install script really does the starting or running.
The smallest change would be to change:
current: install/scripts/docker_install_forseti.sh
proposed: install/scripts/docker_run_forseti.sh
#1998
This is fixed by @rvandegrift
|
gharchive/issue
| 2018-09-07T22:10:12 |
2025-04-01T04:55:08.526909
|
{
"authors": [
"blueandgold",
"jjawad-google"
],
"repo": "GoogleCloudPlatform/forseti-security",
"url": "https://github.com/GoogleCloudPlatform/forseti-security/issues/1996",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
172313782
|
Using Google Cloud Datastore API in Python Code
Hi @dhermes ,
I am trying to use Google Cloud Datastore in my Python Django project, which does not run on Google App Engine.
Is it possible to use Google Datastore without having the project run on Google App Engine?
If yes, can you please tell me how to retrieve the complete entity object or execute the query successfully?
The code snippet below prints the query object but throws an error after that.
(I have a valid project; it prints the correct project if I print the value of key.)
Code Snippet:
from gcloud import datastore
entity_kind = "EntityKind"
numeric_id = 1234xxx89
client = datastore.Client()
key = client.key(entity_kind, numeric_id)
query = client.query(kind=entity_kind)
print(query)
results = list(query.fetch())
print(results)
Error:
NotFound: 404 The project gxxxp does not exist or it does not contain an active App Engine application. Please visit http://console.developers.google.com to create a project or https://console.developers.google.com/appengine?project=gxxxp to add an App Engine application. Note that the app must not be disabled.
Thanks
Did you follow the recommendation in the error?
Thanks @dhermes for your reply.
I have a valid Google project, but no active app running on Google App Engine. I wish to use the Google Datastore API without using Google App Engine. Is it possible?
I don't think so. You should file an issue on https://github.com/GoogleCloudPlatform/google-cloud-datastore to double check (and also people in the future can find out there).
|
gharchive/issue
| 2016-08-21T09:59:03 |
2025-04-01T04:55:08.533982
|
{
"authors": [
"dhermes",
"naveensinghal"
],
"repo": "GoogleCloudPlatform/gcloud-python",
"url": "https://github.com/GoogleCloudPlatform/gcloud-python/issues/2152",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
155385992
|
Update README.rst
Returns None:
blob = bucket.get_blob('/remote/path/to/file.txt')
Returns blob:
blob = bucket.get_blob('remote/path/to/file.txt')
LGTM.
@jgeewax this is certainly not an issue where copyright assignment would matter: do we need to wait for CLA to be happy before merging?
I signed it!
Thank you for the patch!
@dhermes : In general, the safe bet is to get a CLA signed on everything. I agree it seems silly to require this for things like typo fixes, but it's still important.
|
gharchive/pull-request
| 2016-05-17T23:31:38 |
2025-04-01T04:55:08.536282
|
{
"authors": [
"davidraleigh",
"jgeewax",
"tseaver"
],
"repo": "GoogleCloudPlatform/gcloud-python",
"url": "https://github.com/GoogleCloudPlatform/gcloud-python/pull/1805",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
38996262
|
core: make gcloud-ruby repo public
Anyone should be able to read this message once this is public.
The gcloud-ruby repo is now public.
|
gharchive/issue
| 2014-07-29T14:49:37 |
2025-04-01T04:55:08.537119
|
{
"authors": [
"beriberikix",
"blowmage"
],
"repo": "GoogleCloudPlatform/gcloud-ruby",
"url": "https://github.com/GoogleCloudPlatform/gcloud-ruby/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
701352934
|
storage: missing samples
In Storage Sample Coverage, the following samples are missing or have the wrong region tag:
[x] storage_bucket_delete_default_kms_key
[x] storage_generate_encryption_key
[x] storage_view_bucket_iam_members
[x] storage_add_bucket_label
[x] storage_compose_file
[x] storage_remove_bucket_label
[x] storage_change_file_storage_class
[x] storage_enable_versioning
[ ] storage_set_metadata
[x] storage_set_bucket_public_iam
[x] storage_define_bucket_website_configuration
[x] storage_download_public_file
[x] storage_get_service_account
[x] storage_cors_configuration
[x] storage_disable_versioning
[x] storage_copy_file_archived_generation
[x] storage_object_csek_to_cmek
[ ] storage_change_default_storage_class
[x] storage_delete_file_archived_generation
[x] storage_remove_cors_configuration
@tritone @crwilcox What is the next step for this issue?
Reassigning to @JesseLovelace who is working on sample completeness. Jesse, is there any remaining work to be completed here or should this be closed out?
|
gharchive/issue
| 2020-09-14T19:06:14 |
2025-04-01T04:55:08.543864
|
{
"authors": [
"AlisskaPie",
"tbpg",
"tritone"
],
"repo": "GoogleCloudPlatform/golang-samples",
"url": "https://github.com/GoogleCloudPlatform/golang-samples/issues/1707",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
207842606
|
upload function validate option
The 'validate' boolean option in the optional array provided to the 'upload' function does not seem to actually validate the data being uploaded to GCS. I mistakenly executed the function with an undefined variable as the data object to be uploaded, and the upload went through without any exception; checking GCS, the 'file' had been uploaded, although it was empty when opened. In my case it was a txt file.
Hey there @migueru! Thanks for filing this issue.
In the case of an undefined variable being passed in, the underlying code would see that as a value of null (which actually would be valid to upload). The validate option calculates a hash of the provided data and sends it along while uploading; once the upload is complete, if the provided hash doesn't match the one computed on the server, the upload is rejected. The option doesn't prevent specific values passed in through $data from being sent upstream. I'll update the documentation to make sure null is listed as a valid type.
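To illustrate the mechanism (a language-agnostic sketch written in Python, not google-cloud-php's actual code; upload_with_validation and do_put are hypothetical names):
import base64
import hashlib

def upload_with_validation(data, do_put):
    # Hash exactly the bytes we are about to send; null/empty data hashes
    # just as validly as real data, which is why 'validate' cannot catch an
    # accidentally undefined variable.
    md5 = base64.b64encode(hashlib.md5(data).digest()).decode('ascii')
    # The server recomputes the hash on arrival and rejects a mismatch.
    return do_put(data, headers={'Content-MD5': md5})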
Noted, thank you @dwsupplee !
|
gharchive/issue
| 2017-02-15T15:49:38 |
2025-04-01T04:55:08.558217
|
{
"authors": [
"dwsupplee",
"migueru"
],
"repo": "GoogleCloudPlatform/google-cloud-php",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-php/issues/331",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
263257701
|
Fix tests and update log cmdlets
There are some changes in the Logging API so we have to change some of the logging cmdlets.
This change is
@ILMTitan PTAL, I added warnings for the parameters.
Thanks!
|
gharchive/pull-request
| 2017-10-05T20:35:58 |
2025-04-01T04:55:08.560338
|
{
"authors": [
"quoctruong"
],
"repo": "GoogleCloudPlatform/google-cloud-powershell",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-powershell/pull/558",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
346779236
|
Release BigQuery 1.5.0
For clustering feature.
@alixhami Do you want to try your hand at releasing this one?
We might want to wait for https://github.com/GoogleCloudPlatform/google-cloud-python/pull/5711
@tswast We likely also need to fix the missing bigquery-1.4.0 tag (I don't know if releasetool will work without it).
Thanks for taking care of the tag.
I think we need to land #5714 first, as it fixes non-flaky BQ system tests.
Seth is OOO today, so I'll take a stab at the open comments.
When can I expect a new version with clustering?
https://pypi.org/project/google-cloud-bigquery/1.4.0/#description
@chenleiwhu Track PR 5735.
The CI for doing the auto-release just failed due to a failure in the spanner tests https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7549
Assigning to Chris to cut a manual release.
The tag build just completed. The package is now live. https://pypi.org/project/google-cloud-bigquery/#history
|
gharchive/issue
| 2018-08-01T21:38:44 |
2025-04-01T04:55:08.565140
|
{
"authors": [
"chenleiwhu",
"tseaver",
"tswast"
],
"repo": "GoogleCloudPlatform/google-cloud-python",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-python/issues/5726",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
352933084
|
./google-cloud-sdk/install.sh error
Trying to install the Google Cloud SDK locally on a 64-bit Mac; just installed Python 2.7.15.
c02s10gwg8wp:~ szb7493$ /Applications/google-cloud-sdk/install.sh
Welcome to the Google Cloud SDK!
Traceback (most recent call last):
File "/Applications/google-cloud-sdk/bin/bootstrapping/install.py", line 12, in
import bootstrapping
File "/Applications/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 32, in
import setup # pylint:disable=g-import-not-at-top
File "/Applications/google-cloud-sdk/bin/bootstrapping/setup.py", line 66, in
DoAllRequiredChecks()
File "/Applications/google-cloud-sdk/bin/bootstrapping/setup.py", line 62, in DoAllRequiredChecks
properties.VALUES.core.allow_py3.GetBool()):
File "/Applications/google-cloud-sdk/lib/googlecloudsdk/core/properties.py", line 1740, in GetBool
value = _GetBoolProperty(self, named_configs.ActivePropertiesFile.Load(),
File "/Applications/google-cloud-sdk/lib/googlecloudsdk/core/configurations/named_configs.py", line 400, in Load
force_create=False).file_path])
File "/Applications/google-cloud-sdk/lib/googlecloudsdk/core/configurations/named_configs.py", line 439, in _ActiveConfig
config_name = _EffectiveActiveConfigName()
File "/Applications/google-cloud-sdk/lib/googlecloudsdk/core/configurations/named_configs.py", line 464, in _EffectiveActiveConfigName
config_name = _ActiveConfigNameFromFile()
File "/Applications/google-cloud-sdk/lib/googlecloudsdk/core/configurations/named_configs.py", line 504, in _ActiveConfigNameFromFile
.format(path, config.Paths().named_config_directory), exc)
googlecloudsdk.core.configurations.named_configs.NamedConfigFileAccessError: Active configuration name could not be read from: [/Users/szb7493/.config/gcloud/active_config]. Ensure you have sufficient read permissions on required active configuration in [/Users/szb7493/.config/gcloud/configurations].
Unable to read file [/Users/szb7493/.config/gcloud/active_config]: [Errno 13] Permission denied: '/Users/szb7493/.config/gcloud/active_config'
c02s10gwg8wp:~ szb7493$
Any clues on what this means and how to fix it?
@jens86 I'm sorry you are having trouble installing the Cloud SDK. Unfortunately, this repository is not where that SDK is maintained / supported. Please use one of the resources on the SDK support page.
|
gharchive/issue
| 2018-08-22T12:31:56 |
2025-04-01T04:55:08.571749
|
{
"authors": [
"jens86",
"tseaver"
],
"repo": "GoogleCloudPlatform/google-cloud-python",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-python/issues/5833",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
364242338
|
BigQuery: make QueryJob._query_results.total_rows public
Request to make the total_rows property public on QueryJob(), alongside the other stats.
My initial preference would be against exposing this directly as a property of the QueryJob, as it's part of the jobs.getQueryResults / tabledata.list response and not a member of the query stats metadata.
Is the interest in fetching the table statistics for the results without the extra fetch of destination table metadata, or are you interested in it related to progress while fetching row data?
The interest is to determine how many rows a query, when completed, has generated.
Fetching the destination table metadata is not valid if records are appended to an existing table - I would get the total number of rows, not the addition.
Note that totalRows exists in both jobs.getQueryResults and jobs.query. There is no need to call the former (except if the job is inserted), so the request is not unreasonable. Also, https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#totalBytesProcessed made it as a property of QueryJob(), so why not total_rows.
Thanks for clarifying the interest is in the effective delta.
For the case where you run a query with a destination table and a WRITE_APPEND disposition, the results (and total_rows) will represent the whole table. For mutation DML queries (e.g. UPDATE/DELETE), num_dml_affected_rows may help, but we should consider exposing something like the statistics.load.outputRows equivalent for query jobs (likely contingent on statement type).
If that sounds like what you're after, I'll request it for inclusion in the backend response. There's insufficient information in the existing response to correctly report the delta for the append case.
Let's cancel the request as I just suggested making the existing - and perfectly working - _query_results.total_rows a public property of QueryJob() or another object. I thought https://googlecloudplatform.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.job.QueryJob.result.html#google.cloud.bigquery.job.QueryJob.result would work but it doesn't, total_rows is None (myqueryjob.result().total_rows).
This was reported on SO too: https://stackoverflow.com/questions/48799898/bigquery-py-client-library-v0-28-fetch-result-from-table-query-job
I will just use the private _query_results.total_rows.
Thanks.
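For reference, the stopgap described above looks like this (the query and table are hypothetical, and _query_results is private, so this may break in future releases):
from google.cloud import bigquery

client = bigquery.Client()
job = client.query("SELECT name FROM `my_project.my_dataset.my_table`")
job.result()  # block until the query finishes

# The public surface didn't expose the count at the time, so fall back
# to the private attribute, as described above.
print(job._query_results.total_rows)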
|
gharchive/issue
| 2018-09-26T22:50:08 |
2025-04-01T04:55:08.578006
|
{
"authors": [
"shollyman",
"yiga2"
],
"repo": "GoogleCloudPlatform/google-cloud-python",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-python/issues/6117",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
216809680
|
Speech: Expose the GAPIC client config on the manual layer.
This PR is a response to #3165 but I somewhat doubt we should take it exactly as-is. This is meant to be a discussion starter.
Context
Definitely go read #3165 to understand the whole thing, but the basic situation is that the OP had a problem with the Speech API that he got around by overriding the GAPIC client_config. He did this by modifying the actual JSON file in the installed GAPIC, which probably is something we should not force our users to do.
Solution
The first and most obvious thing we should do is set a more sane default for this one configuration option in Speech. I am going to do that momentarily.
Additionally, I assert that we should expose the client_config as an option in our Client classes, so that our users do have the option to override these things.
This PR makes this change only for Speech, which is almost certainly the wrong final solution. Either we should do this in every API that depends on a GAPIC or we should do it in none of them. I wanted to put this in place so we can see what the solution looks like and discuss whether it should be done everywhere or nowhere. (If we do it everywhere, then the self._client_config part in Speech's Client class would migrate into core.)
Discussion
Should we do this?
@dhermes Can I get some feedback on the substance of the change? :-) As noted in the text, I am not sure whether we should actually do this.
@lukesneeringer Sorry I didn't notice the text. (I hate it when people ignore my carefully crafted text, so big time hypocrite there.)
ISTM a better option would be to take the entire GAPIC client as an optional Client argument? It's strange having constructor args that only apply to one of the transports.
@lukesneeringer I 2nd @dhermes. We should also test to see if this is happening on the HTTP side as well. If it is, I'm not sure how to pass the config parameters?
@dhermes I like that idea. Sold.
So, the API would look something like...
class Client(BaseClient):
    def __init__(self, credentials=None, http=None, gapic=None, use_gax=None):
        ...
I have to say I am kind of not a fan of a separate http and gapic argument, only one of which makes sense based on the ultimate _use_gax value. But, I hate backwards-incompatible changes more, so I think that ship has sailed.
(Also, it is a crying shame that we use the terms "gapic" and "gax" nearly interchangeably, but that ship has also sailed.)
Should it be spelled gapic or gapic_client? The former would match http but the latter would be clearer.
@lukesneeringer Why don't we bring the ship back to harbor? I'm fine dumping usage of GAX (GAPIC was not a term in use when the first use_gax argument was added).
I'm also fine overloading what WAS http as transport_object and then verifying that object based on which transport has been chosen.
Editor's Note: Pulling this discussion into a chat for expediency.
Discussed with @dhermes offline and we decided to punt on this question.
Summary:
We should fix the default client config for Speech. (googleapis/googleapis#297)
We should take a custom GAPIC, but we are not quite sure yet how we want that to look, based on possible features that we hope the GAPICs will get in Q2.
|
gharchive/pull-request
| 2017-03-24T14:53:45 |
2025-04-01T04:55:08.586228
|
{
"authors": [
"daspecster",
"dhermes",
"lukesneeringer"
],
"repo": "GoogleCloudPlatform/google-cloud-python",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-python/pull/3204",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1870366708
|
Windows fio
Add Windows fio testing.
Note that the Windows startup script is not currently being used because the guest attribute name for the scripts is different from Linux.
|
gharchive/pull-request
| 2023-08-28T19:42:19 |
2025-04-01T04:55:08.590576
|
{
"authors": [
"koln67"
],
"repo": "GoogleCloudPlatform/guest-test-infra",
"url": "https://github.com/GoogleCloudPlatform/guest-test-infra/pull/763",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
359380301
|
Switching to beta version
Is there any way to use the beta version of the APIs? I am mainly interested in the speech-to-text punctuation API.
Regards
@ishida66 Use the following proto file instead of version 1 proto file:
https://github.com/googleapis/google-cloud-java/blob/master/google-api-grpc/proto-google-cloud-speech-v1p1beta1/src/main/proto/google/cloud/speech/v1p1beta1/cloud_speech.proto
Then follow this doc to implement the code:
https://cloud.google.com/speech-to-text/docs/automatic-punctuation
What code do I use to use the beta version of the APIs with the Swift iOS sample project? Thanks!
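For reference, the flag the linked doc enables looks roughly like this in the Python beta client (the question above is about Swift/iOS, but the same v1p1beta1 field applies there; names below follow the Python client, and the audio URI is hypothetical):
from google.cloud import speech_v1p1beta1 as speech

client = speech.SpeechClient()
config = speech.types.RecognitionConfig(
    encoding=speech.enums.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_automatic_punctuation=True,  # the beta punctuation feature
)
audio = speech.types.RecognitionAudio(uri="gs://my-bucket/audio.raw")
response = client.recognize(config, audio)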
|
gharchive/issue
| 2018-09-12T08:59:00 |
2025-04-01T04:55:08.593862
|
{
"authors": [
"farhanarrafi",
"ishida66",
"share9"
],
"repo": "GoogleCloudPlatform/ios-docs-samples",
"url": "https://github.com/GoogleCloudPlatform/ios-docs-samples/issues/95",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
590604646
|
Add missing helloworld samples + delete excess logging code
✅Add functions_helloworld_error
✅Add functions_helloworld_gcs_generic
✅Delete excess logging code in SnippetsTests.java
@lesv FYI: added a functions_helloworld_method sample that I realized I forgot.
|
gharchive/pull-request
| 2020-03-30T21:21:24 |
2025-04-01T04:55:08.595204
|
{
"authors": [
"ace-n"
],
"repo": "GoogleCloudPlatform/java-docs-samples",
"url": "https://github.com/GoogleCloudPlatform/java-docs-samples/pull/2531",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1799816629
|
docs(samples): Add Dataflow BigQueryIO write snippets
Adds BigQuery I/O snippets for a Dataflow doc in progress
@reuvenlax @johnjcasey FYI
Please fix the lint error. Thanks!
Done, thanks!
|
gharchive/pull-request
| 2023-07-11T22:00:53 |
2025-04-01T04:55:08.596803
|
{
"authors": [
"VeronicaWasson",
"averikitsch"
],
"repo": "GoogleCloudPlatform/java-docs-samples",
"url": "https://github.com/GoogleCloudPlatform/java-docs-samples/pull/8414",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
63124524
|
Kubelet fails to provide stats for static pods
Since static pods are now exposed via the apiserver, users (heapster) naturally expect kubelet to provide stats for the static pods.
This is breaking heapster.
cc @yujuhong
I am not familiar with stats collection in kubelet. What type of status are you referring to here?
Not status; it's resource-related stats. This is one of the initial reasons I wanted to sync static pods back to the master.
I think we'll need to support pod translation for the following functions.
GetContainerInfo()
GetKubeletContainerLogs()
I will work on that.
On a side note, is Heapster breaking, breaking? Or just complaining loudly?
@vmarmol: Heapster fails stats collection altogether if kubelet errors out. I have filed a Heapster issue to fix that.
@vishh how come the monitoring e2e test couldn't detect such a failure in the first place?
The e2e test looks for all monitoring pods to be present. I went the way of looking for all pods to be picked up by Heapster, but was told that the pods in the cluster could be very dynamic in the future, and so the test should only look for the pods it controls. The test cannot launch any static pods, and so it did not catch this issue.
One option is to run the monitoring test serially and then run other tests in parallel. @zmerlynn do you think it is possible to never run the monitoring e2e test in parallel with other e2e tests?
We currently don't run any e2e tests in parallel, but presumably we'll have to make such exceptions at some point. Why, though?
I intend the monitoring e2e test to ensure that all pods existing in the cluster, irrespective of their origin, are captured by the monitoring pipeline.
|
gharchive/issue
| 2015-03-20T00:28:43 |
2025-04-01T04:55:08.609228
|
{
"authors": [
"dchen1107",
"vishh",
"vmarmol",
"yujuhong",
"zmerlynn"
],
"repo": "GoogleCloudPlatform/kubernetes",
"url": "https://github.com/GoogleCloudPlatform/kubernetes/issues/5688",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
66957595
|
Integration tests are failing consistently with "FAILED: pods never started running timed out waiting for the condition"
On my local test cluster:
E0407 17:08:46.338567 752 reflector.go:143] Failed to watch *api.Node: field label not supported: metadata.name
I0407 17:08:46.681162 752 integration.go:267] Check whether pod nginx-controller-tdiha.test exists on node ""
I0407 17:08:46.681835 752 integration.go:269] Pod nginx-controller-tdiha.test is not bound to a host yet
I0407 17:08:46.892656 752 integration.go:285] Error on creating endpoints: endpoints "service1" not found
I0407 17:08:47.680573 752 integration.go:267] Check whether pod nginx-controller-tdiha.test exists on node ""
I0407 17:08:47.680722 752 integration.go:269] Pod nginx-controller-tdiha.test is not bound to a host yet
I0407 17:08:47.894832 752 integration.go:285] Error on creating endpoints: endpoints "service1" not found
I0407 17:08:48.679410 752 integration.go:267] Check whether pod nginx-controller-tdiha.test exists on node ""
I0407 17:08:48.679497 752 integration.go:269] Pod nginx-controller-tdiha.test is not bound to a host yet
F0407 17:08:48.679523 752 integration.go:436] FAILED: pods never started running timed out waiting for the condition
!!! Error in hack/test-integration.sh:47
'"${KUBE_OUTPUT_HOSTBIN}/integration" --v=2 --apiVersion="$1"' exited with status 255
Call stack:
1: hack/test-integration.sh:47 runTests(...)
2: hack/test-integration.sh:60 main(...)
Exiting with status 1
+++ [0407 17:08:48] Integration test cleanup complete
!!! Error in /Users/quinton/code/go/src/github.com/GoogleCloudPlatform/kubernetes/hack/e2e-internal/../../cluster/gce/../../cluster/../build/../build/common.sh:405
'"${docker_cmd[@]}" "$@"' exited with status 1
Call stack:
1: /Users/quinton/code/go/src/github.com/GoogleCloudPlatform/kubernetes/hack/e2e-internal/../../cluster/gce/../../cluster/../build/../build/common.sh:405 kube::build::run_build_command(...)
2: /Users/quinton/code/go/src/github.com/GoogleCloudPlatform/kubernetes/hack/e2e-internal/../../cluster/gce/../../cluster/../build/release.sh:36 main(...)
Exiting with status 1
2015/04/07 10:08:53 Error running build-release: exit status 1
2015/04/07 10:08:53 Error building. Aborting.
exit status 1
This is the version I'm running:
commit ef3cdb2f18d9bfb1c6fd35f631d68a78694a778d
Merge: 621e41e ba1ad9f
Author: Victor Marmol vmarmol@google.com
Date: Tue Apr 7 11:00:16 2015 -0700
Merge pull request #6491 from yifan-gu/depreciate_getkubeletdockercontainers
kubelet: Refactor RunInContainer/ExecInContainer/PortForward.
Curiously, Travis does not seem to be running into the same problem:
https://travis-ci.org/GoogleCloudPlatform/kubernetes/builds/57520499 Succeeded.
@vmarmol: Can you take a look at the failure?
Taking a look.
Passes locally with hack/test-integration.sh but fails when running before the e2e. Taking a deeper look.
Could be related to #6495
I believe @yujuhong is correct. The error logs are spammed with:
E0407 19:43:13.848128 749 reflector.go:143] Failed to watch *api.Node: field label not supported: metadata.name
While the success ones don't have any traces of it. The failure also happens when I sync before the PR in question.
Will run one more test to verify and close as a dup.
Closing as a dup of #6495
I don't think this is related to the node watch issue because the test just failed on my desktop with v1beta3 (which should be the working version).
Some relevant messages from the log:
I0407 19:56:09.611047 1366 kubelet.go:1294] Creating pod infra container for "nginx-controller-6qgot_test"
I0407 19:56:09.611476 1366 event.go:200] Event(api.ObjectReference{Kind:"Pod", Namespace:"test", Name:"nginx-controller-6qgot", UID:"124c545c-dd60-11e4-b87f-0242ac110251", APIVersion:"v1beta3", ResourceVersion:"201", FieldPath:"implicitly required container POD"}): reason: 'pulled' Successfully pulled image "gcr.io/google_containers/pause:0.8.0"
I0407 19:56:09.611880 1366 event.go:200] Event(api.ObjectReference{Kind:"Pod", Namespace:"test", Name:"nginx-controller-6qgot", UID:"124c545c-dd60-11e4-b87f-0242ac110251", APIVersion:"v1beta3", ResourceVersion:"201", FieldPath:"implicitly required container POD"}): reason: 'created' Created with docker id /k8s_POD.d41d03ce_nginx-controller-6qgot_test_124c545c-dd60-11e4-b87f-0242ac110251_cd431e36
I0407 19:56:09.611908 1366 event.go:200] Event(api.ObjectReference{Kind:"Pod", Namespace:"test", Name:"nginx-controller-6qgot", UID:"124c545c-dd60-11e4-b87f-0242ac110251", APIVersion:"v1beta3", ResourceVersion:"201", FieldPath:"implicitly required container POD"}): reason: 'started' Started with docker id /k8s_POD.d41d03ce_nginx-controller-6qgot_test_124c545c-dd60-11e4-b87f-0242ac110251_cd431e36
I0407 19:56:09.612329 1366 event.go:200] Event(api.ObjectReference{Kind:"Pod", Namespace:"test", Name:"nginx-controller-6qgot", UID:"124c545c-dd60-11e4-b87f-0242ac110251", APIVersion:"v1beta3", ResourceVersion:"201", FieldPath:"spec.containers{nginx}"}): reason: 'created' Created with docker id /k8s_nginx.bbc10af3_nginx-controller-6qgot_test_124c545c-dd60-11e4-b87f-0242ac110251_b0c11156
I0407 19:56:09.612410 1366 event.go:200] Event(api.ObjectReference{Kind:"Pod", Namespace:"test", Name:"nginx-controller-6qgot", UID:"124c545c-dd60-11e4-b87f-0242ac110251", APIVersion:"v1beta3", ResourceVersion:"201", FieldPath:"spec.containers{nginx}"}): reason: 'started' Started with docker id /k8s_nginx.bbc10af3_nginx-controller-6qgot_test_124c545c-dd60-11e4-b87f-0242ac110251_b0c11156
I0407 19:56:10.016397 1366 integration.go:267] Check whether pod nginx-controller-6qgot.test exists on node ""
I0407 19:56:10.016451 1366 integration.go:269] Pod nginx-controller-6qgot.test is not bound to a host yet
I0407 19:56:11.016362 1366 integration.go:267] Check whether pod nginx-controller-6qgot.test exists on node ""
I0407 19:56:11.016389 1366 integration.go:269] Pod nginx-controller-6qgot.test is not bound to a host yet
I0407 19:56:11.621338 1366 kubelet.go:1124] No Infra Container for "nginx-controller-k3n14_test" found. All containers will be restarted.
...
I0407 19:56:12.016385 1366 integration.go:267] Check whether pod nginx-controller-6qgot.test exists on node ""
I0407 19:56:12.016459 1366 integration.go:269] Pod nginx-controller-6qgot.test is not bound to a host yet
I0407 19:56:13.016337 1366 integration.go:267] Check whether pod nginx-controller-6qgot.test exists on node ""
I0407 19:56:13.016395 1366 integration.go:269] Pod nginx-controller-6qgot.test is not bound to a host yet
I0407 19:56:14.016284 1366 integration.go:267] Check whether pod nginx-controller-6qgot.test exists on node ""
I0407 19:56:14.016327 1366 integration.go:269] Pod nginx-controller-6qgot.test is not bound to a host yet
E0407 19:56:14.729656 1366 event.go:182] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"127.0.0.1.13d2d37c162a5914", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"online", Message:"Node 127.0.0.1 is now online", Source:api.EventSource{Component:"kubelet", Host:"127.0.0.1"}, FirstTimestamp:util.Time{Time:time.Time{sec:63564033361, nsec:731148052, loc:(*time.Location)(0x19a3020)}}, LastTimestamp:util.Time{Time:time.Time{sec:63564033361, nsec:731148052, loc:(*time.Location)(0x19a3020)}}, Count:1}': 'the server responded with the status code 405 but did not return more information (post events)' (will not retry!)
I0407 19:56:15.016332 1366 integration.go:267] Check whether pod nginx-controller-6qgot.test exists on node ""
I0407 19:56:15.016397 1366 integration.go:269] Pod nginx-controller-6qgot.test is not bound to a host yet
I0407 19:56:16.016290 1366 integration.go:267] Check whether pod nginx-controller-6qgot.test exists on node ""
I0407 19:56:16.016321 1366 integration.go:269] Pod nginx-controller-6qgot.test is not bound to a host yet
I0407 19:56:17.016399 1366 integration.go:267] Check whether pod nginx-controller-6qgot.test exists on node ""
I0407 19:56:17.016480 1366 integration.go:269] Pod nginx-controller-6qgot.test is not bound to a host yet
F0407 19:56:17.016508 1366 integration.go:436] FAILED: pods never started running timed out waiting for the condition
Note that nginx-controller-6qgot.test has already started on the kubelet (see the events about starting containers), but the integration test still did not see the Host field set on the pod.
I am not familiar with the list&watch usage. Could it be that at the time of listing, the Host of the pods was not set yet? @lavalamp
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cmd/integration/integration.go#L430
#6495 is unrelated
@yujuhong yeah that code is subtly broken; the replication controller can now count pods before they've been assigned. So that list is getting a pod with a "" for its host, and then the podsOnMinions function will never meet its goal.
Possible fix: change podsOnMinions to first wait for the pods to be assigned. Or do that in a separate step.
I suspect https://github.com/GoogleCloudPlatform/kubernetes/issues/6525#issuecomment-90719360 is a race that has always existed, and not the reason the test is suddenly failing consistently. The rc never waited for host assignment to count toward its replicas; even in the fillCurrentState world we just did a list for the selectors followed by filtering active pods (https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/controller/replication_controller.go#L182). So it shouldn't be suddenly failing because of that list race condition?
This is the most common flake now that I've been looking at these all day.
There may be more than one cause.
@lavalamp really? I've been unable to replicate this for about a day. Can you provide some logs (or links) for the failures to help debugging?
@lavalamp, #6582 did not fix the race condition?
Here's a recent example: https://app.shippable.com/builds/552704855cb2750c00d8b0ff
It probably has a different cause.
@lavalamp, there is no message such as "is not bound to a host yet" in that build, so it should be a different bug. In fact, it's likely just another manifestation of the performance/timeout problem in #6651. I just submitted a PR to (hopefully) address that by limiting the number of concurrent tests. We can probably move the discussion over there, unless there is evidence that suggests otherwise (e.g. kubelet's dropping pods, etc).
More examples:
https://app.shippable.com/builds/552865b0892aba0c00bd682e
https://travis-ci.org/GoogleCloudPlatform/kubernetes/jobs/58030039
Wait, those are both from my PR and I think I caused that. It's been a long day...
|
gharchive/issue
| 2015-04-07T18:13:36 |
2025-04-01T04:55:08.628303
|
{
"authors": [
"bprashanth",
"lavalamp",
"quinton-hoole",
"vishh",
"vmarmol",
"yujuhong"
],
"repo": "GoogleCloudPlatform/kubernetes",
"url": "https://github.com/GoogleCloudPlatform/kubernetes/issues/6525",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
75732052
|
added ISCSI volume plugin to PersistentVolumeSource
Second attempt. Previous attempt #7734.
@ArtfulCoder better luck this time?
quick scan LGTM. still @ArtfulCoder for more detail
@ArtfulCoder PTAL? Thanks!
Rebased, generated conversions, and fixed a missing PV support thing in CanSupport func.
@ArtfulCoder PTAL? This should be good to go.
Abhi is out for a few days, sorry.
LGTM
This is breaking travis now, it doesn't have generated funds.
"deep copy funcs"
Looking into it.
Got them. I'll send another PR for the generated funcs.
|
gharchive/pull-request
| 2015-05-12T20:41:52 |
2025-04-01T04:55:08.632356
|
{
"authors": [
"markturansky",
"smarterclayton",
"thockin"
],
"repo": "GoogleCloudPlatform/kubernetes",
"url": "https://github.com/GoogleCloudPlatform/kubernetes/pull/8133",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
177316167
|
Issue #11 Create logging directory for cloud debugger
This fixes #11 in openjdk-runtime, but the mkdir in jetty-runtime needs to be removed (separate PR coming)
@meltsufin please review
Does the cloud debugger need a logging directory?
Don't mind my previous comment.
LGTM
|
gharchive/pull-request
| 2016-09-15T23:39:06 |
2025-04-01T04:55:08.705326
|
{
"authors": [
"gregw",
"meltsufin"
],
"repo": "GoogleCloudPlatform/openjdk-runtime",
"url": "https://github.com/GoogleCloudPlatform/openjdk-runtime/pull/12",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1465442111
|
Metrics example not working
I have tried the metrics example in this repository, and metrics seemed to be sent; see below:
❯ go build -o metrics
❯ ./metrics
2022/11/27 14:25:41 Most recent data: counter 190, observer 15.14; histogram 2.8
2022/11/27 14:25:51 Most recent data: counter 151, observer 14.69; histogram 2.35
2022/11/27 14:26:01 Most recent data: counter 155, observer 13.54; histogram 1.2
2022/11/27 14:26:11 Most recent data: counter 110, observer 13.39; histogram 1.05
2022/11/27 14:26:21 Most recent data: counter 102, observer 12.84; histogram 0.5
2022/11/27 14:26:31 Most recent data: counter 103, observer 12.49; histogram 0.15
2022/11/27 14:26:41 Most recent data: counter 165, observer 14.84; histogram 2.5
2022/11/27 14:26:51 Most recent data: counter 128, observer 15.34; histogram 3
2022/11/27 14:27:01 Most recent data: counter 136, observer 16.34; histogram 4
2022/11/27 14:27:11 Most recent data: counter 168, observer 16.29; histogram 3.95
2022/11/27 14:27:21 Most recent data: counter 157, observer 15.29; histogram 2.95
2022/11/27 14:27:31 Most recent data: counter 183, observer 15.34; histogram 3
2022/11/27 14:27:41 Most recent data: counter 140, observer 13.89; histogram 1.55
2022/11/27 14:27:51 Most recent data: counter 197, observer 14.74; histogram 2.4
2022/11/27 14:28:01 Most recent data: counter 114, observer 12.44; histogram 0.1
However, nothing is displayed in the dashboard in Cloud Monitoring.
Am I doing something wrong? The Cloud Monitoring API is enabled.
@apichick were you able to resolve the issue? Is there anything we can improve in the docs to have helped you?
|
gharchive/issue
| 2022-11-27T13:31:23 |
2025-04-01T04:55:08.707287
|
{
"authors": [
"apichick",
"dashpole"
],
"repo": "GoogleCloudPlatform/opentelemetry-operations-go",
"url": "https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/issues/536",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2443277540
|
[Testing] DCGM tests: Install CUDA toolkit for exercise script
Description
The new DCGM exercise script needs the CUDA toolkit. Copy the NVML install script over to DCGM for RHEL to install CUDA.
Related issue
b/356897271
How has this been tested?
Integration tests for DCGM pass on RHEL and CentOS.
Checklist:
Unit tests
[x] Unit tests do not apply.
[ ] Unit tests have been added/modified and passed for this PR.
Integration tests
[ ] Integration tests do not apply.
[x] Integration tests have been added/modified and passed for this PR.
Documentation
[x] This PR introduces no user visible changes.
[ ] This PR introduces user visible changes and the corresponding documentation change has been made.
Minor version bump
[x] This PR introduces no new features.
[ ] This PR introduces new features, and there is a separate PR to bump the minor version since the last release already.
[ ] This PR bumps the version.
NVML and DCGM tests started to fail on Ubuntu 20.04 after the new CUDA 12.6 release. They fail for certain GPU models, so it's probably a driver compatibility issue. https://screenshot.googleplex.com/4baa2aJaDVdAiLU
Will merge this change for now.
|
gharchive/pull-request
| 2024-08-01T19:45:16 |
2025-04-01T04:55:08.712957
|
{
"authors": [
"LujieDuan"
],
"repo": "GoogleCloudPlatform/ops-agent",
"url": "https://github.com/GoogleCloudPlatform/ops-agent/pull/1771",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
405493127
|
Quick question: why is the number of cores for the driver pod 0.1?
driver:
cores: 0.1
coreLimit: "200m"
memory: "512m"
I don't quite understand the 0.1 here.
This means 100 millicpus in Kubernetes. The driver.cores field gets translated into the CPU request of the driver pod.
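A quick sketch of that translation (illustrative only):
def cores_to_millicpu(cores: float) -> str:
    # Kubernetes expresses fractional CPUs in millicpu ("m") units.
    return f"{int(cores * 1000)}m"

assert cores_to_millicpu(0.1) == "100m"  # matches driver.cores: 0.1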
|
gharchive/issue
| 2019-01-31T23:56:18 |
2025-04-01T04:55:08.720865
|
{
"authors": [
"JoeyLuffa",
"liyinan926"
],
"repo": "GoogleCloudPlatform/spark-on-k8s-operator",
"url": "https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/issues/386",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
253461829
|
[CLOSED] Fix typo in master kubeadm.conf podSubset->podSubnet
Issue by viglesiasce
Thursday Aug 17, 2017 at 22:58 GMT
Originally opened as https://github.com/danisla/terraform-google-k8s-gce/pull/1
viglesiasce included the following code: https://github.com/danisla/terraform-google-k8s-gce/pull/1/commits
Comment by danisla
Thursday Aug 17, 2017 at 23:04 GMT
No wonder why that wasn't working...
|
gharchive/issue
| 2017-08-28T21:25:54 |
2025-04-01T04:55:08.724891
|
{
"authors": [
"danisla"
],
"repo": "GoogleCloudPlatform/terraform-google-k8s-gce",
"url": "https://github.com/GoogleCloudPlatform/terraform-google-k8s-gce/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1386650293
|
Bqml vertex model registry
Removed two repeated Vertex AI resources in the notebook objective section for clarity.
If you are opening a PR for Official Notebooks under the notebooks/official folder, follow this mandatory checklist:
[x] Use the notebook template as a starting point.
[x] Follow the style and grammar rules outlined in the above notebook template.
[x] Verify the notebook runs successfully in Colab since the automated tests cannot guarantee this even when it passes.
[ ] Passes all the required automated checks. You can locally test for formatting and linting with these instructions.
[x] You have consulted with a tech writer to see if tech writer review is necessary. If so, the notebook has been reviewed by a tech writer, and they have approved it.
[x] This notebook has been added to the CODEOWNERS file under the Official Notebooks section, pointing to the author or the author's team.
[x] The Jupyter notebook cleans up any artifacts it has created (datasets, ML models, endpoints, etc) so as not to eat up unnecessary resources.
1 DISPLAY_NAME = "video_classification" + UUID
Step #5: ----> 3 job = aip.PipelineJob(
Step #5: 4 display_name=DISPLAY_NAME,
Step #5: 5 template_path="video_classification_pipeline.json",
Step #5: 6 parameter_values=parameters,
Step #5: 7 enable_caching=False,
Step #5: 8 )
Step #5: 10 job.run(service_account=SERVICE_ACCOUNT)
Step #5:
Step #5: NameError: name 'aip' is not defined
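The traceback suggests the cell simply ran without the usual Vertex AI SDK import; adding it should resolve the NameError. A sketch mirroring the failing cell (project and location are placeholders):
from google.cloud import aiplatform as aip

aip.init(project="my-project", location="us-central1")

job = aip.PipelineJob(
    display_name=DISPLAY_NAME,
    template_path="video_classification_pipeline.json",
    parameter_values=parameters,
    enable_caching=False,
)
job.run(service_account=SERVICE_ACCOUNT)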
|
gharchive/pull-request
| 2022-09-26T20:09:50 |
2025-04-01T04:55:08.736694
|
{
"authors": [
"andrewferlitsch",
"soheilazangeneh"
],
"repo": "GoogleCloudPlatform/vertex-ai-samples",
"url": "https://github.com/GoogleCloudPlatform/vertex-ai-samples/pull/993",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1781269450
|
No Server communication
Based on a very brief look at your code it looks like any processing of the death screen is done completely on the client side. This means that anyone modifying the client has complete control over the death screen's behaviour.
As you are adding timeouts and the options to hide some buttons it appears that you are trying to prevent players from doing some actions normally possible in vanilla. Even if you are not trying to do so server admins will probably use this mod for said purpose. This means you may want to implement some sort of communication between server and client.
My idea is as follows:
Whenever a player dies, the server sends the death screen configuration within the death packet. The client then uses this configuration to display the death screen while the server keeps track of disallowed actions. When the client attempts to send wrong packets, the server simply opens the death screen again to resynchronise the client.
This would prevent players from cheating and add the ability to change the death screen based on different conditions.
Never trust the client. Ever.
The death screen is completely client-side in vanilla, this just adds an animation before showing the vanilla one for novelty, there are no game mechanics tied to this.
Based on a very brief look at your code it looks like any processing of the death screen is done completely on the client side. This means that anyone modifying the client has complete control over the death screen's behaviour.
This is what this mod does, it modifies the death screen and has complete control over its behavior like any other mod could do.
As you are adding timeouts and the options to hide some buttons it appears that you are trying to prevent players from doing some actions normally possible in vanilla.
There are no options to control the death screen, as mentioned before it just shows an animation before showing the vanilla screen, it just delays the ability to press the "Respawn"/"Main Menu" buttons, not hiding them.
Even if you are not trying to do so server admins will probably use this mod for said purpose. This means you may want to implement some sort of communication between server and client.
What do you mean by preventing "actions" by server admins? What "actions" are you talking about, and how would they do that, given that the mod is completely client-side and doesn't require a server install? It does nothing on a server.
Whenever a player dies the server sends the death screen configuration within the death packet. The client then uses tis configuration to display the death screen while the server keeps track of disallowed actions. When the client attempts to send wrong packets the server simply opens the death screen again to resynchronise the client.
This would prevent players from cheating and add the ability to change the death screen based on different conditions.
Never trust the client. Ever.
Vanilla sends a packet to the client to show the death screen when the player dies, when opening the death screen I add mine in between.
I have no idea what you are talking about with sending the wrong packet related to the death screen and cheating. This is no different from what vanilla does it or any other mod could do it.
By "actions" I meant stuff like respawning. The buttans are hidden so they are essentially prevented. By "wrong packets" I mean packets like the respawn packet sent before the timer ran out.
Note: This is just a suggestion. Feel free to close the issue if you don't want to implement it.
The buttons are just hidden until after the animation; as I said, it's not meant to be a game mechanic to delay respawning.
The purpose is to add a custom death screen that the player can add to any instance for novelty use, not some complicated anti-"cheat" measure to stop skipping a roughly 8-second death animation in some strange game mode where this specific mod is required.
If that were a concern, there is probably a mod/plugin specifically for handling it; if not, it's probably something the game mode creators would build themselves.
If a mod added a special death screen before mine, that one would be shown after the animation rather than the vanilla one; for example, in OldSchoolHardcore I replace the screen to change the buttons, and it's fully compatible with this mod.
Like you said, "never trust the client": any mod could replace or modify this in any way. A better approach would be to block respawning on the server until the time is over and inform the player about it via a specialized mod.
|
gharchive/issue
| 2023-06-29T18:10:03 |
2025-04-01T04:55:08.781494
|
{
"authors": [
"Bimi124",
"GoryMoon"
],
"repo": "GoryMoon/YouDied",
"url": "https://github.com/GoryMoon/YouDied/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2334478441
|
Fix cargo config file
Since cargo v1.38, .cargo/config has been deprecated in favour of config.toml. This updates the config file to stop later versions of cargo from complaining, while also putting a symlink at .cargo/config to ensure continued compatibility with versions < 1.38.
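The migration itself is tiny; expressed as a script, it amounts to something like this (illustrative, not the PR's actual mechanism):
from pathlib import Path

cargo_dir = Path(".cargo")
old = cargo_dir / "config"
new = cargo_dir / "config.toml"

old.rename(new)           # satisfy cargo >= 1.38, which warns on .cargo/config
old.symlink_to(new.name)  # keep cargo < 1.38 working via the old path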
apologies for the spam, wasn't sure how to get the CI checks to run without turning this into a pull request
|
gharchive/pull-request
| 2024-06-04T22:33:05 |
2025-04-01T04:55:08.802334
|
{
"authors": [
"QuantumBJump"
],
"repo": "GothenburgBitFactory/taskchampion",
"url": "https://github.com/GothenburgBitFactory/taskchampion/pull/393",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
288385690
|
Improve flat_db
remove support + tests for non-braced content;
write cms files in parentheses;
better handling of empty files with whitespace.
Strictly, this is a breaking change.
Wrapping all the cms code in parentheses makes the code look like this:
({
  oh: 'hi'
})
so it becomes valid JS code and IDEs stop complaining about it. *.js files without parentheses are still a valid format for Enduro.
This is mainly inspired by this issue.
Since all the tests are failing now, I can't tell if this pull request is safe. It works well on my projects, though.
let's give it a try :-)
tests are passing and stuff seems to be working for me, thank you @CosmoMyzrailGorynych for this 👍
|
gharchive/pull-request
| 2018-01-14T05:58:59 |
2025-04-01T04:55:08.813949
|
{
"authors": [
"CosmoMyzrailGorynych",
"Gottwik"
],
"repo": "Gottwik/Enduro",
"url": "https://github.com/Gottwik/Enduro/pull/214",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1186620185
|
Design pattern template
As a: Design system team
I want to: Create a pattern/framework that can be used by the pattern squads to document their patterns
so that: Documentation outputs will be consistent from team to team, making it easier for a consumer of patterns to navigate between patterns.
Acceptance criteria
[ ] Pattern template will reference co-design session outputs
[ ] Pattern template is reviewed by all squad team leaders
Presentable outcomes
[ ] This 1st draft can be used to document a pattern from one of the pattern squads
Sprint Ready Checklist
Acceptance criteria defined
Team has defined steps to satisfy acceptance criteria
Acceptance criteria is verifiable / testable
External / 3rd Party dependencies identified
Related issues:
Resources:
Mockups:
Testing URL: [If applicable a URL to the testing branch]
As discussed as a group, I have outlined a proposal for a change to the structure of the pattern working groups, and a reprioritization of the work overall to incorporate the increased capacity of the design system team and a new method of engagement from the service teams.
https://goa-dio.atlassian.net/wiki/spaces/DS/pages/2286551153/WIP+-+Pattern+squad+working+groups+-+Outline+of+proposal+for+a+change+to+the+structure+of+the+pattern+working+groups
I created a template as a starting point for documentation within Figma, but as each pattern has a different scope and applicable guidance, it will only be so helpful as a common starting point. https://www.figma.com/file/UbIKwjbxuPUvoFNYvMBGpe/?node-id=1610%3A207868
|
gharchive/issue
| 2022-03-30T15:19:13 |
2025-04-01T04:55:08.821857
|
{
"authors": [
"Spark450",
"twjeffery"
],
"repo": "GovAlta/ui-components",
"url": "https://github.com/GovAlta/ui-components/issues/534",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1718249683
|
Add point position inputs to Path tool options bar
In place of these red boxes, I'd like to have two number inputs for X and Y values that display the position of the currently selected point.
Other tools have widgets there, such as the Pen tool:
That bar where the widgets go is called the options bar.
If no points are selected, or if multiple points are selected, the number inputs should be disabled.
I'll give this a go :)
Hi @hexofyore @olit123 just checking in how things are going and if you need any help :)
@Keavon I was busy. I am going to do it this weekend.
No worries, thanks for the update :)
@Keavon Hey yeh it's been a busy couple of weeks for me, slowly trying to get to grips with this project and prerequisite knowledge in my spare time but I am very noob. Will definitely be asking questions when I'm ready.
I can try this if it's still available
Go for it @omagdy7!
I would like to try this if there is if it's available. As a side note, I got the feature working on my fork to learn the code base, if that helps anyone who is trying this in the future.
Awesome, good to hear @mobile-bungalow! I've assigned you to this issue. Please open a PR when you're ready!
|
gharchive/issue
| 2023-05-20T18:28:35 |
2025-04-01T04:55:08.860622
|
{
"authors": [
"Keavon",
"hexofyore",
"mobile-bungalow",
"olit123",
"omagdy7"
],
"repo": "GraphiteEditor/Graphite",
"url": "https://github.com/GraphiteEditor/Graphite/issues/1229",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1511208227
|
Bezier-rs: Add self_intersection to subpath
This PR adds a naive brute-force implementation of the self_intersections function to the Subpath library. This PR also updates the function description of the intersections function to include the minimum_separation variable.
Hmm, so I'm seeing two issues unfortunately.
First, the endpoints are occasionally being counted as intersections. I'm not 100% sure why, but just wiggling the endpoints around on the visualization and you'll see that they occasionally flash red.
Second, I'm seeing this rust error (in the above screenshot) which is causing the visualization to crash. The conditions for the crash seem unclear, but it's very easy to trigger - just try and make any of the segments a straight line.
For the endpoints showing up as an intersection point, I need to manually filter them out because, the way they are currently stored, the endpoint of one bezier curve will intersect with the start point of the next bezier curve in the subpath. I can solve this by increasing the minimum separation that t-values must satisfy as a stopgap solution.
For the recursive structure error, I think it's originating from the cubic bezier itself. It's reproducible if you make a cubic bezier a straight line. Will investigate.
Unfortunately, I'm still seeing flickering on the endpoints. I'm not sure if modifying the minimum separation argument will work, because that ensures there won't be two points at a particular distance from one another - but I don't think that's the issue, because I don't know when there will be two intersections at the endpoint. I'm thinking we should filter intersections which occur within some epsilon of t=0, and have that be a configurable argument or something. Otherwise, we should add an argument called include_endpoints to the intersections function, which is true by default.
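A sketch of that filtering idea (Python pseudocode for brevity; the library itself is Rust, and the epsilon here is an arbitrary placeholder):
def filter_endpoint_intersections(t_values, eps=1e-4):
    # Drop intersections within eps of a segment endpoint (t=0 or t=1);
    # adjacent segments in a subpath always meet there by construction.
    return [t for t in t_values if eps < t < 1.0 - eps]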
Tested it locally and flickering is gone, even for low error. Great job @Androxium!
Rebase work onto this new PR https://github.com/GraphiteEditor/Graphite/pull/1035
|
gharchive/pull-request
| 2022-12-26T20:07:40 |
2025-04-01T04:55:08.865195
|
{
"authors": [
"Androxium",
"RobNadal"
],
"repo": "GraphiteEditor/Graphite",
"url": "https://github.com/GraphiteEditor/Graphite/pull/915",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
110634517
|
Wrong plugin directory location for .deb
The graylog-server plugin directory is located at /usr/share/graylog-server/plugin, but the graylog-plugin-slack .deb package installed the .jar file at /usr/share/graylog2-server/plugin/graylog-plugin-slack-1.1.5.jar. Therefore, the Slack alarm callback doesn't show up in the Graylog web interface.
Thank you for the report!
|
gharchive/issue
| 2015-10-09T10:51:43 |
2025-04-01T04:55:08.929503
|
{
"authors": [
"bernd",
"favadi"
],
"repo": "Graylog2/graylog-plugin-slack",
"url": "https://github.com/Graylog2/graylog-plugin-slack/issues/6",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2438986775
|
🛑 Server Gaming is down
In 5a1a3b6, Server Gaming (https://game.greathost.ro) was down:
HTTP code: 403
Response time: 1017 ms
Resolved: Server Gaming is back up in daef623 after 7 minutes.
|
gharchive/issue
| 2024-07-31T02:37:03 |
2025-04-01T04:55:08.948830
|
{
"authors": [
"GreathostRo"
],
"repo": "GreathostRo/upptime",
"url": "https://github.com/GreathostRo/upptime/issues/2228",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2680365858
|
🛑 Server Gaming is down
In d53b21b, Server Gaming (https://game.greathost.ro) was down:
HTTP code: 403
Response time: 1209 ms
Resolved: Server Gaming is back up in 20fffd6 after 6 minutes.
|
gharchive/issue
| 2024-11-21T17:45:37 |
2025-04-01T04:55:08.951328
|
{
"authors": [
"GreathostRo"
],
"repo": "GreathostRo/upptime",
"url": "https://github.com/GreathostRo/upptime/issues/3722",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1422345112
|
🛑 Server Ultra is down
In c5ddf6c, Server Ultra (https://ultra.greathost.ro) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Server Ultra is back up in 3cc233c.
|
gharchive/issue
| 2022-10-25T11:57:29 |
2025-04-01T04:55:08.953634
|
{
"authors": [
"GreathostRo"
],
"repo": "GreathostRo/upptime",
"url": "https://github.com/GreathostRo/upptime/issues/374",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2737029364
|
🛑 Server Gaming is down
In ad09fa5, Server Gaming (https://game.greathost.ro) was down:
HTTP code: 403
Response time: 1449 ms
Resolved: Server Gaming is back up in 6855059 after 26 minutes.
|
gharchive/issue
| 2024-12-12T22:54:53 |
2025-04-01T04:55:08.955891
|
{
"authors": [
"GreathostRo"
],
"repo": "GreathostRo/upptime",
"url": "https://github.com/GreathostRo/upptime/issues/4248",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
292142452
|
Question: Dice Pools and counting hits
What would be the roll notation for 13 ten-sided dice, counting hits where the number on the die is 5 or higher?
e.g. [10,5,5,4,3,1,8,8,10,4,2,10,1] = 7 hits
What would it be where 10s count as double? In the previous example, it would instead be 10 hits.
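For reference, the counting logic itself is straightforward (a sketch, independent of any roll notation):
rolls = [10, 5, 5, 4, 3, 1, 8, 8, 10, 4, 2, 10, 1]

hits = sum(1 for d in rolls if d >= 5)                               # 7 hits
hits_tens_double = sum(2 if d == 10 else 1 for d in rolls if d >= 5)  # 10 hits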
Closing, as a separate issue for this functionality exists (#23).
Thank you for looking into this as a feature. I will take a look at looping through the log as you suggested.
Let me know how you get on.
This issue on next on my todo.
Sweet christmas! Thanks!
|
gharchive/issue
| 2018-01-27T18:38:24 |
2025-04-01T04:55:08.991909
|
{
"authors": [
"GreenImp",
"manchuwook"
],
"repo": "GreenImp/rpg-dice-roller",
"url": "https://github.com/GreenImp/rpg-dice-roller/issues/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1083613542
|
Issue displaying Supermarket / Grocery Store building reports
I tried opening 5 or 6 Supermarket / Grocery Store building reports and all showed this:
see examples: id = 464, 25914, or 802
seems similar to the issue you saw with Harvard Market
┆Issue is synchronized with this Asana task
Looks like the 2020 data has both Supermarket / Grocery and Supermarket / Grocery Store.
Then the thresholds defined in seattle.json includes only Supermarket / Grocery
From that it looks like the way forward is to change all Supermarket / Grocery to Supermarket / Grocery Store in the 2020 data and update seattle.json to match.
But this may also depend on how this type is coded in previous years. I see that 2019 has Supermarket / Grocery Store, but I haven't checked years prior to that.
@tomay I updated the data so it's consistently 'Supermarket / Grocery' now since that's more concise. The reports now display but there are other issues. The main map expects 'Supermarket / Grocery Store' in the filter. And the emissions intensity card reference might need updating:
Same with the ENERGY USE COMPARED TO AVERAGE chart
I believe these issues are resolved now after a fresh deploy. If not, let me know
This original issue was showing up with a few other types - but was because a bunch of the data had returns/spaces after the text. Should all be cleaned up now.
And the filter and emissions intensity data looks good too.
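For reference, the cleanup described above amounts to something like this (file and column names are guesses):
import pandas as pd

df = pd.read_csv("benchmarking_2020.csv")
df["property_type"] = (
    df["property_type"]
    .str.strip()  # drop the trailing returns/spaces that broke matching
    .replace({"Supermarket / Grocery Store": "Supermarket / Grocery"})
)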
|
gharchive/issue
| 2021-12-17T21:03:00 |
2025-04-01T04:55:08.997812
|
{
"authors": [
"seattle-benchmarking",
"tomay"
],
"repo": "GreenInfo-Network/seattle-building-dashboard",
"url": "https://github.com/GreenInfo-Network/seattle-building-dashboard/issues/25",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
1754148018
|
chore: add debug log for heartbeat
I hereby agree to the terms of the GreptimeDB CLA
What's changed and what's your intention?
Add debug log for heartbeat.
Checklist
[ ] I have written the necessary rustdoc comments.
[ ] I have added the necessary unit tests and integration tests.
Refer to a related PR or issue link (optional)
Please fix cargo clippy.
|
gharchive/pull-request
| 2023-06-13T06:36:37 |
2025-04-01T04:55:09.036490
|
{
"authors": [
"Fengys123",
"WenyXu"
],
"repo": "GreptimeTeam/greptimedb",
"url": "https://github.com/GreptimeTeam/greptimedb/pull/1770",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2391963513
|
bug[next]: Add assert for OOB access
In case a stencil contains an out-of-bounds access, the derivation of the dtype in ITIR embedded can fail with an obscure error like:
ValueError: DType 'DType(scalar_type=<class 'numpy.object_'>, tensor_shape=())' not supported.
This PR adds an assert to catch this earlier and fail more gracefully.
I think no; that's why I picked an assert. The position is taken from the domain, so the result should never be _UNDEFINED unless the user wrote something invalid.
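Roughly, the added guard amounts to the following (illustrative only; _UNDEFINED is the sentinel mentioned above, the other names are made up):
_UNDEFINED = object()  # sentinel for an unset position

def deref(position):
    # Fail fast with a readable message instead of letting the dtype
    # derivation choke on a numpy object array further downstream.
    assert position is not _UNDEFINED, "out-of-bounds access in stencil"
    ...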
|
gharchive/pull-request
| 2024-07-05T06:53:11 |
2025-04-01T04:55:09.060889
|
{
"authors": [
"tehrengruber"
],
"repo": "GridTools/gt4py",
"url": "https://github.com/GridTools/gt4py/pull/1571",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2505461893
|
fix[cartesian]: Fix serialize default behavior when Pickled property was not saved
Description
DaCe has a (newish) behavior of not saving properties if they have a default, which causes our 2-step process of library node caching to fail. Since we have decided to rely on pickling to cache out the content of the LibraryNode, we respond to this change in DaCe by making sure default values passed to the pickle deserializer are returned plain.
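As a rough illustration of the idea (names are invented; this is not the actual code): when DaCe omits a property because it equals its default, the deserializer receives the plain default rather than a pickled payload, and should return it untouched:
import pickle

def property_from_json(raw, default=None):
    if raw is None:
        return default
    if isinstance(raw, (bytes, bytearray)):
        return pickle.loads(raw)  # a real pickled payload was saved
    return raw  # plain default passed through by DaCe: return it as-is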
Requirements
[ ] All fixes and/or new features come with corresponding tests.
[ ] Important design decisions have been documented in the appropriate ADR inside the docs/development/ADRs/ folder.
Poke @romanc
It seems related to the change I made to the test code tests/cartesian_tests/integration_tests/multi_feature_tests/test_dace_parsing.py when I upgraded the DaCe package version. Is it possible that with your fix we can undo my change?
Should be, pushing code now
|
gharchive/pull-request
| 2024-09-04T13:59:10 |
2025-04-01T04:55:09.065017
|
{
"authors": [
"FlorianDeconinck",
"edopao"
],
"repo": "GridTools/gt4py",
"url": "https://github.com/GridTools/gt4py/pull/1629",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1719683187
|
comments
comments on db config
OK for senior
|
gharchive/pull-request
| 2023-05-22T13:19:32 |
2025-04-01T04:55:09.071780
|
{
"authors": [
"GrimalDev"
],
"repo": "GrimalDev/Portfolio",
"url": "https://github.com/GrimalDev/Portfolio/pull/20",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2582709868
|
Create changedsript.jss
Change UI.
Description
Modern Color Palette:
Redesigned with a striking combination of blue, black, and purple to create a visually appealing interface.
Responsive Design:
Fully responsive layouts that adapt to various screen sizes, ensuring an optimal user experience on desktops, tablets, and smartphones.
Smooth Transitions:
Implemented seamless transitions between elements for a polished look and feel, enhancing navigation and user interaction.
Engaging Animations:
Added subtle animations that bring the interface to life, making it more interactive and enjoyable for users.
User-Friendly Interface:
Intuitive design principles ensure that users can easily navigate and interact with the application, regardless of their technical expertise.
Fixes #12
Type of change
[no] Bug fix (non-breaking change)
[yes] New feature (non-breaking change)
[yes] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[yes] This change requires a documentation update
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so that they can be reproduced.
[no] Test A
[yes] Test B
Checklist:
[yes] My code follows the style guidelines of this project
[yes] I have performed a self-review of my code
[yes] I have commented my code, particularly in hard-to-understand areas
[no] I have made corresponding changes to the documentation
[no] My changes generate no new warnings
[no] Any dependent changes have been merged and published in downstream modules
@Bhumika-00 These are the steps to follow in order to pass all the checks.
All contributors should follow them, else their PR will be closed:
go to your fork and click on sync fork and click Update Branch
then go to the directory on your local machine
Run git fetch origin
Run git pull origin main
Then resolve the conflicts if any.
Then force push your branch by git push origin branch-name -f
|
gharchive/pull-request
| 2024-10-12T09:01:33 |
2025-04-01T04:55:09.085701
|
{
"authors": [
"Bhumika-00",
"bryans-go"
],
"repo": "Groverio/To-Do-List",
"url": "https://github.com/Groverio/To-Do-List/pull/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2267292698
|
Ordering
When using "show difference" and converting to setters, can the output be sorted according to the field order of the target class?
Yes, this can be supported; it will be included in the next version.
|
gharchive/issue
| 2024-04-28T01:52:08 |
2025-04-01T04:55:09.097496
|
{
"authors": [
"Dreaming9420",
"GuangYiDing"
],
"repo": "GuangYiDing/BeanUtilHelper",
"url": "https://github.com/GuangYiDing/BeanUtilHelper/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
53627674
|
linking
(For the README) The Tiled Map Editor link is http://www.mapeditor.org (not .com).
Thanks for alerting me to this. Fixed.
|
gharchive/issue
| 2015-01-07T12:30:48 |
2025-04-01T04:55:09.132375
|
{
"authors": [
"GnoStiC",
"GymbylCoding"
],
"repo": "GymbylCoding/Dusk-Engine",
"url": "https://github.com/GymbylCoding/Dusk-Engine/issues/40",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2477490794
|
⚠️ API Next - first chapter has degraded performance
In 409f052, API Next - first chapter (https://h-edu.cz/content.api/book/book-a?includeChapterContentParts) experienced degraded performance:
HTTP code: 200
Response time: 16538 ms
Resolved: API Next - first chapter performance has improved in 41bca06 after 18 minutes.
|
gharchive/issue
| 2024-08-21T09:03:01 |
2025-04-01T04:55:09.134797
|
{
"authors": [
"MilanLempera"
],
"repo": "H-edu-dev/upptime",
"url": "https://github.com/H-edu-dev/upptime/issues/133",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1634236624
|
Could the bind feature also support binding by player name?
Player UIDs are too long; hardly anyone can remember them.
Could the bind command bind the player name when it detects a name, and keep binding the UID as usual when it detects a UID?
The UID can be viewed via /玩家.
Since a player's UID is unique, I lean toward using it to bind accounts.
Also, I haven't come up with a good way to tell a player name apart from a UID in the code, so this will probably stay as it is (unless a better solution turns up).
You could consider deprecating UID binding and binding by name directly, or have the code first use the name to fetch the UID once and then bind with the UID.
Due to an API limitation, name lookups only work with EA usernames, so this feature could fail for players who play Apex only through Steam.
Binding by name alone is doable; I'm just torn about this name-lookup problem.
But doesn't UID binding also require looking the UID up by username first?
Remembering a UID is harder than remembering a name. As for availability, the UID can indeed be seen in-game, so it is somewhat easier to obtain. But resolving the EA name from a UID feels nicer than resolving the UID from an EA name. In everyday use, people tend to use the EA name rather than the UID.
But most players who play through Steam probably don't know their EA username. If they enter their Steam username when binding, it will most likely fail.
You can check your own UID in-game and bind it directly, without the hassle of recovering your EA username. Wouldn't players who don't know how to recover their EA username simply be unable to bind?
Or maybe there is a better approach?
Separate UID binding and name binding into two commands:
on_command 绑定uid
on_command 绑定名称
In the help menu they can be written as a single entry:
绑定uid / 绑定名称 + UID/name
When binding by name, you could fetch the UID once and then bind it, and always query by UID; the drawback is that this wastes API requests.
If name-based lookup is what people want, the next version will switch binding to use the player name.
The players I know basically all query by name, so I feel that querying by player name would be more convenient. But so that players who don't remember a name can still query, UID lookup is also great and could be kept.
PS: I rewrote the earlier work again today, but I won't open a PR; the code is too messy _(:з」∠)_
The players I know basically all query by name, so I feel that querying by player name would be more convenient.
The trouble is that many people will use their Steam username for the lookup, and it will fail... that's what I'm torn about.
But so that players who don't remember a name can still query, UID lookup is also great and could be kept.
But then who would actually use a UID... I plan to remove the UID-related features.
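A minimal Python sketch of the two-command design proposed above, for illustration only: this is not the plugin's actual code. The on_command/CommandArg usage follows NoneBot2, but resolve_uid_by_name, save_binding, and the lookup endpoint are hypothetical stand-ins for whatever API and storage the plugin really uses:

from typing import Optional

import httpx
from nonebot import on_command
from nonebot.adapters import Message
from nonebot.params import CommandArg

bind_uid = on_command("绑定uid", priority=10, block=True)
bind_name = on_command("绑定名称", priority=10, block=True)

@bind_uid.handle()
async def _(args: Message = CommandArg()):
    uid = args.extract_plain_text().strip()
    if not uid.isdigit():
        await bind_uid.finish("A UID should be digits only.")
    save_binding(uid)
    await bind_uid.finish(f"Bound UID {uid}.")

@bind_name.handle()
async def _(args: Message = CommandArg()):
    name = args.extract_plain_text().strip()
    uid = await resolve_uid_by_name(name)  # costs one extra API request, as noted above
    if uid is None:
        await bind_name.finish("Lookup failed; note that only EA usernames resolve, not Steam names.")
    save_binding(uid)
    await bind_name.finish(f"Bound {name} (UID {uid}).")

async def resolve_uid_by_name(name: str) -> Optional[str]:
    # Hypothetical resolver: the endpoint and response shape are assumptions.
    async with httpx.AsyncClient() as client:
        resp = await client.get("https://api.example.com/lookup", params={"player": name})
    data = resp.json()
    return str(data["uid"]) if data.get("uid") else None

def save_binding(uid: str) -> None:
    # Placeholder for whatever persistence the plugin uses.
    ...

Internally everything is still keyed by UID, which keeps queries uniform; the name command just resolves once at bind time, matching the trade-off discussed in the thread.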
|
gharchive/issue
| 2023-03-21T16:03:47 |
2025-04-01T04:55:09.143215
|
{
"authors": [
"Dr-WeiAL",
"H-xiaoH",
"veadex"
],
"repo": "H-xiaoH/nonebot-plugin-apex-api-query",
"url": "https://github.com/H-xiaoH/nonebot-plugin-apex-api-query/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1079049573
|
Automatically publicize imported Assembly-CSharp.dll
Publicizing assemblies is super handy when writing code and has no downsides, so this should be done by default.
Will be included in MeatKit 1.1
|
gharchive/issue
| 2021-12-13T21:28:24 |
2025-04-01T04:55:09.177610
|
{
"authors": [
"nrgill28"
],
"repo": "H3VR-Modding/MeatKit",
"url": "https://github.com/H3VR-Modding/MeatKit/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
924636696
|
[BUG] mvn compile jib:dockerBuild fails on Windows
Describe the bug
Fails with an error on tags: cannot assign string 1.0.1 to tags
Error Message
To Reproduce
Cannot reproduce on Mac
Desktop (please complete the following information):
OS: Windows
I tried to build on CentOS 7.9, not Windows.
Updated the subject. The latest release includes the fix.
|
gharchive/issue
| 2021-06-18T07:43:58 |
2025-04-01T04:55:09.192530
|
{
"authors": [
"eknori",
"paulswithers"
],
"repo": "HCL-TECH-SOFTWARE/domino-online-meeting-integration",
"url": "https://github.com/HCL-TECH-SOFTWARE/domino-online-meeting-integration/issues/46",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1590188811
|
🛑 SkipTheTrailers is down
In df0c5b5, SkipTheTrailers ($STT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SkipTheTrailers is back up in 2e6d397.
|
gharchive/issue
| 2023-02-18T03:30:34 |
2025-04-01T04:55:09.224177
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/10473",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1601151758
|
🛑 Telly is down
In f2fc1e0, Telly ($TLY) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Telly is back up in c58ab51.
|
gharchive/issue
| 2023-02-27T13:06:22 |
2025-04-01T04:55:09.226450
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/11720",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1634547435
|
🛑 SkipTheCommericals is down
In 0503b20, SkipTheCommericals ($STC) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SkipTheCommericals is back up in 926c6f4.
|
gharchive/issue
| 2023-03-21T19:09:00 |
2025-04-01T04:55:09.228447
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/14597",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1642906609
|
🛑 SkipTheCommericals is down
In 84efb73, SkipTheCommericals ($STC) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SkipTheCommericals is back up in c703a9f.
|
gharchive/issue
| 2023-03-27T22:22:18 |
2025-04-01T04:55:09.230431
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/15391",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1659593938
|
🛑 Telly is down
In eaabfbd, Telly ($TLY) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Telly is back up in adeb17e.
|
gharchive/issue
| 2023-04-08T17:44:05 |
2025-04-01T04:55:09.232572
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/16975",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1707505095
|
🛑 SkipTheCommericals is down
In 7e0021f, SkipTheCommericals ($STC) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SkipTheCommericals is back up in d8c582e.
|
gharchive/issue
| 2023-05-12T12:00:18 |
2025-04-01T04:55:09.234800
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/21572",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1713521735
|
🛑 SkipTheCommericals is down
In 2ddd4d7, SkipTheCommericals ($STC) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SkipTheCommericals is back up in 30b16fa.
|
gharchive/issue
| 2023-05-17T09:42:34 |
2025-04-01T04:55:09.236926
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/22159",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2044839093
|
⚠️ Nebulance has degraded performance
In 244cf83, Nebulance ($NBL) experienced degraded performance:
HTTP code: 200
Response time: 1714 ms
Resolved: Nebulance performance has improved in f334619 after 9 minutes.
|
gharchive/issue
| 2023-12-16T16:05:48 |
2025-04-01T04:55:09.238970
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/25712",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2064662844
|
🛑 Empornium is down
In cd1e85c, Empornium ($EMP) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Empornium is back up in e68ab62 after 5 minutes.
|
gharchive/issue
| 2024-01-03T21:20:39 |
2025-04-01T04:55:09.240996
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/25870",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1510239783
|
🛑 SkipTheTrailers is down
In 47484fb, SkipTheTrailers ($STT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SkipTheTrailers is back up in e51d451.
|
gharchive/issue
| 2022-12-25T03:36:34 |
2025-04-01T04:55:09.243403
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/2990",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1511407156
|
🛑 Telly is down
In c377132, Telly ($TLY) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Telly is back up in 1ccd76e.
|
gharchive/issue
| 2022-12-27T03:55:41 |
2025-04-01T04:55:09.245387
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/3264",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1514323781
|
🛑 SkipTheTrailers is down
In 841d1f2, SkipTheTrailers ($STT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SkipTheTrailers is back up in b1cc153.
|
gharchive/issue
| 2022-12-30T07:58:39 |
2025-04-01T04:55:09.247398
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/3674",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1528536491
|
🛑 SkipTheCommericals is down
In 348b6ef, SkipTheCommericals ($STC) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SkipTheCommericals is back up in db4d06f.
|
gharchive/issue
| 2023-01-11T07:05:19 |
2025-04-01T04:55:09.249411
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/5222",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1575515331
|
🛑 Telly is down
In e3e03bd, Telly ($TLY) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Telly is back up in 1a3d121.
|
gharchive/issue
| 2023-02-08T05:36:07 |
2025-04-01T04:55:09.251610
|
{
"authors": [
"HDVinnie"
],
"repo": "HDVinnie/TrackerHub",
"url": "https://github.com/HDVinnie/TrackerHub/issues/9134",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2421161610
|
Privacy Policy?
Since the bot temporarily stores attachments, like videos, it might need a privacy policy (hopefully not).
I am not a legal expert, so please help.
You probably need a privacy policy, since you are handling user information.
It should explain what your bot collects, how the data will be used, how long the data will be stored, whether the data will be shared with any other parties, and so on.
|
gharchive/issue
| 2024-07-21T01:30:58 |
2025-04-01T04:55:09.252824
|
{
"authors": [
"Filip55561",
"HEJOK254"
],
"repo": "HEJOK254/Discord-QuickEdit",
"url": "https://github.com/HEJOK254/Discord-QuickEdit/issues/15",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2442326953
|
[Bug]: Duplicate token request causes login to fail
Plugin Version
7.0.0
PHP Version
8.2.21
Shopware Version
6.6.4.0
Installation method
Composer
Identity provider
Keycloak
What happened?
When clicking the SSO button in the admin, I see the following requests:
Unfortunately, it looks like the first (canceled) token request leads to Heptacom\AdminOpenAuth\Service\Login->pop(), and therefore the second token request doesn't have a LoginState.
The result is
throw OAuthServerException::invalidRequest('one_time_token', 'Expired');
Not sure what is causing this.
A probably useful hint: we're building our projects with shopware-cli project ci .
Relevant log output
No response
There is a redirect from /admin?state=SOME_STATE# to /admin?state=SOME_STATE#/login/ which is causing the issue.
It most likely came with the new Vue version in Shopware 6.6; in 6.5 the URL changed as well, but the browser did not treat it as a redirect.
In our case we fixed it with a patch file for ClientRedirectRoute, adding one line after the existing enrichment call:
$targetUrl = $this->enrichRedirectUrl($targetUrl, $requestState);
$targetUrl .= '/login/'; // addition
This is an ugly fix, as we are short on time with the upgrade; it would be great to have a new plugin version with a proper fix!
We just had a similar issue reported on:
Plugin Version
6.0.0
6.0.3
Shopware Version
6.5.8.14
Identity provider
Microsoft Azure
So this does not seem to be a provider bug, but rather a compatibility issue with Shopware since 6.5.8.?
|
gharchive/issue
| 2024-08-01T12:27:58 |
2025-04-01T04:55:09.259037
|
{
"authors": [
"JoshuaBehrens",
"htuscher",
"pbalcerzak"
],
"repo": "HEPTACOM/HeptacomShopwarePlatformAdminOpenAuth",
"url": "https://github.com/HEPTACOM/HeptacomShopwarePlatformAdminOpenAuth/issues/26",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
338762178
|
update readme to clarify what's under development
plus some other doc improvements
fixes #382
@dannyjacobs this can't be merged until the phasing fixes are merged, but can you look at it and see if it is clear?
@dannyjacobs can you review this small PR updating the readme?
After talking to Phil, I agreed to clarify what is considered to be part of the pyuvdata API and what is not (and therefore not guaranteed to be stable) by prepending an underscore to the beginning of functions & methods that are not part of the API. I've added those changes to this PR as well.
|
gharchive/pull-request
| 2018-07-05T23:58:13 |
2025-04-01T04:55:09.261030
|
{
"authors": [
"bhazelton"
],
"repo": "HERA-Team/pyuvdata",
"url": "https://github.com/HERA-Team/pyuvdata/pull/394",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
372457704
|
I wrote a friendlier interface for converting tokens into vectors.
All I did was add one file, elmo.py.
By using the Embedder Python object in elmo.py, you can easily merge ELMo into your own code like this:
# import it outside the top directory of this repo
from ELMoForManyLangs import elmo

e = elmo.Embedder()

# a list of token lists, one per sentence (segmented beforehand if necessary)
sents = [['今', '天', '天氣', '真', '好', '阿'],
         ['潮水', '退', '了', '就', '知道', '誰', '沒', '穿', '褲子']]

vecs = e.sents2elmo(sents)
# returns a list of numpy arrays, each with shape=(seq_len, embedding_size)
Parameters for initializing Embedder:
class Embedder(model_dir='zht.model/', batch_size=64)
model_dir: the relative path from the repo's top directory to your model directory (default: 'zht.model/')
batch_size: the batch size used at inference time; set it according to your GPU/CPU RAM size (default: 64)
Parameters of the function sents2elmo:
def sents2elmo(sents, output_layer=-1)
sents: the list of token lists, segmented beforehand if necessary
output_layer: the target layer to output:
0 for the word encoder
1 for the first LSTM hidden layer
2 for the second LSTM hidden layer
-1 for the average of the 3 layers (default)
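For instance, to take only the word-encoder layer instead of the default 3-layer average (the exact embedding_size depends on the trained model, so treat the printed size as model-specific):

vecs = e.sents2elmo(sents, output_layer=0)  # word encoder only
print(vecs[0].shape)  # (6, embedding_size) for the 6-token first sentence above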
Many thanks! We will review this PR ASAP.
I've slightly changed the API and readme. Please check!
So efficient! Thanks for the work!
|
gharchive/pull-request
| 2018-10-22T09:51:57 |
2025-04-01T04:55:09.295805
|
{
"authors": [
"Oneplus",
"voidism"
],
"repo": "HIT-SCIR/ELMoForManyLangs",
"url": "https://github.com/HIT-SCIR/ELMoForManyLangs/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
842480076
|
[Frontend] Footer gets left-aligned at lower resolutions
Describe the bug
The footer gets left-aligned at lower resolutions.
To Reproduce
Steps to reproduce the behavior:
Go to the landing page
Reduce the resolution to mobile view
See the error
Expected behavior
Footer should be center aligned
Screenshots
Desktop (please complete the following information):
OS: all
Browser: all
Version: all
Smartphone (please complete the following information):
Device: all
OS: all
Browser: all
Version: all
Additional context
I would like to resolve this issue.
Have you read the Code of Conduct?
Yes
/assign
@nlok5923 Thanks for pointing this out. Please go ahead with this issue.
Happy learning :)
Is this still an issue? Can I work on it?
May I take up this issue, if no one is working on it?
@nlok5923 @Kajol-Kumari
@hardikshah197 you can go ahead with it.
Can I take this issue, or is it fixed?
@aritroCoder it's still open. Please feel free to go with it.
Hi,
If this issue is currently not assigned to anyone or has not been fixed yet, I would like to give it a try.
Cheers!
Is this solution OK?
https://user-images.githubusercontent.com/92646038/144732500-40a5f2ab-4a22-4580-b194-8fbab1210c7a.mp4
@aritroCoder yes the changes looks good.
What's the status of this? If there is no PR, can I work on it?
@Kajol-Kumari
Hey, can you assign me this issue for GSSoC'22?
Please assign it to me. I am a GSSoC'22 participant.
Closing this one as this should be covered under https://github.com/HITK-TECH-Community/Community-Website/issues/748
|
gharchive/issue
| 2021-03-27T11:05:54 |
2025-04-01T04:55:09.305788
|
{
"authors": [
"CodingwithMe123",
"Kajol-Kumari",
"Mridul07Sharma",
"aritroCoder",
"hardikshah197",
"ikavyajain",
"nlok5923",
"pushkar2112",
"sulogna2001",
"tanishq-arya"
],
"repo": "HITK-TECH-Community/Community-Website",
"url": "https://github.com/HITK-TECH-Community/Community-Website/issues/566",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|