id: string (4 to 10 chars)
text: string (4 chars to 2.14M)
source: string (2 classes)
created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
497960550
(3.0.77) Go to containing dictionary ... doesn't BEFORE: AFTER: The reason Go to containing dictionary didn't go anywhere was because the containing dictionary appears to be the empty string. On one hand, it's nice that there was no walkback. On the other, it's frustrating because it's not clear how the class's dictionary could be empty. It's possible the class needed an update from the server, but without a reproduction case it's going to be difficult to track down. What was done before the browser got into this state? Turning on logging could provide a nice 'paper trail' to follow. A reproduction case would be better. @LisaAlmarode If you can get logs of this happening or a reproduction case, I'd appreciate it. I've had no luck so far. @ericwinger where is the code that defines the "containing dictionary"? Is it in the Services code? If so, add a pointer to the place where you determine the containing dictionary and I should be able to provide you with a test case ... the problem reported above fails consistently, and it is in a private stone of mine: If you want to log in, you have to use rogue-vpn to find my machine when I am at home working ... let me know if you can't reproduce the problem and I will do the debugging on my end @dalehenrich Take a look at RowanClassService>>setDictionary:. See if that sheds any light. Note that Rowan knows which dictionary the class is in (Globals), when it creates the class definition template: Looking at senders of RowanClassService>>setDictionary:, I see that both RowanClassService>>basicRefreshFrom: and RowanClassService>>minimalRefreshFrom: can return the meta class and with a little bit of testing in my image, I see that (Rowan image symbolList dictionariesAndSymbolsOf: GsQueryPredicate) first first name produces #'Globals' ... which is the correct answer and (Rowan image symbolList dictionariesAndSymbolsOf: GsQueryPredicate class) produces anArray( ) ... which is the wrong answer. Anything other than a class returns an empty list, so I would suspect that Jadeite is either passing in a non-Class theClass or perhaps metaClass is defaulting to true. I'm pretty sure that this happens when I have just opened up a fresh project browser on the class. Here's an example where explicitly selecting the class button creates an empty list: Ahh, you may have found it. Try this - If you click on the Class tab, I'll bet you don't see the dictionary. That would be a nice reproduction case. Take a look at the final image in my previous comment ... I did select the 'Class' button ... however, in the original report, the 'Instance' button is clearly selected, which is why I mentioned that the metaClass iv might default to true or you might be passing in a non-class object ... This image? The class side is selected. Exactly ... and as predicted ... the containing dictionary is empty ... it shouldn't be ... at a minimum this is wrong ... have you tried connecting to my stone? Then you can do whatever experiments you need to do here ... AFAICT, both class tabs are already selected ... is there another class tab I am missing? The problem is the meta class that you pointed out. I've got a fix ready for 3.0.78 for you to try. ... your fix won't necessarily fix the original problem (or at best it will mask the problem) as in the original bug report the instance tab is clearly selected ... so I think it might be an initial condition problem as well ... Now that I've been working in this project ... 
I've added two more packages and the problem does not reproduce (instance tab ending up with empty string for dictionary name) ... so there is clearly something else going wrong here ... I am suspicious of this line of code: classOrMeta := meta == true ifTrue:[theClass class] ifFalse:[theClass]. If meta is nil, you will return the meta class for theClass, which is what I am seeing ... and the fact that you aren't using meta directly implies that in some circumstances meta will be set to a non-boolean value (nil perhaps) ... you've got an initialize method that sets meta to false, but the behavior that I'm seeing in my initial case implies that someone is using the setter (meta:) to set meta to a non-boolean value --- or to true ... I've lost my "reproducible test case" ... so I cannot test your code for you ... so I think that unless additional work is done this bug will still be lurking around waiting to hit us again ... Looking at all references in the ServiceClass and verifying that the setters all look good and then looking at all senders of meta: is probably a good idea, since your fix will just cause another part of the code to go haywire if meta is not being handled correctly --- which appears to be the case ... Yes, meta can legitimately be nil in some cases, especially when the class service originates in Dolphin. e.g. Doing a find class doesn't yet know it's meta so it assumes false. fyi - None of the images you've posted on this issue show the instance tab selected with the problem issue. If you can send me a picture of the instance tab selected & the problem dictionary I'll look into it further. Ahh, I did not notice that ... I don't believe that I did anything other than open the project browser and select the class in question (without touching any of the class tabs). I assumed that I was looking at the instance side in my original bug report. So presumably there is another condition (closely related to my original bug report, which does not reproduce now:) where the class side of a class gets selected when I open a fresh project browser and simply select the class ... I see that the class tab is sticky: ... this is unexpected to me ... is this "stickiness" expected behavior? Earlier today I wasted a fair amount of time writing code on the class side of a class when I thought I was on the instance side ... I couldn't understand why the project browser "put code on the class-side of a class" ... I expect to be on the instance side when I switch classes, so I guess I'm going to have to start training myself to worry about the instance/class tab whenever I am working with Jadeite, unless this is considered to be a bug by you and/or @LisaAlmarode :) ... a couple more times of wasting time will probably be enough ... It's a bit of a surprise that I just found this out after working in Jadeite for a year and a half, but clearly I haven't been bit by this assumption until today:) So... it is very possible that I was working on another class when I switched to the bad boy and I just assumed that the instance side was selected... sorry about that ... ... this is unexpected to me ... is this "stickiness" expected behavior? The class tab "stickiness" is actually intended in some scenarios. For example, if you have an open browser on a class with the class tab selected and then use the Jadeite menu option New Projects Browser (Ctrl+N), it will open up the new browser on that class with the class tab selected. 
The assumption is that you want to open a new browser in the same place you are currently rather than starting from a fresh browser. Alternatively, if you do a Find Class ... from the console's Jadeite>Browse menu, it will open on the instance side. The assumption is that you are starting your work from scratch and most work is done on the instance side of classes. Yet, it's possible that there might be a bug or enhancements in this area. Since I've got a fix for the specific problem coming in 3.0.78, can we close this issue for review by @LisaAlmarode and open a new issue to examine class tab "stickiness" behavior? Well, the case where I have observed the "stickiness" is when I switch to a different class in the same browser ... I also did a Find Class ... earlier today and ended up with the class tab selected ... of course it didn't reproduce --- neither of these use cases falls under your "intentional use case" ... so there do appear to be some more bugs in this area ... This issue got a bit muddled. The bulk of this is discussion about the selection of class or instance, which doesn't seem to have any problems (other than new issue #555 reported). The case where RowanClassService on the class side showed an empty dictionary is fixed in 3.0.79. There were cases long ago in which some menu operations acted wonky (e.g. drag and drop moved methods to the class side), due to internal defaults to meta. It seems possible that the initial report on this case may be due to a leftover case of this kind. However, we'd need some details as to the preceding steps to hope to reproduce this... I tried a few cases but there are too many possibilities. Given the one issue fixed, and the large number of digressions, if we have another case of a class's go-to-dictionary turning up empty, a new issue should be opened.
gharchive/issue
2019-09-24T22:49:34
2025-04-01T06:37:01.261218
{ "authors": [ "LisaAlmarode", "dalehenrich", "ericwinger" ], "repo": "GemTalk/Jadeite", "url": "https://github.com/GemTalk/Jadeite/issues/530", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2031591863
Request changing the background color on edit/error In JforD, when you start editing a method, the background turns green. If you save with an error, it turns pink. I really like these color changes (the green, at least). The pharo corner color is completely inadequate. The GBS big red dot on the Source tab is at least right there in your face, where the code is that you are staring at, not way off to the far side of the windows. This was mentioned in #44 but deserves its own issue. Should have the green background working in https://github.com/GemTalk/JadeiteForPharo/commit/c9bd0ba6c9fcc584b2db711500ccc454a1b4a9ad Added red dot that shows up if method source, class comment, or class definition panes changed. https://github.com/GemTalk/JadeiteForPharo/commit/f006867fe29dadab863b0774c83096bc4b32040e https://github.com/GemTalk/JadeiteForPharo/commit/919183710b732385e041cd13899918807d98b4a7
gharchive/issue
2023-12-07T21:53:23
2025-04-01T06:37:01.264691
{ "authors": [ "LisaAlmarode", "ericwinger" ], "repo": "GemTalk/JadeiteForPharo", "url": "https://github.com/GemTalk/JadeiteForPharo/issues/55", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
526339429
Updates fail with concurrent query / updates Describe the bug Hi I believe I have found a bug which appears to rear its head when queries and updates happen at the same time, I have also identified an area on the code which may be causing this problem and a simple fix which seems to prevent this behaviour and may also help save up to 50% of memory storage in redis. This is potentially loosely related to #201 and may suggest behaviour described in this issue is not intended. I found when polling a certain resource on my server that if I do an update to that resource the cache will fail to update 100% of the time even though the database field clearly has changed. When polling is disabled the cache works as expected. I found the problem exists in GeneaLabs\LaravelModelCaching\Traits\Buildable in the retrieveCachedValue(...) function. It appears a race condition can happen in this function if a query and update are performed at the same time. The problem happens when a first() query is performed which calls cachedValue() > retrieveCachedValue(). This actually results in two results being stored in the cache. When the cache is found to be empty at rememberForever() the parent::{$method}(...$arguments) function is called which resolves to the first() method in Illuminate\Database\Concerns\BuildsQueries as intended. The breakdown happens here as the first() method in this class actually runs $this->take(1)->get($columns)->first(), which unintentionally calls back GeneaLabs\LaravelModelCaching\Traits\Buildable::get() as the CachedBuilder is overriding the get() method. This actually results in an additional call to retrieveCachedValue() which adds in another key/val pair to redis, but most dangerously this tries to recall the cache when we actually intended it to skip the cache and call the database. Because of this recursive behaviour a dirty read from the cache can happen at exactly the same time as a flush and the dirty read ends up becoming the new value instead of the updated value in the database. This problem can be fixed by simply adding $this->disableModelCaching(); to the anonymous function in the retrieveCachedValue() method as below: return $this->cache($cacheTags) ->rememberForever( $hashedCacheKey, function () use ($arguments, $cacheKey, $method) { $this->disableModelCaching(); return [ "key" => $cacheKey, "value" => parent::{$method}(...$arguments), ]; } ); This prevents any recursive calls back to the CachedBuilder, or at least causes the calls to fall back to the EloquentBuilder instead. As a result of adding this it also prevents the extra get() query from being unintentionally cached saving some space in redis. 
Stack Trace A stack trace of the recursive behaviour as explained above: [0] => Array ( [file] => /srv/application/vendor/laravel/framework/src/Illuminate/Cache/Repository.php [line] => 422 [function] => GeneaLabs\LaravelModelCaching\Traits\{closure} [class] => GeneaLabs\LaravelModelCaching\CachedBuilder [type] => -> ) [1] => Array ( [file] => /srv/application/vendor/genealabs/laravel-model-caching/src/Traits/Buildable.php [line] => 300 [function] => rememberForever [class] => Illuminate\Cache\Repository [type] => -> ) [2] => Array ( [file] => /srv/application/vendor/genealabs/laravel-model-caching/src/Traits/Buildable.php [line] => 231 [function] => retrieveCachedValue [class] => GeneaLabs\LaravelModelCaching\CachedBuilder [type] => -> ) [3] => Array ( [file] => /srv/application/vendor/genealabs/laravel-model-caching/src/Traits/Buildable.php [line] => 100 [function] => cachedValue [class] => GeneaLabs\LaravelModelCaching\CachedBuilder [type] => -> ) [4] => Array ( [file] => /srv/application/vendor/laravel/framework/src/Illuminate/Database/Concerns/BuildsQueries.php [line] => 77 [function] => get [class] => GeneaLabs\LaravelModelCaching\CachedBuilder [type] => -> ) [5] => Array ( [file] => /srv/application/vendor/genealabs/laravel-model-caching/src/Traits/Buildable.php [line] => 293 [function] => first [class] => Illuminate\Database\Eloquent\Builder [type] => -> ) [6] => Array ( [file] => /srv/application/vendor/laravel/framework/src/Illuminate/Cache/Repository.php [line] => 422 [function] => GeneaLabs\LaravelModelCaching\Traits\{closure} [class] => GeneaLabs\LaravelModelCaching\CachedBuilder [type] => -> ) [7] => Array ( [file] => /srv/application/vendor/genealabs/laravel-model-caching/src/Traits/Buildable.php [line] => 300 [function] => rememberForever [class] => Illuminate\Cache\Repository [type] => -> ) [8] => Array ( [file] => /srv/application/vendor/genealabs/laravel-model-caching/src/Traits/Buildable.php [line] => 231 [function] => retrieveCachedValue [class] => GeneaLabs\LaravelModelCaching\CachedBuilder [type] => -> ) [9] => Array ( [file] => /srv/application/vendor/genealabs/laravel-model-caching/src/Traits/Buildable.php [line] => 79 [function] => cachedValue [class] => GeneaLabs\LaravelModelCaching\CachedBuilder [type] => -> ) [10] => Array ( [file] => /srv/application/vendor/cloudcreativity/laravel-json-api/src/Eloquent/AbstractAdapter.php [line] => 218 [function] => first [class] => GeneaLabs\LaravelModelCaching\CachedBuilder [type] => -> ) [11] => Array ( [file] => /srv/application/vendor/cloudcreativity/laravel-json-api/src/Store/Store.php [line] => 245 [function] => find [class] => CloudCreativity\LaravelJsonApi\Eloquent\AbstractAdapter [type] => -> ) [12] => Array ( [file] => /srv/application/vendor/cloudcreativity/laravel-json-api/src/Store/Store.php [line] => 257 [function] => find [class] => CloudCreativity\LaravelJsonApi\Store\Store [type] => -> ) [13] => Array ( [file] => /srv/application/vendor/cloudcreativity/laravel-json-api/src/Routing/Route.php [line] => 93 [function] => findOrFail [class] => CloudCreativity\LaravelJsonApi\Store\Store [type] => -> ) Environment PHP: 7.3.9 OS: Alpine Laravel: 5.8.19 Model Caching: 0.7.0 Hope this isn't too lengthy. Thanks for your help. @saernz Thanks for this detailed report! Very interesting find, indeed! I will implement the fix you suggest and run it against the unit tests and report back. Please give me a few days to get back to you on this, I will try to get to it on Friday at the latest. @mikebronner Thanks man. 
Hopefully I'm not wrong, though I'll leave it up to the experts to decide :) Unfortunately I don't think my fix has worked as I ran into the same problem again. I believe my DB lock is not working some how and when the cache updates after a flush it does a dirty read of the DB some how as it ends up caching the old value. I think the bug I described above still exists, though I'm not sure if it's causing the race condition or not. From looking at the code and stack traces I believe the recursive call may still be a bug but I'm not sure how it's all relating to the race condition I'm having, will need to investigate this next week to know for sure. I've looked into this a bit more now and I believe I understand the problem a lot better. There isn't a race condition as I initially described in my original post, though I believe stopping that recursive behaviour will still help stop an intermediate value being stored in the cache caused by the eloquent builders call to get(). I have found I'm getting the wrong value returned from cache after an update if I use transactions. What seems to happen is when my update is performed in a transaction the value is correctly updated in the DB, and the cache gets flushed, but if I don't commit the transaction quick enough before the next read the cache will still get the old DB value as the transaction has not committed. I was a bit confused before as I saw the cache was flushed only after the DB had been changed and what I was seeing should technically not be possible, though because I'm also using transactions technically the flush can happen before the transaction has been committed to the database. This mainly happens because I have to dispatch an event after my record has been updated, unfortunately the framework I'm using only provides a hook to update events within the database transaction, as a result dispatching the event delayed the commit of the transaction so a read could sneak in between the database flush and the transaction being committed. The workaround for this was to perform another flush by using $myModel->flushCache() either after the transaction or as close as you can get to the transaction being closed. I'm not sure if a more permanent fix for this library would be to somehow run a flush after a transaction has been committed, though I'm not sure how easy this will be. Possibly just documenting how to use the library with transactions might be enough. @saernz Thanks for the follow-up. I will update the documentation to explain the work-around with transactions. Also, I was unable to implement your suggested fix in your first post, as it breaks unit tests. It might have worked well for you specific use-case, but it seems to break other areas.
gharchive/issue
2019-11-21T03:17:19
2025-04-01T06:37:01.311921
{ "authors": [ "mikebronner", "saernz" ], "repo": "GeneaLabs/laravel-model-caching", "url": "https://github.com/GeneaLabs/laravel-model-caching/issues/305", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1134093358
how to use? Hello there. How do we use this program? Can you provide some basic steps? I'm not a programmer. You must have a Mac and build this app by yourself, and you must have experience developing iOS apps. Download this project, open this project in Xcode, then build the app and install it to your iOS device. Is there any other way I can use this app?
gharchive/issue
2022-02-12T13:55:11
2025-04-01T06:37:01.339047
{ "authors": [ "DoomPtrl", "GenjiApp", "audioses" ], "repo": "GenjiApp/RingerVolume", "url": "https://github.com/GenjiApp/RingerVolume/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
617332346
How to delete the attached adb script I wrote some adb scripts to control the phone, but the accompanying adb scripts conflicted, such as using 'w''s''d''a' to control, and the right mouse button. At the same time I found that my adb script has a short time (~ 0.05s) delay, and sometimes does not work; I use the python + adb method. Please tell me how to delete the internal adb, and if you can, can you try to solve my problem? Thank you What do you mean by "adb script"? What do you mean by "conflicted"? adb code: adb shell input tap 1280 560 Because scrcpy comes with some adb operations, such as the right mouse button being the return button, how do I delete them? scrcpy --no-control I want to assign a new adb command to the right mouse button ( Scrcpy does not use adb shell input … to inject events: https://github.com/Genymobile/scrcpy/issues/231#issuecomment-414111753 For now, it is not possible to reassign mouse buttons. But there is already a feature request for that: https://github.com/Genymobile/scrcpy/issues/1302 I created a new python script to output adb commands through os, but the right mouse button is occupied. Could you share your script, because I don't understand what you mean by "the right mouse button is occupied". If you execute adb shell input ... commands, it's totally independent of scrcpy. https://github.com/lcx19950201/Python-/blob/master/adb.py
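For readers following this thread, here is a minimal sketch of the kind of python + adb control script being discussed, built only around the adb shell input tap 1280 560 command quoted above. The function name, the use of subprocess, and the optional device serial are illustrative assumptions, not code from the linked adb.py.

```python
import subprocess
import time

def tap(x, y, serial=None):
    """Send a tap through adb, mirroring 'adb shell input tap 1280 560' from this thread."""
    cmd = ["adb"]
    if serial is not None:
        # Optional: target one device when several are attached (assumed helper, not from adb.py)
        cmd += ["-s", serial]
    cmd += ["shell", "input", "tap", str(x), str(y)]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    tap(1280, 560)    # coordinates taken from the comment above
    time.sleep(0.05)  # roughly the ~0.05 s latency the reporter observes with this approach
```

As rom1v points out above, running such a script is completely independent of scrcpy, which injects events through its own mechanism rather than adb shell input.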
gharchive/issue
2020-05-13T10:27:52
2025-04-01T06:37:01.357423
{ "authors": [ "lcx19950201", "rom1v" ], "repo": "Genymobile/scrcpy", "url": "https://github.com/Genymobile/scrcpy/issues/1387", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1162582908
Scrcpy starts and then stops, no error is being thrown [x] I have read the FAQ. [x] I have searched in existing issues. Environment OS: Windows scrcpy version: 1.23 installation method: extracted from the package device model: Google Pixel 2 Android version: 11 Describe the bug The scrcpy executable runs and stops without any indication of the problem. I attached the logcat.log file as well to help you with finding a root cause. .\scrcpy.exe scrcpy 1.23 <https://github.com/Genymobile/scrcpy> C:\scrcpy\scrcpy-server: 1 file pushed, 0 skipped. 27.6 MB/s (41123 bytes in 0.001s) [server] INFO: Device: Google Pixel 2 (Android 11) logcat.log Hi, I ran the command a second time and redirected the logcat output to the file. Let me know if it is useful for you. Thanks, Martin logcat.log Try with another encoder: https://github.com/Genymobile/scrcpy#encoder
gharchive/issue
2022-03-08T12:12:46
2025-04-01T06:37:01.362681
{ "authors": [ "rom1v", "smoqmilus" ], "repo": "Genymobile/scrcpy", "url": "https://github.com/Genymobile/scrcpy/issues/3096", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
277407210
Error: Cannot find module 'irc-framework' module.js:471 throw err; ^ Error: Cannot find module 'irc-framework' at Function.Module._resolveFilename (module.js:469:15) at Function.Module._load (module.js:417:25) at Module.require (module.js:497:17) at require (internal/module.js:20:19) at Object. (/irc-discord/index.js:1:75) at Module._compile (module.js:570:32) at Object.Module._extensions..js (module.js:579:10) at Module.load (module.js:487:32) at tryModuleLoad (module.js:446:12) at Function.Module._load (module.js:438:3) Try npm i or yarn? It's included in the package.json so the dependencies should work fine... Also wew someone else is using this? Time to make it work properly and make setup functionality I guess... npm i: [root@vultr irc-discord]# sudo npm install npm WARN less-shitty-irc@0.0.0 No repository field.
gharchive/issue
2017-11-28T14:37:07
2025-04-01T06:37:01.366349
{ "authors": [ "Geo1088", "haxxus" ], "repo": "Geo1088/irc-discord-bridge", "url": "https://github.com/Geo1088/irc-discord-bridge/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1418403178
Migration command Describe the new feature A simple discord command for migrating the replit database to the new postgres database by providing the value of the REPLIT_DB_URL env var from replit. Describe alternatives you've considered Another option would be to simply get all settings and such and resetup them in the new version, but that has a higher expense depending on how much users might need to do that. Planning I thought of some command like /migrate <replit_db_url> which can be enabled in the .env for migrating the old replit database to postgres. We can probably even write the command to get it (echo $REPLIT_DB_URL) in the description of the command. API Simply like with every other command to give them their own file and just call their main function (in this case probably migrate in the file migrate.js) Useful links Replit Database FAQ Using Databases in Replit Replit's Database Module on NPM I was just going through the issues and wanted to see if we need to keep this one open. As far as I know, there is one user who is officially using the replit version. If that is the case, it might not make sense to write a feature that is only going to be used once. Assuming this user is going to switch to the new version once it is released, is data going to need to be transferred, or can it be done following a "championship" so that a new game can start from scratch in the new version? Also, is there any changes that need to happen on the discord bot settings? I know that the slash commands need to be installed, but besides that, is there anything else that needs done? If that guy ends it with a championship it would be resetting up the allowed mods and allowed channels although the guy could also simply set it within Discords Command Permission system. Other than that I think running setupCommands once and running the bot would be enough. If he/she does not want to reset the scores he'd need to use the /mod score subcommands for each user. As I don't think people using the replit bot have a lot of members in their server I'm fine with not creating the migration command for the reason you mentioned. Also the guy at some point has to move over to the new version because discord.js v12 won't be supported by Discord (for the API version they talk to) forever. Got it. In that case I will go ahead and close this issue.
gharchive/issue
2022-10-21T14:10:18
2025-04-01T06:37:01.389581
{ "authors": [ "GeorgeCiesinski", "Wissididom" ], "repo": "GeorgeCiesinski/poke-guesser-bot", "url": "https://github.com/GeorgeCiesinski/poke-guesser-bot/issues/79", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1278967901
[BUG] SSLError workarounds? This seems to be a firewall related error, which has known workarounds for standard PIP install. python -m quickumls.install . ~/repos/umls-files Error: python -m quickumls.install . ~/repos/umls-files Determining if SpaCy for language "ENG" is installed... SpaCy is not available! Attempting to download and install... ⚠ As of spaCy v3.0, shortcuts like 'en' are deprecated. Please use the full pipeline package name 'en_core_web_sm' instead. Traceback (most recent call last): File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/quickumls/install.py", line 130, in install_spacy spacy.load(SPACY_LANGUAGE_MAP[lang]) File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/spacy/__init__.py", line 51, in load return util.load_model( File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/spacy/util.py", line 426, in load_model raise IOError(Errors.E941.format(name=name, full=OLD_MODEL_SHORTCUTS[name])) # type: ignore[index] OSError: [E941] Can't find model 'en'. It looks like you're trying to load a model from a shortcut, which is obsolete as of spaCy v3.0. To load the model, use its full name instead: nlp = spacy.load("en_core_web_sm") For more details on the available models, see the models directory: https://spacy.io/models. If you want to create a blank model, use spacy.blank: nlp = spacy.blank("en") During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1040, in _validate_conn conn.connect() File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/urllib3/connection.py", line 414, in connect self.sock = ssl_wrap_socket( File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket ssl_sock = _ssl_wrap_socket_impl( File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/ssl.py", line 512, in wrap_socket return self.sslsocket_class._create( File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/ssl.py", line 1070, in _create self.do_handshake() File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/ssl.py", line 1341, in do_handshake self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/urllib3/connectionpool.py", line 785, in urlopen retries = retries.increment( File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: 
HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /explosion/spacy-models/master/compatibility.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)'))) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/quickumls/install.py", line 233, in <module> main() File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/quickumls/install.py", line 171, in main install_spacy(opts.language) File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/quickumls/install.py", line 134, in install_spacy spacy.cli.download(SPACY_LANGUAGE_MAP[lang]) File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/spacy/cli/download.py", line 67, in download compatibility = get_compatibility() File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/spacy/cli/download.py", line 78, in get_compatibility r = requests.get(about.__compatibility__) File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/requests/api.py", line 73, in get return request("get", url, params=params, **kwargs) File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/home/dlituiev/anaconda3/envs/spacy/lib/python3.10/site-packages/requests/adapters.py", line 563, in send raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /explosion/spacy-models/master/compatibility.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)'))) **Environment ** OS: WSL Ubuntu QuickUMLS version [e.g. 1.4] UMLS version [e.g. 2022AA] Python 3.10 spacy: spacy-3.3.1 en_core_web_sm is installed it would be great to have --trusted-host argument also it seems like an antiquated requirement for en language instead of a specific spacy model like en_core_web_sm (see spacy warning in the log) I've patched it by replacing in constants.py: en -> en_core_web_sm
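As a hedged illustration of the constants.py patch described above (en -> en_core_web_sm): only the SPACY_LANGUAGE_MAP name and the "ENG" language code are taken from the traceback and log output; the rest of the mapping and the small loader below are assumptions, not QuickUMLS's actual code.

```python
import spacy

# quickumls/constants.py (sketch): point the language map at the full pipeline
# package instead of the obsolete 'en' shortcut, so install.py loads the already
# installed model instead of trying to download it (which is what hits the SSL error).
SPACY_LANGUAGE_MAP = {
    "ENG": "en_core_web_sm",  # was "en"; other entries, if any, omitted here
}

def load_spacy(lang="ENG"):
    return spacy.load(SPACY_LANGUAGE_MAP[lang])
```

Since the environment notes above say en_core_web_sm is already installed, this avoids the download from raw.githubusercontent.com entirely.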
gharchive/issue
2022-06-21T19:42:15
2025-04-01T06:37:01.403871
{ "authors": [ "DSLituiev" ], "repo": "Georgetown-IR-Lab/QuickUMLS", "url": "https://github.com/Georgetown-IR-Lab/QuickUMLS/issues/86", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
298578123
memory grid has an invalid kx, asserts baseimg.c {1919} See issues/issue 16 be4be3a57bd4fca42ef128 Options: (preferred) ensure memory grids behave like persistent grids, in which case they can have a KX +1 or -1. (easier, but the bug may appear somewhere else) baseimg can treat KX == 0 as KX == 1, does not need to assert. See https://github.com/GeosoftInc/gxapi/tree/master/issues/issue 16 for a failing program. See also https://github.com/GeosoftInc/gxapi/blob/master/tests/python/test_grid.py, which has a skipped test that will fail until this is fixed. Jacques found that if you call img.opt_kx(kx), the kx is set and the assertions go away. Still should not assert though... Resolved fda2032d3ef25067be57fc344d06
gharchive/issue
2018-02-20T12:35:26
2025-04-01T06:37:01.408655
{ "authors": [ "ianneilmacleod" ], "repo": "GeosoftInc/gxapi", "url": "https://github.com/GeosoftInc/gxapi/issues/16", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
1557368141
superfund, power plants, init weight calculation functions Review this pull request carefully for compatibility. I had to stash and sync my fork a few times. In this commit, I have split point_data_factors.R into npl_super and power_plants. I saved the new npl_superfund geocoded file to the data/processed folder. I also created the framework for some weighting functions. I think it would be a good idea to restructure the R folder to have subdirectories of "functions", "factors", and 'standalone scripts'. Looks great! I'm definitely on board with a file restructure, let's revisit once most of the processing code is developed.
gharchive/pull-request
2023-01-25T22:41:19
2025-04-01T06:37:01.410268
{ "authors": [ "ccmothes", "dhunt22" ], "repo": "GeospatialCentroid/NASA-prison-EJ", "url": "https://github.com/GeospatialCentroid/NASA-prison-EJ/pull/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
168271311
Add gradient, remove unused method in Gradients.js + minor linting Added 3 new gradients to gradients.json getGradients() in Gradients.js was not used anymore, so removed Minor refactor Minor linting, adding missing semi colons, spaces between braces, removed unnecessary white space @i-break-codes Thanks for the PR, this is super. One minor edit though, can you just please revert the getGradients() method. It is basically like a utility method I use sometimes. It's left there intentionally. @Ghosh aah, my bad, sorry, I didn't realized that! Well, reverted back. Thanks 👍
gharchive/pull-request
2016-07-29T08:01:34
2025-04-01T06:37:01.477415
{ "authors": [ "Ghosh", "i-break-codes" ], "repo": "Ghosh/uiGradients", "url": "https://github.com/Ghosh/uiGradients/pull/163", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1765046848
🛑 MCSkinHistory is down In 6fdde30, MCSkinHistory (https://mcskinhistory.com) was down: HTTP code: 0 Response time: 0 ms Resolved: MCSkinHistory is back up in 19881cf.
gharchive/issue
2023-06-20T09:48:37
2025-04-01T06:37:01.495707
{ "authors": [ "GigadriveBot" ], "repo": "Gigadrive/status.gigadrive.network", "url": "https://github.com/Gigadrive/status.gigadrive.network/issues/160", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
806987660
How to stop the fans? I can't seem to find a way to stop the fans. The obvious setDutyCycle(0) doesn't work. In forums I've read that you can turn it off by switching the pin mode to OUTPUT and digitalWrite(pin, LOW). But I don't know what effect that would have while this library's code is running (problems along the line, how to start the fans again?) I would highly appreciate it if someone could implement a method stop() and maybe restart() Just started to use this library. It's great, but I can't stop the fans either. Closest I can get is ~5% without side effects. Any chance there is a fix in the works? Thanks! You can use another input and a MOSFET as mentioned in this other diagram. You only need the MOSFET part https://github.com/sker65/esphome-fan-controller
gharchive/issue
2021-02-12T06:33:58
2025-04-01T06:37:01.501031
{ "authors": [ "ghost", "kenkit", "ufoDziner" ], "repo": "GiorgioAresu/FanController", "url": "https://github.com/GiorgioAresu/FanController/issues/15", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
198923919
Custom 404 Not Found pages for books Most books would benefit from having relevant 404 pages. I can see several solutions: Book setting to setup a redirection URL on 404 (simple and flexible) 404 pages defined in the gitbook (not great because it introduces a new convention in the toolchain for 404.md files) GitBook.com could add a relevant link to the book homepage on 404 pages Related ticket: https://gitbook.zendesk.com/agent/tickets/4891 I think we will opt for solution 3. Improving the 404 page for books, by providing a call to action: It looks like the page you accessed does not exist, or was moved. Go back to the content: <root_url_of_the_book>
gharchive/issue
2017-01-05T10:34:36
2025-04-01T06:37:01.536941
{ "authors": [ "Soreine" ], "repo": "GitbookIO/feedback", "url": "https://github.com/GitbookIO/feedback/issues/295", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2622539456
🛑 gadgethouse is down In 9576d13, gadgethouse (gadgethouse.nl) was down: HTTP code: 0 Response time: 0 ms Resolved: gadgethouse is back up in 94c598d after 18 minutes.
gharchive/issue
2024-10-29T23:44:04
2025-04-01T06:37:01.548326
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/102911", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1040550169
🛑 orkut is down In 40eb0b0, orkut (orkut.com.br) was down: HTTP code: 0 Response time: 0 ms Resolved: orkut is back up in 176205e.
gharchive/issue
2021-10-31T18:31:56
2025-04-01T06:37:01.550669
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/10430", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2753183764
🛑 rapishare is down In 4ee951b, rapishare (rapishare.com) was down: HTTP code: 0 Response time: 0 ms Resolved: rapishare is back up in 7eb483f after 19 minutes.
gharchive/issue
2024-12-20T17:38:58
2025-04-01T06:37:01.553147
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/105847", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2763813980
🛑 pornfuze is down In 7a0c510, pornfuze (pornfuze.com) was down: HTTP code: 0 Response time: 0 ms Resolved: pornfuze is back up in b41ffa0 after 13 hours, 42 minutes.
gharchive/issue
2024-12-30T23:58:12
2025-04-01T06:37:01.555494
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/106593", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1111375487
🛑 orkut is down In b36a945, orkut (orkut.com.br) was down: HTTP code: 404 Response time: 100 ms Resolved: orkut is back up in bb6ed22.
gharchive/issue
2022-01-22T09:20:38
2025-04-01T06:37:01.557847
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/15179", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1133352233
🛑 letsgo is down In f1fe320, letsgo (letsgo.com) was down: HTTP code: 0 Response time: 0 ms Resolved: letsgo is back up in 8489200.
gharchive/issue
2022-02-12T00:08:32
2025-04-01T06:37:01.560383
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/16915", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1152829615
🛑 orkut is down In 4552226, orkut (orkut.com.br) was down: HTTP code: 0 Response time: 0 ms Resolved: orkut is back up in 0b528ba.
gharchive/issue
2022-02-27T05:02:58
2025-04-01T06:37:01.563174
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/18146", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1167399384
🛑 orkut is down In 57adc11, orkut (orkut.com.br) was down: HTTP code: 403 Response time: 242 ms Resolved: orkut is back up in e0d3926.
gharchive/issue
2022-03-12T21:00:42
2025-04-01T06:37:01.565468
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/19172", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1180837537
🛑 orkut is down In 6693451, orkut (orkut.com.br) was down: HTTP code: 403 Response time: 246 ms Resolved: orkut is back up in 78cc02a.
gharchive/issue
2022-03-25T14:06:28
2025-04-01T06:37:01.567798
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/20064", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1299831145
🛑 orkut is down In ce0c95f, orkut (orkut.com.br) was down: HTTP code: 0 Response time: 0 ms Resolved: orkut is back up in 2503110.
gharchive/issue
2022-07-10T05:10:36
2025-04-01T06:37:01.570138
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/29635", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1309542921
🛑 orkut is down In 24b9710, orkut (orkut.com.br) was down: HTTP code: 0 Response time: 0 ms Resolved: orkut is back up in c395c63.
gharchive/issue
2022-07-19T13:40:55
2025-04-01T06:37:01.572636
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/30554", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1315275170
🛑 penispedia is down In 7b3d995, penispedia (penispedia.de) was down: HTTP code: 0 Response time: 0 ms Resolved: penispedia is back up in 19a4347.
gharchive/issue
2022-07-22T17:53:30
2025-04-01T06:37:01.574965
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/30872", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1419977531
🛑 orkut is down In 1350053, orkut (orkut.com.br) was down: HTTP code: 429 Response time: 748 ms Resolved: orkut is back up in e5e3055.
gharchive/issue
2022-10-23T23:32:59
2025-04-01T06:37:01.577382
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/36414", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1428747906
🛑 orkut is down In 6ce2ef4, orkut (orkut.com.br) was down: HTTP code: 429 Response time: 674 ms Resolved: orkut is back up in 2a575da.
gharchive/issue
2022-10-30T11:42:21
2025-04-01T06:37:01.579709
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/37220", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1462190467
🛑 orkut is down In 2a362bb, orkut (orkut.com.br) was down: HTTP code: 429 Response time: 526 ms Resolved: orkut is back up in 508ebfd.
gharchive/issue
2022-11-23T18:06:17
2025-04-01T06:37:01.582060
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/40866", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1504444239
🛑 orkut is down In 0761160, orkut (orkut.com.br) was down: HTTP code: 429 Response time: 1052 ms Resolved: orkut is back up in 2c80752.
gharchive/issue
2022-12-20T11:56:19
2025-04-01T06:37:01.584597
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/45524", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
968368087
🛑 orkut is down In 7cb9d2f, orkut (orkut.com.br) was down: HTTP code: 0 Response time: 0 ms Resolved: orkut is back up in 832c8a9.
gharchive/issue
2021-08-12T08:43:27
2025-04-01T06:37:01.586892
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/4880", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1555174247
🛑 ojolink is down In de60f51, ojolink (ojolink.net) was down: HTTP code: 403 Response time: 544 ms Resolved: Ojolink is back up in c709c63.
gharchive/issue
2023-01-24T15:16:15
2025-04-01T06:37:01.589231
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/50847", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1569160344
🛑 orkut is down In 4658806, orkut (orkut.com.br) was down: HTTP code: 429 Response time: 1041 ms Resolved: orkut is back up in e57dce9.
gharchive/issue
2023-02-03T03:42:09
2025-04-01T06:37:01.591554
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/51897", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1570839931
🛑 orkut is down In f6e8676, orkut (orkut.com.br) was down: HTTP code: 429 Response time: 786 ms Resolved: orkut is back up in db7590c.
gharchive/issue
2023-02-04T08:48:35
2025-04-01T06:37:01.593883
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/52016", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
918206834
🛑 orkut is down In 797ec07, orkut (orkut.com.br) was down: HTTP code: 0 Response time: 0 ms Resolved: orkut is back up in 6a252c5.
gharchive/issue
2021-06-11T04:24:43
2025-04-01T06:37:01.596374
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/543", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1656694006
🛑 ojolink is down In b89871c, ojolink (ojolink.fr) was down: HTTP code: 0 Response time: 0 ms Resolved: Ojolink is back up in d56d0e5.
gharchive/issue
2023-04-06T05:34:17
2025-04-01T06:37:01.598694
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/55914", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1665050829
🛑 wvtaxsales is down In 4db431f, wvtaxsales (wvtaxsales.com) was down: HTTP code: 0 Response time: 0 ms Resolved: wvtaxsales is back up in 868b9ef.
gharchive/issue
2023-04-12T18:32:27
2025-04-01T06:37:01.601336
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/56545", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1701633901
🛑 ojolink is down In 1cfa59f, ojolink (ojolink.fr) was down: HTTP code: 0 Response time: 0 ms Resolved: Ojolink is back up in 91818f7.
gharchive/issue
2023-05-09T08:48:20
2025-04-01T06:37:01.603772
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/58851", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1808527797
🛑 ojolink is down In b821212, ojolink (ojolink.fr) was down: HTTP code: 0 Response time: 0 ms Resolved: Ojolink is back up in d8283fc.
gharchive/issue
2023-07-17T20:04:56
2025-04-01T06:37:01.606105
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/64900", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1850538918
🛑 ojolink is down In 2493366, ojolink (ojolink.fr) was down: HTTP code: 0 Response time: 0 ms Resolved: Ojolink is back up in 9de4ea0.
gharchive/issue
2023-08-14T21:05:20
2025-04-01T06:37:01.608610
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/67430", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1878553890
🛑 ojolink is down In 0a38a23, ojolink (ojolink.fr) was down: HTTP code: 0 Response time: 0 ms Resolved: Ojolink is back up in 2c47b8a after 9 minutes.
gharchive/issue
2023-09-02T09:56:30
2025-04-01T06:37:01.610935
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/69302", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1878743851
🛑 ojolink is down In 4ce00d9, ojolink (ojolink.fr) was down: HTTP code: 0 Response time: 0 ms Resolved: Ojolink is back up in b7c53d8 after 7 minutes.
gharchive/issue
2023-09-02T15:51:13
2025-04-01T06:37:01.613323
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/69328", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
996937078
🛑 orkut is down In db51a73, orkut (orkut.com.br) was down: HTTP code: 0 Response time: 0 ms Resolved: orkut is back up in 28cdd76.
gharchive/issue
2021-09-15T10:50:37
2025-04-01T06:37:01.615635
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/7244", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2116958940
🛑 carlinha is down In e750bf4, carlinha (carlinha.org) was down: HTTP code: 0 Response time: 0 ms Resolved: carlinha is back up in 5eb62a8 after 27 minutes.
gharchive/issue
2024-02-04T08:37:00
2025-04-01T06:37:01.618065
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/75851", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2246422052
🛑 ojolink is down In c91785d, ojolink (ojolink.fr) was down: HTTP code: 0 Response time: 0 ms Resolved: Ojolink is back up in bdd436e after 24 minutes.
gharchive/issue
2024-04-16T16:08:37
2025-04-01T06:37:01.620637
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/79369", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2267440752
🛑 rapishare is down In d38d8e5, rapishare (rapishare.com) was down: HTTP code: 0 Response time: 0 ms Resolved: rapishare is back up in 13d0488 after 1 hour, 21 minutes.
gharchive/issue
2024-04-28T09:00:46
2025-04-01T06:37:01.622998
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/80793", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2272842303
🛑 ojolink is down In 82cabd3, ojolink (ojolink.fr) was down: HTTP code: 0 Response time: 0 ms Resolved: Ojolink is back up in 51bcf30 after 1 hour.
gharchive/issue
2024-05-01T03:04:59
2025-04-01T06:37:01.625376
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/81152", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2286930288
🛑 ojolink is down In 1336daa, ojolink (ojolink.fr) was down: HTTP code: 0 Response time: 0 ms Resolved: Ojolink is back up in d73a4f7 after 8 minutes.
gharchive/issue
2024-05-09T05:37:05
2025-04-01T06:37:01.627855
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/82144", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2304462276
🛑 ojolink is down In 4dedfc6, ojolink (ojolink.fr) was down: HTTP code: 0 Response time: 0 ms Resolved: Ojolink is back up in 864587e after 8 minutes.
gharchive/issue
2024-05-19T06:47:50
2025-04-01T06:37:01.630269
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/83413", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2431064317
🛑 Wide6 is down In 29b3c73, Wide6 (Wide6.com) was down: HTTP code: 0 Response time: 0 ms Resolved: wide6 is back up in 936b5bc after .
gharchive/issue
2024-07-25T22:18:59
2025-04-01T06:37:01.632792
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/94654", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2439234091
🛑 orkut is down In b69db9f, orkut (orkut.co.in) was down: HTTP code: 0 Response time: 0 ms Resolved: orkut is back up in 325538c after 14 minutes.
gharchive/issue
2024-07-31T06:25:21
2025-04-01T06:37:01.635196
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/95399", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2446316111
🛑 orkut is down In 8d2ea26, orkut (orkut.co.in) was down: HTTP code: 0 Response time: 0 ms Resolved: orkut is back up in 035de4c after 57 minutes.
gharchive/issue
2024-08-03T11:51:21
2025-04-01T06:37:01.637542
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/95838", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2453709067
🛑 torrentzap is down In 4842ffa, torrentzap (torrentzap.com) was down: HTTP code: 0 Response time: 0 ms Resolved: torrentzap is back up in fab96ab after 57 minutes.
gharchive/issue
2024-08-07T14:58:59
2025-04-01T06:37:01.640239
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/96348", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2454182647
🛑 torrentzap is down In ec2a0b3, torrentzap (torrentzap.com) was down: HTTP code: 0 Response time: 0 ms Resolved: torrentzap is back up in df65a18 after 16 minutes.
gharchive/issue
2024-08-07T19:27:10
2025-04-01T06:37:01.642623
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/96378", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2469146045
🛑 orkut is down In e4df387, orkut (orkut.co.in) was down: HTTP code: 0 Response time: 0 ms Resolved: orkut is back up in b245962 after 7 minutes.
gharchive/issue
2024-08-15T23:30:10
2025-04-01T06:37:01.645223
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/97437", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2485443390
🛑 orkut is down In 20142e4, orkut (orkut.co.in) was down: HTTP code: 0 Response time: 0 ms Resolved: orkut is back up in ad75ef6 after 51 minutes.
gharchive/issue
2024-08-25T20:06:47
2025-04-01T06:37:01.647525
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/98752", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2491499214
🛑 Ojolink is down In 2197ca6, Ojolink (Ojolink.net) was down: HTTP code: 0 Response time: 0 ms Resolved: ojolink is back up in 254b6e6 after .
gharchive/issue
2024-08-28T09:01:02
2025-04-01T06:37:01.649896
{ "authors": [ "GiuseppeFilingeri" ], "repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle", "url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/99079", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1851282660
Coreg.fit_pts(..., mask_high_curv=True) does not seem to work. I'm getting an error when enabling the mask_high_curv=True flag (after adhering to the warning: "Warning: There is no curvature in dataframe. Set mask_high_curv=True for more robust results"). File /nix/store/kyrwb500hfcyyabb66msvq2whpvhmh0d-python3-3.10.10-env/lib/python3.10/site-packages/xdem/coreg/base.py:907, in Coreg.fit_pts(self, reference_dem, dem_to_be_aligned, inlier_mask, transform, samples, subsample, verbose, mask_high_curv, order, z_name, weights) 903 ref_dem = reference_dem[ref_valid] 905 if mask_high_curv: 906 maxc = np.maximum( --> 907 np.abs(get_terrain_attribute(tba_dem, attribute=["planform_curvature", "profile_curvature"])), axis=0 908 ) 909 # Mask very high curvatures to avoid resolution biases 910 mask_hc = maxc.data > 5.0 TypeError: bad operand type for abs(): 'Raster' The code in question is here: https://github.com/GlacioHack/xdem/blob/02902c095ffb2bfbf422e90c13d4fce227da6583/xdem/coreg/base.py#L905-L910 I don't know if this piece of code was ever tested, or if something new has broken it! Either way, it doesn't seem to work right now, as np.abs(Raster) doesn't work it seems. @adehecq or @rhugonnet, do you know if this has ever worked? I see three potential solutions: Make np.abs(Raster) work in GeoUtils. Change the problematic line to: np.max(np.abs(get_terrain_attribute(tba_dem, attribute=[...]).data), axis=0) (note the .data addition to make it a masked array instead of a raster). Remove this functionality here altogether and make it a Filter Also, I don't think it's great that a maxc limit of 5.0 is hardcoded. Should that be made a keyword argument? What do you think @rhugonnet? Yes the tests are still poor for these new point functions (I intended to work a bit on that in the next PR on coregistration, but I'm not there yet). We didn't insist too much on tests in #346 as we knew we were about to rework some of the module. I have no idea if it has ever worked, never used it. I agree, actually I think both directions should be done: We should definitely move this to coreg/filters in time, with the limit a keyword argument, It'd be great if we added np.abs to GeoUtils in single-input handled functions: https://github.com/GlacioHack/geoutils/blob/main/geoutils/raster/raster.py#L68. To keep new features cleanly tested in the future (and ensure they are added to API, we update the package history, etc...), we could add an automated "PR" checklist in .github? Something like this: https://github.com/pyproj4/pyproj/blob/main/.github/PULL_REQUEST_TEMPLATE.md. What do you think @erikmannerfelt @adehecq? I added np.abs and np.absolute (aliases) in https://github.com/GlacioHack/geoutils/pull/393, it was just a couple lines! I remember now: the reason that line of code might have worked before is because the Raster class had the __array_interface__. Unfortunately we had to deactivate it for now because it created infinite loops when called by np.ma functions, and messed up some priorities for arithmetic functions. They are working on fixing that in NumPy by adding an interface for masked arrays (...eventually!) :sweat_smile: Yes the tests are still poor for these new point functions (I intended to work a bit on that in the next PR on coregistration, but I'm not there yet). We didn't insist too much on tests in #346 as we knew we were about to rework some of the module. I have no idea if it has ever worked, never used it. 
I agree, actually I think both directions should be done: * We should definitely move this to `coreg/filters` in time, with the limit a keyword argument, * It'd be great if we added `np.abs` to GeoUtils in single-input handled functions: https://github.com/GlacioHack/geoutils/blob/main/geoutils/raster/raster.py#L68. To keep new features cleanly tested in the future (and ensure they are added to API, we update the package history, etc...), we could add an automated "PR" checklist in .github? Something like this: https://github.com/pyproj4/pyproj/blob/main/.github/PULL_REQUEST_TEMPLATE.md. What do you think @erikmannerfelt @adehecq? Awesome @rhugonnet, thanks for the input! Yeah a checklist would be great. Even though there's no check that the listed parts are actually implemented, I think it could work in our favour in the long term. Also, thanks a lot for fixing the problem on the GU side!! Oh, and perhaps internally we could have a routine for PRs like #346. I was the one who pushed it despite it not being absolutely finished, so I'm mostly to blame. But I suspect there may be times when this could happen again, such as when @adehecq needed to quickly implement some functionality for his workshop a year ago (?). It would be good to have an "express train" routine to make sure that the essentials are merged, and cleanup happens soon thereafter!
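A minimal sketch of the second option discussed above: dropping to the underlying masked arrays instead of calling `np.abs` on `Raster` objects. The names `get_terrain_attribute`, `tba_dem` and the 5.0 threshold come from the snippet quoted in the issue; how the attributes are returned (list of Rasters vs. a single object) may differ between xdem versions, so treat this as illustrative rather than the fix that actually landed.

```python
import numpy as np

# Assumes `tba_dem` and `get_terrain_attribute` are in scope, as in the
# coreg code quoted above, and that each returned attribute is a Raster
# whose .data attribute is a NumPy masked array.
planform, profile = get_terrain_attribute(
    tba_dem, attribute=["planform_curvature", "profile_curvature"]
)

# Element-wise maximum of the absolute curvatures, on plain masked arrays.
maxc = np.maximum(np.abs(planform.data), np.abs(profile.data))

# Mask very high curvatures to avoid resolution biases. The 5.0 limit is
# hard-coded in the original code; arguably it should be a keyword argument,
# as noted in the discussion.
mask_hc = maxc > 5.0
```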
gharchive/issue
2023-08-15T11:27:19
2025-04-01T06:37:01.668113
{ "authors": [ "erikmannerfelt", "rhugonnet" ], "repo": "GlacioHack/xdem", "url": "https://github.com/GlacioHack/xdem/issues/404", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1833332838
Investigate: Building 0.3.0 image failed the first time Context It failed quickly (~48seconds) https://github.com/GlareDB/glaredb/actions/runs/5739955754/job/15556727166#step:6:164 with: error[E0433]: failed to resolve: could not find FsConfigCmd in types --> /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/rustix-0.38.5/src/backend/linux_raw/mount/syscalls.rs:214:27 | 214 | super::types::FsConfigCmd::Create, | ^^^^^^^^^^^ could not find FsConfigCmd in types error[E0433]: failed to resolve: could not find FsConfigCmd in types --> /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/rustix-0.38.5/src/backend/linux_raw/mount/syscalls.rs:228:27 | 228 | super::types::FsConfigCmd::Reconfigure, | ^^^^^^^^^^^ could not find FsConfigCmd in types Compiling serde_json v1.0.104 Compiling camino v1.1.6 Compiling crypto-common v0.1.6 error[E0277]: the trait bound reg::ArgReg<'_, A3>: From<MountFlagsArg> is not satisfied --> /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/rustix-0.38.5/src/backend/linux_raw/arch/mod.rs:258:17 | 258 | $a3.into(), | ^^^^ the trait From<MountFlagsArg> is not implemented for reg::ArgReg<'_, A3> | ::: /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/rustix-0.38.5/src/backend/linux_raw/mount/syscalls.rs:23:13 | 23 | ret(syscall_readonly!( | ______- 24 | | NR_mount, 25 | | source, 26 | | target, ... | 29 | | data 30 | | )) | |- in this macro invocation | = help: the following other types implement trait From<T>: <reg::ArgReg<'a, Num> as From<&'a CStr>> <reg::ArgReg<'a, Num> as From<&'a mut MaybeUninit>> <reg::ArgReg<'a, Num> as From<&'a mut [MaybeUninit]>> <reg::ArgReg<'a, Num> as From<(backend::fs::types::Mode, backend::fs::types::FileType)>> <reg::ArgReg<'a, Num> as From<*const T>> <reg::ArgReg<'a, Num> as From<*mut T>> <reg::ArgReg<'a, Num> as From> <reg::ArgReg<'a, Num> as From<BorrowedFd<'a>>> and 18 others = note: required for MountFlagsArg to implement Into<reg::ArgReg<'_, A3>> = note: this error originates in the macro syscall_readonly (in Nightly builds, run with -Z macro-backtrace for more info) Compiling block-buffer v0.10.4 Compiling term_size v0.3.2 Specifically on cargo install just. Not sure why that would intermittently fail. This was a bug in rustix 0.38.5, which is now fixed in rustix 0.35.6. 🙏 Really appreciate the info and getting a fix out quickly.
gharchive/issue
2023-08-02T14:45:06
2025-04-01T06:37:01.679881
{ "authors": [ "greyscaled", "scsmithr", "sunfishcode" ], "repo": "GlareDB/glaredb", "url": "https://github.com/GlareDB/glaredb/issues/1460", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
112582689
Code Folding error When I have folded some code and press Beautify, the folding opens itself and doesn't come back to closed. It would be handy if, after Beautify ends, the folding were restored. :) Duplicate of #116
gharchive/issue
2015-10-21T13:01:02
2025-04-01T06:37:01.681803
{ "authors": [ "Glavin001", "kasik96" ], "repo": "Glavin001/atom-beautify", "url": "https://github.com/Glavin001/atom-beautify/issues/615", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
98100278
Add Gherkin grammar support This resolves Glavin001/atom-beautify#377 Uses the https://github.com/cucumber/gherkin/tree/master/js node.js package to use the official Lexer class. The only known issue I've seen so far is that it does not format tables properly -- the columns are not resized to match the widest cell in each column. @Glavin001 When can we expect v0.29.0 with Gherkin support? very excited about this update! :+1: This is great! Thank you for this Pull Request. Only thing that is missing is tests: place original and expected files in a gherkin directory inside of examples/nested-jsbeautifyrc/ directory: https://github.com/Glavin001/atom-beautify/tree/master/examples/nested-jsbeautifyrc Thanks again! @meustice I hope to merge this soon. I have been set back with v0.29.0 because of work and I do not have the time that I would like to review all of the Pull Requests properly and add new features under the v0.29.0 milestone. Hopefully this week goes well and I have a little bit of time this weekend to review and merge everything. If the tests are all there, I would like to merge all of the currently open Pull Requests and then make a new release. Thank you for your patience! @Glavin001 thanks, I'll see if I can get around to writing tests for this, and I'll make the change for debug_lexer also. The main reason I didn't want it to run initially is that it could slow things down if it's executed when it's not needed. Is there a Logger method to check if a certain type is enabled, before doing the rest of the work? I.e., only do the work if log level is verbose? Is there a Logger method to check if a certain type is enabled, before doing the rest of the work? I.e., only do the work if log level is verbose? Good idea. You could probably do something like: loggerLevel = atom?.config.get('atom-beautify._loggerLevel') if loggerLevel is 'verbose': # Log stuff here I'll see if I can get around to writing tests for this, and I'll make the change for debug_lexer also. The tests should be very easy: an example file with some styling problems in the original directory, and then the correctly styled output in the expected directory. You can disable a test by adding _ to the front of the filename inside of original. The _ prefix in the original example file will cause it to be ignored / skipped. So if you could add a completely working test and then maybe a disabled test for the tables, then that would be great! Should only take a few minutes. Just something rough. Thanks! @Glavin001 Are the existing test failures expected? When I run build-package.sh, I get a bunch of failures: Finished in 4.637 seconds 55 tests, 395 assertions, 201 failures, 0 skipped @Glavin001 I ran into an issue with Atom automatically inserting a newline at the end of the file upon saving (due to the whitespace package). This was causing problems because atom-beautify appears to specifically strip the trailing newlines from resolved string, as even trying to explicitly add a call to @write_blank(), or concatenating \n to the resolved string, and the test continued to fail due to the trailing newline in the expected output. The only way to get the tests to pass was by turning off the "ensure newline" option from the whitespace package, so that I could delete the trailing newline from the original and expected files. Then it would work. Is that a known issue? Anything I should have done differently to account for that? 
The only way to get the tests to pass was by turning off the "ensure newline" option from the whitespace package, so that I could delete the trailing newline from the original and expected files. Then it would work. Is that a known issue? Anything I should have done differently to account for that? Nope, you are correct. Disabling the whitespace package, at least temporarily, is also what I do. Alternatively you could use the Right-Click context menu item for the file in the file tree called Beautify File which will not trigger the whitespace package. Are the existing test failures expected? When I run build-package.sh, I get a bunch of failures: These failures are likely because there are many beautifiers supported that use third-party CLI executables that need to be installed. For instance, to run the tests for beautifying PHP code Atom Beautify requires PHP-CS-Fixer, which I assume you may not have installed. As long as your tests for Gherkin pass, then I can review this and still merge it. Thanks. Published to v0.28.9
gharchive/pull-request
2015-07-30T07:06:54
2025-04-01T06:37:01.693111
{ "authors": [ "Glavin001", "jhansche", "meustice" ], "repo": "Glavin001/atom-beautify", "url": "https://github.com/Glavin001/atom-beautify/pull/488", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1930919628
Migrate to AOSPv14 Boots to UI. The same issue here: https://github.com/waydroid/waydroid/issues/1070 We also have this issue: https://github.com/raspberry-vanilla/android_local_manifest/issues/29 I ended up reverting wpa_supplicant to A13. Ready for merging.
gharchive/pull-request
2023-10-06T20:52:44
2025-04-01T06:37:01.710221
{ "authors": [ "rsglobal" ], "repo": "GloDroidCommunity/raspberry-pi", "url": "https://github.com/GloDroidCommunity/raspberry-pi/pull/30", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
568803093
[BUG] Filtering with form.Select does not take effect, and filtering problems when the time field is bigint Bug description 1. Following the example // set as single-select type info.AddField("Gender", "gender", db.Tinyint). FieldFilterable(types.FilterType{FormType: form.SelectSingle}). FieldFilterOptions([]map[string]string{ {"value": "0", "field": "men"}, {"value": "1", "field": "women"}, }).FieldFilterOptionExt(map[string]interface{}{"allowClear": true}) and changing form.SelectSingle to form.Select, filtering no longer works; filtering on a single value does not work either. 2. Time filtering: the time field in the database is a bigint and cannot be used for time-based filtering. How can this be implemented? Steps to reproduce [describe the reproduction steps clearly so others can see the problem] Expected result [describe the result you originally expected] Reproduction code [provide reproducible code, a repository, or an online example] Version info: GoAdmin version: [e.g. 1.0.0] golang version Browser environment Development environment [e.g. mac OS] Other information [screenshots and other information can go here] @smirkcat I cannot reproduce this; please confirm that your GoAdmin version is the latest. @chenhg5 Version info: GoAdmin version: 1.2.2 golang version: go version go1.13.6 windows/amd64 Browser environment: all browsers Development environment: windows vscode @chenhg5 The filter URL is http://18.139.250.215:9033/admin/info/localorder?pair=&username=&username__operator__=like&order_number=&status=&type=buy&hegding_status[]=0&hegding_status[]=1&create_time_start__goadmin=&create_time_end__goadmin=&_previous_=&_t= where hegding_status is the field being filtered. @smirkcat Could you share the code for the hegding_status column from your data model file? info.AddField("对冲状态", "hegding_status", db.Tinyint).FieldDisplay(func(model types.FieldModel) interface{} { if model.Value == "2" { return "状态3" } if model.Value == "1" { return "状态2" } return "状态1" }).FieldFilterable(types.FilterType{FormType: form.Select}). FieldFilterOptions([]map[string]string{ {"value": "0", "field": "状态1"}, {"value": "1", "field": "状态2"}, {"value": "2", "field": "状态3"}, }) @smirkcat I tested it: form.Select is indeed not supported at the moment, but switching to form.SelectSingle works. Support for form.Select should be added in next week's release. Supported in the current version now.
gharchive/issue
2020-02-21T08:13:02
2025-04-01T06:37:01.763545
{ "authors": [ "chenhg5", "smirkcat" ], "repo": "GoAdminGroup/go-admin", "url": "https://github.com/GoAdminGroup/go-admin/issues/174", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2161375856
button_pressed and button_released signals of InteractableAreaButton always generate errors The button_pressed and button_released signals produce this error when emitted: interactable_area_button.gd:87 @ _on_button_entered(): Error calling from signal 'button_pressed' to callable: 'Node3D(Button.gd)::on_button_pressed': Cannot convert argument 1 from Object to NodePath. <C++ Source> core/object/object.cpp:1140 @ emit_signalp() <Stack Trace> interactable_area_button.gd:87 @ _on_button_entered() Changing emit(self) to emit(get_path()) resolves the problem, but I don't know if this is correct. Could you provide a bit more information? The complaint is indicating a mismatch with some "on_button_pressed" handler in a Button.gd file - could you provide the code of that callback function? Yes, sorry, the problem was in fact my callback and not the signal; for some reason I typed the "button" argument as a NodePath instead of a Variant. I was confused by looking at the code, which has a "button" variable that is a NodePath.
gharchive/issue
2024-02-29T14:18:32
2025-04-01T06:37:01.775596
{ "authors": [ "BrokAnkle", "Malcolmnixon" ], "repo": "GodotVR/godot-xr-tools", "url": "https://github.com/GodotVR/godot-xr-tools/issues/618", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2203795261
Create README.md gg done
gharchive/pull-request
2024-03-23T09:39:36
2025-04-01T06:37:01.776648
{ "authors": [ "Godse-07", "IamPiklu" ], "repo": "Godse-07/Random-Password-generator", "url": "https://github.com/Godse-07/Random-Password-generator/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2392604763
apply-colors.sh has a bug Line 126 is currently this: if [[ -z "${GS}"]] &&[[ -z "${DCONF}" ]] && [[ -z "${GCONF}" ]]; then it should be this: if [[ -z "${GS}" ]] && [[ -z "${DCONF}" ]] && [[ -z "${GCONF}" ]]; then It's missing a couple of spaces Yes, yesterday's update broke the apply script. On my Ubuntu workstation with gnome terminal: /tmp/gogh.apply.EEXYdO: line 126: conditional binary operator expected /tmp/gogh.apply.EEXYdO: line 126: syntax error near `-z' /tmp/gogh.apply.EEXYdO: line 126: ` if [[ -z "${GS}"]] &&[[ -z "${DCONF}" ]] && [[ -z "${GCONF}" ]]; then' Fixed in #449, should be closed.
gharchive/issue
2024-07-05T13:22:45
2025-04-01T06:37:01.793960
{ "authors": [ "EvergreenTheTree", "bwanshoom", "luX0r-reload" ], "repo": "Gogh-Co/Gogh", "url": "https://github.com/Gogh-Co/Gogh/issues/450", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1365142059
Bulletin Board Tab Selection Broken Tabs aren't showing which is selected any more. Also, PVP should come to the left of "Other" and "All" Also we lost the "Send Spec" button in the lower right. Also needs "Created" time and "Updated" time still as columns, default sort to Created oldest on top. Sorry I can't remember if that's in another ticket yet. Seems to work now... maybe I had some bad data in the groups? Dunno. Probably a Dupe of #87
gharchive/issue
2022-09-07T20:22:47
2025-04-01T06:37:01.796398
{ "authors": [ "Gogo1951" ], "repo": "Gogo1951/Groupie", "url": "https://github.com/Gogo1951/Groupie/issues/86", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
378266314
RNTwitterSignIn.logIn() sometimes requires authorising twice When invoking RNTwitterSignIn.init followed by RNTwitterSignIn.logIn(), most of the time only the screen on the left will show. I press connect, and this then dismisses the view and I handle the promise call using the authToken and authTokenSecret. However, roughly 1 in 3 times, the screen on the right will show immediately after pressing connect and the screen dismissing. Why is this happening? The screen on the left is from the Twitter app. I am logged in, and as far as I know once I hit connect that should finish the process. The package should just pass the relevant info through the promise and that be the end. Why does it sometimes load this second window afterwards prompting for login? Thanks in advance. @jskidd3 @nabylb Did you find any solution? I have the same issue. If I remember rightly the Twitter login SDK is deprecated, so we pulled support from our app. 👍 @GoldenowlConsultingCompany Having the same issue. Why does this even happen? The first auth through the Twitter app doesn't even matter: you can just close it, and then the browser version is opened, and that is the one that matters (whether you authorize or decline). Also, why do I have to authorize every time? Shouldn't I already be logged in every time the Twitter auth screen opens? For some reason I am entering my Twitter username & password every time I want to log in with Twitter.
gharchive/issue
2018-11-07T12:25:24
2025-04-01T06:37:01.808307
{ "authors": [ "ZerakPalani", "ayberkanilatsiz", "jskidd3" ], "repo": "GoldenOwlAsia/react-native-twitter-signin", "url": "https://github.com/GoldenOwlAsia/react-native-twitter-signin/issues/119", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2049561534
fix(help): catch http exception fixes: #146 Description Catch possible exceptions while retrieving the help info Awesome, thanks for the contribution, dalao~
gharchive/pull-request
2023-12-19T22:56:30
2025-04-01T06:37:01.809733
{ "authors": [ "GoldenPotato137", "dreamjz" ], "repo": "GoldenPotato137/PotatoVN", "url": "https://github.com/GoldenPotato137/PotatoVN/pull/147", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1086024722
Fix subclasses bug and add support for frozensets Description In this PR: Support for arbitrary set, dict and list subclasses Support for frozenset Add filter_not for lists and sets Fixes #17 Type of change Please delete options that are not relevant. [x] Bug fix (non-breaking change which fixes an issue) [x] New feature (non-breaking change which adds functionality) [x] This change requires a documentation update Checklist [x] My code follows the style guidelines of this project [x] I have made corresponding changes to the documentation [x] I have added tests that prove my fix is effective or that my feature works You can review it now @samuelchassot ! Haha no I'll do them! Haha no I'll do them! hahah 👍 Actually it seems that some lines are not tested. Is it a bug? Should be fixed Great so let's merge :) Great job!
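A rough usage sketch of the `filter_not` addition described in this PR. The exact signature is an assumption based on the description above (a predicate-taking method patched onto the built-in collections, returning the elements for which the predicate is false); check the package docs for the authoritative API.

```python
import pyfuncol  # importing the package patches the built-in collection types

numbers = [1, 2, 3, 4]

# Hypothetical call: keep the elements for which the predicate is False,
# i.e. the complement of filter.
odds = numbers.filter_not(lambda x: x % 2 == 0)
print(odds)  # expected: [1, 3]

# The PR also adds frozenset support and arbitrary list/set/dict subclasses,
# so the same style of call should work there too.
small = frozenset({1, 2, 3}).filter_not(lambda x: x > 2)
print(small)
```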
gharchive/pull-request
2021-12-21T16:47:42
2025-04-01T06:37:01.836553
{ "authors": [ "Gondolav", "samuelchassot" ], "repo": "Gondolav/pyfuncol", "url": "https://github.com/Gondolav/pyfuncol/pull/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
60020912
Allow clients to explicitly reset client id Resetting client ID was requested as a feature during a recent (unrelated product) privacy review. Not going to happen.
gharchive/issue
2015-03-05T21:42:59
2025-04-01T06:37:01.844000
{ "authors": [ "stevogotchi" ], "repo": "GoogleChrome/chrome-platform-analytics", "url": "https://github.com/GoogleChrome/chrome-platform-analytics/issues/32", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
104001501
Add Array.prototype.includes sample @jeffposnick Do you mind if I take over this simple one? All yours! Thanks for taking this on!
gharchive/issue
2015-08-31T07:26:12
2025-04-01T06:37:01.869306
{ "authors": [ "addyosmani", "beaufortfrancois", "jeffposnick" ], "repo": "GoogleChrome/samples", "url": "https://github.com/GoogleChrome/samples/issues/201", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1753551371
Remove Internet Explorer from list of supported browsers As noted in #350 web-vitals no longer works in IE9 as of version 2.1.2. It probably still works in IE10 and IE11 but is no longer tested in either and so is at risk of breaking in either soon. With all versions of IE no longer officially supported, I think it's best to remove any language about its support from the README. Perhaps this should be a breaking change and saved for v4, but as noted v3 (and also the latest version of v3) is already not reflective of the actual situation, so I'm tempted not to leave this for v4 and just merge this. Closing for now as not needed.
gharchive/pull-request
2023-06-12T20:42:11
2025-04-01T06:37:01.870838
{ "authors": [ "tunetheweb" ], "repo": "GoogleChrome/web-vitals", "url": "https://github.com/GoogleChrome/web-vitals/pull/355", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
265787820
Add definitions Glossary to Docs Taken from #640 From Kayce I'd avoid jargon-y terms like "precache" and "inject manifest" and just describe what it's doing in plain English. Or, if you do continue to use these terms, link to definitions. From Addy Workbox/Toolbox/Precache have historically opted to use these more jargon-y terms, but if they're impeding new users from using our libraries, we should change that. I'm up for a glossary of terms or trying to stay away from using such jargon where we can. Created a placeholder doc for this: https://docs.google.com/document/d/1IjDlBAsB4_SAx2XqPrNDGBj8tsR2UULLxCbJIYIk4NM/edit?usp=sharing Feel free to add content or suggestions and I'll ensure it lands in the V3 release.
gharchive/issue
2017-10-16T14:21:38
2025-04-01T06:37:01.880732
{ "authors": [ "gauntface" ], "repo": "GoogleChrome/workbox", "url": "https://github.com/GoogleChrome/workbox/issues/904", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1874903588
It is unclear if chromedriver always matches the chrome version With chromedriver 114 and lower there was a dedicated URL to tell the latest chromedriver version: https://chromedriver.storage.googleapis.com/LATEST_RELEASE Here we have new links to tell the version, but it doesn't clearly specify if it is common for chrome and chromedriver: https://github.com/GoogleChromeLabs/chrome-for-testing#other-api-endpoints In https://googlechromelabs.github.io/chrome-for-testing/last-known-good-versions-with-downloads.json I can see that for Stable release there is a common version and I can assume that both chrome and chromedriver download links are based on that version. Could you please confirm if it is safe to assume that the versions always match? Nowhere in the docs it has been clearly stated. Here we have new links to tell the version, but it doesn't clearly specify if it is common for chrome and chromedriver: https://github.com/GoogleChromeLabs/chrome-for-testing#other-api-endpoints From the README: The set of “all CfT assets” for a given Chrome version is a matrix of supported binaries × platforms. The current list of supported binaries is: chrome a.k.a. Chrome for Testing (supported since v113.0.5672.0) chromedriver (supported since v115.0.5763.0) chrome-headless-shell (supported since v118.0.5944.0) The current list of supported platforms is: linux64 mac-arm64 mac-x64 win32 win64 Rest assured that any version number you obtain via the CfT JSON API or via the other endpoints are guaranteed to have all CfT assets available for that version.
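Not from the thread, but a small sketch of how one might verify the claim against the JSON endpoint mentioned above. The field names (`channels`, `Stable`, `version`, `downloads`) assume the shape that endpoint served at the time of writing:

```python
import json
from urllib.request import urlopen

URL = ("https://googlechromelabs.github.io/chrome-for-testing/"
       "last-known-good-versions-with-downloads.json")

with urlopen(URL) as resp:
    data = json.load(resp)

stable = data["channels"]["Stable"]
print("Stable Chrome for Testing version:", stable["version"])

# All binaries under a channel are published for that single version,
# so every chrome and chromedriver download URL should embed it.
for binary in ("chrome", "chromedriver"):
    for entry in stable["downloads"].get(binary, []):
        assert stable["version"] in entry["url"], (binary, entry["url"])
```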
gharchive/issue
2023-08-31T06:58:53
2025-04-01T06:37:01.887350
{ "authors": [ "mathiasbynens", "pwspot" ], "repo": "GoogleChromeLabs/chrome-for-testing", "url": "https://github.com/GoogleChromeLabs/chrome-for-testing/issues/48", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1615593382
Implement browsingContext.print (print to PDF) Implement Add browsingContext.print command by jgraham · Pull Request #363 · w3c/webdriver-bidi. CDP method: Page.printToPDF. Spec: https://w3c.github.io/webdriver-bidi/#command-browsingContext-print Tracking [x] Basic implementation [x] ~CDP does not have a shrinkToFit equivalent (c.f. comment)~ it is called preferCSSPageSize [x] Implement shrinkToFit [x] #577 [x] Issue 1430696: CDP Page.printToPDF crashes on tiny pages [ ] CDP Page.printToPDF throws invalid parameters exception on pages with tiny dimensions (e.g. 1x1 pixel) (cont.) [ ] Fix WPT tests /webdriver/tests/bidi/browsing_context/print/ [ ] (optional) Add E2E test: print to pdf -> open it in the browser (either data/pdf or file:///) -> take screenshot -> compare golden @sadym-chromium shrinkToFit does not have a CDP equivalent. What do we do in these situations? File a FR against CDP? What do we do in these situations? In general there are 3 options: Implement in CDP. Implement on top of existing CDP functionality + some extra logic. Roll back the spec part. Implement in CDP. How can we do so? This was also suggested by @whimboo on https://github.com/web-platform-tests/wpt/pull/38931#issuecomment-1468350079 Could you give me a pointer to the CDP repo? Implement in CDP. How can we do so? This was also suggested by @whimboo on web-platform-tests/wpt#38931 (comment) Could you give me a pointer to the CDP repo? sent PM @thiagowfx FYI there are bunch of new failing tests in WPT: https://github.com/GoogleChromeLabs/chromium-bidi/pull/625/files bidiMapper:mapperDebug:CDP sent ▸ { "id": 11, "method": "Page.navigate", "params": { "url": "data:application/pdf;base64,JVBERi0xLjQKJdPr6eEKMSAwIG9iago8PC9DcmVhdG9yIChDaHJvbWl1bSkKL1Byb2R1Y2VyIChTa2lhL1BERiBtMTE2KQovQ3JlYXRpb25EYXRlIChEOjIwMjMwNjEyMjEwNjM4KzAwJzAwJykKL01vZERhdGUgKEQ6MjAyMzA2MTIyMTA2MzgrMDAnMDAnKT4+CmVuZG9iagozIDAgb2JqCjw8L0xlbmd0aCAwPj4gc3RyZWFtCgplbmRzdHJlYW0KZW5kb2JqCjIgMCBvYmoKPDwvVHlwZSAvUGFnZQovUmVzb3VyY2VzIDw8L1Byb2NTZXQgWy9QREYgL1RleHQgL0ltYWdlQiAvSW1hZ2VDIC9JbWFnZUldPj4KL01lZGlhQm94IFswIDAgMjI2NzcuMTE5IDE3MDA4LjA4XQovQ29udGVudHMgMyAwIFIKL1N0cnVjdFBhcmVudHMgMAovUGFyZW50IDQgMCBSPj4KZW5kb2JqCjQgMCBvYmoKPDwvVHlwZSAvUGFnZXMKL0NvdW50IDEKL0tpZHMgWzIgMCBSXT4+CmVuZG9iago1IDAgb2JqCjw8L1R5cGUgL0NhdGFsb2cKL1BhZ2VzIDQgMCBSPj4KZW5kb2JqCnhyZWYKMCA2CjAwMDAwMDAwMDAgNjU1MzUgZiAKMDAwMDAwMDAxNSAwMDAwMCBuIAowMDAwMDAwMjAyIDAwMDAwIG4gCjAwMDAwMDAxNTUgMDAwMDAgbiAKMDAwMDAwMDM3NiAwMDAwMCBuIAowMDAwMDAwNDMxIDAwMDAwIG4gCnRyYWlsZXIKPDwvU2l6ZSA2Ci9Sb290IDUgMCBSCi9JbmZvIDEgMCBSPj4Kc3RhcnR4cmVmCjQ3OAolJUVPRgo=", "frameId": "A45327FAB30B9B82E7B22502D73B7520" }, "sessionId": "9371436D303A35CD0D25C2B137D48C64" } +0ms bidiMapper:mapperDebug:CDP received ◂ { "id": 11, "result": { "frameId": "A45327FAB30B9B82E7B22502D73B7520", "loaderId": "4C7ADCE66AA34C1EE27626ABB37C38EA", "errorText": "net::ERR_ABORTED" }, "sessionId": "9371436D303A35CD0D25C2B137D48C64" } +0ms FAILED [100%] When will this feature be released?
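For reference, a hypothetical sketch (in Python, not from the issue) of what a `browsingContext.print` command payload could look like. Only the command name and `shrinkToFit` are taken from the text above; the remaining field names follow the WebDriver BiDi draft and may differ from the final implementation.

```python
import json

print_command = {
    "id": 1,
    "method": "browsingContext.print",
    "params": {
        # Assumed: the id of the browsing context to print.
        "context": "<browsing-context-id>",
        # shrinkToFit is mapped onto CDP Page.printToPDF's preferCSSPageSize,
        # per the tracking checklist above.
        "shrinkToFit": True,
    },
}

# Sent over the BiDi websocket; the expected result carries the PDF as
# base64-encoded data produced by CDP's Page.printToPDF.
print(json.dumps(print_command, indent=2))
```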
gharchive/issue
2023-03-08T16:52:19
2025-04-01T06:37:01.898133
{ "authors": [ "devAtQ", "sadym-chromium", "thiagowfx" ], "repo": "GoogleChromeLabs/chromium-bidi", "url": "https://github.com/GoogleChromeLabs/chromium-bidi/issues/518", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1961438797
ci: speed up prettier/eslint pre-commit hooks and use types instead of files c.f. https://github.com/pre-commit/mirrors-prettier/blob/main/.pre-commit-hooks.yaml and https://github.com/pre-commit/identify/blob/main/identify/extensions.py I had made a mistake. It's types_or, not types. Fixed.
gharchive/pull-request
2023-10-25T13:31:19
2025-04-01T06:37:01.900001
{ "authors": [ "thiagowfx" ], "repo": "GoogleChromeLabs/chromium-bidi", "url": "https://github.com/GoogleChromeLabs/chromium-bidi/pull/1477", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
337842596
Vertical two-up Current mobile designs show the two-up working vertically, so this needs to be a feature of the two-up. Done!
gharchive/issue
2018-07-03T10:40:03
2025-04-01T06:37:01.906768
{ "authors": [ "jakearchibald" ], "repo": "GoogleChromeLabs/squoosh", "url": "https://github.com/GoogleChromeLabs/squoosh/issues/81", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
408383761
Help us pick a new project logo! Hi all! We're thinking about getting a new logo for Agones, and we want your thoughts! The project founders have selected three options. We want you to vote and tell us which one you prefer! Here are the options: Option 1: Option 2: Option 3: Got a favorite? Now, how to vote! We're going to do this cheesy emoji style. For option 1, react to this post with the :+1: For option 2, react to this post with the :heart: For option 3, react to this post with the :tada: Voting will close on Feb 12! looking more attractive than the other two How about adding a little Kubernetes icon on the light blue controller of the first option to show that it is a controller for Kubernetes? @markmandel @Kuqd what do you think of this idea? Honestly - nobody else does sub logos, so I'm personally not a fan. Better to be clean about it, and just have our own image. That all being said - looks like we have a clear winner :+1: Yep, sounds good! Now to design a t-shirt.... :) Since we're past the 12th, should we close this ticket? Winner winner is: :fireworks: :fireworks: :fireworks: :fireworks: :fireworks: :fireworks: :fireworks:
gharchive/issue
2019-02-09T01:43:18
2025-04-01T06:37:01.914903
{ "authors": [ "markmandel", "nikkisingh0204", "pooneh-m", "thisisnotapril" ], "repo": "GoogleCloudPlatform/agones", "url": "https://github.com/GoogleCloudPlatform/agones/issues/577", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2401788447
Jetstream Maxtext Deployment Module: All scale rules now in a single HPA Having multiple HPAs monitoring the same resource causes a race condition. Keeping all the rules in the same HPA fixes this. /gcbrun /gcbrun /gcbrun /gcbrun /gcbrun
gharchive/pull-request
2024-07-10T21:39:22
2025-04-01T06:37:01.916681
{ "authors": [ "Bslabe123" ], "repo": "GoogleCloudPlatform/ai-on-gke", "url": "https://github.com/GoogleCloudPlatform/ai-on-gke/pull/730", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
161093611
ResumingStreamingResultScanner should always close calls. ResumingStreamingResultScanner doesn't close calls when it throws an exception. Close the call for both types of exceptions. LGTM
gharchive/pull-request
2016-06-19T21:35:46
2025-04-01T06:37:01.933561
{ "authors": [ "garye", "sduskis" ], "repo": "GoogleCloudPlatform/cloud-bigtable-client", "url": "https://github.com/GoogleCloudPlatform/cloud-bigtable-client/pull/882", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1379807137
compute-vm module does not support 'disk_encryption_key' property for compute instance templates Add disk_encryption_key field to google_compute_instance_template resource in compute-vm module (Supported when compute instance is made individually but not the instance template) Ref: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance_template#disk_encryption_key PR #830
gharchive/issue
2022-09-20T18:10:03
2025-04-01T06:37:01.935484
{ "authors": [ "bjbloemker-google" ], "repo": "GoogleCloudPlatform/cloud-foundation-fabric", "url": "https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/issues/829", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
789568505
feat: enable comment bot all repos allows bot on all repos by default added deny list for disabling on specific modules This has been applied. Spot checked https://github.com/terraform-google-modules/terraform-google-network/pull/232
gharchive/pull-request
2021-01-20T02:25:57
2025-04-01T06:37:01.937534
{ "authors": [ "bharathkkb" ], "repo": "GoogleCloudPlatform/cloud-foundation-toolkit", "url": "https://github.com/GoogleCloudPlatform/cloud-foundation-toolkit/pull/862", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1839804696
Feature Request: support --quiet option Expected Behavior have a way to configure --quiet (or via env var): no useless logs increasing monitoring costs. cf https://github.com/GoogleCloudPlatform/cloud-sql-proxy/issues/1738 Actual Behavior currently we get lots of useless INFO logs with connections. Specifications Version: 1.0.2 Platform: gke +1
gharchive/issue
2023-08-07T16:12:53
2025-04-01T06:37:01.941063
{ "authors": [ "thomas-riccardi", "williamcruzme" ], "repo": "GoogleCloudPlatform/cloud-sql-proxy-operator", "url": "https://github.com/GoogleCloudPlatform/cloud-sql-proxy-operator/issues/402", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
290439254
Cloudsql-Proxy in Kubernetes for cluster PostgreSQL Hi! I need to configure the proxy to attach to a Postgres cluster that has read and read-write nodes. Would the proxy support this configuration? Or should I deploy a proxy that manages this, such as crunchy-proxy? Example diagrams: Cloudsql-Proxy managing read and read-write Crunchy-Proxy managing read and read-write and using Cloudsql-Proxy to connect with Google SQL THX! For graph one, do you need a LB that will automatically route to the master or the replica? If yes, then the current proxy doesn't support this; go for graph two. If you just want one proxy to be able to set up the connection to both the master and the replica, and have the "Web Application" control which one to talk to, then graph one is supported. Hi, The solution I took in my case is to deploy pgpool for SQL management, configured with master and slave backends, each a cloudsql-proxy pointing to a different set of servers. Pgpool manages read and read-write and uses Cloudsql-Proxy to connect with Google SQL. THX! https://issuetracker.google.com/issues/37271935
gharchive/issue
2018-01-22T11:54:12
2025-04-01T06:37:01.945800
{ "authors": [ "AthenaShi", "Tedezed" ], "repo": "GoogleCloudPlatform/cloudsql-proxy", "url": "https://github.com/GoogleCloudPlatform/cloudsql-proxy/issues/144", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
363098592
Improve support for compiling without C++11 This helps back-porting the packages to older releases using older compilers. I signed it! I signed it! Are the CLA issues resolved now? Can you please sync your branch and trigger the CLA checker again if you're a member of the right group? Pull requests with multiple contributors break the CLA checker as far as I can tell. I created a test PR with only my commits and it does not pass the CLA test either: https://github.com/GoogleCloudPlatform/compute-image-packages/pull/719
gharchive/pull-request
2018-09-24T11:01:47
2025-04-01T06:37:01.948245
{ "authors": [ "illfelder", "rbalint" ], "repo": "GoogleCloudPlatform/compute-image-packages", "url": "https://github.com/GoogleCloudPlatform/compute-image-packages/pull/657", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
483394887
Unable to process unmanaged instance group with managed_instance_group.py managed_instance_group.py doesn't give an option to create an unmanaged instance group. When creating an unmanaged instance group, an instance template should not be required. moved to cft Issue has been moved to the cft https://github.com/GoogleCloudPlatform/cloud-foundation-toolkit/issues/288
gharchive/issue
2019-08-21T12:37:04
2025-04-01T06:37:01.950160
{ "authors": [ "gdzieleziesz" ], "repo": "GoogleCloudPlatform/deploymentmanager-samples", "url": "https://github.com/GoogleCloudPlatform/deploymentmanager-samples/issues/490", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
812185966
fix: use current server time as version time for CreateBackup Use the current server time as version time instead of the local system time. The specs for the PITR samples indicate that the CreateBackup sample should set the current time as version time. By selecting the current timestamp from the server before actually creating the backup, the version time should be valid as it is after database creation and not in the future. Alternatively we could choose to remove the version time entirely from the sample (although that is not in line with the original intention for the sample). Fixes #1262 I think that the solution taken here is fair, but we have done something different in the Java Spanner tests, where we use the database earliest version time, instead of the server current timestamp (see https://github.com/googleapis/java-spanner/blob/master/google-cloud-spanner/src/test/java/com/google/cloud/spanner/it/ITPitrBackupAndRestore.java#L117). I would prefer if we did that unless you have concerns. I like that we are not showing the user where we are getting the date from. Failures are unrelated, merging...
gharchive/pull-request
2021-02-19T17:03:26
2025-04-01T06:37:01.952925
{ "authors": [ "amanda-tarafa", "olavloite", "thiagotnunes" ], "repo": "GoogleCloudPlatform/dotnet-docs-samples", "url": "https://github.com/GoogleCloudPlatform/dotnet-docs-samples/pull/1263", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
167107551
Adding UDF Resources to Queries The BigQuery client doesn't currently support adding UDF resources. This pull request adds a UDFResources object, and properties to set these resources on queries and QueryJobs. I signed it! See #2007 for an alternative. All commits were authored by me. I had a different email set as primary when the initial commit was made. I'm not entirely sure how that works, but you might have to rebase and change the user on those commits. OK rebased to reflect the proper author email. @dwmclary Thanks for the patch! @dwmclary It appears that Github / the CLA bot do not think 1ff32523f5de4d9ecacf7294ace17a77b57211d0 was authored by you, but by another blind poet of the same name. Authored by me, and now painfully rebased to reflect it. OK, should be all fixed as soon as CLAbot looks at the authorship again. @googlebot can you check the CLA again? OK, this rebase stuff is silly, I'm going to close this PR and open a new one with the right authorship. @tseaver @daspecster Any idea why the CLA isn't flipping? At this point all commits and author emails should point to me. There doesn't appear to be any rebase magic I can perform beyond this. Worst-case, I can re-fork and resubmit if we can't make @googlebot behave. Re-opened here
gharchive/pull-request
2016-07-22T18:45:04
2025-04-01T06:37:01.961889
{ "authors": [ "daspecster", "dwmclary", "tseaver" ], "repo": "GoogleCloudPlatform/gcloud-python", "url": "https://github.com/GoogleCloudPlatform/gcloud-python/pull/2015", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1715208954
CloudSQL New BP Ext Rule for maintenance window cloudsql/BP_EXT/2023_001 Add new rule for CloudSQL - cloudsql/BP_EXT/2023_001 LGTM. Thanks for the PR
gharchive/pull-request
2023-05-18T08:08:08
2025-04-01T06:37:01.963019
{ "authors": [ "abhigupta1207", "junggil" ], "repo": "GoogleCloudPlatform/gcpdiag", "url": "https://github.com/GoogleCloudPlatform/gcpdiag/pull/67", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
319282757
BigQuery: add UnknownJob type for redacted jobs. Fixes https://github.com/GoogleCloudPlatform/google-cloud-python/issues/5220 /cc @shollyman I can't see any benefit to the caller in getting back an instance of UnknownJob: I would certainly find it disconcerting and useless. ISTM that the caller's experience would be better if the backend skipped reporting jobs to which the caller does not have access. As an alternative, Client.list_jobs() could just discard entries without any configuration. That said, I don't see an issue with the code here, so feel free to merge if the consensus is agin' me.
gharchive/pull-request
2018-05-01T18:29:54
2025-04-01T06:37:01.964892
{ "authors": [ "tseaver", "tswast" ], "repo": "GoogleCloudPlatform/google-cloud-python", "url": "https://github.com/GoogleCloudPlatform/google-cloud-python/pull/5281", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
202921561
Document Speech low-level API (GAPIC code) Types such as v1beta1/speech_api.rb are not currently published to the gh-pages documentation. [] Remove exclusions from .yardopts [] Add navigation to GAPIC/gRPC code Speech will need the functionality covered in #1180 in order to properly expose the GAPIC types. The GAPIC and Protobuf classes share a common namespace.
gharchive/issue
2017-01-24T19:49:06
2025-04-01T06:37:01.966739
{ "authors": [ "blowmage", "quartzmo" ], "repo": "GoogleCloudPlatform/google-cloud-ruby", "url": "https://github.com/GoogleCloudPlatform/google-cloud-ruby/issues/1205", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
325922852
Firestore - Passing Information out of Transaction Functions In the Firestore documentation we have code samples in the other languages for how to pass information out of Firestore Transactions back to the caller. (I/e a simple true/false boolean for whether population is updated). Is it possible to do this using the Ruby client? If so, how would this work? Thanks! Hi @ryanmats, thank you for the question and for using the Ruby Firestore client. It is possible to pass information back from a transaction, but to do this you should create a separate method context to pass values out. The Firestore transaction uses a Ruby closure which shares the Binding that the closure is created in. But the transaction closure may be executed more than once in response to errors on the server. So you should wait until the transaction is complete before using the value. Here is my take on the Python version of the documentation you referenced: require "google/cloud/firestore" firestore = Google::Cloud::Firestore.new # Get a document reference city_ref = firestore.doc "cities/SF" # define method that will create a transaction def update_in_transaction firestore, city_ref updated = nil firestore.transaction do |tx| snapshot = tx.get_all(city_ref).first new_population = snapshot.get(:population) + 1 if new_population < 1000000 tx.update city_ref, population: new_population updated = true else updated = false end end updated end # call method that will create a transaction result = update_in_transaction firestore, city_ref if result puts "Population updated" else puts "Sorry! Population is too big." end The Python code gets a DocumentSnapshot object from a DocumentReference object using a Transaction with this code: snapshot = city_ref.get(transaction=transaction). We don't have that in Ruby yet, which is why the line snapshot = tx.get_all(city_ref).first is used. I would prefer that this line be this instead: snapshot = city_ref.get transaction: tx, so I'll create an issue to implement that. @beccca @frankyn Any idea why the Ruby example is not present on this documentation? Hi @blowmage, by this documentation you mean the Firestore Transactions documentation. @ryanmats is writing samples for Ruby. Ah, gotcha. I see what's going on now. :) BTW, my previous recollection was off. The original trade-off was to allow Transaction#get to get either a DocumentSnapshot or fulfill a Query. So instead of stating snapshot = tx.get_all(city_ref).first you can just say snapshot = tx.get city_ref. @blowmage I think I'm good - thanks for all the help!
gharchive/issue
2018-05-24T00:36:06
2025-04-01T06:37:01.972459
{ "authors": [ "blowmage", "frankyn", "ryanmats" ], "repo": "GoogleCloudPlatform/google-cloud-ruby", "url": "https://github.com/GoogleCloudPlatform/google-cloud-ruby/issues/2103", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
163215137
Hide zones in the GCE section of the Cloud Explorer by default We should make the zones part of the GCE tree in the explorer collapsible with an icon button (i.e. just show a flat list of VMs w/o the zone in the tree). Further, we should hide the zones by default (most folks don't want to dig through zones first before getting to their list of VMs). To provide the same data for VMs in multiple zones, we should build the zone name into the machine page, e.g. "machine - zone". This was fixed in 85f8119
gharchive/issue
2016-06-30T17:02:06
2025-04-01T06:37:01.973876
{ "authors": [ "csells", "ivannaranjo" ], "repo": "GoogleCloudPlatform/google-cloud-visualstudio", "url": "https://github.com/GoogleCloudPlatform/google-cloud-visualstudio/issues/111", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
711057646
what am I doing wrong?

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$
poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ firebase functions:config:set \
    smarthome.id=$CLIENT_ID \
    smarthome.secret=$CLIENT_SECRET
✔ Functions config updated. Please deploy your functions for the change to take effect by running firebase deploy --only functions

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ firebase functions:config:set \
    smarthome.key="my-secret-string"
✔ Functions config updated. Please deploy your functions for the change to take effect by running firebase deploy --only functions

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ npm install
audited 425 packages in 5.749s
33 packages are looking for funding
  run npm fund for details
found 0 vulnerabilities

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ firebase deploy

=== Deploying to 'actionhome-4baaa'...

i deploying database, storage, firestore, functions, hosting, remoteconfig
Running command: npm --prefix "$RESOURCE_DIR" run lint

functions@ lint /home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions
eslint .

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/device-cloud/device-configuration.js
  33:97  warning  Expected to return a value at the end of arrow function  consistent-return

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/device-cloud/register-device.js
  33:75  warning  Expected to return a value at the end of arrow function  consistent-return

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/smart-home/device-model.js
   72:2  error  Unnecessary semicolon  no-extra-semi
  100:2  error  Unnecessary semicolon  no-extra-semi
  132:2  error  Unnecessary semicolon  no-extra-semi

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/smart-home/fulfillment.js
  68:19  error  Unexpected await inside a loop  no-await-in-loop

✖ 6 problems (4 errors, 2 warnings)
  3 errors and 0 warnings potentially fixable with the --fix option.

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! functions@ lint: eslint .
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the functions@ lint script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR!     /home/poli/.npm/_logs/2020-09-29T12_00_22_259Z-debug.log

Error: functions predeploy error: Command terminated with non-zero exit code 1

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ cd..
cd..: command not found
poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ ls
admin.js  device-cloud  index.js  node_modules  package.json  package-lock.json  smart-home
poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ firebase install
Error: install is not a Firebase command

Did you mean ext:install?
poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ cd..
cd..: command not found
poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ cd
poli@poli-Parallels-Virtual-Platform:~$ cd Desktop/
poli@poli-Parallels-Virtual-Platform:~/Desktop$ cd iot-smart-home-cloud-master/
poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master$ cd firebase/
poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase$ firebase init

######## #### ######## ######## ######## ### ###### ######## ## ## ## ## ## ## ## ## ## ## ## ###### ## ######## ###### ######## ######### ###### ###### ## ## ## ## ## ## ## ## ## ## ## ## #### ## ## ######## ######## ## ## ###### ########

You're about to initialize a Firebase project in this directory:
  /home/poli/Desktop/iot-smart-home-cloud-master/firebase

Before we get started, keep in mind:
  You are initializing in an existing Firebase project directory

? Which Firebase CLI features do you want to set up for this folder? Press Space to select features, then Enter to confirm your choices. Database: Deploy Firebase Realtime Database Rules, Firestore: Deploy rules and create indexes for Firestore, Functions: Configure and deploy Cloud Functions, Hosting: Configure and deploy Firebase Hosting sites, Storage: Deploy Cloud Storage security rules, Emulators: Set up local emulators for Firebase features, Remote Config: Get, deploy, and rollback configurations for Remote Config

=== Project Setup

First, let's associate this project directory with a Firebase project.
You can create multiple project aliases by running firebase use --add, but for now we'll just set up a default project.

i Using project actionhome-4baaa (ActionHome)

=== Database Setup

Firebase Realtime Database Rules allow you to define how your data should be structured and when your data can be read from and written to.

? What file should be used for Database Rules? database.rules.json
✔ Database Rules for actionhome-4baaa have been downloaded to database.rules.json.
Future modifications to database.rules.json will update Database Rules when you run firebase deploy.

=== Firestore Setup

Firestore Security Rules allow you to define how and when to allow requests. You can keep these rules in your project directory and publish them with firebase deploy.

? What file should be used for Firestore Rules? firestore.rules
? File firestore.rules already exists. Do you want to overwrite it with the Firestore Rules from the Firebase Console? Yes

Firestore indexes allow you to perform complex queries while maintaining performance that scales with the size of the result set. You can keep index definitions in your project directory and publish them with firebase deploy.

? What file should be used for Firestore indexes? firestore.indexes.json
? File firestore.indexes.json already exists. Do you want to overwrite it with the Firestore Indexes from the Firebase Console? Yes

=== Functions Setup

A functions directory will be created in your project with a Node.js package pre-configured. Functions can be deployed with firebase deploy.

? What language would you like to use to write Cloud Functions? JavaScript
? Do you want to use ESLint to catch probable bugs and enforce style? Yes
? File functions/package.json already exists. Overwrite? Yes
✔ Wrote functions/package.json
✔ Wrote functions/.eslintrc.json
? File functions/index.js already exists. Overwrite? Yes
✔ Wrote functions/index.js
✔ Wrote functions/.gitignore
? Do you want to install dependencies with npm now? Yes

npm WARN deprecated circular-json@0.3.3: CircularJSON is in maintenance only, flatted is its successor.

protobufjs@6.10.1 postinstall /home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/node_modules/protobufjs
node scripts/postinstall

added 426 packages from 264 contributors and audited 426 packages in 48.911s
33 packages are looking for funding
  run npm fund for details
found 12 low severity vulnerabilities
  run npm audit fix to fix them, or npm audit for details

=== Hosting Setup

Your public directory is the folder (relative to your project directory) that will contain Hosting assets to be uploaded with firebase deploy. If you have a build process for your assets, use your build's output directory.

? What do you want to use as your public directory? public
? Configure as a single-page app (rewrite all urls to /index.html)? Yes
✔ Wrote public/index.html

=== Storage Setup

Firebase Storage Security Rules allow you to define how and when to allow uploads and downloads. You can keep these rules in your project directory and publish them with firebase deploy.

? What file should be used for Storage Rules? storage.rules

=== Emulators Setup

? Which Firebase emulators do you want to set up? Press Space to select emulators, then Enter to confirm your choices. (Press to select, to toggle all, to invert selection) Functions Emulator, Firestore Emulator, Hosting Emulator
? Which port do you want to use for the functions emulator? 5001
? Which port do you want to use for the firestore emulator? 8080
? Which port do you want to use for the hosting emulator? 5000
? Would you like to enable the Emulator UI? Yes
? Which port do you want to use for the Emulator UI (leave empty to use any available port)? NaN
? Would you like to download the emulators now? Yes

=== Remoteconfig Setup

? What file should be used for your Remote Config template? remoteconfig.template.json

i Writing configuration info to firebase.json...
i Writing project information to .firebaserc...

✔ Firebase initialization complete!

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase$ firebase functions:config:set \
    cloudiot.region=$REGION \
    cloudiot.registry=$REGISTRY
✔ Functions config updated. Please deploy your functions for the change to take effect by running firebase deploy --only functions

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase$ firebase functions:config:set \
    smarthome.id=$CLIENT_ID \
    smarthome.secret=$CLIENT_SECRET
✔ Functions config updated. Please deploy your functions for the change to take effect by running firebase deploy --only functions

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase$ firebase functions:config:set \
    smarthome.key="my-secret-string"
✔ Functions config updated. Please deploy your functions for the change to take effect by running firebase deploy --only functions

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase$ firebase deploy

=== Deploying to 'actionhome-4baaa'...

i deploying database, storage, firestore, functions, hosting, remoteconfig
Running command: npm --prefix "$RESOURCE_DIR" run lint

functions@ lint /home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions
eslint .

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/device-cloud/device-configuration.js
  33:97  warning  Expected to return a value at the end of arrow function  consistent-return

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/device-cloud/register-device.js
  33:75  warning  Expected to return a value at the end of arrow function  consistent-return

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/smart-home/device-model.js
   72:2  error  Unnecessary semicolon  no-extra-semi
  100:2  error  Unnecessary semicolon  no-extra-semi
  132:2  error  Unnecessary semicolon  no-extra-semi

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/smart-home/fulfillment.js
  68:19  error  Unexpected await inside a loop  no-await-in-loop

✖ 6 problems (4 errors, 2 warnings)
  3 errors and 0 warnings potentially fixable with the --fix option.

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! functions@ lint: eslint .
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the functions@ lint script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR!     /home/poli/.npm/_logs/2020-09-29T12_10_09_161Z-debug.log

Error: functions predeploy error: Command terminated with non-zero exit code 1

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase$ ls
database.rules.json  firestore.rules  public  storage.rules  firebase.json  functions  README.md  firestore.indexes.json  package-lock.json  remoteconfig.template.json
poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase$ cd functions/
poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ npm install
audited 426 packages in 5.93s
33 packages are looking for funding
  run npm fund for details
found 12 low severity vulnerabilities
  run npm audit fix to fix them, or npm audit for details

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ firebase deploy

=== Deploying to 'actionhome-4baaa'...

i deploying database, storage, firestore, functions, hosting, remoteconfig
Running command: npm --prefix "$RESOURCE_DIR" run lint

functions@ lint /home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions
eslint .

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/device-cloud/device-configuration.js
  33:97  warning  Expected to return a value at the end of arrow function  consistent-return

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/device-cloud/register-device.js
  33:75  warning  Expected to return a value at the end of arrow function  consistent-return

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/smart-home/device-model.js
   72:2  error  Unnecessary semicolon  no-extra-semi
  100:2  error  Unnecessary semicolon  no-extra-semi
  132:2  error  Unnecessary semicolon  no-extra-semi

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/smart-home/fulfillment.js
  68:19  error  Unexpected await inside a loop  no-await-in-loop

✖ 6 problems (4 errors, 2 warnings)
  3 errors and 0 warnings potentially fixable with the --fix option.

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! functions@ lint: eslint .
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the functions@ lint script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR!     /home/poli/.npm/_logs/2020-09-29T12_11_00_100Z-debug.log

Error: functions predeploy error: Command terminated with non-zero exit code 1

Having trouble? Try firebase [command] --help

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ npm install
audited 426 packages in 6.197s
33 packages are looking for funding
  run npm fund for details
found 12 low severity vulnerabilities
  run npm audit fix to fix them, or npm audit for details

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase/functions$ cd ..
poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase$ npm install
npm WARN saveError ENOENT: no such file or directory, open '/home/poli/Desktop/iot-smart-home-cloud-master/firebase/package.json'
npm WARN enoent ENOENT: no such file or directory, open '/home/poli/Desktop/iot-smart-home-cloud-master/firebase/package.json'
npm WARN firebase No description
npm WARN firebase No repository field.
npm WARN firebase No README data
npm WARN firebase No license field.

up to date in 1.166s
found 0 vulnerabilities

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase$ firebae deploy
firebae: command not found
poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase$ firebase deploy

=== Deploying to 'actionhome-4baaa'...

i deploying database, storage, firestore, functions, hosting, remoteconfig
Running command: npm --prefix "$RESOURCE_DIR" run lint

functions@ lint /home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions
eslint .

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/device-cloud/device-configuration.js
  33:97  warning  Expected to return a value at the end of arrow function  consistent-return

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/device-cloud/register-device.js
  33:75  warning  Expected to return a value at the end of arrow function  consistent-return

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/smart-home/device-model.js
   72:2  error  Unnecessary semicolon  no-extra-semi
  100:2  error  Unnecessary semicolon  no-extra-semi
  132:2  error  Unnecessary semicolon  no-extra-semi

/home/poli/Desktop/iot-smart-home-cloud-master/firebase/functions/smart-home/fulfillment.js
  68:19  error  Unexpected await inside a loop  no-await-in-loop

✖ 6 problems (4 errors, 2 warnings)
  3 errors and 0 warnings potentially fixable with the --fix option.

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! functions@ lint: eslint .
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the functions@ lint script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR!     /home/poli/.npm/_logs/2020-09-29T12_13_08_980Z-debug.log

Error: functions predeploy error: Command terminated with non-zero exit code 1

poli@poli-Parallels-Virtual-Platform:~/Desktop/iot-smart-home-cloud-master/firebase$

OK, I deleted the 3 ";" in file smart-home\device-model.js, deleted the await on line 68 of smart-home\fulfillment.js, and deployed. Result: 2 problems (0 errors, 2 warnings). Is this the correct solution?
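For context on the two rules blocking the deploy: the output above already says the three no-extra-semi errors are fixable with ESLint's --fix option (for example, npm run lint -- --fix inside functions/). The no-await-in-loop error is not auto-fixable, and simply deleting the await changes behaviour, since failures of that promise are no longer awaited. A generic sketch of what the fixes usually look like follows; the function and variable names are made up for illustration, because the actual loop in fulfillment.js is not shown in this thread.

// Illustrative only — not the project's device-model.js or fulfillment.js.

// no-extra-semi: remove the stray semicolon after a block or method body:
//   updateState(data) { ... };   -->   updateState(data) { ... }

// no-await-in-loop: if each iteration is independent, start the async calls
// first and await them together instead of awaiting one per loop iteration.
async function syncAllDevices(deviceIds, syncDevice) { // hypothetical names
  const pending = [];
  for (const id of deviceIds) {
    pending.push(syncDevice(id)); // kick off the request, don't await yet
  }
  return Promise.all(pending);    // await them all; rejections still surface
}

module.exports = { syncAllDevices };

Whether the Promise.all form is appropriate depends on whether the loop's iterations really are independent; if they must run one at a time, the usual alternative is to keep the await and disable the rule for that line with an eslint-disable comment rather than removing the await.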
gharchive/issue
2020-09-29T12:20:18
2025-04-01T06:37:02.023479
{ "authors": [ "poli44" ], "repo": "GoogleCloudPlatform/iot-smart-home-cloud", "url": "https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/issues/21", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
60442651
Update README to include design overview
To me, DESIGN.md is the most helpful document to actually understand what Kubernetes is. I think it belongs in the main README.
LGTM, Thanks!
gharchive/pull-request
2015-03-10T02:42:51
2025-04-01T06:37:02.033411
{ "authors": [ "ghodss", "nikhiljindal" ], "repo": "GoogleCloudPlatform/kubernetes", "url": "https://github.com/GoogleCloudPlatform/kubernetes/pull/5226", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
62296824
Removed guestbook.sh e2e test
The test is rewritten in Go in #5045 and is stable. cc @jlowdermilk
Was this test being kept around to cover some kubectl testing?
@zmerlynn kubectl is now being sufficiently tested by the go tests. The shell test is being kept around until we are satisfied that the Go port is at least as good an e2e test as the shell test was. For other transitions, we've waited between 2 and 7 days to verify that we aren't losing test coverage during the migration. We've seen a few flakes in the new go test that have hopefully been fixed by #5595. I'm going to wait another day or two to merge this until we see the go test showing higher reliability than the shell test.
Should wait until #5604 is fixed.
@piosz Can you rebase? I'm going to take a look again today and see if we can finally nuke this shell test.
Done.
I've taken a detailed look at the test history for "Shell tests that guestbook.sh passes" and "kubectl guestbook should create and stop a working application" on both GCE and GKE. For "Shell tests that guestbook.sh passes", I found that it was failing consistently on GKE until yesterday, when #5749 was merged. Since then it is still a bit flaky on GKE (6 of 30 runs), both hitting timeouts and failing with Wrong entry received: {"data": ""}. It has been similarly flaky on GCE (also 6 in the last 30 runs, but at different times), failing with similar errors. On the other hand, "kubectl guestbook should create and stop a working application" has flaked a few times on both GKE and GCE over the last 30 runs, mainly due to the 60 second timeout, which was extended to 3 minutes in #5845. Overall, the replacement test seems more stable than the shell test, so it's time to remove the shell test.
There is a slight decrease in test coverage from removing the shell test: the shell test was using the public IP associated with the service to read/write from the guestbook, whereas the go test uses the kubernetes master as a proxy to reach the service (so the test will pass even if the external LB creation fails). This gap is now covered by "Services should be able to create a functioning external load balancer", which was added in #5772.
I'll merge on green. Shippable beat travis! Merging. Thanks!
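To make the coverage point concrete: reaching the guestbook through the service's external IP exercises the cloud load balancer, which a request proxied through the master skips. The sketch below is only an illustration of that kind of check, not the actual e2e code; the endpoint path and expected-response handling are assumptions.

package e2esketch

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// waitForGuestbookEntry polls the guestbook via the service's external IP
// until the expected entry appears or the timeout expires. Hypothetical
// helper for illustration only.
func waitForGuestbookEntry(externalIP, path, expected string, timeout time.Duration) error {
	url := fmt.Sprintf("http://%s%s", externalIP, path) // path of the guestbook read endpoint (assumed)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := http.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if strings.Contains(string(body), expected) {
				return nil // the entry written earlier is being served through the LB
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("guestbook at %s never returned %q", url, expected)
}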
gharchive/pull-request
2015-03-17T06:04:12
2025-04-01T06:37:02.039080
{ "authors": [ "piosz", "roberthbailey", "satnam6502" ], "repo": "GoogleCloudPlatform/kubernetes", "url": "https://github.com/GoogleCloudPlatform/kubernetes/pull/5536", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
65526283
example ansible setup repo
This is a basic ansible repo that will do a couple of things:
- set up an etcd node
- set up a master running apiserver, scheduler, controller-manager
- set up any number of nodes
Hopefully this can be expanded to do things like set up skydns, set up a private docker repo, set up an overlay network (flannel), etc. But right now all it does is set up etcd and configure a master and nodes.
@nzwulfin Today there exist 2 repos which try to configure kubernetes using ansible: https://github.com/eparis/kubernetes-ansible and https://github.com/nzwulfin/ansible-atomic. I'd like to see these merged, potentially IN the kubernetes tree. If @nzwulfin is not interested, we may want to move forward anyway. This is a stripped-down version of both my repo and ideas from his repo. It does very, very little. This is not a replacement for the salt-stack cluster setup; it only does some small portion of those things.
@eparis any recommended reviewers? :)
I'm going to say we blind LGTM this, as it is in contrib, and I'm not sure we have anyone who's expert enough in ansible to say anything deep. A README.md might be nice to help explain how to use it. @jsafrane could also be a reviewer (I believe @nzwulfin is out until next week).
I miss some readme on how to use it. It's very Fedora specific. This is not bad, it's just not mentioned anywhere. Other distros don't have /etc/kubernetes, they have /etc/default with different option names. What is ansible/library/rpm_facts.py good for? All these facts can be acquired using Ansible itself.
How do I use ansible to get installed (or running) rpms? I couldn't find a way other than my own module to collect those facts...
The same way as you do it in the python module:
---
- shell: rpm -q iptables-service
  register: has_iptables
Yes, that's what I did do. I just hated having "when has_iptables.rc == 0" throughout the code, when custom facts let me do "when has_iptables". Guess it's just style, I'm not strongly attached.
I do not have any strong preference either, I just find the extra python module to be overkill here.
What about the other facts, like is_atomic? It's not trivial to get, you need 3 tasks to do it, and still it's in yaml and not in python.
You're probably right, I should move it back into the playbook.
@jsafrane What do you think of this?
It looks good to me.
@eparis I'm back around now. I think this is a good place to start some merging, I just need to check internally on the CLA for good corporate citizenship.
I'm happy to merge this, once the CLA is set.
erm, forgot it was eparis as the original author... merging.
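Following up on the rpm_facts.py exchange above, here is a rough sketch of how the same facts could be collected with plain tasks and set_fact instead of a custom module. It is not code from either repository: the task names, the /run/ostree-booted marker used to detect an Atomic host, and the final consuming task are assumptions for illustration.

---
# Hypothetical playbook fragment; package and marker-file names are assumptions.
- name: check whether the iptables service package is installed
  shell: rpm -q iptables-service   # same package name as in the snippet above
  register: iptables_rpm
  failed_when: false
  changed_when: false

- name: record it as a simple boolean fact
  set_fact:
    has_iptables: "{{ iptables_rpm.rc == 0 }}"

- name: detect an Atomic host (assuming /run/ostree-booted as the marker)
  stat:
    path: /run/ostree-booted
  register: ostree_booted

- name: record is_atomic
  set_fact:
    is_atomic: "{{ ostree_booted.stat.exists }}"

- name: example consumer of the facts
  debug:
    msg: "iptables rpm present, is_atomic={{ is_atomic }}"
  when: has_iptables | bool

With this pattern later tasks can keep the short "when: has_iptables" style the custom facts module provided, while everything stays in the playbook.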
gharchive/pull-request
2015-03-31T19:19:01
2025-04-01T06:37:02.048124
{ "authors": [ "brendandburns", "eparis", "jsafrane", "nzwulfin", "vmarmol" ], "repo": "GoogleCloudPlatform/kubernetes", "url": "https://github.com/GoogleCloudPlatform/kubernetes/pull/6237", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
823342229
Span messages are prefixed with Span. (incorrect behaviour)
I know this is done intentionally, but it makes the Stackdriver exporter quasi unusable. It is also not semantically correct, as it changes the meaning. Example: if using Java auto-instrumentation and I want to search for a SELECT statement, I don't expect to need to search for Span.Client-SELECT ...
This is the commit that introduced the change: https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/commit/a1393affee9a13ed21974ae1d791da1b3189f4a0
If the change is to surface the kind in the UI, it would be better to change the UI and not the name of the span.
Related https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/issues/109
@alexvanboxel we've decided to remove the whole Span.<kind>. prefix; it should be an easy fix.
This should be fixed by #155
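For readers skimming the thread, a minimal sketch of the naming behaviour under discussion — this is not the exporter's actual code, just an illustration of what the before/after of the prefix change amounts to; the function and its parameters are hypothetical.

package main

import "fmt"

// displayName contrasts the two behaviours described in this issue.
func displayName(spanName, kind string, prefixWithKind bool) string {
	if prefixWithKind {
		// behaviour reported above: e.g. "Span.Client-SELECT mydb.users"
		return fmt.Sprintf("Span.%s-%s", kind, spanName)
	}
	// behaviour after the fix in #155: the instrumentation's name is kept,
	// so searching for "SELECT ..." finds the span again.
	return spanName
}

func main() {
	fmt.Println(displayName("SELECT mydb.users", "Client", true))  // old
	fmt.Println(displayName("SELECT mydb.users", "Client", false)) // new
}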
gharchive/issue
2021-03-05T19:06:23
2025-04-01T06:37:02.157765
{ "authors": [ "aabmass", "alexvanboxel", "tbarker25" ], "repo": "GoogleCloudPlatform/opentelemetry-operations-go", "url": "https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/issues/151", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }