Dataset columns: id (string, lengths 4 to 10); text (string, lengths 4 to 2.14M); source (string, 2 classes); created (timestamp[s], ranging from 2001-05-16 21:05:09 to 2025-01-01 03:38:30); added (string date, ranging from 2025-04-01 04:05:38 to 2025-04-01 07:14:06); metadata (dict)
2460110231
feat: add renovate customManager to update buildah image STONEBLD-2661 The buildah image is also specified in .spec.stepTemplate.env; this custom manager updates the buildah image there. Before you complete this pull request ... Look for any open pull requests in the repository with the title "e2e-tests update" and see if there are recent e2e-tests updates that will be applicable to your change. Will adding this change here work? Based on my current understanding of renovate, I don't think so. packageRules defines how renovate handles a package (a dependency) detected by the tekton manager. This custom manager is an addition to the tekton manager to cover the value fields (a hedged configuration sketch follows this record).
gharchive/pull-request
2024-08-12T06:03:42
2025-04-01T06:39:18.442563
{ "authors": [ "tkdchen" ], "repo": "konflux-ci/build-definitions", "url": "https://github.com/konflux-ci/build-definitions/pull/1280", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2497409652
feat: Added csdiff package csdiff package is installed in order to be used by SAST tasks to parse the results and generate fingerprinting. This will be used by future SAST tasks provided by the OpenScanHub team, for example: https://issues.redhat.com/browse/OSH-737 @konflux-team , we created this as a draft PR in order to gather feedback from you. Would this be acceptable? Is something else needed? ... Codecov Report All modified and coverable lines are covered by tests :white_check_mark: Please upload report for BASE (main@09f6efc). Learn more about missing BASE report. Additional details and impacted files @@ Coverage Diff @@ ## main #292 +/- ## ======================================== Coverage ? 100.00% ======================================== Files ? 18 Lines ? 498 Branches ? 0 ======================================== Hits ? 498 Misses ? 0 Partials ? 0 :umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here. @ralphbean Would you mind giving a review on this? @ralphbean @14rcole @Josh-Everett Could any of you review it and approve it/comment it? Thank you! /ok-to-test @ralphbean Would you mind enabling the last CI test? I am not able to trigger it/merge this @jperezdealgaba if you rebase your commit on the latest main, the Red Hat Konflux job issue should get resolved. @dirgim I rebased the PR and GitHub is still showing that I need the approval from one maintainer for the workflow: P.S.: I added the installation of the git packages to this PR as it will be also needed. I hope it is not a problem All checks passed! I have no merge rights in this repo
gharchive/pull-request
2024-08-30T14:35:36
2025-04-01T06:39:18.450072
{ "authors": [ "codecov-commenter", "dirgim", "jperezdealgaba", "ralphbean" ], "repo": "konflux-ci/konflux-test", "url": "https://github.com/konflux-ci/konflux-test/pull/292", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
440692282
Duplicate @RequestBody parameter generated with the name 'body' Fixes #746 #257 When using version 3.1.8, the annotation @RequestHeader("Accept") still generates a body parameter which then collides with a separate @RequestBody parameter.
gharchive/pull-request
2019-05-06T13:10:25
2025-04-01T06:39:18.452999
{ "authors": [ "bratwurzt", "ladrl" ], "repo": "kongchen/swagger-maven-plugin", "url": "https://github.com/kongchen/swagger-maven-plugin/pull/747", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
102556843
Variant in CASP9 can't be found by rsID search Works if you search by the variant http://exac.broadinstitute.org/variant/1-15832495-T-G Although the variant is listed correctly as rs146054764, a search by the rsID returns no results Fixed in most recent commit.
gharchive/issue
2015-08-22T18:14:11
2025-04-01T06:39:18.462351
{ "authors": [ "konradjk", "monkollek" ], "repo": "konradjk/exac_browser", "url": "https://github.com/konradjk/exac_browser/issues/206", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
183615567
Fix/remove leftover pry And point terminal success url to production LGTM
gharchive/pull-request
2016-10-18T08:01:36
2025-04-01T06:39:18.468269
{ "authors": [ "jakolehm", "kke" ], "repo": "kontena/kontena", "url": "https://github.com/kontena/kontena/pull/1154", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2589451290
Analytics service is down and the Migration Toolkit cannot be run Brief bug description At the time of writing this issue, the endpoint that is hit to track analytics of this library is down. While the endpoint is likely to come back online soon, I think the CLI should run regardless and continue even if analytics cannot be tracked. Repro steps Run any command from migration toolkit Expected behavior Commands run properly regardless of analytics endpoint health. Test environment All environments Additional context A quick workaround is to manually instantiate a manager then run the command via code. Screenshots N/A Thank you, I've updated the code to handle these exceptions gracefully :)
gharchive/issue
2024-10-15T17:41:06
2025-04-01T06:39:18.470619
{ "authors": [ "Enngage", "nkooman-bzs" ], "repo": "kontent-ai/kontent-ai-migration-toolkit", "url": "https://github.com/kontent-ai/kontent-ai-migration-toolkit/issues/14", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1685089491
DOMParser is not defined Brief bug description Tree shaking is not applied in the case of @kontent-ai\rich-text-resolver\dist\cjs\src\parser\browser\rich-text-browser-parser.js. Repro steps https://github.com/kontent-ai/gatsby-packages/pull/236/files#diff-8fff385791166d8060e27ff7becf1ea3c8193ee0b83a480c5e77b0c385e5a686R99-R113 fixed as a part of #17
gharchive/issue
2023-04-26T13:57:52
2025-04-01T06:39:18.472259
{ "authors": [ "Simply007", "pokornyd" ], "repo": "kontent-ai/rich-text-resolver-js", "url": "https://github.com/kontent-ai/rich-text-resolver-js/issues/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2413517709
[BUG] Gradle analysis taking too long to complete Is there an existing issue for this? [X] I have searched the existing issues Konveyor version 0.5-beta2 Priority Critical Current Behavior Currently, the analysis of the tackle-testapp-public in its gradle form takes too long to complete. This might be related to this bug opened in the hub: https://github.com/konveyor/tackle2-hub/issues/667 Expected Behavior Gradle analysis shouldn't take too much longer than Maven analysis. How Reproducible Always (Default) Steps To Reproduce Analyze tackle-testapp-public on its main branch (Maven) and then on its gradle branch (Gradle). Compare the time it takes for both to run. Environment No response Anything else? No response Can't reproduce anymore in the latest d/s
gharchive/issue
2024-07-17T12:38:26
2025-04-01T06:39:18.477461
{ "authors": [ "jmle" ], "repo": "konveyor/analyzer-lsp", "url": "https://github.com/konveyor/analyzer-lsp/issues/662", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1301954064
feat: support multi architecture image builds Signed-off-by: Mehant Kammakomati kmehant@gmail.com WIP: please do not merge
gharchive/pull-request
2022-07-12T11:53:26
2025-04-01T06:39:18.478551
{ "authors": [ "kmehant" ], "repo": "konveyor/move2kube-api", "url": "https://github.com/konveyor/move2kube-api/pull/111", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
715686236
Failure exporter framework not prescriptive enough As seen with #211, we need to enhance the abstract class for the failure provider to force the implementer to include app logic. Related to #225. @mpryc @mateusoliveira43 @KevinMGranger #211 describes the steps to create a ServiceNow dev account. We probably want to look at the ServiceNow failure exporter code and adjust it to match the rest of the exporters. @etsauer could you tell us what you mean by "include app logic"? (A hedged base-class sketch follows this record.)
gharchive/issue
2020-10-06T13:36:00
2025-04-01T06:39:18.480136
{ "authors": [ "KevinMGranger", "etsauer", "mpryc" ], "repo": "konveyor/pelorus", "url": "https://github.com/konveyor/pelorus/issues/213", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1631141455
Edit Toolbar Configuration button is ignored. The Edit Toolbar shows up whether or not you toggle the "Turn on editor toolbar" button. Also, when using the simple text area, you have no toolbar (maybe it's always been like that?). Thank you Mark. I will investigate. The Edit Toolbar shows up whether or not you toggle the "Turn on editor toolbar" button. Thank you, I will investigate and get back to you. Also, when using the simple text area, you have no toolbar (maybe it's always been like that?). Yes, the simple editor has no toolbar! It is a simple text area! Issue fixed in TW-Section 1.1.1
gharchive/issue
2023-03-19T22:27:38
2025-04-01T06:39:18.484847
{ "authors": [ "Marxsal", "kookma" ], "repo": "kookma/TW-Section", "url": "https://github.com/kookma/TW-Section/issues/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
434137868
Traffic splitting is incomplete after setting per-client access control in the proxy plugin My NAS runs BT downloads, so I set the NAS's access control to "do not go through SS". But as soon as a BT download starts, records of my BT downloads show up in the proxy provider's audit log, which could get my account banned by the provider. How should I configure this so that the NAS is completely isolated from the proxy line? Judging from my current network setup, I suspect the IPv6 traffic is not being filtered; I hope the plugin's access control can be updated to cover IPv6. Thanks.
gharchive/issue
2019-04-17T07:42:36
2025-04-01T06:39:18.490218
{ "authors": [ "lsl9119" ], "repo": "koolshare/ledesoft", "url": "https://github.com/koolshare/ledesoft/issues/316", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1290178304
Fixing API Spec This PR introduces final breaking changes to the specifications and brings the API into a stable state. After that, more changes might be introduced, but not in a breaking way. The following changes are planned (checked means ready for review) [x] Authencation service [x] Mood diary [x] Wiki Service [x] User Service [x] Motivator Service (Starkmacher) For those services, after merging the PR, @kopfsachen-dev/backend shall commit to not introduce any additional services in a breaking way. ⚠️ Remark: After the PR got approved, I will mark all endpoints that are not yet in a stable state in any environment as deprecated. The deprecation notice gets removed, as soon as a service is deployed in a stable state. After removing deprecation notices from all services/endpoints, the deprecated tag will be used in the intended way. @theEpsilon: The authentication documentation is updated and I consider it finished. Please review that part. A few things are missing: The request body property "csrf_token": "<token>" is required in the browser flow submissions for registration and login Each browser-related submit request (Reg, Login, Logout) needs the headers: Accept: application/json, Content-Type: application/json Login submit flow is missing the request body definition: { "identifier":"<accountKey>", "method": "password", "password":"<md5(accountKey)>", "csrf_token": "<token>" } Apart from that, auth part looks good @theEpsilon: The authentication documentation is updated and I consider it finished. Please review that part. A few things are missing: Thanks you very much @theEpsilon. All of your feedback should be implemented. Could you check again? :) Might be worth mentioning the JSON headers in every request. If they are not present, the server will send redirects which are undiserable in our use case. Specifically those headers are required: Accept: application/json in: GET /self-service/registration/browser GET /self-service/login/browser GET /self-service/logout/browser Content-Type: application/json in: POST /self-service/registration POST /self-service/login Everything else is fine! Cuz it seems like these header declarations are not visible in the spec visualization (?) I can see them in the code though. Everything else is fine! In OpenAPI v3 you set the a content-type for each operations requestBody and responses. The requester has to set the HTTP Accepts and Content-Type headers accordingly. Standard compliant code generations tools should also act that way. Therefore, the x-accepts and x-contentType properties are actually not needed, i've just added them for compatibility with tools not complying fully to the open api spec. Anyway, they are not shown in the Swagger UI specifically as a requirement. I agree that this can be misleading and added a few more words on the descriptions of the endpoints you mentioned and hope that it's clear now. Thank you for all the work you put into it :) As I included the requested changes, this PR closes #20. I consider the spec done and stable for 1.0 release. Please review. Some remarks on the (semantic) versioning of the API. A major release (First of the two numbers in the version string) indicates the introduction of breaking changes, why a minor release shall be compatible to all prior versions of the API of the same major relase. Thats why i bumped the version to 1.0. @MHajoha Many thanks for you quick review of the changes and the approval. I've implemented you're feedback :) I request a frontend review again for the browser teams. 
Particularly interesting for you could be, that the browser authentication flow is now described in detail in the spec. In case of struggling with the actual implementation, you can get some inspiration from mindtastic/stagefright (credits to @theEpsilon who implemented the API connection on this demo). lgtm
gharchive/pull-request
2022-06-30T13:56:59
2025-04-01T06:39:18.504845
{ "authors": [ "Siar-Akbayin", "jgraeger", "theEpsilon" ], "repo": "kopfsachen-dev/api", "url": "https://github.com/kopfsachen-dev/api/pull/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
86155391
Minimal example doesn't seem to work I've written this minimal example (1.0.1) and I get the following error at runtime: import Graphics.Element exposing (..) import Sha main : Element main = show (Sha.digest "hex" (Sha.createHash "hello")) hello is not supported (we accept pull requests) Sha.createHash accepts one of these hash algorithms: ["sha1", "sha", "sha256", "sha512"]. These are the same as sha.js's supported hashes. https://github.com/crypto-browserify/sha.js#supported-hashes examples: https://github.com/koyachi/elm-sha/blob/master/examples/Main.elm https://github.com/koyachi/elm-sha/blob/master/tests/Test.elm Oops, really sorry, I didn't quite understand the API correctly. It works like a charm. You might be able to enforce good usage by creating a data structure instead of a String: type HashType = SHA1 | SHA | SHA256 | SHA512 with createHash : HashType -> Hash And certainly the same for digest: type Digest = HEX | B64 | BIN Best, Y. Ah, yes. Specifying the hash algorithm and digest encoding with strings is not the Elm way. I'll fix this with elm-sha Ver.1.0.2. Thanks!
gharchive/issue
2015-06-08T12:24:28
2025-04-01T06:39:18.907774
{ "authors": [ "koyachi", "yogsototh" ], "repo": "koyachi/elm-sha", "url": "https://github.com/koyachi/elm-sha/issues/2", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1859394731
Update thread.rs Tiny change to allow compilation Didn't know people used this crate. Thanks. Won't be able to cargo publish until later however. Would love to see the blazingly-fastness from your work get adopted into more commonly used runtimes. Or to see this mature :) Tempted to share how the benches look on my machine, but I'll stick to your decision of not sharing one isolated case :+1:
gharchive/pull-request
2023-08-21T13:37:59
2025-04-01T06:39:18.919345
{ "authors": [ "kprotty", "v1gnesh" ], "repo": "kprotty/uasync", "url": "https://github.com/kprotty/uasync/pull/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2113261766
Possible GTFS Loader Bugs Two possible bugs with loading and embedding GTFS data with GTFS2VEC loader and embedder: GTFSLoader uses df.pivot_table() to calculate the hourly embedding features from the static feed, which gives NaN values for hours/stops that don't have trips. For _load_trips this is filled with 0, but for _load_directions it should be an empty set, which as far as I can tell is not possible using df.pivot_table(). I handled this in GTFS2VecEmbedder by filtering NaN values as they are reduced. I also added an initial value because in a few cases there were hours with no trips at all. GTFS2VecEmbedder expects a features GeoDataFrame with an index that matches the joint GeoDataFrame, which is checked in _validate_indexes. GTFSLoader assigns the features GeoDataFrame an index of None. I just changed it to assign the FEATURE_ID constant. from pathlib import Path from srai.embedders import GTFS2VecEmbedder from srai.joiners import IntersectionJoiner from srai.loaders import GTFSLoader, download_file from srai.neighbourhoods.h3_neighbourhood import H3Neighbourhood from srai.regionalizers import H3Regionalizer import geopandas as gpd from shapely.geometry import Polygon from srai.constants import WGS84_CRS # Load GTFS from example notebook wroclaw_gtfs = Path().resolve() / "files" / "example.zip" gtfs_url = "https://transitfeeds.com/p/mpk-wroc-aw/663/20221221/download" download_file(gtfs_url, wroclaw_gtfs.as_posix()) gtfs_loader = GTFSLoader() features = gtfs_loader.load(wroclaw_gtfs) print(features.index.name) # None # Get H3 embedding regions covering the GTFS bounding box, join with features min_x, min_y = features.geometry.bounds[['minx', 'miny']].min() max_x, max_y = features.geometry.bounds[['maxx', 'maxy']].max() geo = Polygon(( (min_x, min_y), (min_x, max_y), (max_x, max_y), (max_x, min_y), (min_x, min_y) )) area = gpd.GeoDataFrame( {'region_id': ['Wroclaw_test'], 'geometry': [geo]}, crs=WGS84_CRS ) area.set_index('region_id', inplace=True) regionalizer = H3Regionalizer(resolution=8) joiner = IntersectionJoiner() regions = regionalizer.transform(area) neighbourhood = H3Neighbourhood(regions_gdf=regions) joint = joiner.transform(regions, features) # Fit embedder embedder = GTFS2VecEmbedder(hidden_size=2, embedding_size=4) embedder.fit(regions, features, joint) embeddings_gtfs = embedder.transform(regions, features, joint) # ValueError: features_gdf must have a named index. features.index.name = 'feature_id' embedder = GTFS2VecEmbedder(hidden_size=2, embedding_size=4) embedder.fit(regions, features, joint) embeddings_gtfs = embedder.transform(regions, features, joint) # TypeError: descriptor 'union' for 'set' objects doesn't apply to a 'float' object Hi @zackAemmer Thanks a lot for finding and fixing those bugs. It's a great contribution! Everything looks good to me. I added the CHANGELOG entry and will merge and ship those changes with the next release. Awesome!
gharchive/pull-request
2024-02-01T19:07:14
2025-04-01T06:39:18.925807
{ "authors": [ "piotrgramacki", "zackAemmer" ], "repo": "kraina-ai/srai", "url": "https://github.com/kraina-ai/srai/pull/427", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
530238927
Setting the height dimension as a percentage does not work I was trying to set the dimensions of the zoid using percentages, as suggested by the docs. Setting the width to '100%' does work, but whenever I change the height dimension to a percentage, the iframe does not show up. It only works with pixels. I'm currently using Zoid in a React environment. Code for reference: dimensions: { width: '800px', height: '100%', }, How did you resolve this?
gharchive/issue
2019-11-29T09:22:14
2025-04-01T06:39:18.927324
{ "authors": [ "KristofVDB1", "jurajkrivda" ], "repo": "krakenjs/zoid", "url": "https://github.com/krakenjs/zoid/issues/281", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1930749111
FR: Open in popup? Possible feature request? New to Statamic here and I really like this add-on. What would be really neat is if it were possible to open the palette in a popup to select a color, then hide the palette while showing the active color being used. When using Bard or similar to build page layouts, the picker takes up a lot of real estate, especially if used multiple times in a layout. Just an idea! @minemindmedia I like this idea. I'm not sure if I'll have bandwidth for it soon, but I'll try to explore the option. That's awesome! @minemindmedia How's this look? https://d.pr/v/4VvrPx @minemindmedia Would it be better without the extra button? Just show an icon to choose the color? https://d.pr/v/addpRf Sorry for the multiple posts... @minemindmedia what about this? https://d.pr/v/vdnhVd Hi @vmitchell85 Sorry for the late response! Looks great man! I think any of those would be awesome.
gharchive/issue
2023-10-06T18:26:22
2025-04-01T06:39:18.931305
{ "authors": [ "minemindmedia", "vmitchell85" ], "repo": "krakero/tailwind-fieldtype", "url": "https://github.com/krakero/tailwind-fieldtype/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
512805509
PHP Fatal error: Uncaught GuzzleHttp\Exception\ConnectException: cURL error 7 Getting this exception in apache2 error.log: PHP Fatal error: Uncaught GuzzleHttp\\Exception\\ConnectException: cURL error 7: Failed to connect to oauth2.googleapis.com port 443: Connection timed out (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) in /var/www/html/beta/v1/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php:200\nStack trace:\n#0 /var/www/html/beta/v1/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php(155): GuzzleHttp\\Handler\\CurlFactory::createRejection(Object(GuzzleHttp\\Handler\\EasyHandle), Array)\n#1 /var/www/html/beta/v1/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php(105): GuzzleHttp\\Handler\\CurlFactory::finishError(Object(GuzzleHttp\\Handler\\CurlHandler), Object(GuzzleHttp\\Handler\\EasyHandle), Object(GuzzleHttp\\Handler\\CurlFactory))\n#2 /var/www/html/beta/v1/vendor/guzzlehttp/guzzle/src/Handler/CurlHandler.php(43): GuzzleHttp\\Handler\\CurlFactory::finish(Object(GuzzleHttp\\Handler\\CurlHandler), Object(GuzzleHttp\\Handler\\EasyHandle), Object(GuzzleHttp\\Handler\\CurlFactory))\n#3 /var/www/html/beta/v1/vendor/guzzlehttp/guzzle/src/Handler/Proxy.ph in /var/www/html/beta/v1/vendor/kreait/firebase-php/src/Firebase/Exception/DatabaseApiExceptionConverter.php on line 49 To Reproduce I did not find why it actually happening. :( Environment: OS: Ubuntu 18.04, PHP version: [e.g. 7.3.8] Firebase SDK Version: latest There‘s unfortunately not much I can help you with - if the request to the Google APIs fails, it could have several reasons: bad internet connection, a firewall, your IP could be blocked from accessing the services, ... Okay, but is there any way to set custom timeout duration to the firebase requests? https://firebase-php.readthedocs.io/en/stable/setup.html#http-client-options-and-middlewares I am also getting the below exception. { "File": "\/var\/www\/html\/local\/vendor\/kreait\/firebase-php\/src\/Firebase\/Exception\/DatabaseApiExceptionConverter.php", "Line": 49, "Message": "Unable to connect to the API: cURL error 7: Failed to connect to oauth2.googleapis.com port 443: Connection timed out (see http:\/\/curl.haxx.se\/libcurl\/c\/libcurl-errors.html)" } kreait/firebase: php: 7.1 Connection code like $serviceAccount = ServiceAccount::fromJsonFile($firebase_path); $firebase = (new Factory) ->withServiceAccount($serviceAccount) ->withDatabaseUri($firebase_database_path) ->create(); $database = $firebase->getDatabase(); It was working fine before, but getting error from last week. It's the same kind of error as before - on the machine the code is running on, a connection to the Google API was not possible... unfortunately that's nothing that we can fix in code. please cheak your url。I said this because I got messages like you and found a space between “ and h....
gharchive/issue
2019-10-26T08:00:56
2025-04-01T06:39:18.953384
{ "authors": [ "ahqmrf", "danyjadhav", "helojianxin", "jeromegamez" ], "repo": "kreait/firebase-php", "url": "https://github.com/kreait/firebase-php/issues/346", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2234518297
Develop Checklist: [ ] Have you added an explanation of what your changes do and why you'd like them to be included? [ ] Have you updated or added documentation for the change, as applicable? [ ] Have you tested your changes on all related environments with successful results, as applicable? Type of Changes: [ ] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to change) What is the current behavior? (link to any open issues here) What is the new behavior (if this is a feature change)? Other information: /test image
gharchive/pull-request
2024-04-10T00:21:32
2025-04-01T06:39:18.956986
{ "authors": [ "jobcespedes" ], "repo": "krestomatio/moodle-operator", "url": "https://github.com/krestomatio/moodle-operator/pull/237", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
322726396
Problems with continuous operation KRProgressHUD.dismiss {} KRProgressHUD.showImage(#imageLiteral(resourceName: "toast_error"), message: message) the HUD doesn't display. @springlo Please use this: KRProgressHUD.dismiss { KRProgressHUD.showImage(#imageLiteral(resourceName: "toast_error"), message: message) } No response.
gharchive/issue
2018-05-14T08:54:59
2025-04-01T06:39:18.959397
{ "authors": [ "krimpedance", "springlo" ], "repo": "krimpedance/KRProgressHUD", "url": "https://github.com/krimpedance/KRProgressHUD/issues/36", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
866681683
Request: Add @lsprr As A Contributor Add @lsprr As A Contributor For #32 @all-contributors add @Isprr for documentation @all-contirbutors add @lsprr for doc @all-contributors add @lsprr for doc
gharchive/issue
2021-04-24T07:22:48
2025-04-01T06:39:18.969467
{ "authors": [ "krishdevdb" ], "repo": "krishdevdb/reseter.css", "url": "https://github.com/krishdevdb/reseter.css/issues/34", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1289532276
Add gene set enrichment If we add the ability for the user to supply a network, we should figure out how to add gene set enrichment as a page of results, since model similarity won't be a thing we can do. Features and issues deferred for the rewrite -- closing here.
gharchive/issue
2022-06-30T03:31:46
2025-04-01T06:39:18.970367
{ "authors": [ "ChristopherMancuso", "billspat" ], "repo": "krishnanlab/geneplexus_app", "url": "https://github.com/krishnanlab/geneplexus_app/issues/220", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
161034096
Zoom feature fails on delayed data load When data is loaded after chart initialization, the zoom feature fails: Uncaught TypeError: Cannot read property 'call' of undefined angular-nvd3.js:587 Plunker: http://plnkr.co/edit/HmfNd4NzXMFZr36YNXbm?p=preview Thanks, I will fix it in the near future.
gharchive/issue
2016-06-18T17:13:01
2025-04-01T06:39:18.975231
{ "authors": [ "juja", "krispo" ], "repo": "krispo/angular-nvd3", "url": "https://github.com/krispo/angular-nvd3/issues/455", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1074488629
Export pull_layer & auth API Export pull_blob API 1. Container images will share layers, the API should support the scenario which don't need pull all the layers. 2. Container image size may vary from megabyte to gigabyte, export pull_layer API can allow the user to the following layer decompress/unpack/store operations in parallel. Export auth API For some container image service which support ondemand layer pull like: * stargz https://github.com/containerd/stargz-snapshotter * Nydus Image Service https://github.com/dragonflyoss/image-service export auth API is a requirement when token is expired. I seem to remember there being a reason why these methods were not public. @bacongobbler or @radu-matei was there a reason either of you remember why these were private? Yes - some of that conversation can be found here: https://github.com/krustlet/krustlet/pull/564 Basically the user should not be given any control over auth - we know what endpoints do and do not need authentication, so the call to auth should be hidden behind methods like pull. As for exposing pull_layer... I don't see how this is useful unless you're trying to write an abstraction over the existing Client. It doesn't make a ton of sense because you still have no way to push or pull manifests, and push_layer is still hidden. We need to decide whether we allow others to write their own clients on top of oci-distribution, or we are the ones publishing an OCI client, exposing only the high-level concepts like client.pull() and client.push() (which is the current design today). export pull_layer and auth API can allow the user to do the following layer decompress/unpack/store operations in parallel. Is there a compromise we can make here? I think that should be something pull can handle. It's already async anyways. If it helps contextualize the OP's desire, one of my hopeful use-cases when consuming this crate is a flavor/variant of https://oras.land/ - specifically with the goal of usage for smarter CI caching. This means that I may be running my caching utility in memory-constrained contexts while also dealing with up-to-multi-gigabyte payloads when considering the "image" as a whole. One of the possible optimizations under those constraints would be for me to be able to stream individual layers to decompress and write to disk during the logical pull operation, so that I never need to buffer an entire layer payload in-memory. I have an analogous desire during pushing as well due to the same memory constraints, where I wouldn't wish to hold those entire layers in-memory all at once. I'd have to know the checksum ahead of time for the push per the registry API, but that doesn't directly require me to hold the payload in memory. Most of these ideas appear incompatible with this crate's implementation details today, which I understand has been mostly informed by krustlet's usecase and contending with much smaller WASM artifacts. (I recognize that my goals are not inherently this project's goals and may need to find my own way as a result if the examples I've offered are not compelling-enough.) one of my hopeful use-cases when consuming this crate is a flavor/variant of https://oras.land/ - specifically with the goal of usage for smarter CI caching. We've discussed the idea with a few of the ORAS maintainers. They were interested in oci-distribution being the basis of a Rust client for ORAS. oras-rs imported krustlet (including oci-distribution) as a subtree project, but hasn't seen any activity since that point. 
I assume the goal was to copy oci-distribution as a starting point for a Rust client. If you're looking for an oras-go-alike client but for Rust, I'd ask them about their plans with that repository. As oras-rs matures, I could see much of oci-distribution being ported over to oras-rs. Implementing the entire OCI distribution spec is one of our stated goals. One of the possible optimizations under those constraints would be for me to be able to stream individual layers to decompress and write to disk during the logical pull operation, so that I never need to buffer an entire layer payload in-memory. I have an analogous desire during pushing as well due to the same memory constraints, where I wouldn't wish to hold those entire layers in-memory all at once. I'd have to know the checksum ahead of time for the push per the registry API, but that doesn't directly require me to hold the payload in memory. I don't see how exposing methods like pull_image_layer and auth help you in that regard unless you're embedding Client within another Client, calling methods like auth to fetch credentials and pass that back to the exterior client. That just seems wonky. But perhaps we can decouple these methods away from the internal logic of the Client and into its own module. Kinda like how oras-go has its own standalone Copy that isn't tied to a Client struct. That might help you re-use some of oci-distribution's client logic. We could also abstract some of the Client's methods into different Trait which would give you the high-level constraints like pull and push, then it'd be up to you to determine the underlying behaviour. That way the existing Client doesn't have to leak implementation details like pull_manifest and auth back to the caller. I'd imagine we would want to have those as separate traits so users can implement a read-only client. can allow the user to do the following layer decompress/unpack/store operations in parallel. Is there a compromise we can make here? I think that should be something pull can handle. It's already async anyways. Parallelizing the layer pull/unpack/store operations within pull should accomplish the same thing as what's requested here. Yes, current pull API is already async, but we need wait all the layers are pulled before next operations. Many containers image support encrypted layers, decryption and decompression are time consuming and these operations depends on other crates which different users may have different selections. Another reason we want export pull_layer API is many container stack support on demand pull like stargz-snapshotter, we will not pull all the layers at the beginning, and will pull the layers on demand. I think we're in agreement here. I want to re-think the design approach though. I don't see how exposing methods like pull_image_layer and auth could help you unless you're embedding Client within another Client, calling methods like auth to fetch credentials and pass that back to the exterior client. That just seems wonky from a design perspective. But perhaps we can decouple these methods away from the internal logic of the Client and into its own module. Do you have an example how you plan to use auth and pull_image_layer in your project? I think we're in agreement here. I want to re-think the design approach though. I don't see how exposing methods like pull_image_layer and auth could help you unless you're embedding Client within another Client, calling methods like auth to fetch credentials and pass that back to the exterior client. 
That just seems wonky from a design perspective. But perhaps we can decouple these methods away from the internal logic of the Client and into its own module. Would you mind weighing in on this? Do you have an example how you plan to use auth and pull_image_layer in your project? Perhaps that may help clarify your use case. For parallel image layer data processing, we may don't need use auth API, and pull_layer API self param can not be mutable like current implementation in pull_layer API, we will do the work as below: let mut client = Client::default(); // Authenticate when pull_manifest_and_config let (manifest, digest, config) = client .pull_manifest_and_config(&reference, &RegistryAuth::Anonymous) .await?; let layers = manifest.layers.into_iter().map(|layer| { let this = &client; async move { this.pull_layer(image, &layer.digest, &mut out).await?; decrypt_layer() decompress_layer() unpack_layer() } }); For on demand pull, we may need auth when token is expired: // on demand pull like when the token is expired let op = RegistryOperation::Pull; client .auth(&reference, &RegistryAuth::Anonymous, op) .await?; client.pull_layer(image, &layer.digest, &mut out).await?; We can also hiden the auth for pull_layer API like below, but the self will be mutable since token may updated, now this pull_layer API will can not used in the first senario when we want pull in parallel, any suggestions when we want support both? I found export auth and pull_layer API can do the job, but not sure whether it is the righ way: pub async fn pull_layer<T: AsyncWrite + Unpin>( &mut self, image: &Reference, auth: &RegistryAuth, digest: &str, mut out: T, ) -> anyhow::Result<()> { let op = RegistryOperation::Pull; if !self.tokens.contains_key(image, op) { self.auth(image, auth, op).await?; } self._pull_layer(image, digest, out) .await?; Ok(()) } async fn _pull_layer<T: AsyncWrite + Unpin>( &self, Okay. I've thought about this for a while... I'd be okay with exposing these APIs as it does not appear there's a good alternative other than a huge refactor of the crate to match the design I proposed earlier. If you can remove all of the additional changes made to this PR and keep it to the bare minimum (marking these functions as pub), that would be appreciated. We can discuss the design decisions behind some of the other changes in another PR if you'd like to still propose those, but they appear orthogonal to the original ask. Thanks! For on demand pull, we may need auth when token is expired Can't we just address that in the calling code by checking the token's expiration date? That would mean you can just call pull without having to embed auth/pull_layer yourself. For parallel image layer data processing, I still don't understand why this can't be handled in oci-distribution. Why does this have to be orchestrated from another library? Why can't a pull fetch multiple layers in parallel? Why does this have to be done at a higher level? For on demand pull, we may need auth when token is expired Can't we just address that in the calling code by checking the token's expiration date? That would mean you can just call pull without having to embed auth/pull_layer yourself. Yes, we can do that way, but current TokenCache in client module is not public and TokenCache itself is only visible in current crate by design: pub(crate) struct TokenCache { For parallel image layer data processing, I still don't understand why this can't be handled in oci-distribution. Why does this have to be orchestrated from another library? 
Why can't a pull fetch multiple layers in parallel? Why does this have to be done at a higher level? We could just implement some form of middleware pattern so that pull can call a function on each layer. That way you can still call decrypt/decompress/unpack on each layer, and it'd all be performed in parallel. Would that solve your issue? https://doc.rust-lang.org/book/ch19-05-advanced-functions-and-closures.html Thanks for your suggestions. Yes, we can pass functions to current pull API, but we have two concerns, first is we modify the interface of an key public API, next is after we processed the layer data, the pull API return value will also need be changed based on the user's needs. @bacongobbler Another concern is container image layers are shared, after we pull the image manifests, we also need check whether the host already have shared layers pulled by other containers, and we only need pulling the missed parts of the layers. Image service/runtime are operate at image layer level and image distribution may also need export the layer related API. Hi @bacongobbler @thomastaylor312 @flavio I rebased the PR and updated the commit message, please review. Yeah I think this is fine for now. We should just be careful as we approach 1.0 as we decide whether or not the pull_layer function should be exported or not I still disagree with this change in relation to oci-distribution's current API, but I don't really have the time right now to make contributions for further improvements. I'm fine with this going through for now. We can make changes to this API in future iterations since we haven't hit 1.0 yet, so there's plenty of time to refactor if necessary. I see one code regression that I'd like to see changed. Otherwise this looks good to go. Thanks Matt, fully agree to keep auth() clean as you requested, just updated the PR. @thomastaylor312 and @flavio do you have any ideas/concerns about this change? @bacongobbler This good to go from your end? @bacongobbler @thomastaylor312 @flavio Thanks, much appreciated!
gharchive/pull-request
2021-12-08T14:45:41
2025-04-01T06:39:19.050655
{ "authors": [ "arronwy", "bacongobbler", "shanesveller", "thomastaylor312" ], "repo": "krustlet/oci-distribution", "url": "https://github.com/krustlet/oci-distribution/pull/9", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
416473546
Cargo load of trains/trucks Fix cargo trains driving around with really low loads of cargo (1 - 30 %). Automatically select the appropriate vehicle for the load --> a van for a small load, a truck for a large load. (I've set different capacities in AVO; sadly I get trucks loaded to 50% instead of a van loaded to 100%), while vans would probably mean faster delivery. https://github.com/VictorPhilipp/Cities-Skylines-Traffic-Manager-President-Edition/issues/123 Duplicate of #170 Closing this as a duplicate of #170
gharchive/issue
2019-03-03T04:12:42
2025-04-01T06:39:19.060135
{ "authors": [ "VictorPhilipp", "aubergine10" ], "repo": "krzychu124/Cities-Skylines-Traffic-Manager-President-Edition", "url": "https://github.com/krzychu124/Cities-Skylines-Traffic-Manager-President-Edition/issues/161", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2631481464
Create YZEncoder class Summary Create a YZEncoder class, which is used in the original quclassi paper (a hedged encoding sketch follows this record). Merged to the main branch.
gharchive/issue
2024-11-03T21:21:06
2025-04-01T06:39:19.085177
{ "authors": [ "ksk0629" ], "repo": "ksk0629/quantum_machine_learning", "url": "https://github.com/ksk0629/quantum_machine_learning/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
211622943
I want to lock global_mrb Hi! We implemented it because we want to lock global_mrb when using it with multithreading. https://github.com/ksss/mruby-signal/blob/master/src/signal.c#L558 ↑ code is executed at mrb_open to prevent global_mrb from being rewritten. sample.rb def run_with_signal msec_timer, signal @timer_thread = Thread.new msec_timer, 1000, signal do |timer, interval, sig| loop_time = 0 # calculate by usec while loop_time < timer * 1000 loop_time += usleep interval end Process.kill sig, Process.pid end Signal.mrb_state_unlock end Signal.trap(:USR1) do |signo| puts "catch signal from timer thread" exit end Signal.mrb_state_lock run_with_signal 1000, :USR1 puts "waiting timer" loop { sleep 1 } @pyama86 Thank you for using this library. Agree This library expects to work same as CRuby's Signal. $ cat t.rb # mattn/mruby-thread # matsumotory/mruby-sleep # iij/mruby-process def run_with_signal(signal) Thread.new(signal) do |sig| sleep 1 Process.kill sig, Process.pid end end Signal.trap(:USR1) do |signo| puts "catch signal from timer thread" exit end run_with_signal :USR1 puts "waiting timer" loop { sleep 1 } $ mruby t.rb waiting timer SignalException: SIGUSR1 [2] 65470 abort mruby t.rb $ ruby t.rb waiting timer catch signal from timer thread So, It's not expected behavior. I agree with you on this issue. Negative This library expects to work same as CRuby's Signal. So, I'm negative to add methods what CRuby's Signal doesn't have. Proposal I admit that mruby-signal and multi mrb_state are incompatible. So, I propose this patch. diff --git a/src/signal.c b/src/signal.c index 1a3da25..38b3127 100644 --- a/src/signal.c +++ b/src/signal.c @@ -165,7 +165,7 @@ static const struct signals { {NULL, 0} }; -static mrb_state *global_mrb; +static mrb_state *initial_mrb = NULL; static const char* signo2signm(mrb_int no) @@ -228,7 +228,7 @@ static sighandler_t mrb_signal(mrb_state *mrb, int signum, sighandler_t handler) static RETSIGTYPE sighandler(int sig) { - mrb_state *mrb = global_mrb; + mrb_state *mrb = initial_mrb; struct RClass *mrb_mSignal = mrb_module_get(mrb, "Signal"); mrb_value trap_list = mrb_iv_get(mrb, mrb_obj_value(mrb_mSignal), mrb_intern_lit(mrb, "trap_list")); mrb_value command = mrb_ary_ref(mrb, trap_list, sig); @@ -373,7 +373,6 @@ mrb_signal(mrb_state *mrb, int signum, sighandler_t handler) { struct sigaction sigact, old; - global_mrb = mrb; sigemptyset(&sigact.sa_mask); sigact.sa_handler = handler; sigact.sa_flags = 0; @@ -546,6 +545,8 @@ install_sighandler(mrb_state *mrb, int signum, sighandler_t handler) void mrb_mruby_signal_gem_init(mrb_state* mrb) { + if (initial_mrb == NULL) initial_mrb = mrb; + struct RClass *signal = mrb_define_module(mrb, "Signal"); mrb_obj_iv_set(mrb, (struct RObject *)signal, mrb_intern_lit(mrb, "trap_list"), mrb_ary_new_capa(mrb, NSIG)); Does this solve your problem? Yes! I will solve my problem.Rather, I want them to do so. Would you try this? https://github.com/ksss/mruby-signal/commit/be8980cdad5e58c7526698c9c06aa90c0e3bf18d (CI maybe relates to mruby-onig-regexp and maybe solved recently) This worked perfectly! Thanks. ✨
gharchive/pull-request
2017-03-03T08:19:24
2025-04-01T06:39:19.091269
{ "authors": [ "ksss", "pyama86" ], "repo": "ksss/mruby-signal", "url": "https://github.com/ksss/mruby-signal/pull/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
288082031
Pointer deref crash in KSReachableOperationKSCrash Hi, We have recently seen a crash coming from inside the KSReachableOperationKSCrash initWithHost method. Not really sure how to reliably reproduce but seems to occasionally happen on app launch. We're using KSCrash 1.15.16 with Carthage. Here's an example backtrace: 0 libobjc.A.dylib 0x1b290dd6 objc_msgSend 1 KSCrash 0xbd68cf __60-[KSReachableOperationKSCrash initWithHost:allowWWAN:block:]_block_invoke 2 KSCrash 0xbd6503 -[KSReachabilityKSCrash onReachabilityFlagsChanged:] 3 KSCrash 0xbd6279 __49-[KSReachabilityKSCrash initWithReachabilityRef:]_block_invoke_2 4 libdispatch.dylib 0x1b6c9797 _dispatch_call_block_and_release 5 libdispatch.dylib 0x1b6c9783 _dispatch_client_callout 6 libdispatch.dylib 0x1b6cdd05 _dispatch_main_queue_callback_4CF 7 CoreFoundation 0x1bfb7d69 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ 8 CoreFoundation 0x1bfb5e19 __CFRunLoopRun 9 CoreFoundation 0x1bf091af CFRunLoopRunSpecific 10 CoreFoundation 0x1bf08fd1 CFRunLoopRunInMode 11 GraphicsServices 0x1d6b3b41 GSEventRunModal 12 UIKit 0x21291a53 UIApplicationMain Here's a snippet of the report that was generated by KSCrash: { "diagnosis": "Attempted to dereference garbage pointer 0x4d.", "error": { "address": 77, "mach": { "code": 1, "exception": 1, "exception_name": "EXC_BAD_ACCESS", "subcode": 0 }, "signal": { "code": 0, "code_name": "BUS_NOOP", "name": "SIGBUS", "signal": 10 }, "type": "mach" } } We also managed to catch it on the debugger: At first guess, it looks like the blockSelf reference has been freed. If this is the most likely case, would adding a guard be a sufficient solution? Happy to submit a PR to that effect if so. Thanks, Chris I think should use '__weak' to replace '__unsafe_unretained' in line 320。
gharchive/issue
2018-01-12T10:58:15
2025-04-01T06:39:19.095687
{ "authors": [ "cgwyllie", "iOkay" ], "repo": "kstenerud/KSCrash", "url": "https://github.com/kstenerud/KSCrash/issues/270", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
220414305
dev goes out of bounds on devices In Simulation.cpp this line appears twice: auto dev = jobs[job].tasks[jobs[job].cur].device; followed by some form of devices[dev]. When stepping though however, dev is sometimes outside the bounds of devices. Job: 3 dev: 2 size: 2 Job: 3 dev: 2 size: 2 Job: 4 dev: 2 size: 2 Job: 6 dev: 2 size: 2 Job: 3 dev: 2 size: 2 Job: 3 dev: 2 size: 2 Job: 4 dev: 2 size: 2 Job: 1 dev: 2 size: 2 Job: 9 dev: 2 size: 2 Job: 9 dev: 2 size: 2 This out of bounds dev and devices appears to be in my stack trace for a segmantation fault error in the fiforeadyqueue::next method. So I can confirm its happening. fix: Task.cpp : 10 std::uniform_int_distribution<> dist(low, high - 1); Task.cpp : 37 auto max = type == cs3100::Task::Type::CPU ? maxPage : maxDevice - 1; In my program your second fix does the trick but implementing the first fix in any combination causes seg faults in all cases. Can anybody else confirm? I changed it, only the one from line 37 is needed
gharchive/issue
2017-04-08T17:24:46
2025-04-01T06:39:19.098639
{ "authors": [ "AmmonHepworth", "PhilipNelson5", "johnsonjo4531" ], "repo": "ksundberg/CourseMaterials", "url": "https://github.com/ksundberg/CourseMaterials/issues/12", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1665740731
Cannot update the device post initial install Hello For some reason the update has stopped working. I changed the code, but even though it says it compiled and the OTA was pushed through ESPHome, the device is still not updated - old sensor names. I tried deleting the device and recreating it in ESPHome, but it's still the same thing - no update. What version of libretuya and what chip are you using? What board name did you choose? It turned out to be a device-specific issue - if you hold the button after powering it off and then plug it back in with the button pressed, it will start to use the firmware. So it's loading it, but not using it for some reason. Closing the issue This is very weird, can you tell me more about this device? Is it Realtek or Beken? Beken Then it's not really possible to update depending on the button presses. The bootloader should update the firmware as soon as it receives it.
gharchive/issue
2023-04-13T05:34:34
2025-04-01T06:39:19.154645
{ "authors": [ "fokcuk", "kuba2k2" ], "repo": "kuba2k2/libretuya", "url": "https://github.com/kuba2k2/libretuya/issues/108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
672018760
update etcd package This change is  /deploy
gharchive/pull-request
2020-08-03T11:48:36
2025-04-01T06:39:19.156007
{ "authors": [ "yehiyam" ], "repo": "kube-HPC/hkube", "url": "https://github.com/kube-HPC/hkube/pull/896", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1673743598
[Bug]: autoscaler workers not working? Description Hi there, It seems I cannot create autoscaler nodepools without creating at least one non-autoscaler node? This was working in an older version. Kube.tf file module "kube-hetzner" { source = "kube-hetzner/kube-hetzner/hcloud" version = "2.1.0" initial_k3s_channel = "v1.25" hcloud_token = var.hcloud_token cluster_name = "${var.stage}-${var.region}" base_domain = var.domain network_region = var.region load_balancer_type = var.lb_type lb_hostname = local.lb_hostname load_balancer_location = local.locations[var.region][0] control_plane_nodepools = [ for cp in range(3) : { name = "control-plane-${var.stage}-${var.region}-${cp}", server_type = var.cp_type, location = local.locations[var.region][cp % length(local.locations[var.region])], labels = [], taints = [], count = 1 } ] autoscaler_nodepools = [ for cp in range(3) : { name = "autoscaler-${cp}" server_type = var.agent_type location = local.locations[var.region][cp % length(local.locations[var.region])], min_nodes = 0 max_nodes = 5 } ] agent_nodepools = [ { name = "agent-small", server_type = var.agent_type location = "fsn1", labels = [], taints = [], count = 0 // working when setting to > 0 } ] ingress_controller = "nginx" use_control_plane_lb = true cni_plugin = "cilium" enable_cert_manager = false create_kubeconfig = false create_kustomization = false ssh_public_key = file("${var.ssh_key}.pub") ssh_private_key = file(var.ssh_key) providers = { hcloud = hcloud } } variable "lb_type" { default = "lb11" } variable "cp_type" { default = "cpx11" } variable "agent_type" { default = "cpx31" } locals { lb_hostname = "lb-${var.stage}-${var.region}.${var.domain}" # https://docs.hetzner.com/cloud/general/locations/ locations = { "eu-central" = ["fsn1", "nbg1", "hel1"], "us-east" = ["ash"], "us-west" = ["hil"], } } Screenshots No response Platform Mac @stubbi You have to check the autoscaler pod logs, see what is happening. @stubbi Was your previews try with cilium too? Try without cilium, and as started about, the key is probably in the logs of the autoscaler pod.
gharchive/issue
2023-04-18T19:53:58
2025-04-01T06:39:19.159580
{ "authors": [ "mysticaltech", "stubbi" ], "repo": "kube-hetzner/terraform-hcloud-kube-hetzner", "url": "https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/issues/736", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
806556169
Bug 1927850: Bump the github.com/gogo/protobuf dependency to v1.3.2 Ensure the github.com/gogo/protobuf dependency uses the v1.3.2 version. This was done by running the following commands locally: # Get the newer version of the protobuf implicit dependency. $ go get github.com/gogo/protobuf@v1.3.2 $ go mod vendor && go mod tidy && go mod verify # Verify the version is correctly pinned. $ go list -mod=readonly -m all | grep gogo/protobuf github.com/gogo/protobuf v1.3.2 => github.com/gogo/protobuf v1.3.2 /bugzilla refresh /cherry-pick release-4.7
gharchive/pull-request
2021-02-11T16:38:23
2025-04-01T06:39:19.162151
{ "authors": [ "timflannagan" ], "repo": "kube-reporting/helm", "url": "https://github.com/kube-reporting/helm/pull/56", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1811025850
fix: simplify the code Fix #128 https://go.dev/play/p/0KAEYU0Eivx

package main

import "fmt"

func main() {
    var labels map[string]string
    if labels["xxx"] != "xxx" {
        if labels == nil {
            labels = make(map[string]string)
        }
        labels["xxx"] = "yyy"
    }
    fmt.Println(labels)
}

no panic here.... maybe refactor:?
gharchive/pull-request
2023-07-19T02:39:59
2025-04-01T06:39:19.164065
{ "authors": [ "Abirdcfly", "bjwswang", "dayuy" ], "repo": "kubebb/core", "url": "https://github.com/kubebb/core/pull/131", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1502601724
Add Percona XtraDB Docs Signed-off-by: Md. Alif Biswas alif@appscode.com Before Merge Need to update the parent repository Folder name is updated from percona-xtradb to perconaxtradb (Got rid of the hyphen) Codespan schema check is failing from ProxySQL end
gharchive/pull-request
2022-12-19T09:26:33
2025-04-01T06:39:19.174521
{ "authors": [ "spectro30" ], "repo": "kubedb/docs", "url": "https://github.com/kubedb/docs/pull/518", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1587536264
Inference result is not output in lifelong learning thermal comfort case What happened: I correctly installed Sedna and KubeEdge and tried running the Lifelong Learning Thermal Comfort Prediction case. I found that the pods for training and evaluation were working fine, but the inference pod remained running and never completed, and did not output results to the designated "/output/deployment" directory. I looked at the log of the pod responsible for inference and no valuable information was found. What you expected to happen: Run this case correctly, both training and inference. How to reproduce it (as minimally and precisely as possible): I have written a blog including everything about installation and the running of this lifelong learning case, which can be used for reproduction. Log of the pod responsible for inference log of inference pod.txt Environment: Sedna Version v0.5.1 KubeEdge Version v1.10.0 My issue is similar to https://github.com/kubeedge/sedna/issues/380#issue-1452864350 @jaypume might help to take a look at it This issue has been solved, and the reference result can be found here. Thanks all.
gharchive/issue
2023-02-16T11:49:54
2025-04-01T06:39:19.206377
{ "authors": [ "MooreZheng", "qxygxt" ], "repo": "kubeedge/sedna", "url": "https://github.com/kubeedge/sedna/issues/396", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
857705195
Implementation for injecting storage-initializer In #18, I proposed adding an init container to download datasets/models before running the workers. Then we need to inject the storage-initializer into the workers. The simple way: the obvious way to implement it is to modify the worker-creation logic in each collaboration feature in the GM. I can abstract the common logic into one func/file. Its pros: simple and quick. Its cons: needs to modify the GM. The more decoupled way: another good way I found is to leverage the k8s admission hooks used by kfserving. Its pros: decoupled from each collaboration feature. Its cons: adds an extra webhook server; more code work. What I decided to do now: for simplicity, first implement the simple way, then evolve to the admission-hook way when needed, since the injecting code can be reused (a hedged init-container sketch follows this record). kfserving storage-initializer implementation: see this PR https://github.com/kubeflow/kfserving/pull/156. /close closed by #52
gharchive/issue
2021-04-14T09:25:12
2025-04-01T06:39:19.210319
{ "authors": [ "llhuii" ], "repo": "kubeedge/sedna", "url": "https://github.com/kubeedge/sedna/issues/51", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2342526933
Add gitops & metaphor repository names as flags for create

What is your feature idea?

Problem: gitops & metaphor repository names are hard-coded. Screenshot: (omitted)

Solution:
- Add --gitopsRepoName flag
- Add --metaphorRepoName flag
- Keep "gitops" & "metaphor" as sane defaults

Impacted files (there might be more, just what I found off an initial search):
- https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/google/create.go#L66
- https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/digitalocean/create.go#L67
- https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/civo/create.go#L61
- https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/k3s/create.go#L67
- https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/akamai/create.go#L61
- https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/vultr/create.go#L68
- https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/aws/create.go#L90

Why is it needed?
- maintaining multiple gitops repos for different clouds/regions
- maintaining associated metaphor repos for different clouds/regions
- mitigating the risk of deletion when working with deletes (e.g. spinning up a gitops directory on my local GitHub to test something and then stressing that cleanup commands could nuke something on the organization's GitHub if I'm not careful)

Is this missing feature preventing you from using kubefirst? [ ] Yes

Code of Conduct: [X] I agree to follow this project's Code of Conduct

Started fork: https://github.com/alechp/kubefirst/tree/feat/repository-name-flags-for-digitalocean

Still failing here. Happy to keep exploring; pointers to save time would be nice.
- Inside gitShim/init.go I see Repositories[] being referenced & looped through to check whether the repositories exist.
- Updated the newRepositories here.
- Noticed that the success message has a hard-coded gitops/metaphor, which makes sense, but I don't think that would impact create.
- I am not setting the flag with Viper in flags.go, but from what I can tell that's not necessary...? Perhaps I'm wrong.
- I didn't define it in apiTypes.ClusterDefinition, which might be the issue (not sure how this impacts gitShim, but maybe the check that throws is being done elsewhere).

Thanks for this feature suggestion @alechp. Could you split those into two issues please, as they will need different work in different places? Mainly, the gitops one will require a lot more work since it's hard-coded in multiple places.

Hey @fharper, went ahead and split it into 3 issues (three requirements to enable deploying more than one gitops cluster per GitHub organization):
- https://github.com/kubefirst/kubefirst/issues/2210
- https://github.com/kubefirst/kubefirst/issues/2211
- https://github.com/kubefirst/kubefirst/issues/2212

Thanks a lot, and sorry for the additional work 😅 Will close this one now.
gharchive/issue
2024-06-09T22:46:41
2025-04-01T06:39:19.223662
{ "authors": [ "alechp", "fharper" ], "repo": "kubefirst/kubefirst", "url": "https://github.com/kubefirst/kubefirst/issues/2193", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2595279283
Add VaniHaripriya as a Kubeflow Member

Resolves #719

Contributions:
- https://github.com/kubeflow/pipelines/pull/11300
- https://github.com/kubeflow/pipelines/pull/11295
- https://github.com/kubeflow/pipelines/pull/11262
- https://github.com/kubeflow/pipelines/pull/11066

pytest output:

```
$ pytest test_org_yaml.py
=========================== test session starts ============================
platform linux -- Python 3.11.9, pytest-7.4.3, pluggy-1.3.0
rootdir: /home/vmudadla/OpenshiftAI/internal-acls/github-orgs
collected 1 item

test_org_yaml.py .                                                    [100%]

============================ 1 passed in 0.15s =============================
```

Make sure to sign off your commit @VaniHaripriya :) cc @terrytangyuan

/ok-to-test
gharchive/pull-request
2024-10-17T17:02:59
2025-04-01T06:39:19.254899
{ "authors": [ "DharmitD", "VaniHaripriya", "hbelmiro" ], "repo": "kubeflow/internal-acls", "url": "https://github.com/kubeflow/internal-acls/pull/720", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2612598068
KEP-2170: Adding validation webhook for v2 trainjob

Adds validation webhook for v2 trainjob.

What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in Fixes #<issue number>, #<issue number>, ... format, will close the issue(s) when PR gets merged): Fixes #
Checklist: [ ] Docs included if any changes are user facing

cc @tenzen-y @andreyvelich

Pull Request Test Coverage Report for Build 11522449031
- 6 of 6 (100.0%) changed or added relevant lines in 1 file are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage remained the same at 100.0%

Totals (change from base Build 11507477280: 0.0%):
- Covered Lines: 78
- Relevant Lines: 78

💛 - Coveralls
gharchive/pull-request
2024-10-24T21:34:20
2025-04-01T06:39:19.301641
{ "authors": [ "akshaychitneni", "coveralls" ], "repo": "kubeflow/training-operator", "url": "https://github.com/kubeflow/training-operator/pull/2307", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1079875872
Install Kubeflow on OpenShift

The link (https://raw.githubusercontent.com/opendatahub-io/manifests/v1.3-branch/distributions/kfdef/kfctl_openshift_v1.3.0.yaml) on the page https://www.kubeflow.org/docs/distributions/openshift/install-kubeflow/ is dead: 404. Seems like the closing bracket got included in the link somehow.

@nakfour have you seen this issue on the OpenShift docs?

/platform Openshift /priority p2 /kind bug

@kilmarnock thanks for filing this. We'll get it fixed.

@nakfour I'll fix the broken link, but I wanted to check in to see whether we should go ahead and update this page for Kubeflow 1.4.
gharchive/issue
2021-12-14T15:10:13
2025-04-01T06:39:19.304553
{ "authors": [ "jbottum", "kilmarnock", "shannonbradshaw" ], "repo": "kubeflow/website", "url": "https://github.com/kubeflow/website/issues/3097", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
446167231
mysql Backup & Restore procedure

Hey, we are trying to restore the Kubeflow mysql DB. Is a mysql dump enough for a backup? We are dealing with the following error after restore:

Error: failed to generate Pipeline graph. An error occurred: Cannot read property 'spec' of undefined

/assign @IronPan

I found that if I add a pipeline, back up the mysql DB, delete the pipeline and then restore, that pipeline will return the error above. All the other pipelines, which were not deleted, just backed up and restored, are ok.

Hey guys, any suggestions? I need to know what that delete button does when a pipeline is deleted, because it surely deletes more than something in mysql. That way I can prepare consistent backups. TY //George

/cc @paveldournov Do you have any suggestions for this problem? Many thanks, Sarah

Hey Sarah, thank you, that would be really helpful to find out the missing dependency when we delete a pipeline. Just to double check, did you follow this instruction or some other way? https://www.kubeflow.org/docs/pipelines/upgrade/

aha, so I can get a backup of all pipelines with "Reinstalling Kubeflow Pipelines"? I just did a mysql dump, deleted a pipeline, and restored the dump, and I saw the deleted pipeline is not consistent after restore.
gharchive/issue
2019-05-20T15:03:30
2025-04-01T06:39:19.309824
{ "authors": [ "IronPan", "sarahmaddox", "xaoo" ], "repo": "kubeflow/website", "url": "https://github.com/kubeflow/website/issues/726", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
982211190
update Argo Executors link

fix issue https://github.com/kubeflow/website/issues/2901

/assign @RFMVasconcelos

cc @RFMVasconcelos

cc @RFMVasconcelos

cc @Bobgy

Thank you @Arhell! /lgtm /approve
gharchive/pull-request
2021-08-29T21:43:57
2025-04-01T06:39:19.312096
{ "authors": [ "Arhell", "Bobgy" ], "repo": "kubeflow/website", "url": "https://github.com/kubeflow/website/pull/2903", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
989369628
Update docs for MXNet Jobs Fix issues in existing doc. Address part of https://github.com/kubeflow/website/issues/2915 /cc @johnugeorge @andreyvelich /lgtm
gharchive/pull-request
2021-09-06T18:18:15
2025-04-01T06:39:19.313360
{ "authors": [ "Jeffwan", "johnugeorge" ], "repo": "kubeflow/website", "url": "https://github.com/kubeflow/website/pull/2918", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
452743462
Fixed assumption that KFAPP variable includes path

Fixes https://github.com/kubeflow/website/issues/422

Preview: https://deploy-preview-774--competent-brattain-de2d6d.netlify.com/docs/gke/cloud-filestore/

/assign @joeliedtke /lgtm /approve /approve cancel /lgtm cancel /approve
gharchive/pull-request
2019-06-05T22:21:35
2025-04-01T06:39:19.316508
{ "authors": [ "joeliedtke", "sarahmaddox" ], "repo": "kubeflow/website", "url": "https://github.com/kubeflow/website/pull/774", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
317370608
Update Kong http-trigger configuration Issue Ref: None Description: Update Kong http-trigger documentation to use new CRDs for Consumers and Credentials. This removes the need to make HTTP requests to the admin API. TODOs: [X] Ready to review awesome, thanks @aledbf
gharchive/pull-request
2018-04-24T19:51:24
2025-04-01T06:39:19.318669
{ "authors": [ "aledbf", "andresmgot" ], "repo": "kubeless/kubeless", "url": "https://github.com/kubeless/kubeless/pull/715", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1293748916
ignore pod not scheduled when reconcile subnet

What type of PR is this? Bug fixes

1. When reconciling a subnet, all pods in this subnet will be checked and added to the port-group. Pods which have not been scheduled to a node should be ignored.
2. This is caused by https://github.com/kubeovn/kube-ovn/pull/1655

Which issue(s) this PR fixes: Fixes #(issue-number)

Already backported to the release-1.10 and release-1.9 branches.
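A minimal sketch of the filtering described above, in plain Go: pods with an empty spec.nodeName are skipped before being added to the port group. This is a simplified illustration, not the actual kube-ovn controller code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// schedulablePods returns only the pods that have already been scheduled
// to a node; pending pods are ignored during subnet reconciliation.
func schedulablePods(pods []corev1.Pod) []corev1.Pod {
	var out []corev1.Pod
	for _, p := range pods {
		if p.Spec.NodeName == "" {
			continue // not scheduled yet, skip
		}
		out = append(out, p)
	}
	return out
}

func main() {
	pods := []corev1.Pod{
		{Spec: corev1.PodSpec{NodeName: "node-1"}},
		{Spec: corev1.PodSpec{}}, // pending, no node assigned
	}
	fmt.Println(len(schedulablePods(pods))) // 1
}
```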
gharchive/pull-request
2022-07-05T03:48:18
2025-04-01T06:39:19.320793
{ "authors": [ "hongzhen-ma" ], "repo": "kubeovn/kube-ovn", "url": "https://github.com/kubeovn/kube-ovn/pull/1666", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
812865497
fix checkSBBindings error when hostname is not nodeName

In some cases the k8s spec.nodeName is not the same as the hostname, so sbctl cannot get the chassis UUID and the check process will fail here. A workaround is to try to get the chassis name first from the encap table and then do the remaining lookup. It is not a good solution: when we use a second NIC and change the encap IP to the second NIC, it does not work either.

Signed-off-by: Wan Junjie wanjunjie@bytedance.com

The start-ovs.sh will use nodeName to set the hostname in ovn-sb, so they should be the same in theory: https://github.com/kubeovn/kube-ovn/blob/8c5ae3131711a350852aadc5aaefd6522123bccf/dist/images/start-ovs.sh#L100

@oilbeater you are right, will do that. Close.
gharchive/pull-request
2021-02-21T14:16:49
2025-04-01T06:39:19.323539
{ "authors": [ "junka", "oilbeater" ], "repo": "kubeovn/kube-ovn", "url": "https://github.com/kubeovn/kube-ovn/pull/697", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
975774354
Connection refused for k8s api

Currently getting this error. Wondering if there's another setup to be aware of:

```
"kuberhealthy/pod-restarts": {
  "OK": false,
  "Errors": [
    "Get https://172.20.0.1:443/api/v1/events?fieldSelector=type%3DWarning: dial tcp 172.20.0.1:443: connect: connection refused"
  ],
  "RunDuration": "",
  "Namespace": "kuberhealthy",
  "Node": "",
  "LastRun": "2021-08-20T16:31:28Z",
  "AuthoritativePod": "kuberhealthy-55d8dc7cff-zp8mj",
  "uuid": "d15d798a-385b-4a9c-bcf7-4941dad5e11d"
},
```

Hello @pragmaticivan, could you please provide what cluster OS, k8s version, and pod-restarts version you are using?

pod-restarts version: kuberhealthy/pod-restarts-check:v2.5.0
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.17-eks-087e67", GitCommit:"087e67e479962798594218dc6d99923f410c145e", GitTreeState:"clean", BuildDate:"2021-07-31T01:39:55Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

This seems like a generic 'kubernetes client could not talk to the Kubernetes API' error. I am not sure Kuberhealthy code could cause this one... Maybe there is a NetworkPolicy in place somewhere?
gharchive/issue
2021-08-20T16:57:33
2025-04-01T06:39:19.326893
{ "authors": [ "integrii", "jonnydawg", "pragmaticivan" ], "repo": "kuberhealthy/kuberhealthy", "url": "https://github.com/kuberhealthy/kuberhealthy/issues/1007", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
836021297
Install cri-tools on Amazon Linux 2

What this PR does / why we need it: Amazon Linux 2 doesn't install crictl by default. We use crictl to restart the API server if it's affected by #1222. Additionally, kubeadm verifies that crictl is present if containerd is used.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #1281

Does this PR introduce a user-facing change?: Install cri-tools (crictl) on Amazon Linux 2. This fixes the issue with provisioning Kubernetes and Amazon EKS-D clusters on Amazon Linux 2.

/assign @kron4eg /hold to manually test the changes /retest /retest /hold cancel /cherrypick release/v1.2
gharchive/pull-request
2021-03-19T13:55:01
2025-04-01T06:39:19.348767
{ "authors": [ "xmudrii" ], "repo": "kubermatic/kubeone", "url": "https://github.com/kubermatic/kubeone/pull/1282", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
653972955
Reconcile DynamicWorkers Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): xref #531 Does this PR introduce a user-facing change?: NONE /assign @kron4eg /close We'll take another approach to this.
gharchive/pull-request
2020-07-09T11:04:59
2025-04-01T06:39:19.350768
{ "authors": [ "xmudrii" ], "repo": "kubermatic/kubeone", "url": "https://github.com/kubermatic/kubeone/pull/964", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
201959302
What's the future of this client and how does it compare to pykube?

I'm currently using this client for my Kubernetes Operational View dashboard, but I will probably switch to pykube as it looks much cleaner (e.g. config loading does not modify a global object), directly uses requests (which I'm using too) and supports insecure-skip-tls-verify (see #99). Did you consider merging this client with pykube, or what are compelling arguments to use client-python instead of pykube?

Most of this client is auto-generated. That distinguishes it from pykube. Generating it makes it easier for us to keep it in sync with API changes in the main repo. Having said that, I have no problem supporting features of pykube here if I get time or somebody contributes.

About your specific concerns: our config loader has a version to load configs into a local config object instead of the global one (and there is an example for that in the examples folder). I plan to support insecure-skip-tls-verify when I get time. I considered using requests instead of what we have right now (urllib3) but didn't see compelling reasons yet to spend time on it.

@mbohlool thanks for the quick answer :smile:

Sure.
gharchive/issue
2017-01-19T19:56:13
2025-04-01T06:39:19.393382
{ "authors": [ "hjacobs", "mbohlool" ], "repo": "kubernetes-incubator/client-python", "url": "https://github.com/kubernetes-incubator/client-python/issues/102", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
185528210
Should provide other ways to specify the scaling target besides name.

As the scaling target's name could change, the autoscaler should provide other ways for the user to specify what the target is. One candidate would be a label selector. Taking kube-dns as an example, we could use the label k8s-app: kube-dns to select the target ReplicationController/Deployment, and we wouldn't need to restart the autoscaler to change the input argument every time the target name changes.

I think this is a much-needed feature. We have a cluster with multiple node pools and we would like to scale independently for each of these node pools. Having a label selector to filter the number of nodes allows for scaling in such scenarios.

@vijaygos For your use case, is that https://github.com/kubernetes-incubator/cluster-proportional-autoscaler/pull/55 (filter nodes based on labels)?

Yes. This would work. I just realized that the change went in recently. Thanks for adding that. Much appreciated.

@vijaygos In case you don't want to build it from head, I will publish a new image that includes that change very soon.

Would it be possible for you to comment on your timeline for a formal release?

@vijaygos I just did --- images below should be available now:
- k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.5.0
- k8s.gcr.io/cluster-proportional-autoscaler-arm:1.5.0
- k8s.gcr.io/cluster-proportional-autoscaler-arm64:1.5.0
- k8s.gcr.io/cluster-proportional-autoscaler-ppc64le:1.5.0

Awesome! Thanks @MrHohn for the quick turnaround.
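For illustration, here is a rough client-go sketch of looking up the scale target by label selector instead of a hard-coded name. The function name and the "exactly one match" policy are assumptions for this sketch; the real cluster-proportional-autoscaler flags and implementation may differ:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// findTargetByLabel resolves the scale target Deployment via a label selector.
func findTargetByLabel(cs kubernetes.Interface, namespace, selector string) (string, error) {
	deps, err := cs.AppsV1().Deployments(namespace).List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return "", err
	}
	if len(deps.Items) != 1 {
		return "", fmt.Errorf("expected exactly one target, got %d", len(deps.Items))
	}
	return deps.Items[0].Name, nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	name, err := findTargetByLabel(cs, "kube-system", "k8s-app=kube-dns")
	fmt.Println(name, err)
}
```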
gharchive/issue
2016-10-26T22:33:22
2025-04-01T06:39:19.399243
{ "authors": [ "MrHohn", "vijaygos" ], "repo": "kubernetes-incubator/cluster-proportional-autoscaler", "url": "https://github.com/kubernetes-incubator/cluster-proportional-autoscaler/issues/9", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
196425220
grpc crash when trying to start a pod

This happens on master if you just try to start a pod with one of the testdata configurations:

```
% sudo ./ocid --debug
E1220 01:21:23.830356     990 ocicni.go:136] error updating cni config: No networks found in /etc/cni/net.d
DEBU[2016-12-20 01:21:23.831441299+11:00] sandboxes: map[]
DEBU[2016-12-20 01:21:23.831471271+11:00] containers: &{map[] {{0 0} 0 0 0 0}}
DEBU[2016-12-20 01:21:27.103103358+11:00] RunPodSandboxRequest config:<metadata:<name:"podsandbox1" uid:"redhat-test-ocid" namespace:"redhat.test.ocid" attempt:1 > hostname:"ocic_host" log_directory:"." port_mappings:<protocol:UDP container_port:80 host_port:4888 host_ip:"192.168.0.33" > port_mappings:<protocol:2 container_port:81 host_port:4889 host_ip:"192.168.0.33" > labels:<key:"group" value:"test" > annotations:<key:"owner" value:"hmeng" > annotations:<key:"security.alpha.kubernetes.io/seccomp/pod" value:"unconfined" > annotations:<key:"security.alpha.kubernetes.io/sysctls" value:"kernel.shm_rmid_forced=1,net.ipv4.ip_local_port_range=1024 65000" > annotations:<key:"security.alpha.kubernetes.io/unsafe-sysctls" value:"kernel.msgmax=8192" > linux:<cgroup_parent:"/ocid-podsandbox1" security_context:<namespace_options:<host_network:false host_pid:false host_ipc:false > > > >
DEBU[2016-12-20 01:21:27.105374620+11:00] copying infra rootfs binary: /usr/libexec/ocid/pause -> /var/lib/ocid/graph/vfs/pause/rootfs/pause
2016/12/20 01:21:27 grpc: Server failed to encode response proto: Marshal called with nil
```

With the key command being:

```
% sudo ./ocic pod run --config test/testdata/sandbox_config.json
2016/12/20 01:21:27 transport: http2Client.notifyError got notified that the client transport was broken EOF.
2016/12/20 01:21:27 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/ocid.sock: connect: connection refused"; Reconnecting to {"/var/run/ocid.sock" <nil>}
FATA[0000] Creating the pod sandbox failed: rpc error: code = 13 desc = transport is closing
```

.. looking .. (thanks)

I'm bisecting right now. Gimme a sec. :P

Can't reproduce on master though :confused:

"No networks found in /etc/cni/net.d" maybe caused by missing cni networks?

@runcom Okay, it just started working again. I had to fix up ocid.conf to correctly refer to the right binaries (this was a new box). Looks like it's a configuration issue, but I'm confused why I got that error which then crashed the daemon. Seems like a really nasty failure mode.

welcome to grpc nil marshaling

@runcom It's also caused when conmon exits with a non-zero exit code. I hit it with #162 (which I've since fixed) when it would crash due to the log path being wrong.

I just met the same issue on master:

```
# git log -1
commit 6133465e420d387c977271111a3e1bccc316ac08
Merge: ac7943c 8e1af36
Author: Mrunal Patel <mrunal@me.com>
Date:   Wed Dec 21 11:20:08 2016 -0800

    Merge pull request #292 from sameo/topic/network-bats

    Additional networking tests
```

The issue can be reproduced by running two ocid processes on the latest branch. Steps:
1. Start ocid via systemd: # systemctl start ocid
2. Run ocid in a terminal: # ocid --debug
3. Running ocic will hit the issue.

@cyphar @gouyang is this still an issue?

I can still hit the issue by the steps I described above.

@gouyang Still an issue?

The issue is gone on master.
gharchive/issue
2016-12-19T14:22:58
2025-04-01T06:39:19.406368
{ "authors": [ "cyphar", "feiskyer", "gouyang", "rhatdan", "runcom" ], "repo": "kubernetes-incubator/cri-o", "url": "https://github.com/kubernetes-incubator/cri-o/issues/287", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
336020130
[1.11] Update ocicni to latest Signed-off-by: Mrunal Patel mrunalp@gmail.com /test all LGTM assuming happy tests LGTM
gharchive/pull-request
2018-06-26T22:56:41
2025-04-01T06:39:19.408388
{ "authors": [ "TomSweeneyRedHat", "mrunalp", "rhatdan" ], "repo": "kubernetes-incubator/cri-o", "url": "https://github.com/kubernetes-incubator/cri-o/pull/1650", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
346074661
flex: Add PV name to delete

The PV name is required for deletion just as it is for provisioning. I added the PV name to the Delete function in the same way as the provision.go source does.

This is the tested output:

```
2018-07-31 07:29:33 flex[37]: delete() called: {"kubernetes.io/pvOrVolumeName":"pvc-6e9b3727-9493-11e8-afe6-525400d87180"}
2018-07-31 07:29:33 flex[37]: log() called: {"status": "Success"}
```

/lgtm
gharchive/pull-request
2018-07-31T07:52:05
2025-04-01T06:39:19.410013
{ "authors": [ "moonek", "wongma7" ], "repo": "kubernetes-incubator/external-storage", "url": "https://github.com/kubernetes-incubator/external-storage/pull/896", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
199795220
[draft] Release v2.1.0 proposal

https://github.com/kubernetes-incubator/kargo/releases/tag/v2.1.0

Here's a list of changes to add:
- We need to move rkt to the experimental feature list. It's not the default deployment type, and it only works right now with Flannel/Canal.
- We upgraded etcd to v3.0.12

Other noteworthy changes:
- Added the nginx proxy to provide k8s apiserver HA
- Removed the etcd-proxy
- Improved docker container download and sync
- Improved scale deployment time
- Enabled fact caching by default
- Added optional SSH bastion configuration

@mattymo note that in Kargo etcd_version: v3.0.6

Oops, we should update it soon, but it's not a blocker for release

@mattymo thanks, updates done
gharchive/issue
2017-01-10T11:19:11
2025-04-01T06:39:19.413080
{ "authors": [ "bogdando", "mattymo" ], "repo": "kubernetes-incubator/kargo", "url": "https://github.com/kubernetes-incubator/kargo/issues/879", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
219911911
update docs/conversion.md

docs/conversion.md is slightly out of date; for example, tmpfs is listed as unsupported.

we can close this now

done
gharchive/issue
2017-04-06T14:10:47
2025-04-01T06:39:19.414320
{ "authors": [ "kadel", "surajnarwade" ], "repo": "kubernetes-incubator/kompose", "url": "https://github.com/kubernetes-incubator/kompose/issues/548", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
241033786
Change menu to left side This commit changes the menu to the left side rather than on top, syncing with cdrage/minimal branch / style. This syncs to the changes I made upstream at https://github.com/cdrage/minimal It will now look like this: this looks cool, LGTM :+1:
gharchive/pull-request
2017-07-06T17:43:22
2025-04-01T06:39:19.416148
{ "authors": [ "cdrage", "surajnarwade" ], "repo": "kubernetes-incubator/kompose", "url": "https://github.com/kubernetes-incubator/kompose/pull/684", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
216300791
Retry on 504 errors when fetching Container Linux AMIs

Closes #440

Codecov Report: Merging #442 into master will increase coverage by 0.08%. The diff coverage is 100%.

```
@@            Coverage Diff             @@
##           master     #442      +/-   ##
==========================================
+ Coverage   40.79%   40.88%   +0.08%
==========================================
  Files          37       37
  Lines        2662     2666       +4
==========================================
+ Hits         1086     1090       +4
  Misses       1418     1418
  Partials      158      158
```

Impacted Files (Coverage Δ): coreos/amiregistry/reliable_http.go 82.35% <100%> (+5.42%) :arrow_up:

Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cc7e1da...c9da13c. Read the comment docs.
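The behaviour this PR adds boils down to retrying an HTTP GET when the AMI endpoint answers 504 Gateway Timeout. Here is a minimal Go sketch of that pattern; the retry count, backoff, and URL are illustrative choices, not the values kube-aws actually uses:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetry retries a GET when the server answers 504 Gateway Timeout.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	var resp *http.Response
	var err error
	for i := 0; i < attempts; i++ {
		resp, err = http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusGatewayTimeout {
			return resp, nil // success or a non-retryable status
		}
		resp.Body.Close()
		time.Sleep(time.Duration(i+1) * time.Second) // simple linear backoff
	}
	return nil, fmt.Errorf("still receiving 504 after %d attempts", attempts)
}

func main() {
	resp, err := getWithRetry("https://example.com/aws-stable.json", 3) // URL is illustrative
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```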
gharchive/pull-request
2017-03-23T05:08:54
2025-04-01T06:39:19.422636
{ "authors": [ "codecov-io", "mumoshu" ], "repo": "kubernetes-incubator/kube-aws", "url": "https://github.com/kubernetes-incubator/kube-aws/pull/442", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
353783374
Add azure-container-registry-config for Azure

- Separated out the KUBELET_CLOUDPROVIDER env var assignment when cloud_provider equals azure
- Appended the azure-container-registry-config parameter

Please sign the CLA

Signed the CLA

/check-cla

Could it be that the CLA is signed with a different e-mail than the commit is done with?

/check-cla ci check this

can you please rebase

/lgtm /approve
gharchive/pull-request
2018-08-24T13:26:57
2025-04-01T06:39:19.425337
{ "authors": [ "Atoms", "gitphill", "mattymo" ], "repo": "kubernetes-incubator/kubespray", "url": "https://github.com/kubernetes-incubator/kubespray/pull/3178", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
321305875
Add zsh completion to svcat

closes: #1912

This is my first PR for svcat. Issue 1912 requires two steps to close:
- [x] 1. bump spf13/cobra to 0.0.1 (or above)
- [x] 2. add zsh completion - zsh completion for svcat is added

Hi @carolynvs, I wasn't able to get GenZshCompletion() working a few iterations back, so I ended up implementing the kubectl / helm way (which does indeed work).

As requested I re-implemented using cobra's GenZshCompletion. It doesn't seem to be working for me still, but you can pull and try it. We can always revert back to the previous change if you confirm using GenZshCompletion() is no good.

@carolynvs yes! svcat get b actually has two options: bindings and brokers. So I can confirm:
- svcat get b --> shows on the next line: bindings brokers
- svcat get bi --> svcat get bindings
- svcat get br --> svcat get brokers

LGTM, but I'll let folks from another org give the final LGTM label
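For reference, here is a stripped-down sketch of wiring a completion subcommand to cobra's built-in zsh generator, assuming spf13/cobra >= 0.0.1 as discussed above. The command layout mimics svcat but this is not the actual svcat source:

```go
package main

import (
	"os"

	"github.com/spf13/cobra"
)

// newRootCmd builds a root command with a "completion" subcommand that
// writes the generated zsh completion script to stdout.
func newRootCmd() *cobra.Command {
	root := &cobra.Command{Use: "svcat"}
	root.AddCommand(&cobra.Command{
		Use:   "completion",
		Short: "Generate a zsh completion script on stdout",
		RunE: func(cmd *cobra.Command, args []string) error {
			return root.GenZshCompletion(os.Stdout)
		},
	})
	return root
}

func main() {
	if err := newRootCmd().Execute(); err != nil {
		os.Exit(1)
	}
}
```

Users would then typically source the output from their .zshrc, e.g. svcat completion > ~/.svcat-completion.zsh, which is how kubectl and helm handle it as well.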
gharchive/pull-request
2018-05-08T18:54:39
2025-04-01T06:39:19.429356
{ "authors": [ "arschles", "kikisdeliveryservice" ], "repo": "kubernetes-incubator/service-catalog", "url": "https://github.com/kubernetes-incubator/service-catalog/pull/2023", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1485433538
CloudStackMachine could not be deleted when using invalid serviceOffering

/kind bug

What steps did you take and what happened: E2E testing detected a bug: when an invalid serviceOffering is configured to create a CloudStackMachine, this CloudStackMachine could not be deleted due to a missing instanceID.

What did you expect to happen: The CloudStackMachine should be deleted successfully without stopping cluster deletion.

Anything else you would like to add: This is for admin doc purposes; a PR had been created to address this: https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/pull/201

Environment:
- Cluster-api-provider-cloudstack version:
- Kubernetes version: (use kubectl version):
- OS (e.g. from /etc/os-release):

This issue seems to be addressed by #201 already. Does it need to stay open?
gharchive/issue
2022-12-08T21:30:45
2025-04-01T06:39:19.518216
{ "authors": [ "hrak", "wanyufe" ], "repo": "kubernetes-sigs/cluster-api-provider-cloudstack", "url": "https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/issues/202", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1899814530
symlink Permission denied occurs when pulling with the crictl command

What happened: When using docker pull, the image is pulled normally, but when using crictl to pull the image, the following symlink permission denied error occurs.

```
$ sudo crictl pull docker.io/calico/node:v3.25.1
DEBU[0000] get image connection
DEBU[0000] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:docker.io/calico/node:v3.25.1,Annotations:map[string]string{},},Auth:nil,SandboxConfig:nil,}
E0917 23:55:07.345848   21476 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/calico/node:v3.25.1\": failed to extract layer sha256:b1d7f02a32791d579abb161bccbf82ba1deaa7fb57805c93e84ddd30f0cb9560: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3830924437: symlink /usr/lib/systemd/system/reboot.target /var/lib/containerd/tmpmounts/containerd-mount3830924437/etc/systemd/system/ctrl-alt-del.target: permission denied: unknown" image="docker.io/calico/node:v3.25.1"
FATA[0001] pulling image: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/node:v3.25.1": failed to extract layer sha256:b1d7f02a32791d579abb161bccbf82ba1deaa7fb57805c93e84ddd30f0cb9560: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3830924437: symlink /usr/lib/systemd/system/reboot.target /var/lib/containerd/tmpmounts/containerd-mount3830924437/etc/systemd/system/ctrl-alt-del.target: permission denied: unknown
```

What you expected to happen: The docker image should pull normally.

How to reproduce it (as minimally and precisely as possible): $ sudo crictl pull docker.io/calico/node:v3.25.1

Anything else we need to know?:

Environment:
- Container runtime or hardware configuration:
- OS (e.g: cat /etc/os-release):

```
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
```

- Kernel (e.g. uname -a): Linux 3.10.0-1160.59.1.el7.x86_64 #1 SMP Wed Feb 23 16:47:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
- Others: Docker version 24.0.6, build ed223bc; containerd containerd.io 1.6.22 8165feabfdfe38c65b599c4993d227328c231fca; crictl version v1.26.0; kubernetes-cni 1.2.0-0; kubeadm 1.28.2-0; kubectl 1.28.2-0; kubelet 1.28.2-0

Hey @byeong0, thank you for the report! This looks like an issue with containerd rather than cri-tools. I checked on Containerd 1.6.18 and sudo crictl pull docker.io/calico/node:v3.25.1 worked. Can you check with ctr with verbose logging? Is there an issue with any other images?

/close

to continue on this, please open the bug in the Containerd repository or ask at the Containerd Slack for support.
gharchive/issue
2023-09-17T15:07:50
2025-04-01T06:39:19.578391
{ "authors": [ "SergeyKanzhelev", "byeong0", "saschagrunert" ], "repo": "kubernetes-sigs/cri-tools", "url": "https://github.com/kubernetes-sigs/cri-tools/issues/1266", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
441517770
Switch from glog to klog #695 is WIP, but this switch to use klog is needed in order to enable webhook dependencies in a follow-up PR. Posting on behalf of @pmorie who did the work. Fixes #694 @font I guess neither you nor @pmorie noticed that @poothia had already started work on this, and that I made the following suggestion: https://github.com/kubernetes-sigs/federation-v2/pull/695#issuecomment-477244315 @font I guess neither you nor @pmorie noticed that @poothia had already started work on this in #695? @poothia fyi, you can close yours. @marun I made a comment about it that it looks like it is WIP and hasn't been updated recently. It's not my intent to take the work from @poothia (apologies!), it was more to see it through since a follow-up PR for the webhook framework which depends on it is forthcoming. @marun @xunpan I've pushed a commit that reorders things based on our converged convention. We should really document that in the development guide. Thanks @font and @pmorie! /lgtm
gharchive/pull-request
2019-05-08T02:06:37
2025-04-01T06:39:19.584403
{ "authors": [ "font", "marun" ], "repo": "kubernetes-sigs/federation-v2", "url": "https://github.com/kubernetes-sigs/federation-v2/pull/857", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2432269961
GRPCRoute timeout - GEP-3139

What type of PR is this? /kind gep

What this PR does / why we need it: Staying consistent with the HTTPRoute timeout feature, opening a GEP to allow for GRPCRoute timeouts

Which issue(s) this PR fixes: Fixes # https://github.com/kubernetes-sigs/gateway-api/issues/3139

Does this PR introduce a user-facing change?:

cc @robscott

@arkodg thanks for authoring this GEP, @xtineskim and @gnossen for reviewing this in depth!

thinking out loud for gRPC timeouts, thoughts on the below semantics for GRPCRoute?
1. If no timeout section is defined, rely on the grpc-timeout header for deciding a per-request timeout: https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests
2. timeouts.maxStreamDuration which overrides the grpc-timeout header timeout and instead enforces a HTTP/2 stream duration timeout

Thanks @arkodg 😄 ! for your point here:
> timeouts.maxStreamDuration which overrides grpc-timeout header timeout and instead enforces a HTTP/2 stream duration timeout

I wonder if this should be the opposite - if a request were to propagate to another service, could it just continually be growing in duration 🤔

> I wonder if this should be the opposite - if a request were to propagate to another service, could it just continually be growing in duration 🤔

i meant the timeouts.maxStreamDuration would override the timeout value defined in the header, but not overwrite the grpc-timeout header itself

/remove-lifecycle stale
gharchive/pull-request
2024-07-26T13:42:01
2025-04-01T06:39:19.592036
{ "authors": [ "arkodg", "mikemorris", "xtineskim" ], "repo": "kubernetes-sigs/gateway-api", "url": "https://github.com/kubernetes-sigs/gateway-api/pull/3219", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
981653289
Another round of v1alpha2 cleanup What type of PR is this? /kind cleanup What this PR does / why we need it: This takes care of another big chunk of feedback from #780 as well as some of the items in #790. Does this PR introduce a user-facing change?: * "Controller" has been renamed to "ControllerName" * "Admitted" condition has been renamed to "Accepted" and now defaults to an "Unknown" state instead of "False" @youngnick @hbagdi @howardjohn Thanks for the great feedback on this! I think I've responded to everything, PTAL. I think you got it @robscott, nice work. /lgtm /hold for another lgtm though. Just a couple of formatting nits. /lgtm /lgtm /hold cancel
gharchive/pull-request
2021-08-27T23:09:47
2025-04-01T06:39:19.595431
{ "authors": [ "jpeach", "robscott", "youngnick" ], "repo": "kubernetes-sigs/gateway-api", "url": "https://github.com/kubernetes-sigs/gateway-api/pull/839", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
853704248
Correct kpng api subcommand subcommands are conflicting, and I spent like 3 hours to figure out why :P do not merge yet, forgot another part here :/ /close will fix other things here before :D
gharchive/pull-request
2021-04-08T17:44:48
2025-04-01T06:39:19.605948
{ "authors": [ "rikatz" ], "repo": "kubernetes-sigs/kpng", "url": "https://github.com/kubernetes-sigs/kpng/pull/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1357729985
Remove static header files

Remove static header files and instead rely on:
- UAPI from https://github.com/libbpf/libbpf/tree/master/include/uapi/linux
- bpf helpers from https://github.com/libbpf/libbpf/tree/master/src

from libbpf version v0.8.0.

Add a new make target which handles downloading these headers if needed.

/hold I want to get some more opinions before merging this

/un-hold /remove hold /unhold /lgtm
gharchive/pull-request
2022-08-31T18:55:24
2025-04-01T06:39:19.608914
{ "authors": [ "astoycos", "dougsland" ], "repo": "kubernetes-sigs/kpng", "url": "https://github.com/kubernetes-sigs/kpng/pull/338", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1711647919
update(docs): update readme to explain what it is briefly What type of PR is this? /kind documentation What this PR does / why we need it: update docs to explain what it is briefly Which issue(s) this PR fixes: Fixes #3 Special notes for your reviewer: Does this PR introduce a user-facing change? No. Sorry, opened it because I wanted to check the bot's behavior. Actually, it's WIP yet. /hold /kind documentation /unhold /label tide/merge-method-squash PTAL @kerthcet @codefromthecrypt @kerthcet could you check this one again please? /lgtm /label tide/merge-method-squash
gharchive/pull-request
2023-05-16T09:41:19
2025-04-01T06:39:19.612674
{ "authors": [ "kerthcet", "sanposhiho" ], "repo": "kubernetes-sigs/kube-scheduler-wasm-extension", "url": "https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension/pull/4", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1759857407
Updates wazero to 1.2.1

What type of PR is this? /kind cleanup

What this PR does / why we need it: We're constantly tracking performance concerns and updating to the latest wazero release makes notable improvements without any change to the guest.

Which issue(s) this PR fixes:
Special notes for your reviewer: Performance related changes in the latest patch were thanks to @ncruces
Does this PR introduce a user-facing change? NONE

What are the benchmark results of this change?

```
goos: darwin
goarch: arm64
pkg: sigs.k8s.io/kube-scheduler-wasm-extension/internal/e2e
                                               │ v1.2.0.txt  │           v1.2.1.txt              │
                                               │   sec/op    │   sec/op     vs base              │
PluginFilter/noop-wat/params:_small-12           267.4n ± 2%   257.6n ± 4%  -3.65% (p=0.024 n=6)
PluginFilter/noop-wat/params:_real-12            270.4n ± 1%   270.1n ± 1%       ~ (p=0.623 n=6)
PluginFilter/noop/params:_small-12               333.2n ± 1%   329.2n ± 0%  -1.19% (p=0.002 n=6)
PluginFilter/noop/params:_real-12                337.0n ± 0%   336.2n ± 1%       ~ (p=0.701 n=6)
PluginFilter/test/params:_small-12               6.770µ ± 0%   6.279µ ± 0%  -7.25% (p=0.002 n=6)
PluginFilter/test/params:_real-12                122.6µ ± 5%   114.7µ ± 1%  -6.51% (p=0.002 n=6)
PluginScore/noop-wat/params:_small-12            256.2n ± 2%   257.8n ± 0%       ~ (p=0.327 n=6)
PluginScore/noop-wat/params:_real-12             260.2n ± 1%   260.6n ± 2%       ~ (p=0.331 n=6)
PluginScore/noop/params:_small-12                374.0n ± 1%   375.4n ± 1%       ~ (p=0.626 n=6)
PluginScore/noop/params:_real-12                 343.2n ± 1%   348.3n ± 1%  +1.49% (p=0.004 n=6)
PluginScore/test/params:_small-12                3.786µ ± 1%   3.604µ ± 0%  -4.82% (p=0.002 n=6)
PluginScore/test/params:_real-12                 43.48µ ± 16%  46.18µ ± 1%       ~ (p=0.394 n=6)
PluginFilterAndScore/noop-wat/params:_small-12   378.8n ± 3%   376.1n ± 1%       ~ (p=0.061 n=6)
PluginFilterAndScore/noop-wat/params:_real-12    382.1n ± 1%   380.9n ± 0%       ~ (p=0.074 n=6)
PluginFilterAndScore/noop/params:_small-12       576.7n ± 1%   578.0n ± 1%       ~ (p=0.260 n=6)
PluginFilterAndScore/noop/params:_real-12        543.1n ± 1%   543.4n ± 0%       ~ (p=0.509 n=6)
PluginFilterAndScore/test/params:_small-12       10.62µ ± 0%   10.06µ ± 1%  -5.23% (p=0.002 n=6)
PluginFilterAndScore/test/params:_real-12        176.5µ ± 0%   168.0µ ± 1%  -4.80% (p=0.002 n=6)
geomean                                          1.450µ        1.429µ       -1.48%
```

/lgtm
gharchive/pull-request
2023-06-16T03:23:57
2025-04-01T06:39:19.615670
{ "authors": [ "codefromthecrypt", "sanposhiho" ], "repo": "kubernetes-sigs/kube-scheduler-wasm-extension", "url": "https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension/pull/41", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1827279258
Fix Flaky test: Kueue when Creating a Job With Queueing [It] Should unsuspend a job and set nodeSelectors

What type of PR is this? /kind bug

What this PR does / why we need it: Create a LongTimeout const for the e2e tests. Pods in the test sometimes take a long time because of image pulling.

Which issue(s) this PR fixes: Fixes #1021

Special notes for your reviewer:
Does this PR introduce a user-facing change? NONE

/ok-to-test /approve

+1 on @mimowo's suggestion

There is another flaky test: https://github.com/kubernetes-sigs/kueue/issues/1027

/retest

At this point I think it is better to implement pre-pull. I will open an issue.
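A rough sketch of what such a timeout constant usually looks like in a Ginkgo/Gomega e2e suite. The exact values and names are illustrative assumptions, not necessarily what Kueue uses, and the helper is meant to be called from inside a spec where a Gomega fail handler is registered:

```go
package e2e

import (
	"time"

	"github.com/onsi/gomega"
)

const (
	Timeout     = 30 * time.Second
	LongTimeout = 3 * time.Minute // image pulling can be slow on fresh nodes
	Interval    = 250 * time.Millisecond
)

// waitForPodRunning polls the provided predicate until it returns true or
// LongTimeout elapses; used for steps that may block on image pulls.
func waitForPodRunning(isRunning func() bool) {
	gomega.Eventually(isRunning, LongTimeout, Interval).Should(gomega.BeTrue())
}
```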
gharchive/pull-request
2023-07-29T02:41:34
2025-04-01T06:39:19.663175
{ "authors": [ "BinL233", "alculquicondor", "mimowo", "tenzen-y" ], "repo": "kubernetes-sigs/kueue", "url": "https://github.com/kubernetes-sigs/kueue/pull/1025", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2444195646
Add dependabot npm configuration for the site directory.

What type of PR is this? /kind cleanup

What this PR does / why we need it: Add a dependabot npm configuration for the site directory.

Which issue(s) this PR fixes: Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change? NONE

/cc @tenzen-y @mimowo

/lgtm

Thanks! /approve
gharchive/pull-request
2024-08-02T06:43:59
2025-04-01T06:39:19.666237
{ "authors": [ "mbobrovskyi", "mimowo" ], "repo": "kubernetes-sigs/kueue", "url": "https://github.com/kubernetes-sigs/kueue/pull/2759", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2634602822
Update latest version to v0.8.3 What type of PR is this? /kind documentation What this PR does / why we need it: Which issue(s) this PR fixes: Part-of https://github.com/kubernetes-sigs/kueue/issues/3441 Special notes for your reviewer: Does this PR introduce a user-facing change? NONE /hold for official release /assign @mimowo /cherry-pick website /hold cancel /lgtm /approve
gharchive/pull-request
2024-11-05T07:02:12
2025-04-01T06:39:19.669889
{ "authors": [ "mimowo", "tenzen-y" ], "repo": "kubernetes-sigs/kueue", "url": "https://github.com/kubernetes-sigs/kueue/pull/3444", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1591641981
Bump k8s.io deps to v0.26.1

What type of PR is this? /kind cleanup

What this PR does / why we need it:
Which issue(s) this PR fixes: Fixes https://github.com/kubernetes-sigs/kueue/pull/586
Special notes for your reviewer:

can you bump all the k8s libraries together?

/retest

Failed test: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_kueue/588/pull-kueue-test-unit-main/1628060447326343168

The test error was introduced in https://github.com/kubernetes-sigs/controller-runtime/pull/2025: the fakeClient now checks the indexes, but we didn't register any in the unit tests. We can register these indexes when building the client.

Yes, the test error is fixed.

All addressed except https://github.com/kubernetes-sigs/kueue/pull/588#discussion_r1114570049.

/lgtm

@kerthcet we can also get rid of checks like this: https://github.com/kubernetes-sigs/kueue/blob/c45d3dd98e3cae593e844092e2f334879a40ce7c/pkg/queue/manager.go#L172
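To illustrate the fix described above, here is a minimal sketch of registering an index when building controller-runtime's fake client, assuming a controller-runtime version whose fake client builder exposes WithIndex (available in recent releases). The index name and extractor are illustrative, not Kueue's actual ones:

```go
package controllers_test

import (
	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

// newFakeClientWithIndexes builds a fake client that registers the same kind
// of field index the real manager would, so list calls with field selectors
// don't fail in unit tests.
func newFakeClientWithIndexes(objs ...client.Object) client.Client {
	return fake.NewClientBuilder().
		WithObjects(objs...).
		WithIndex(&corev1.Pod{}, "spec.nodeName", func(o client.Object) []string {
			// extract the indexed value from each object
			return []string{o.(*corev1.Pod).Spec.NodeName}
		}).
		Build()
}
```

The resulting client is then passed to the reconciler under test in place of the real one.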
gharchive/pull-request
2023-02-20T11:14:05
2025-04-01T06:39:19.675780
{ "authors": [ "alculquicondor", "kerthcet" ], "repo": "kubernetes-sigs/kueue", "url": "https://github.com/kubernetes-sigs/kueue/pull/588", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2168074270
Update readme expression What type of PR is this? /kind documentation What this PR does / why we need it Which issue(s) this PR fixes Fixes # Special notes for your reviewer Does this PR introduce a user-facing change? /ok-to-test
gharchive/pull-request
2024-03-05T01:06:42
2025-04-01T06:39:19.692527
{ "authors": [ "kerthcet", "liurupeng" ], "repo": "kubernetes-sigs/lws", "url": "https://github.com/kubernetes-sigs/lws/pull/37", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2048134716
Release v0.8.2 Ref https://github.com/kubernetes-sigs/security-profiles-operator/pull/2030 Release notes # Release notes Welcome to our glorious v0.8.2 release of the **security-profiles-operator**! The general usage and setup can be found [in our documentation][0]. :partying_face: :dancers: To install the operator, run: ``` $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/security-profiles-operator/v0.8.2/deploy/operator.yaml ``` You can also verify the container image signature by using [cosign][1]: ``` $ cosign verify \ --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \ --certificate-oidc-issuer https://accounts.google.com \ registry.k8s.io/security-profiles-operator/security-profiles-operator:v0.8.2 ``` Beside the operator image, we now also ship `spoc`, the official Security Profiles Operator Command Line Interface! Binaries for `amd64` and `arm64` are attached to this release. To verify the signature of `spoc`. download all release artifacts and run for `amd64` (works in the same way for `arm64`: ``` $ cosign verify-blob \ --certificate-identity sgrunert@redhat.com \ --certificate-oidc-issuer https://github.com/login/oauth \ --certificate spoc.amd64.cert \ --signature spoc.amd64.sig \ spoc.amd64 ``` To verify the Bill of Materials (BOM) using the [`bom`](https://github.com/kubernetes-sigs/bom) tool, download the artifacts into a `build` directory and run: ``` > bom validate -e spoc.spdx -d build/ +-------------------+-------+-----------------------------+----------------+ | FILENAME | VALID | MESSAGE | INVALID HASHES | +-------------------+-------+-----------------------------+----------------+ | spoc.amd64 | OK | File validated successfully | - | | spoc.amd64.cert | OK | File validated successfully | - | | spoc.amd64.sha512 | OK | File validated successfully | - | | spoc.amd64.sig | OK | File validated successfully | - | | spoc.arm64 | OK | File validated successfully | - | | spoc.arm64.cert | OK | File validated successfully | - | | spoc.arm64.sha512 | OK | File validated successfully | - | | spoc.arm64.sig | OK | File validated successfully | - | +-------------------+-------+-----------------------------+----------------+ ``` The `.spdx` file is signed as well and we also provide `.sha512` sum files for the binaries. Feel free to provide us any kind of feedback in the official [Kubernetes Slack #security-profiles-operator channel][2]. [0]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/v0.8.2/installation-usage.md [1]: https://github.com/sigstore/cosign [2]: https://app.slack.com/client/T09NY5SBT/C013FQNB0A2 ## Changes by Kind ### Failing Test - Fixed upgrade issue introduced in v0.8.1. 
(#2023, @yuumasato) ## Dependencies ### Added - github.com/DATA-DOG/go-sqlmock: [v1.5.0](https://github.com/DATA-DOG/go-sqlmock/tree/v1.5.0) - github.com/Khan/genqlient: [v0.6.0](https://github.com/Khan/genqlient/tree/v0.6.0) - github.com/alexflint/go-arg: [v1.4.2](https://github.com/alexflint/go-arg/tree/v1.4.2) - github.com/alexflint/go-scalar: [v1.0.0](https://github.com/alexflint/go-scalar/tree/v1.0.0) - github.com/aws/aws-sdk-go-v2/feature/s3/manager: [v1.11.76](https://github.com/aws/aws-sdk-go-v2/feature/s3/manager/tree/v1.11.76) - github.com/buildkite/go-pipeline: [v0.2.0](https://github.com/buildkite/go-pipeline/tree/v0.2.0) ### Changed - cloud.google.com/go/compute: v1.23.2 → v1.23.3 - cloud.google.com/go/iam: v1.1.4 → v1.1.5 - cloud.google.com/go/kms: v1.15.4 → v1.15.5 - cloud.google.com/go: v0.110.9 → v0.110.10 - github.com/Azure/azure-sdk-for-go/sdk/azcore: [v1.8.0 → v1.9.0](https://github.com/Azure/azure-sdk-for-go/sdk/azcore/compare/v1.8.0...v1.9.0) - github.com/Azure/azure-sdk-for-go/sdk/internal: [v1.4.0 → v1.5.0](https://github.com/Azure/azure-sdk-for-go/sdk/internal/compare/v1.4.0...v1.5.0) - github.com/DataDog/datadog-agent/pkg/obfuscate: [v0.48.1 → v0.48.0](https://github.com/DataDog/datadog-agent/pkg/obfuscate/compare/v0.48.1...v0.48.0) - github.com/DataDog/datadog-agent/pkg/remoteconfig/state: [v0.48.1 → 2549ba9](https://github.com/DataDog/datadog-agent/pkg/remoteconfig/state/compare/v0.48.1...2549ba9) - github.com/DataDog/sketches-go: [v1.4.3 → v1.4.2](https://github.com/DataDog/sketches-go/compare/v1.4.3...v1.4.2) - github.com/andybalholm/brotli: [v1.0.6 → v1.0.1](https://github.com/andybalholm/brotli/compare/v1.0.6...v1.0.1) - github.com/aws/aws-sdk-go-v2/config: [v1.19.1 → v1.25.11](https://github.com/aws/aws-sdk-go-v2/config/compare/v1.19.1...v1.25.11) - github.com/aws/aws-sdk-go-v2/credentials: [v1.13.43 → v1.16.9](https://github.com/aws/aws-sdk-go-v2/credentials/compare/v1.13.43...v1.16.9) - github.com/aws/aws-sdk-go-v2/feature/ec2/imds: [v1.13.13 → v1.14.9](https://github.com/aws/aws-sdk-go-v2/feature/ec2/imds/compare/v1.13.13...v1.14.9) - github.com/aws/aws-sdk-go-v2/internal/configsources: [v1.1.43 → v1.2.8](https://github.com/aws/aws-sdk-go-v2/internal/configsources/compare/v1.1.43...v1.2.8) - github.com/aws/aws-sdk-go-v2/internal/endpoints/v2: [v2.4.37 → v2.5.8](https://github.com/aws/aws-sdk-go-v2/internal/endpoints/v2/compare/v2.4.37...v2.5.8) - github.com/aws/aws-sdk-go-v2/internal/ini: [v1.3.45 → v1.7.1](https://github.com/aws/aws-sdk-go-v2/internal/ini/compare/v1.3.45...v1.7.1) - github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding: [v1.9.14 → v1.10.3](https://github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding/compare/v1.9.14...v1.10.3) - github.com/aws/aws-sdk-go-v2/service/internal/presigned-url: [v1.9.37 → v1.10.8](https://github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/compare/v1.9.37...v1.10.8) - github.com/aws/aws-sdk-go-v2/service/kms: [v1.24.7 → v1.27.2](https://github.com/aws/aws-sdk-go-v2/service/kms/compare/v1.24.7...v1.27.2) - github.com/aws/aws-sdk-go-v2/service/sso: [v1.15.2 → v1.18.2](https://github.com/aws/aws-sdk-go-v2/service/sso/compare/v1.15.2...v1.18.2) - github.com/aws/aws-sdk-go-v2/service/ssooidc: [v1.17.3 → v1.21.2](https://github.com/aws/aws-sdk-go-v2/service/ssooidc/compare/v1.17.3...v1.21.2) - github.com/aws/aws-sdk-go-v2/service/sts: [v1.23.2 → v1.26.2](https://github.com/aws/aws-sdk-go-v2/service/sts/compare/v1.23.2...v1.26.2) - github.com/aws/aws-sdk-go-v2: [v1.21.2 → 
v1.23.5](https://github.com/aws/aws-sdk-go-v2/compare/v1.21.2...v1.23.5) - github.com/aws/aws-sdk-go: [v1.47.0 → v1.48.11](https://github.com/aws/aws-sdk-go/compare/v1.47.0...v1.48.11) - github.com/aws/smithy-go: [v1.15.0 → v1.18.1](https://github.com/aws/smithy-go/compare/v1.15.0...v1.18.1) - github.com/buildkite/agent/v3: [v3.58.0 → v3.59.0](https://github.com/buildkite/agent/v3/compare/v3.58.0...v3.59.0) - github.com/buildkite/bintest/v3: [v3.1.1 → v3.2.0](https://github.com/buildkite/bintest/v3/compare/v3.1.1...v3.2.0) - github.com/cert-manager/cert-manager: [v1.13.2 → v1.13.3](https://github.com/cert-manager/cert-manager/compare/v1.13.2...v1.13.3) - github.com/containers/common: [v0.57.0 → v0.57.1](https://github.com/containers/common/compare/v0.57.0...v0.57.1) - github.com/ebitengine/purego: [v0.5.0 → v0.5.0-alpha.1](https://github.com/ebitengine/purego/compare/v0.5.0...v0.5.0-alpha.1) - github.com/felixge/httpsnoop: [v1.0.3 → v1.0.4](https://github.com/felixge/httpsnoop/compare/v1.0.3...v1.0.4) - github.com/gabriel-vasile/mimetype: [v1.4.3 → v1.4.2](https://github.com/gabriel-vasile/mimetype/compare/v1.4.3...v1.4.2) - github.com/go-openapi/spec: [v0.20.9 → v0.20.11](https://github.com/go-openapi/spec/compare/v0.20.9...v0.20.11) - github.com/go-openapi/strfmt: [v0.21.7 → v0.21.8](https://github.com/go-openapi/strfmt/compare/v0.21.7...v0.21.8) - github.com/go-openapi/validate: [v0.22.1 → v0.22.3](https://github.com/go-openapi/validate/compare/v0.22.1...v0.22.3) - github.com/go-rod/rod: [v0.114.4 → v0.114.5](https://github.com/go-rod/rod/compare/v0.114.4...v0.114.5) - github.com/google/go-tpm-tools: [v0.4.1 → v0.4.2](https://github.com/google/go-tpm-tools/compare/v0.4.1...v0.4.2) - github.com/gorilla/mux: [v1.8.0 → v1.8.1](https://github.com/gorilla/mux/compare/v1.8.0...v1.8.1) - github.com/hashicorp/go-retryablehttp: [v0.7.4 → v0.7.5](https://github.com/hashicorp/go-retryablehttp/compare/v0.7.4...v0.7.5) - github.com/jellydator/ttlcache/v3: [v3.1.0 → v3.1.1](https://github.com/jellydator/ttlcache/v3/compare/v3.1.0...v3.1.1) - github.com/montanaflynn/stats: [v0.6.6 → 1bf9dbc](https://github.com/montanaflynn/stats/compare/v0.6.6...1bf9dbc) - github.com/open-policy-agent/opa: [v0.58.0 → v0.59.0](https://github.com/open-policy-agent/opa/compare/v0.58.0...v0.59.0) - github.com/pierrec/lz4/v4: [v4.1.18 → v4.1.2](https://github.com/pierrec/lz4/v4/compare/v4.1.18...v4.1.2) - github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring: [v0.69.1 → v0.70.0](https://github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/compare/v0.69.1...v0.70.0) - github.com/sigstore/cosign/v2: [v2.2.1 → v2.2.2](https://github.com/sigstore/cosign/v2/compare/v2.2.1...v2.2.2) - github.com/sigstore/rekor: [v1.3.3 → v1.3.4](https://github.com/sigstore/rekor/compare/v1.3.3...v1.3.4) - github.com/sigstore/sigstore/pkg/signature/kms/aws: [v1.7.5 → v1.7.6](https://github.com/sigstore/sigstore/pkg/signature/kms/aws/compare/v1.7.5...v1.7.6) - github.com/sigstore/sigstore/pkg/signature/kms/azure: [v1.7.5 → v1.7.6](https://github.com/sigstore/sigstore/pkg/signature/kms/azure/compare/v1.7.5...v1.7.6) - github.com/sigstore/sigstore/pkg/signature/kms/gcp: [v1.7.5 → v1.7.6](https://github.com/sigstore/sigstore/pkg/signature/kms/gcp/compare/v1.7.5...v1.7.6) - github.com/sigstore/sigstore/pkg/signature/kms/hashivault: [v1.7.5 → v1.7.6](https://github.com/sigstore/sigstore/pkg/signature/kms/hashivault/compare/v1.7.5...v1.7.6) - github.com/sigstore/sigstore: [v1.7.5 → 
v1.7.6](https://github.com/sigstore/sigstore/compare/v1.7.5...v1.7.6) - github.com/stretchr/objx: [v0.5.1 → v0.5.0](https://github.com/stretchr/objx/compare/v0.5.1...v0.5.0) - github.com/theupdateframework/go-tuf: [v0.6.1 → v0.7.0](https://github.com/theupdateframework/go-tuf/compare/v0.6.1...v0.7.0) - github.com/tidwall/pretty: [v1.2.1 → v1.2.0](https://github.com/tidwall/pretty/compare/v1.2.1...v1.2.0) - github.com/urfave/cli/v2: [v2.25.7 → v2.26.0](https://github.com/urfave/cli/v2/compare/v2.25.7...v2.26.0) - github.com/xanzy/go-gitlab: [v0.93.2 → v0.94.0](https://github.com/xanzy/go-gitlab/compare/v0.93.2...v0.94.0) - go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc: v0.45.0 → v0.46.0 - go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp: v0.45.0 → v0.46.1 - go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc: v1.19.0 → v1.21.0 - go.opentelemetry.io/otel/exporters/otlp/otlptrace: v1.19.0 → v1.21.0 - go.opentelemetry.io/otel/metric: v1.19.0 → v1.21.0 - go.opentelemetry.io/otel/sdk: v1.19.0 → v1.21.0 - go.opentelemetry.io/otel/trace: v1.19.0 → v1.21.0 - go.opentelemetry.io/otel: v1.19.0 → v1.21.0 - go.step.sm/crypto: v0.36.1 → v0.38.0 - golang.org/x/crypto: v0.16.0 → v0.17.0 - golang.org/x/exp: 7918f67 → 2478ac8 - golang.org/x/oauth2: v0.13.0 → v0.15.0 - golang.org/x/time: v0.3.0 → v0.5.0 - golang.org/x/tools: v0.14.0 → v0.15.0 - google.golang.org/api: v0.149.0 → v0.152.0 - google.golang.org/genproto/googleapis/api: 49dd2c1 → bbf56f3 - google.golang.org/genproto/googleapis/bytestream: d783a09 → 83a465c - google.golang.org/genproto/googleapis/rpc: 49dd2c1 → 83a465c - google.golang.org/genproto: 49dd2c1 → bbf56f3 - google.golang.org/grpc: v1.59.0 → v1.60.1 - k8s.io/api: v0.28.4 → v0.29.0 - k8s.io/apiextensions-apiserver: v0.28.3 → v0.28.4 - k8s.io/apimachinery: v0.28.4 → v0.29.0 - k8s.io/apiserver: v0.28.3 → v0.28.4 - k8s.io/cli-runtime: v0.28.4 → v0.29.0 - k8s.io/client-go: v0.28.4 → v0.29.0 - k8s.io/code-generator: v0.28.3 → v0.28.4 - k8s.io/component-base: v0.28.3 → v0.28.4 - k8s.io/kms: v0.28.3 → v0.28.4 - k8s.io/utils: 3b25d92 → b307cd5 - sigs.k8s.io/structured-merge-diff/v4: v4.3.0 → v4.4.1 ### Removed - github.com/99designs/gqlgen: [v0.17.36](https://github.com/99designs/gqlgen/tree/v0.17.36) - github.com/DataDog/gostackparse: [v0.7.0](https://github.com/DataDog/gostackparse/tree/v0.7.0) - github.com/IBM/sarama: [v1.40.0](https://github.com/IBM/sarama/tree/v1.40.0) - github.com/Shopify/sarama: [v1.38.1](https://github.com/Shopify/sarama/tree/v1.38.1) - github.com/aws/aws-sdk-go-v2/service/dynamodb: [v1.21.4](https://github.com/aws/aws-sdk-go-v2/service/dynamodb/tree/v1.21.4) - github.com/aws/aws-sdk-go-v2/service/ec2: [v1.93.2](https://github.com/aws/aws-sdk-go-v2/service/ec2/tree/v1.93.2) - github.com/aws/aws-sdk-go-v2/service/eventbridge: [v1.20.4](https://github.com/aws/aws-sdk-go-v2/service/eventbridge/tree/v1.20.4) - github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery: [v1.7.34](https://github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery/tree/v1.7.34) - github.com/aws/aws-sdk-go-v2/service/kinesis: [v1.18.4](https://github.com/aws/aws-sdk-go-v2/service/kinesis/tree/v1.18.4) - github.com/aws/aws-sdk-go-v2/service/sfn: [v1.19.4](https://github.com/aws/aws-sdk-go-v2/service/sfn/tree/v1.19.4) - github.com/aws/aws-sdk-go-v2/service/sns: [v1.21.4](https://github.com/aws/aws-sdk-go-v2/service/sns/tree/v1.21.4) - github.com/aws/aws-sdk-go-v2/service/sqs: 
[v1.24.4](https://github.com/aws/aws-sdk-go-v2/service/sqs/tree/v1.24.4) - github.com/bradfitz/gomemcache: [acc6962](https://github.com/bradfitz/gomemcache/tree/acc6962) - github.com/bytedance/sonic: [v1.10.0](https://github.com/bytedance/sonic/tree/v1.10.0) - github.com/chenzhuoyu/base64x: [296ad89](https://github.com/chenzhuoyu/base64x/tree/296ad89) - github.com/chenzhuoyu/iasm: [v0.9.0](https://github.com/chenzhuoyu/iasm/tree/v0.9.0) - github.com/confluentinc/confluent-kafka-go/v2: [v2.2.0](https://github.com/confluentinc/confluent-kafka-go/v2/tree/v2.2.0) - github.com/confluentinc/confluent-kafka-go: [v1.9.2](https://github.com/confluentinc/confluent-kafka-go/tree/v1.9.2) - github.com/decred/dcrd/crypto/blake256: [v1.0.1](https://github.com/decred/dcrd/crypto/blake256/tree/v1.0.1) - github.com/denisenkom/go-mssqldb: [v0.11.0](https://github.com/denisenkom/go-mssqldb/tree/v0.11.0) - github.com/dimfeld/httptreemux/v5: [v5.5.0](https://github.com/dimfeld/httptreemux/v5/tree/v5.5.0) - github.com/dvyukov/go-fuzz: [6a8e9d1](https://github.com/dvyukov/go-fuzz/tree/6a8e9d1) - github.com/eapache/go-resiliency: [v1.4.0](https://github.com/eapache/go-resiliency/tree/v1.4.0) - github.com/eapache/go-xerial-snappy: [c322873](https://github.com/eapache/go-xerial-snappy/tree/c322873) - github.com/eapache/queue: [v1.1.0](https://github.com/eapache/queue/tree/v1.1.0) - github.com/elastic/elastic-transport-go/v8: [v8.1.0](https://github.com/elastic/elastic-transport-go/v8/tree/v8.1.0) - github.com/elastic/go-elasticsearch/v6: [v6.8.5](https://github.com/elastic/go-elasticsearch/v6/tree/v6.8.5) - github.com/elastic/go-elasticsearch/v7: [v7.17.1](https://github.com/elastic/go-elasticsearch/v7/tree/v7.17.1) - github.com/elastic/go-elasticsearch/v8: [v8.4.0](https://github.com/elastic/go-elasticsearch/v8/tree/v8.4.0) - github.com/emicklei/go-restful: [v2.16.0+incompatible](https://github.com/emicklei/go-restful/tree/v2.16.0) - github.com/garyburd/redigo: [v1.6.4](https://github.com/garyburd/redigo/tree/v1.6.4) - github.com/gin-contrib/sse: [v0.1.0](https://github.com/gin-contrib/sse/tree/v0.1.0) - github.com/gin-gonic/gin: [v1.9.1](https://github.com/gin-gonic/gin/tree/v1.9.1) - github.com/globalsign/mgo: [eeefdec](https://github.com/globalsign/mgo/tree/eeefdec) - github.com/go-pg/pg/v10: [v10.11.1](https://github.com/go-pg/pg/v10/tree/v10.11.1) - github.com/go-pg/zerochecker: [v0.2.0](https://github.com/go-pg/zerochecker/tree/v0.2.0) - github.com/go-playground/assert/v2: [v2.2.0](https://github.com/go-playground/assert/v2/tree/v2.2.0) - github.com/go-redis/redis/v7: [v7.4.1](https://github.com/go-redis/redis/v7/tree/v7.4.1) - github.com/go-redis/redis/v8: [v8.11.5](https://github.com/go-redis/redis/v8/tree/v8.11.5) - github.com/go-redis/redis: [v6.15.9+incompatible](https://github.com/go-redis/redis/tree/v6.15.9) - github.com/go-stack/stack: [v1.8.0](https://github.com/go-stack/stack/tree/v1.8.0) - github.com/gobuffalo/attrs: [a9411de](https://github.com/gobuffalo/attrs/tree/a9411de) - github.com/gobuffalo/depgen: [v0.1.0](https://github.com/gobuffalo/depgen/tree/v0.1.0) - github.com/gobuffalo/envy: [v1.7.0](https://github.com/gobuffalo/envy/tree/v1.7.0) - github.com/gobuffalo/genny: [v0.1.1](https://github.com/gobuffalo/genny/tree/v0.1.1) - github.com/gobuffalo/gitgen: [cc08618](https://github.com/gobuffalo/gitgen/tree/cc08618) - github.com/gobuffalo/gogen: [v0.1.1](https://github.com/gobuffalo/gogen/tree/v0.1.1) - github.com/gobuffalo/logger: [86e12af](https://github.com/gobuffalo/logger/tree/86e12af) - 
github.com/gobuffalo/mapi: [v1.0.2](https://github.com/gobuffalo/mapi/tree/v1.0.2) - github.com/gobuffalo/packd: [v0.1.0](https://github.com/gobuffalo/packd/tree/v0.1.0) - github.com/gobuffalo/packr/v2: [v2.2.0](https://github.com/gobuffalo/packr/v2/tree/v2.2.0) - github.com/gobuffalo/syncx: [33c2958](https://github.com/gobuffalo/syncx/tree/33c2958) - github.com/gocql/gocql: [0eacd31](https://github.com/gocql/gocql/tree/0eacd31) - github.com/gofiber/fiber/v2: [v2.50.0](https://github.com/gofiber/fiber/v2/tree/v2.50.0) - github.com/gofrs/uuid: [v4.4.0+incompatible](https://github.com/gofrs/uuid/tree/v4.4.0) - github.com/golang-sql/civil: [b832511](https://github.com/golang-sql/civil/tree/b832511) - github.com/golang-sql/sqlexp: [v0.1.0](https://github.com/golang-sql/sqlexp/tree/v0.1.0) - github.com/gomodule/redigo: [v1.8.9](https://github.com/gomodule/redigo/tree/v1.8.9) - github.com/googleapis/gnostic: [v0.5.5](https://github.com/googleapis/gnostic/tree/v0.5.5) - github.com/graph-gophers/graphql-go: [v1.5.0](https://github.com/graph-gophers/graphql-go/tree/v1.5.0) - github.com/hailocab/go-hostpool: [e80d13c](https://github.com/hailocab/go-hostpool/tree/e80d13c) - github.com/hashicorp/go-uuid: [v1.0.3](https://github.com/hashicorp/go-uuid/tree/v1.0.3) - github.com/hashicorp/golang-lru/v2: [v2.0.3](https://github.com/hashicorp/golang-lru/v2/tree/v2.0.3) - github.com/jackc/pgpassfile: [v1.0.0](https://github.com/jackc/pgpassfile/tree/v1.0.0) - github.com/jackc/pgservicefile: [091c0ba](https://github.com/jackc/pgservicefile/tree/091c0ba) - github.com/jackc/pgx/v5: [v5.3.1](https://github.com/jackc/pgx/v5/tree/v5.3.1) - github.com/jcmturner/aescts/v2: [v2.0.0](https://github.com/jcmturner/aescts/v2/tree/v2.0.0) - github.com/jcmturner/dnsutils/v2: [v2.0.0](https://github.com/jcmturner/dnsutils/v2/tree/v2.0.0) - github.com/jcmturner/gofork: [v1.7.6](https://github.com/jcmturner/gofork/tree/v1.7.6) - github.com/jcmturner/gokrb5/v8: [v8.4.4](https://github.com/jcmturner/gokrb5/v8/tree/v8.4.4) - github.com/jcmturner/rpc/v2: [v2.0.3](https://github.com/jcmturner/rpc/v2/tree/v2.0.3) - github.com/jinzhu/gorm: [v1.9.16](https://github.com/jinzhu/gorm/tree/v1.9.16) - github.com/jinzhu/inflection: [v1.0.0](https://github.com/jinzhu/inflection/tree/v1.0.0) - github.com/jinzhu/now: [v1.1.5](https://github.com/jinzhu/now/tree/v1.1.5) - github.com/joho/godotenv: [v1.3.0](https://github.com/joho/godotenv/tree/v1.3.0) - github.com/karrick/godirwalk: [v1.10.3](https://github.com/karrick/godirwalk/tree/v1.10.3) - github.com/klauspost/cpuid/v2: [v2.2.5](https://github.com/klauspost/cpuid/v2/tree/v2.2.5) - github.com/konsorten/go-windows-terminal-sequences: [v1.0.2](https://github.com/konsorten/go-windows-terminal-sequences/tree/v1.0.2) - github.com/labstack/echo/v4: [v4.11.1](https://github.com/labstack/echo/v4/tree/v4.11.1) - github.com/labstack/echo: [v3.3.10+incompatible](https://github.com/labstack/echo/tree/v3.3.10) - github.com/labstack/gommon: [v0.4.0](https://github.com/labstack/gommon/tree/v0.4.0) - github.com/markbates/oncer: [bf2de49](https://github.com/markbates/oncer/tree/bf2de49) - github.com/markbates/safe: [v1.0.1](https://github.com/markbates/safe/tree/v1.0.1) - github.com/microsoft/go-mssqldb: [v0.21.0](https://github.com/microsoft/go-mssqldb/tree/v0.21.0) - github.com/richardartoul/molecule: [32cfee0](https://github.com/richardartoul/molecule/tree/32cfee0) - github.com/segmentio/kafka-go: [v0.4.42](https://github.com/segmentio/kafka-go/tree/v0.4.42) - github.com/spaolacci/murmur3: 
[v1.1.0](https://github.com/spaolacci/murmur3/tree/v1.1.0) - github.com/tidwall/btree: [v1.6.0](https://github.com/tidwall/btree/tree/v1.6.0) - github.com/tidwall/buntdb: [v1.3.0](https://github.com/tidwall/buntdb/tree/v1.3.0) - github.com/tidwall/gjson: [v1.16.0](https://github.com/tidwall/gjson/tree/v1.16.0) - github.com/tidwall/grect: [v0.1.4](https://github.com/tidwall/grect/tree/v0.1.4) - github.com/tidwall/match: [v1.1.1](https://github.com/tidwall/match/tree/v1.1.1) - github.com/tidwall/rtred: [v0.1.2](https://github.com/tidwall/rtred/tree/v0.1.2) - github.com/tidwall/tinyqueue: [v0.1.1](https://github.com/tidwall/tinyqueue/tree/v0.1.1) - github.com/tmthrgd/go-hex: [447a304](https://github.com/tmthrgd/go-hex/tree/447a304) - github.com/twitchtv/twirp: [v8.1.3+incompatible](https://github.com/twitchtv/twirp/tree/v8.1.3) - github.com/twitchyliquid64/golang-asm: [v0.15.1](https://github.com/twitchyliquid64/golang-asm/tree/v0.15.1) - github.com/ugorji/go/codec: [v1.2.11](https://github.com/ugorji/go/codec/tree/v1.2.11) - github.com/valyala/bytebufferpool: [v1.0.0](https://github.com/valyala/bytebufferpool/tree/v1.0.0) - github.com/valyala/fasthttp: [v1.50.0](https://github.com/valyala/fasthttp/tree/v1.50.0) - github.com/valyala/fasttemplate: [v1.2.2](https://github.com/valyala/fasttemplate/tree/v1.2.2) - github.com/valyala/tcplisten: [v1.0.0](https://github.com/valyala/tcplisten/tree/v1.0.0) - github.com/vmihailenco/bufpool: [v0.1.11](https://github.com/vmihailenco/bufpool/tree/v0.1.11) - github.com/vmihailenco/msgpack/v5: [v5.3.5](https://github.com/vmihailenco/msgpack/v5/tree/v5.3.5) - github.com/vmihailenco/tagparser/v2: [v2.0.0](https://github.com/vmihailenco/tagparser/v2/tree/v2.0.0) - github.com/vmihailenco/tagparser: [v0.1.2](https://github.com/vmihailenco/tagparser/tree/v0.1.2) - github.com/zenazn/goji: [v1.0.1](https://github.com/zenazn/goji/tree/v1.0.1) - golang.org/x/arch: v0.4.0 - gopkg.in/jinzhu/gorm.v1: v1.9.2 - gopkg.in/olivere/elastic.v3: v3.0.75 - gopkg.in/olivere/elastic.v5: v5.0.84 - gorm.io/driver/mysql: v1.0.1 - gorm.io/driver/postgres: v1.4.6 - gorm.io/driver/sqlserver: v1.4.2 - gorm.io/gorm: v1.25.3 - honnef.co/go/gotraceui: v0.2.0 - mellium.im/sasl: v0.3.1 Done
gharchive/issue
2023-12-19T08:08:24
2025-04-01T06:39:19.699736
{ "authors": [ "saschagrunert" ], "repo": "kubernetes-sigs/security-profiles-operator", "url": "https://github.com/kubernetes-sigs/security-profiles-operator/issues/2031", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
299942514
Update readme to reflect move from incubator to stable for cockroachdb CockroachDB is now in stable. Updated the install section, and one more place referring to the incubator. /cc @a-robinson Does this look ok? /assign @mattfarina The build failed because I didn't bump the chart version. I've only updated README.md should I still update the chart version? If I do, I'll wait for #3769 to be merged first. Thanks @hvaara! This LGTM. Feel free to use the same version number as #3769, I can update that one if this gets in first. /ok-to-test /lgtm /approve @a-robinson Thanks a lot for taking a look! I've updated the chart version. /lgtm
gharchive/pull-request
2018-02-24T13:50:50
2025-04-01T06:39:19.721491
{ "authors": [ "a-robinson", "hvaara", "unguiculus" ], "repo": "kubernetes/charts", "url": "https://github.com/kubernetes/charts/pull/3858", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1179997861
[occm] Node ExternalIP not being added Is this a BUG REPORT or FEATURE REQUEST?: /kind bug What happened: Attempting to switch from in-tree cloud provider to occm. The nodes don't have external IP's as revealed with kubectl get nodes -o wide: NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME tf-rancher-cluster-demo-test-1 Ready controlplane,etcd,worker 58m v1.21.5 10.0.0.9 <none> Ubuntu 18.04.6 LTS 5.4.0-91-generic docker://20.10.11 tf-rancher-cluster-demo-test-2 Ready controlplane,etcd,worker 27d v1.21.5 10.0.0.13 <none> Ubuntu 18.04.6 LTS 5.4.0-91-generic docker://20.10.11 tf-rancher-cluster-demo-test-3 Ready controlplane,etcd,worker 27d v1.21.5 10.0.0.18 <none> Ubuntu 18.04.6 LTS 5.4.0-91-generic docker://20.10.11 The name of the single floating network is specified under public-network-name in the cloud.conf and when logging verbosity is increase to --v=4, occm is detecting the floating IPs (relevant instances.go entries): ... I0324 19:55:31.297534 1 instances.go:72] openstack.Instances() called I0324 19:55:31.297569 1 instances.go:131] NodeAddressesByProviderID () called I0324 19:55:31.297898 1 instances.go:116] NodeAddresses(tf-rancher-cluster-demo-test-1) called I0324 19:55:31.633329 1 instances.go:123] NodeAddresses(tf-rancher-cluster-demo-test-1) => [{InternalIP 10.0.0.9} {ExternalIP 10.102.2.147}] ... What you expected to happen: The actual node resources have external IPs How to reproduce it: I don't believe I'm doing anything out of the ordinary. This is a plain install from scratch with no other workloads. Anything else we need to know?: N/A Environment: openstack-cloud-controller-manager(or other related binary) version: 1.23 (using the chart) OpenStack version: Queens Others: k8s, os, and kernel versions are in the above snippets do you have floating ip set and which version you are using ? $ kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME m2 Ready control-plane,master 16d v1.23.4 10.0.0.40 172.24.4.241 Ubuntu 18.04.6 LTS 4.15.0-167-generic docker://20.10.12 # nova list `... | f40cf042-3788-414d-8f06-efbd58e09da5 | m2 | ACTIVE | - | Running | private=10.0.0.40, fd81:d720:65dd:0:f816:3eff:fe85:4f98, 172.24.4.241 ... $ kubectl get pod openstack-cloud-controller-manager-5n8fg -n kube-system -o yaml | grep image image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:latest I was on 1.23, but I just switched to latest and there is no change. 
Yes, the nodes have floating IPs: +--------------------------------------+--------------------------------+--------+------------+-------------+----------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+--------------------------------+--------+------------+-------------+----------------------------------------------------------+ | e5817584-7fc5-4900-a1ff-d6c7f01227e9 | tf-rancher-cluster-demo-test-1 | ACTIVE | - | Running | tf-rancher-cluster-demo-test-net=10.0.0.9, 10.102.2.147 | | 816972e2-3b6e-4b9f-b69e-ec834573db2f | tf-rancher-cluster-demo-test-2 | ACTIVE | - | Running | tf-rancher-cluster-demo-test-net=10.0.0.13, 10.102.2.100 | | 1815fbe4-5a8e-4a78-a256-b022c3435e66 | tf-rancher-cluster-demo-test-3 | ACTIVE | - | Running | tf-rancher-cluster-demo-test-net=10.0.0.18, 10.102.2.84 | +----------------------------------------------------------------------------------------------------------------------------------------------------------------------+ And in horizon: um... I think there might be some pre-condition not satisfied can you enable v==5 first then check whether there are any suspected logs might worth a look? https://github.com/kubernetes/cloud-provider/blob/master/controllers/node/node_controller.go#L323 is the code that set the node address, there are multiple checks and I guess some might break the logic in updating the ip... @vowywowy so, can you help paste full log of OCCM with --v = 4 so it might contains more info? Thanks Thanks for these messages! They ended up sending me down a rabbit hole to the solution. As you can probably guess based on the naming, everything here is terraform/rancher/rke. When I switched the cluster resource to stop using the in-tree cloud provider, somehow kubelet ended up with the flag --cloud-provider= which is obviously not correct. For anyone in the same position as me, instead of doing this to the rancher_cluster resource: resource "rancher_cluster" "example" { rke_config { cloud_provider {} # cloud_provider { # openstack_cloud_provider { # ... # } # } ... } ... } do this: resource "rancher_cluster" "example" { rke_config { cloud_provider { name = "external" } # <-- very important "name" attribute # cloud_provider { # openstack_cloud_provider { # ... # } # } ... } ... } The name field has poor documentation, and is only semi-alluded to in the rke provider (not the rancher2 provider). If you want to use an out-of-tree cloud provider, you have to name it external. Thanks again @jichenjc! Closing, since this was just user error. ok, glad it's solved :) and yes, seems it's user input instead of OCCM
gharchive/issue
2022-03-24T20:06:02
2025-04-01T06:39:19.754723
{ "authors": [ "jichenjc", "vowywowy" ], "repo": "kubernetes/cloud-provider-openstack", "url": "https://github.com/kubernetes/cloud-provider-openstack/issues/1818", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
209312536
cluster-autoscaler assumes kube-system as its namespace I'm trying to run cluster-autoscaler in a namespace other than kube-system and get the error below: leaderelection.go:210] failed to renew lease kube-system/cluster-autoscaler The leaderElection section assumes that cluster-autoscaler is running in the kube-system namespace. It should look up the namespace, OR at least the README should be updated to mention that this will only run in the kube-system namespace. Hi @dhawal55! AFAIK, there's a --namespace flag as of today, to override the namespace to which CA is scheduled, to whatever you'd like. Would it solve your problem? Oh great, that's what I was looking for. Closing the issue
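For anyone hitting the same error, here is a minimal sketch of how that flag might be wired into a Deployment manifest. The deployment name, namespace, image tag and the other flags are illustrative assumptions, not taken from this issue; only the --namespace flag itself is the one mentioned above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler        # illustrative name
  namespace: autoscaling          # the non-default namespace CA should run in
spec:
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler   # assumed to exist
      containers:
      - name: cluster-autoscaler
        image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.27.0  # assumed tag
        command:
        - ./cluster-autoscaler
        - --namespace=autoscaling   # tells CA to use this namespace (e.g. for its leader-election lease)
        - --cloud-provider=aws      # placeholder; depends on your environment
```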
gharchive/issue
2017-02-22T00:09:19
2025-04-01T06:39:19.766218
{ "authors": [ "dhawal55", "mumoshu" ], "repo": "kubernetes/contrib", "url": "https://github.com/kubernetes/contrib/issues/2402", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
243621327
Update deploy of grafana with version 4.4.1. Upgrade grafana version to 4.4.1 in the deploy file. /assign @piosz SGTM /lgtm Thanks a lot for the fix!
gharchive/pull-request
2017-07-18T07:16:40
2025-04-01T06:39:19.804437
{ "authors": [ "andyxning", "k8s-reviewable", "loburm", "piosz" ], "repo": "kubernetes/heapster", "url": "https://github.com/kubernetes/heapster/pull/1731", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
226395260
Upgrade fails with "already exists" This happens occasionally: $ helm upgrade [...] picaxe-staging Error: UPGRADE FAILED: release: already exists Log: [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:15 storage.go:94: Listing all releases with filter [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 storage.go:133: Getting release history for 'picaxe-staging' [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 release_server.go:936: Executing pre-upgrade hooks for picaxe-staging [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 release_server.go:965: Hooks complete for pre-upgrade picaxe-staging [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 client.go:398: generating strategic merge patch for *runtime.Unstructured [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 client.go:398: generating strategic merge patch for *runtime.Unstructured [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 client.go:592: beginning wait for resources with timeout of 5m0s [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:936: Executing post-upgrade hooks for picaxe-staging [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:965: Hooks complete for post-upgrade picaxe-staging [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:53: Updating "picaxe-staging" (v51) in storage [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:45: Create release "picaxe-staging" (v52) in storage [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:936: Executing post-upgrade hooks for picaxe-staging [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:965: Hooks complete for post-upgrade picaxe-staging [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:53: Updating "picaxe-staging" (v51) in storage [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:45: Create release "picaxe-staging" (v52) in storage [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:936: Executing post-upgrade hooks for picaxe-staging [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:965: Hooks complete for post-upgrade picaxe-staging [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:53: Updating "picaxe-staging" (v51) in storage [tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:45: Create release "picaxe-staging" (v52) in storage We don't use any hooks. Helm 2.3.0, Kubernetes 1.5.6. Thank you for providing the logs on this. A few questions. How "occasionally" does this occur? Have you tried with helm 2.3.1 or 2.4.1 to see if it solves the problem? It's only happened 2 times out of about 51 times so far, and it's not consistently reproducible, although I could of course write a stress-test script. Only tried 2.3.0. Just installed 2.4.1, will let you know if it happens again. That typically happens if the release name is already present and being used. Do you have the --install flag set? Does the error only happen after an upgrade fails (or when the last upgrade is still in progres)? Yes, we started using helm upgrade --install. And the release already exists when this happens. @thomastaylor312 @technosophos I'm facing the same issue here, I can't do an upgrade over an existing resource, even thought I used hooks. 
For instance, I already installed a release named myproject that had a secret kind: Secret metadata: name: secret annotations: "helm.sh/hook": pre-install,pre-upgrade labels: app: {{ .Values.global.environment.app }} environment: {{ $env }} type: Opaque {{- end }} when I do helm upgrade --install --force project --tiller-namespace dev dev/ I got this as result Error: UPGRADE FAILED: secrets "secret" already exists Logs of Tiller: [tiller] 2019/03/20 10:51:39 creating updated release for project [storage] 2019/03/20 10:51:39 creating release "project.v2" [tiller] 2019/03/20 10:51:39 performing update for project [tiller] 2019/03/20 10:51:39 executing 2 pre-upgrade hooks for project [kube] 2019/03/20 10:51:39 building resources from manifest [kube] 2019/03/20 10:51:40 creating 1 resource(s) [tiller] 2019/03/20 10:51:40 warning: Release project pre-upgrade project/templates/secret.yaml failed: secrets "secret" already exists [storage] 2019/03/20 10:58:34 listing all releases with filter Any help for this issue ? Thank you I have encountered the same issue in Helm 3: client.go:87: [debug] creating 377 resource(s) Error: secrets "helm3-test-rabbitmq" already exists helm.go:76: [debug] secrets "helm3-test-rabbitmq" already exists I have run helm upgrade --install, so the namespace is nearly empty... @tomislater We need a little more information to debug that. Is the secret managed as a hook, or as a regular resource? If you can give us content of the secret's metadata, that might be helpful in figuring out why it is not upgrading. But if it is a hook, its behavior is subject to all the caveats described in the manual. Attempting to install over an existing secret that was created by a hook will still not work. Again, though, I'm guessing about what your chart is trying to do... we really need more info to be able to provide any meaningful feedback. I would recommend opening a new issue with complete details, because I do not think it is the same issue as the one marked Closed here. @technosophos You are right, my issue is connected to https://github.com/helm/helm/issues/7093 Sorry for interruption!
gharchive/issue
2017-05-04T20:13:11
2025-04-01T06:39:19.812693
{ "authors": [ "HamzaK8s", "atombender", "technosophos", "thomastaylor312", "tomislater" ], "repo": "kubernetes/helm", "url": "https://github.com/kubernetes/helm/issues/2397", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
144384547
Expandybird choking on replicatedservice-3.tgz jackgr@jackgr-macbookpro:~/gopath/src/github.com/kubernetes/helm> helm deploy --name test1 gs://kubernetes-charts-testing/replicatedservice-v3.tgz [ERROR] {"status":"Bad Request","message":"cannot expand configuration:expandybird response:\nerror expanding chart: test1: ExpandyBird cannot do this kind of expansion: %!!(MISSING)(EXTRA string=Expandybird)\n\u0026{[0xc2080824e0]}\n"} /cc @sparkprime We should probably just remove that check, as it allows deploying several versions of expandybird with different "names". The expansion service should just assume that the thing it's given is of the kind it's designed to deal with. Sounds like a good approach. Probably best if you do it, since you know that code better than I do. Delete this code if chartFile.Expander.Name != "ExpandyBird" { message := fmt.Sprintf("ExpandyBird cannot do this kind of expansion: ", chartFile.Expander.Name) return nil, fmt.Errorf("%s: %s", chartInv.Name, message) } in cmd/expandybird/expander/expander.go Fixed by #486.
gharchive/issue
2016-03-29T20:54:14
2025-04-01T06:39:19.815462
{ "authors": [ "jackgr", "sparkprime" ], "repo": "kubernetes/helm", "url": "https://github.com/kubernetes/helm/issues/485", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
322732798
Add the possibility to delete a full table from values We had an issue where users wanted to delete some default configuration from the values.yaml, but it wasn't possible; this fix overcomes that. Of course, I'm not 100% sure about this implementation, but it is a good topic starter. @bonifaido would you mind adding unit tests and docs here to cover this use case? Thanks!
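To make the use case concrete, here is a rough sketch of the kind of override this change is meant to enable. The chart layout and key names are invented for illustration, and using null as the deletion marker is my assumption about the intended syntax rather than something confirmed in this PR.

```yaml
# values.yaml shipped with a hypothetical chart (the defaults)
servers:
  alpha:
    port: 8080
  beta:
    port: 9090

---
# overrides.yaml supplied by the user, e.g. `helm install -f overrides.yaml ...`
servers:
  beta: null   # intent: drop the whole `beta` table from the defaults
```

The point of the feature is that the second file removes servers.beta entirely instead of merging an empty value over it.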
gharchive/pull-request
2018-05-14T09:13:54
2025-04-01T06:39:19.816670
{ "authors": [ "bacongobbler", "bonifaido" ], "repo": "kubernetes/helm", "url": "https://github.com/kubernetes/helm/pull/4046", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
255172718
fix two doc issues in nginx/README As discussed with @aledbf, these two doc issues are confirmed, and I'm fixing them now. Link to issue #1296 Plus, there's another doc issue as described in issue #1296 (the 3rd one): we can't really tell the difference with or without default-ssl-certificate from those two curl output examples. They are almost the same and should be fixed when time permits. Coverage remained the same at 43.484% when pulling f6946738f893b35de527ac64f3f32da4b52e64a5 on dunjut:master into 85e1a650090b79c1dc53ce41835f65bc33d81e76 on kubernetes:master. /lgtm @dunjut thanks!
gharchive/pull-request
2017-09-05T06:34:07
2025-04-01T06:39:19.856688
{ "authors": [ "aledbf", "coveralls", "dunjut", "k8s-reviewable" ], "repo": "kubernetes/ingress", "url": "https://github.com/kubernetes/ingress/pull/1299", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1600184438
kubectl diff does not report missing ConfigMap keys I have noticed that when using kubectl diff to compare a YAML file containing a ConfigMap to the current state of the ConfigMap in the cluster, missing key-value pairs in the YAML file that exist in the current state of the ConfigMap are not reported as differences. This can lead to unexpected changes to the ConfigMap when applying the YAML file with kubectl apply. For example, if a key-value pair exists in the current ConfigMap in the cluster but is not present in the YAML file, and I apply the YAML file with kubectl apply, the key-value pair will be removed from the ConfigMap. However, kubectl diff does not report this as a difference, even though it would result in a change to the ConfigMap. It'll only be listed as changed if the key exists in the file but has a different value than what's in the cluster. I believe that removing a key-value pair from a ConfigMap is just as much a change as modifying its value. Therefore, I suggest that kubectl diff should report missing ConfigMap keys in the same way that it reports modified values. This would make it easier to identify situations where a key-value pair would be removed from the ConfigMap if the YAML file were applied. Thank you for your attention to this issue. @mbehm which version are you using? Could you please provide steps for reproducing this issue? Thanks. Tried to create a small test case to reproduce it and, my bad, it seems that if the key doesn't exist in the new ConfigMap then apply will leave the current value in the cluster as is; I must have accidentally used create or done something to completely overwrite the existing one. Sorry for the inconvenience, closing the issue.
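A minimal sketch of the scenario, with made-up names. The behaviour noted in the comments reflects the conclusion above: client-side apply (and therefore diff, which previews the apply result) only removes keys that were part of the last applied configuration.

```yaml
# Live ConfigMap in the cluster. The key `extra` was added out of band
# (e.g. with `kubectl edit`), so it is not recorded in the
# last-applied-configuration annotation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config        # hypothetical name
data:
  logLevel: "info"
  extra: "added-by-hand"

---
# Local file passed to `kubectl diff -f` and `kubectl apply -f`.
# `extra` is absent here, yet diff reports no change and apply keeps it,
# because the three-way merge only prunes keys that are present in the
# last-applied-configuration but missing from this file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  logLevel: "info"
```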
gharchive/issue
2023-02-26T20:08:31
2025-04-01T06:39:19.893282
{ "authors": [ "ardaguclu", "mbehm" ], "repo": "kubernetes/kubectl", "url": "https://github.com/kubernetes/kubectl/issues/1378", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
333406790
Add log-counter plugin written in go This PR adds a new binary to the node-problem-detector repository: log-counter. The new binary uses the kmsg log watcher to get kmsg log events, and checks the number of events that occurred. The binary accepts command-line flags for the pattern, count, and period of time to look back. It sets the condition NodeRecreationRequired when it sees the unregister_netdevice error 3 times in 20 minutes, and runs every 10 minutes. /assign @Random-Liu can you triple-check the changes to the Makefile? I am not really sure what the proper structure is. This set of changes is mostly a guess. This is working now! /lgtm
gharchive/pull-request
2018-06-18T19:37:06
2025-04-01T06:39:20.333010
{ "authors": [ "Random-Liu", "dashpole" ], "repo": "kubernetes/node-problem-detector", "url": "https://github.com/kubernetes/node-problem-detector/pull/180", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
767529471
Validate config before running test What would you like to be added: Validate if config is correct before running test. Why is this needed: Currently, step can be either measurement step or phase step (modules are being added in #1634) https://github.com/kubernetes/perf-tests/blob/18e90fc65c6b95c0bff96938458a77d74e19b27a/clusterloader2/api/types.go#L60 Unfortunately, this behaviour is not well implemented: https://github.com/kubernetes/perf-tests/blob/14afe3828ba8d9eaa98aa8a9f5ddd0cdc5f8eebf/clusterloader2/api/extensions.go#L54 https://github.com/kubernetes/perf-tests/blob/14afe3828ba8d9eaa98aa8a9f5ddd0cdc5f8eebf/clusterloader2/pkg/test/simple_test_executor.go#L143 Current implementation does not validate if config is correct before running test. From user experience perspective it would be much better to return error right after test starts. Also, If I'm not wrong, if user specifies both measurements and phases, only measurements will be executed. @marseel could I help out with this one? @marseel could I help out with this one? @lyzs90 Yes sure, I would appreciate it :) If you will have some PR I can help with reviewing. @lyzs90 Yes sure, I would appreciate it :) If you will have some PR I can help with reviewing. @lyzs90 - this long-standing PR is relevant to this one: https://github.com/kubernetes/perf-tests/pull/142 It shows how should probably approach this problem - feel free to pick it up and continue working on that. @lyzs90 - this long-standing PR is relevant to this one: https://github.com/kubernetes/perf-tests/pull/142 It shows how should probably approach this problem - feel free to pick it up and continue working on that. Thanks @wojtek-t I'll check it out /assign Thanks @wojtek-t I'll check it out /assign @wojtek-t I had a look at https://github.com/kubernetes/perf-tests/pull/142 and had some thoughts: Currently Modules are being recursively compiled into steps at runtime. Correct me if I am wrong but the mapping passed to the Module seems static and there is no use case where runtime variables are passed in (unlike Objects). Hence to support validation of Modules, may I suggest to move loading of modules before test execution - after https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/pkg/test/test.go#L66, so that validation of the entire Config (less Objects) can be validated ahead of time I understand that the current style of validation used to keep consistent with k8s, and I'm happy to stick with it. Just wanted to find out what are your thoughts on using a declarative approach like json schema? https://github.com/xeipuuv/gojsonschema Checks like this and this could be simplified with https://json-schema.org/understanding-json-schema/reference/combining.html#combining-schemas Basically cuts down on boilerplate, but we may still have to implement some custom validation for things like IsDNS1123Subdomain, file exists at objectTemplatePath, referenced tuning set has been declared etc. @wojtek-t I had a look at https://github.com/kubernetes/perf-tests/pull/142 and had some thoughts: Currently Modules are being recursively compiled into steps at runtime. Correct me if I am wrong but the mapping passed to the Module seems static and there is no use case where runtime variables are passed in (unlike Objects). 
Hence to support validation of Modules, may I suggest to move loading of modules before test execution - after https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/pkg/test/test.go#L66, so that validation of the entire Config (less Objects) can be validated ahead of time I understand that the current style of validation used to keep consistent with k8s, and I'm happy to stick with it. Just wanted to find out what are your thoughts on using a declarative approach like json schema? https://github.com/xeipuuv/gojsonschema Checks like this and this could be simplified with https://json-schema.org/understanding-json-schema/reference/combining.html#combining-schemas Basically cuts down on boilerplate, but we may still have to implement some custom validation for things like IsDNS1123Subdomain, file exists at objectTemplatePath, referenced tuning set has been declared etc. Currently Modules are being recursively compiled into steps at runtime. Correct me if I am wrong but the mapping passed to the Module seems static and there is no use case where runtime variables are passed in (unlike Objects). Hence to support validation of Modules, may I suggest to move loading of modules before test execution - after https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/pkg/test/test.go#L66, so that validation of the entire Config (less Objects) can be validated ahead of time @mm4tt - for thoughts I understand that the current style of validation used to keep consistent with k8s, and I'm happy to stick with it. Just wanted to find out what are your thoughts on using a declarative approach like json schema? https://github.com/xeipuuv/gojsonschema The consistency with k8s has a benefit that you don't have to learn new stuff when you're operating in a single k8s ecosystem. I agree that this can reduce the boilerplate, but (a) this boilerplate is well-separated (b) it's fairly trivial, so I don't think it's actually that huge advantage. So I would rather stick to that for now. Currently Modules are being recursively compiled into steps at runtime. Correct me if I am wrong but the mapping passed to the Module seems static and there is no use case where runtime variables are passed in (unlike Objects). Hence to support validation of Modules, may I suggest to move loading of modules before test execution - after https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/pkg/test/test.go#L66, so that validation of the entire Config (less Objects) can be validated ahead of time @mm4tt - for thoughts I understand that the current style of validation used to keep consistent with k8s, and I'm happy to stick with it. Just wanted to find out what are your thoughts on using a declarative approach like json schema? https://github.com/xeipuuv/gojsonschema The consistency with k8s has a benefit that you don't have to learn new stuff when you're operating in a single k8s ecosystem. I agree that this can reduce the boilerplate, but (a) this boilerplate is well-separated (b) it's fairly trivial, so I don't think it's actually that huge advantage. So I would rather stick to that for now. The consistency with k8s has a benefit that you don't have to learn new stuff when you're operating in a single k8s ecosystem. I agree that this can reduce the boilerplate, but (a) this boilerplate is well-separated (b) it's fairly trivial, so I don't think it's actually that huge advantage. So I would rather stick to that for now. 
Noted on this 👍 The consistency with k8s has a benefit that you don't have to learn new stuff when you're operating in a single k8s ecosystem. I agree that this can reduce the boilerplate, but (a) this boilerplate is well-separated (b) it's fairly trivial, so I don't think it's actually that huge advantage. So I would rather stick to that for now. Noted on this 👍 Currently Modules are being recursively compiled into steps at runtime. Correct me if I am wrong but the mapping passed to the Module seems static and there is no use case where runtime variables are passed in (unlike Objects). Hence to support validation of Modules, may I suggest to move loading of modules before test execution - after https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/pkg/test/test.go#L66, so that validation of the entire Config (less Objects) can be validated ahead of time I think it makes sense, but let's double check with @mm4tt Currently Modules are being recursively compiled into steps at runtime. Correct me if I am wrong but the mapping passed to the Module seems static and there is no use case where runtime variables are passed in (unlike Objects). Hence to support validation of Modules, may I suggest to move loading of modules before test execution - after https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/pkg/test/test.go#L66, so that validation of the entire Config (less Objects) can be validated ahead of time I think it makes sense, but let's double check with @mm4tt It makes a lot of sense :) Just a nit, you should move it after this to make sure that the config is valid after a custom modifications have been applied. There is also testConfig.Validate here that we could probably reuse or remove - it doesn't make sense to have two "validate" components. In general, big +1 for extracting things out of ExecuteTest method. In my opinion, this method is too big. It does too many things and doesn't adhere to the single principle rule - it should just execute the test that is prepared and validated. It makes a lot of sense :) Just a nit, you should move it after this to make sure that the config is valid after a custom modifications have been applied. There is also testConfig.Validate here that we could probably reuse or remove - it doesn't make sense to have two "validate" components. In general, big +1 for extracting things out of ExecuteTest method. In my opinion, this method is too big. It does too many things and doesn't adhere to the single principle rule - it should just execute the test that is prepared and validated. @mm4tt gotcha, I'll first submit a PR to extract module compilation out of ExecuteTest @mm4tt gotcha, I'll first submit a PR to extract module compilation out of ExecuteTest @mm4tt Actually since there might be multiple tests (via test suite / config paths), would it be worth moving the compilation and validation logic even higher up eg. before https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/cmd/clusterloader.go#L317 and https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/cmd/clusterloader.go#L323. That way we can fail faster if any config is invalid. This would mean RunTest would only be left with basic checks, namespace deletion and test execution @mm4tt Actually since there might be multiple tests (via test suite / config paths), would it be worth moving the compilation and validation logic even higher up eg. 
before https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/cmd/clusterloader.go#L317 and https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/cmd/clusterloader.go#L323. That way we can fail faster if any config is invalid. This would mean RunTest would only be left with basic checks, namespace deletion and test execution Sounds good. Let me know if there is anything I can help you with. Thanks! Sounds good. Let me know if there is anything I can help you with. Thanks!
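To illustrate the ambiguity that motivates this validation, here is a hand-written sketch of a test config step that sets both measurements and phases in the same step, which is exactly what the proposed check should reject. The field names follow my reading of the clusterloader2 config format, and the measurement and tuning-set names are illustrative, so treat this as a sketch rather than a verified schema.

```yaml
steps:
- name: ambiguous-step
  # A step is meant to be either a measurement step or a phase step;
  # mixing both silently runs only the measurements today.
  measurements:
  - Identifier: WaitForRunningPods      # illustrative measurement
    Method: WaitForRunningPods
    Params:
      desiredPodCount: 10
      timeout: 5m
  phases:
  - namespaceRange:
      min: 1
      max: 1
    replicasPerNamespace: 10
    tuningSet: Uniform5qps              # assumed to be declared elsewhere in the config
    objectBundle:
    - basename: test-pod
      objectTemplatePath: pod.yaml      # assumed template file
```

A config-level Validate pass run before test execution could return an error for such a step instead of silently ignoring the phases.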
gharchive/issue
2020-12-15T12:12:40
2025-04-01T06:39:20.386482
{ "authors": [ "lyzs90", "marseel", "mm4tt", "wojtek-t" ], "repo": "kubernetes/perf-tests", "url": "https://github.com/kubernetes/perf-tests/issues/1636", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
609043572
Move Eviction Policy into Scheduling & Eviction section This is a Feature Request What would you like to be added Migrate the content from https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#eviction-policy into somewhere inside https://kubernetes.io/docs/concepts/scheduling-eviction/ Why is this needed https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#eviction-policy is not task documentation; instead, it's conceptual background. Comments /language en /kind cleanup Initially, a straightforward cut-and-paste would be fine, so I'll mark this as: /good-first-issue /assign @sftim do I need to move all the page content or only the eviction-policy section? Just the Eviction Policy section. Overall, https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#eviction-policy is a task page and can stay as a task page. @sftim Eviction Policy is a kind of a separate concept. I guess it should be an entirely new page in Scheduling and Eviction section, such that it adds an entry in accordion menu as well. entirely new page in Scheduling and Eviction section There's more than one way to do this, and that's one of the viable approaches. If anyone's ready to tackle this: feel free! /assign @sftim i am interested to do pick this up.. let me know if these changes are still required? This work still needs doing. The changes in https://github.com/kubernetes/website/pull/20724/commits/032a7ea337978697dda435e0950fc53d83113695 looked like a good starting point. /close @sftim Is this issue ready to close? Yep /close
gharchive/issue
2020-04-29T13:08:14
2025-04-01T06:39:20.472159
{ "authors": [ "ShivamGoyal1899", "gm7y8", "kbhawkey", "pranshu-s18", "sftim" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/issues/20649", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
635537855
Clarify the explanation when environment variables refer to each other This is a Feature Request What would you like to be added Why is this needed If the environment variables defined for a Pod depend on each other cyclically and reference one another, the resulting environment variable values will not be what we expect. At present, there is no detailed description of this in the official documentation; it is recommended to add a dedicated page describing it in detail. Comments https://github.com/kubernetes/website/pull/21553 https://github.com/kubernetes/kubernetes/issues/90466 /kind feature /assign
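A small sketch of the kind of surprise being described. The Pod name and image are illustrative; the comments describe the declaration-order expansion behaviour as I understand it, which is what a dedicated docs page would need to spell out.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-ref-demo            # hypothetical name
spec:
  containers:
  - name: demo
    image: busybox:1.36         # any image that can print its environment
    command: ["sh", "-c", "env && sleep 3600"]
    env:
    - name: GREETING
      value: "$(NAME) says hi"  # NAME is declared later, so this stays the literal string
    - name: NAME
      value: "$(GREETING)"      # GREETING is declared earlier, so this expands to
                                # "$(NAME) says hi" rather than anything resolved
```

Running env inside the container shows both values, which is usually not what the author intended.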
gharchive/issue
2020-06-09T15:30:10
2025-04-01T06:39:20.475432
{ "authors": [ "Cweiping", "sftim", "wawa0210" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/issues/21605", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1078381136
Remove docker from Node-pressure Eviction URL: https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/ File: /docs/concepts/scheduling-eviction/node-pressure-eviction.md Umbrella issue: #30771 Partially fixes: #30921 What to do On line 238, remove docker from the examples of system daemons. How to do it Refer to the Contributor Guide for instructions. Do the following: Fork k/website and switch to a new branch Remove docker from line 238 of the /docs/concepts/scheduling-eviction/node-pressure-eviction.md file Open a PR for the change and mention this issue to close it when the PR is merged /triage accepted /sig docs /help-wanted /language en I would like to work on this /assign @killerkc12 @killerkc12 do you intend to work on this? If not I'd like to reassign it to someone who will. No hard feelings either way! @celestehorgan I think this issue has been resolved in this PR here #30913 /close
gharchive/issue
2021-12-13T10:57:32
2025-04-01T06:39:20.480192
{ "authors": [ "PurneswarPrasad", "celestehorgan", "jihoon-seo", "killerkc12", "shannonxtreme" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/issues/30896", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1525484816
Pos OS spec description is not clear. While reading the documentation about the OS spec for Pods it states: Pod OS FEATURE STATE: Kubernetes v1.25 [stable] You should set the .spec.os.name field to either windows or linux to indicate the OS on which you want the pod to run. These two are the only operating systems supported for now by Kubernetes. In future, this list may be expanded. In Kubernetes v1.26, the value you set for this field has no effect on scheduling of the pods. Setting the .spec.os.name helps to identify the pod OS authoratitively and is used for validation. The kubelet refuses to run a Pod where you have specified a Pod OS, if this isn't the same as the operating system for the node where that kubelet is running. The Pod security standards also use this field to avoid enforcing policies that aren't relevant to that operating system. The second paragraph mentions 1.26 doesn't take this field into consideration to schedule the pod but later in the same paragraph it states kubelet will take this field in consideration to schedule the pod (assigning the pod to a matching OS node). Am I reading it incorrectly or the paragraph should read "In Kubernetes v1.25, the vaule you set for this field has no effect on scheduling" ? In Kubernetes v1.26, the value you set for this field has no effect on scheduling of the pods is correct, but perhaps misleading. How about changing the order round: Kubernetes v1.26 uses the value of .spec.os.name to validate Pods (the kubelet checks that the Pod OS matches the operating system that the kubelet is running on). If you create (or try to create a Pod) in a namespace that uses Pod security admission, the control plane also uses the value of .spec.os.name to work out what restrictions to verify and / or enforce. In Kubernetes v1.26, the value of .spec.os.name does not affect how the kube-scheduler picks a Pod to run a node. In any cluster where there is more than one operating system for nodes, you should set the kubernetes.io/os label correctly on each nodes, and define Pods with a nodeSelector than matches the correct operating system. If you set .spec.os.name anddo not specify a nodeSelector based on the operating system label, the scheduler assigns your pod to a node based on other criteria and may or may not succeed in picking a suitable node placement where the node OS is right for the containers in that Pod. ? We could link to a task page that explains how to assign Linux Pods to Linux nodes, and Windows Pods to Windows nodes (if we had such a task page). /sig node Hi @sftim by your message I understand that: Kubelet will fail to run a pod with a non matching .spec.os.name in the node that is running (as if a Linux pod is assigned to a Windows Node for example) Kube-Scheduler will not take into account .spec.os.name to define in which node to run a pod (unless a kubernetes.io/os label is specified) so it would potentially scheduling a Linux pod in a Windows node. Am I getting it right? if so I believe your message is great to be a replacement of the one in the current documentation. Just there is a missing space in this paragraph (getting a bit picky sorry 😂 ) If you set .spec.os.name anddo not specify a nodeSelector based on the operating system label, the scheduler assigns your pod to a node based on other criteria and may or may not succeed in picking a suitable node placement where the node OS is right for the containers in that Pod. This could warrant a PR since the fix is clear. 
/help /triage accepted /assign Also see https://github.com/kubernetes/website/issues/40825 /retitle Pod OS spec description is not clear /assign
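For readers landing here, a short sketch of the combination suggested above: set .spec.os.name for validation and Pod Security, and a nodeSelector on the kubernetes.io/os label to actually steer scheduling. The Pod name and image are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-only-app          # illustrative name
spec:
  os:
    name: linux                 # checked by the kubelet and Pod security admission; not used by the scheduler
  nodeSelector:
    kubernetes.io/os: linux     # this label match is what keeps the Pod off Windows nodes
  containers:
  - name: app
    image: nginx:1.25           # illustrative image
```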
gharchive/issue
2023-01-09T12:17:21
2025-04-01T06:39:20.491276
{ "authors": [ "javiermarasco", "madhumita-kundo", "mrgiles", "sftim", "tengqm" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/issues/38843", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
2694676859
[fr] Inactive interactive tutorial in "Deploying an application" page Tutorial description: Pour interagir avec le terminal, veuillez utiliser la version bureau / tablette. (In English: To interact with the terminal, please use the desktop / tablet version.) Though I am using my desktop. Page reported in issue (based on issue title): https://kubernetes.io/fr/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/ /language fr /retitle [fr] Inactive interactive tutorial in "Deploying an application" page /kind bug /area localization @polocto The Katacoda environment for the Kubernetes tutorial has been shut down. Refer to the announcement here. We have an umbrella issue, https://github.com/kubernetes/website/issues/41496, to remove tutorials that rely on Katacoda from all localized documents.
gharchive/issue
2024-11-26T13:26:35
2025-04-01T06:39:20.494993
{ "authors": [ "dipesh-rawat", "polocto" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/issues/48849", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
302570293
Issue with k8s.io/docs/concepts/storage/dynamic-provisioning/ The requirement is to have a new filesystem layout created on top of the provisioned volumes. When we create containers, we need to have, say, a folder named "license" where we store the license key, so my container will expect me to create a folder named "license". Details on this are not available anywhere. I guess it should be a feature request [ x ] Feature Request [ ] Bug Report Problem: The option to create a new file or directory under provisioned volumes is missing. Containers need this flexibility since the packaged container may require certain files and directories to be available in the host environment. Proposed Solution: The YAML file should support a mkdir and create-file option. Page to Update: https://kubernetes.io/... The hostPath and mountPath need to be the same... As of now, I am not able to create a directory. Hope I am not missing out on anything.
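Not part of the feature request itself, but for context, a rough sketch of how such a directory is often pre-created today with an init container; every name below is made up for illustration and the PVC is assumed to be dynamically provisioned.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: license-demo                    # hypothetical name
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-dynamic-pvc         # hypothetical, dynamically provisioned claim
  initContainers:
  - name: make-license-dir
    image: busybox:1.36
    command: ["sh", "-c", "mkdir -p /data/license"]   # create the folder the app expects
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: example.com/app:1.0          # hypothetical image expecting /data/license
    volumeMounts:
    - name: data
      mountPath: /data
```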
gharchive/issue
2018-03-06T05:37:27
2025-04-01T06:39:20.499063
{ "authors": [ "RajeshJeyapaul" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/issues/7652", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
354968681
Change RedSpread link Fix #10103 RedSpread was acquired by CoreOS. The site does not exist. Therefore, I changed the link to the GitHub repository. I signed the CLA. Preview https://deploy-preview-10124--kubernetes-io-master-staging.netlify.com/docs/setup/minikube/#design Nice catch! Thank you! /lgtm /approve
gharchive/pull-request
2018-08-29T02:14:28
2025-04-01T06:39:20.501228
{ "authors": [ "Bradamant3", "zembutsu" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/pull/10124", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
411292010
cluster-myfirst.html creating new cluster /check-cla Thanks for the PR @fauwazalijdpro !!!! Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks. Hey there! @fauwazalijdpro, looks like you haven't signed the CLA yet. Could I please have you do that? https://github.com/kubernetes/community/blob/master/CLA.md /close @fauwazalijdpro Thanks for your PR, but we need you to take a few additional steps. if you want to re-open it, or start a new one, please first sign the CLA. It's also not clear why the proposed change is needed. It's true that the topic is pretty generic, but it does also address Minikube specifically and if the issue is the tangle around starting the interactive tutorial vs starting with Minikube, changing the title only does not clear up any confusion.
gharchive/pull-request
2019-02-18T03:57:48
2025-04-01T06:39:20.507598
{ "authors": [ "Bradamant3", "Rajakavitha1", "fauwazalijdpro", "zparnold" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/pull/12680", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
413511845
Update the fine-parallel-processing-work-queue.md task file to remove $ and remove text not appropriate for the end user This PR addresses one task mentioned in https://github.com/kubernetes/website/issues/12740 which is [ ] https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/ remove the section about If you are working from the website source tree, you can go to the following directory and start a temporary Pod running Redis and a service so we can find it. End users don't work from the git repo /assign @steveperry-53 any help here with reviewing, please? reduced the scope to keep it isolated as per @zacharysarah comment on another PR /assign @zparnold /unassign @steveperry-53 /lgtm /approve
gharchive/pull-request
2019-02-22T17:52:30
2025-04-01T06:39:20.511815
{ "authors": [ "DanyC97", "zparnold" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/pull/12793", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
432699629
Update minikube.md It is minor, but I think it is more readable to move the comments out of the code blocks, since this is documentation, not code. This is how it looks currently here https://kubernetes.io/docs/setup/minikube/: @yzhong52 could you please sign the CLA? @DanyC97 Just signed. Sorry for the delay. /lgtm /approve
gharchive/pull-request
2019-04-12T18:52:48
2025-04-01T06:39:20.514187
{ "authors": [ "DanyC97", "tengqm", "yzhong52" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/pull/13803", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
448448330
Update node glossary page When reviewing the Spanish localization for this page #14360, we spotted minor issues with the content. This PR: Removes the services tooltip, as it points to the Service object while in this context it refers to Kubernetes processes and agents running on the nodes. Replaces Docker with the Container Runtime Interface (CRI), as Kubernetes supports other container runtimes, not only Docker. Adds the tooltip for each Kubernetes service: cri, kubelet, kube-proxy I just saw your PR #14317 that aims at the same thing and the conversation already started there, so we can close this PR. Thanks @sftim !!
gharchive/pull-request
2019-05-25T10:06:09
2025-04-01T06:39:20.516102
{ "authors": [ "raelga" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/pull/14523", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }