Column schema and value ranges:

| Column | Dtype | Range / classes |
| --- | --- | --- |
| id | string | length 7 to 7 |
| title | string | length 4 to 299 |
| selftext | string | length 0 to 12k |
| score | int64 | 0 to 2.18k |
| upvote_ratio | float64 | 0.06 to 1 |
| num_comments | int64 | 0 to 405 |
| created_utc | float64 | 1.74B to 1.76B |
| author | string | length 3 to 20 |
| permalink | string | length 29 to 82 |
| url | string | length 18 to 149 |
| is_self | bool | 2 classes |
| domain | string | 84 values |
| flair | string | 7 values |
| subreddit | string | 4 values |
| nsfw | bool | 1 class |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| awards | int64 | 0 to 0 |
| scraped_at | string (date) | 2025-09-26 10:20:07 to 2025-09-26 10:56:37 |
| comments | list | length 0 to 386 |
| comments_count | int64 | 0 to 386 |

Each record below lists its field values in this column order, one value per line; the comments column is serialized as a JSON array of comment objects.
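A minimal sketch of loading a dataset with this schema and reading one record, assuming it is published on the Hugging Face Hub; the repository ID used here is a placeholder, not the real dataset name:

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# "example/reddit-kubernetes-posts" is a placeholder repository ID.
from datetime import datetime, timezone

from datasets import load_dataset

ds = load_dataset("example/reddit-kubernetes-posts", split="train")

post = ds[0]  # one row per Reddit post
# created_utc is a Unix epoch value (float64 in the schema above)
created = datetime.fromtimestamp(post["created_utc"], tz=timezone.utc)
print(post["id"], post["title"], post["score"], created.isoformat())
print("comments scraped:", post["comments_count"])
```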
1m32nqz
Looking for a Lightweight Kubernetes Deployment Approach (Outside Our GitLab CI/CD)
Hi everyone! I'm looking for a new solution for my Kubernetes deployments, and maybe you can give me some ideas... We’re a software development company with several clients — most of them rely on us to manage their AWS infrastructure. In those cases, we have our full CI/CD integrated into our own GitLab, using its Kubernetes agents to trigger deployments every time there's a change in the config repos. The problem now is that a major client asked us for a time-limited project, and after 10 months we’ll need to hand over all the code *and* the deployment solution. So we don't want to integrate it into our GitLab. We'd prefer a solution that doesn't depend so much on our stack. I thought about using ArgoCD to run deployments from within the cluster… but I’m not fully convinced — it feels a bit overkill for this case. It's not that many microservices... but I'm trying to avoid having manual scripts that I create myself in Jenkins for ex. Any suggestions?
1
0.56
9
1752846110
tmp2810
/r/kubernetes/comments/1m32nqz/looking_for_a_lightweight_kubernetes_deployment/
https://www.reddit.com/r/kubernetes/comments/1m32nqz/looking_for_a_lightweight_kubernetes_deployment/
true
self.kubernetes
null
kubernetes
false
false
false
0
2025-09-26T10:56:20.535141
[ { "author": "minimalniemand", "awards": 0, "body": "GitOps is your friend. I prefer flux over Argo tbh\n\nIt’s not about scale it’s about imperative vs declarative. You want all your configs as code for auditability and the ability to roll back etc.", "created_utc": 1752847815, "id": "n3tirh1", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m32nqz/looking_for_a_lightweight_kubernetes_deployment/n3tirh1/", "post_id": "1m32nqz", "score": 9, "stickied": false }, { "author": "[deleted]", "awards": 0, "body": "[removed]", "created_utc": 1752885591, "id": "n3wzcq7", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m32nqz/looking_for_a_lightweight_kubernetes_deployment/n3wzcq7/", "post_id": "1m32nqz", "score": 3, "stickied": false }, { "author": "Zestyclose_Ad8420", "awards": 0, "body": "From a business side of things: why dont you just install gitlab in their infra and hand over the whole thing as is?\nNo new work neither procedures for you.\n\nIf you want to improve your own procedures and want to use this project to explore that's another approach.\n\n\n\n\n\nTechnically speaking there's a gazilion ways to do this, personally I find the function to choose which way is best depends a lot on the skillset you want to leverage (and have at your disposal) \n\n\nPeople here already mentioned a few, a less orthodox but fully viable option is to leverage ansible running in a gitlab runner and use kubernetes modules.\nThis is what I mean by there's a gazilion ways to do it and it depends a lot on what skills you have at your disposal. ", "created_utc": 1752920628, "id": "n3z117g", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m32nqz/looking_for_a_lightweight_kubernetes_deployment/n3z117g/", "post_id": "1m32nqz", "score": 2, "stickied": false }, { "author": "CaelFrost", "awards": 0, "body": "Kluctl.", "created_utc": 1753102244, "id": "n4bzjbz", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m32nqz/looking_for_a_lightweight_kubernetes_deployment/n4bzjbz/", "post_id": "1m32nqz", "score": 2, "stickied": false }, { "author": "aphelio", "awards": 0, "body": "ArgoCD is only for continuous delivery. I'm guessing your GitLab stuff just uses kubectl for deployments, which is a very basic CD approach. So in other words, you already have a simple, portable CD solution (kubectl) that doesn't depend on your business's internal platform. So even if you were to implement ArgoCD, you'd barely even change the problem.\n\nIt sounds like the real problem to solve is the CI. And for this, I recommend Tekton.", "created_utc": 1752854881, "id": "n3u7qwa", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m32nqz/looking_for_a_lightweight_kubernetes_deployment/n3u7qwa/", "post_id": "1m32nqz", "score": 1, "stickied": false }, { "author": "jameshearttech", "awards": 0, "body": "Without knowing more about the project, it's not possible to make a useful recommendation. All we know is that you have a project to build something for a customer in 10 months.\n\nWhat is the scope of the project?\nWhat is the customers existing stack?", "created_utc": 1752960794, "id": "n42a324", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m32nqz/looking_for_a_lightweight_kubernetes_deployment/n42a324/", "post_id": "1m32nqz", "score": 1, "stickied": false }, { "author": "tmp2810", "awards": 0, "body": "Completely agree... that's the direction we want to go. 
Right now we have too many \"manual things\" with `kubectl` that someone has to do, and there's not much control. Regarding other parts of the architecture like volumes, Redis, Rabbit, etc... do you also use Flux for that?", "created_utc": 1752848517, "id": "n3tl57u", "is_submitter": true, "parent_id": "n3tirh1", "permalink": "/r/kubernetes/comments/1m32nqz/looking_for_a_lightweight_kubernetes_deployment/n3tl57u/", "post_id": "1m32nqz", "score": 1, "stickied": false }, { "author": "minimalniemand", "awards": 0, "body": "Yes. Apart from the initial bootstrapping, everything is in gitops. \n\nWhen you have to move resources elsewhere you just need to bootstrap flux, configure the repo and flux does the rest. \n\nData still needs to be migrated but there are tools that can help with that, too (like Velero)", "created_utc": 1752850953, "id": "n3ttr2r", "is_submitter": false, "parent_id": "n3tl57u", "permalink": "/r/kubernetes/comments/1m32nqz/looking_for_a_lightweight_kubernetes_deployment/n3ttr2r/", "post_id": "1m32nqz", "score": 2, "stickied": false } ]
8
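Within each record, the comments field is a flat list: top-level comments carry `"parent_id": null` and replies point at their parent's id (in the record above, comment n3tl57u replies to n3tirh1). A minimal sketch of rebuilding the reply tree from such a list, assuming it has already been parsed into Python dicts:

```python
# Minimal sketch: turn a record's flat `comments` list (parsed JSON) into an
# indented reply tree using the parent_id references shown above.
from collections import defaultdict

def print_thread(comments):
    children = defaultdict(list)
    for c in comments:
        children[c["parent_id"]].append(c)  # parent_id is None for top-level comments

    def walk(parent_id=None, depth=0):
        for c in sorted(children[parent_id], key=lambda c: c["created_utc"]):
            print("  " * depth + f'{c["author"]} ({c["score"]}): {c["body"][:60]}')
            walk(c["id"], depth + 1)

    walk()
```

Applied to the record above, n3tl57u would print indented under n3tirh1, and n3ttr2r under n3tl57u.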
1m30qnm
DEMO: Create MCP servers from cobra.Command CLIs like Helm and Kubectl FAST
0
0.5
0
1752840941
njayp
/r/kubernetes/comments/1m30qnm/demo_create_mcp_servers_from_cobracommand_clis/
/r/golang/comments/1m30p3t/demo_create_mcp_servers_from_cobracommand_clis/
false
null
kubernetes
false
false
false
0
2025-09-26T10:56:21.677525
[]
0
1m2y9tk
Weekly: Share your victories thread
Got something working? Figure something out? Make progress that you are excited about? Share here!
3
1
1
1752832839
gctaylor
/r/kubernetes/comments/1m2y9tk/weekly_share_your_victories_thread/
https://www.reddit.com/r/kubernetes/comments/1m2y9tk/weekly_share_your_victories_thread/
true
self.kubernetes
Periodic
kubernetes
false
false
false
0
2025-09-26T10:56:22.844075
[ { "author": "NotAnAverageMan", "awards": 0, "body": "Released my open source package manager [Anemos](https://github.com/ohayocorp/anemos). It didn't get much traction on Reddit or HN, but i think it is very solid and has a bright future. 🤞", "created_utc": 1752840550, "id": "n3swp1b", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2y9tk/weekly_share_your_victories_thread/n3swp1b/", "post_id": "1m2y9tk", "score": 0, "stickied": false } ]
1
1m2x19h
What’s the most ridiculous reason your Kubernetes cluster broke — and how long did it take to find it?
Just today, I spent 2 hours chasing a “pod not starting” issue… only to realize someone had renamed a secret and forgot to update the reference 😮‍💨 It got me thinking — we’ve all had those **“WTF is even happening”** moments where: * Everything *looks* healthy, but nothing works * A YAML typo brings down half your microservices * `CrashLoopBackOff` hides a silent DNS failure * You spend hours debugging… only to fix it with one line 🙃 So I’m asking: >
135
0.94
95
1752828088
DevOps_Lead
/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/
https://www.reddit.com/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/
true
self.kubernetes
null
kubernetes
false
false
false
0
2025-09-26T10:56:23.961339
[ { "author": "MC101101", "awards": 0, "body": "Imagine posting a nice little share for a Friday and then all the comments are just lecturing you for how “couldn’t be me bro”", "created_utc": 1752828755, "id": "n3s8iur", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3s8iur/", "post_id": "1m2x19h", "score": 142, "stickied": false }, { "author": "totomz", "awards": 0, "body": "AWS EKS cluster with 90 nodes, coredns set as replicaset with 80 replicas, no anti-affinity rule. \nI don't know how, but 78 of 80 replicas were on the same node. Everything was up&running, nothing was working. \nAWS throttles dns requests by ip, since all coredns pods were in a single ec2 node, all dns traffic was being throttled...", "created_utc": 1752830177, "id": "n3sazbc", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sazbc/", "post_id": "1m2x19h", "score": 106, "stickied": false }, { "author": "yebyen", "awards": 0, "body": "So you think you can set requests and limits to positive effect, so you look for the most efficient way to do this. Vertical Pod Autoscaler has a recommending & updating mode, that sounds nice. It's got this feature called humanize-memory - I'm a human that sounds nice.\n\n\nIt produces numbers like 1.1Gi instead of 103991819472 - that's pretty nice.\n\n\nHey, wait a second, Headlamp is occasionally showing thousands of gigabytes of memory, when we actually have like 100 GB max. That's not very nice. What the hell is a millibytes? Oh, Headlamp didn't believe in Millibytes, so it just converts that number silently into bytes?\n\n\nHmm, I wonder what else is doing that?\n\n\nOh, it has infected the whole cluster now. I can't get a roll-up of memory metrics without seeing millibytes. It's on this crossplane-aws-family provider, I didn't install that... how did it get there? I'll just delete it...\n\n\nOh... I should not have done that. I should not have done that.....", "created_utc": 1752830385, "id": "n3sbca3", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sbca3/", "post_id": "1m2x19h", "score": 49, "stickied": false }, { "author": "bltsponge", "awards": 0, "body": "Etcd *really* doesn't like running on HDDs.", "created_utc": 1752842551, "id": "n3t26zm", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3t26zm/", "post_id": "1m2x19h", "score": 42, "stickied": false }, { "author": "CeeMX", "awards": 0, "body": "K3s single node cluster on prem at a client. At some point DNS stopped working on the whole host, which was caused by the client’s admin retired a Domain controller in the network without telling us.\n\nUpdated the DNS and called it a day, since on the host it worked again.\n\nDidn’t make the calculation with CoreDNS inside the cluster, which did not see this change and failed every dns resolution to external hosts after the cache expired. 
Was a quick fix by restarting CoreDNS, but at first I was very confused why something like that would just break.\n\n_It’s always DNS._", "created_utc": 1752845504, "id": "n3tb7c9", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3tb7c9/", "post_id": "1m2x19h", "score": 20, "stickied": false }, { "author": "CharlesGarfield", "awards": 0, "body": "In my homelab:\n\n\n- All managed via gitops\n- Gitops repo is hosted in Gitea, which is itself running on the cluster\n- Turned on auto-pruning for Gitea namespace\n\n\nThis one didn’t take too long to troubleshoot.", "created_utc": 1752850162, "id": "n3tqxdh", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3tqxdh/", "post_id": "1m2x19h", "score": 19, "stickied": false }, { "author": "till", "awards": 0, "body": "After a k8s upgrade network was broken on one node, which came down to Calico running with auto detection which interface to use to build the vxlan tunnel and it now detected the wrong one.\n\nLogs, etc. utterly useless (so much noise), calicoctl needed docker in some cases to produce output.\n\nFound the deviation in the iface config hours later (selected iface is shown briefly in logs when calico-node starts), set it to use the right interface and everything worked again.\n\nEven condensed everything in a ticket for calico, which was closed without resolution later.\n\nStellar experience! 😂", "created_utc": 1752842320, "id": "n3t1jep", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3t1jep/", "post_id": "1m2x19h", "score": 14, "stickied": false }, { "author": "conall88", "awards": 0, "body": "I've got a local testing setup using Vagrant, K3s, Virtualbox, and had overhauled a lot of it to automate some app deploys to make local repros low effort, and was wondering why i couldn't exec into pods, turns out the CNI was binding to the wrong network interface (en0) instead of my host-only network so I had to make some detection logic. oops.", "created_utc": 1752828800, "id": "n3s8lo3", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3s8lo3/", "post_id": "1m2x19h", "score": 13, "stickied": false }, { "author": "my_awesome_username", "awards": 0, "body": "Lost a dev cluster one, during our routine quarterly patching. We operate in a whitelist only environment, so there is a surricata firewall filtering everything.\n\nUpgraded linkerd, our monitoring stack, few other things. 
All of a sudden a bunch of apps were failing, just non stop TLS errors.\n\nIn the end it was the latest (then) version of go, tweaked how TLS 1.3 packets were created, which the firewall deemed were too long and therefore invalid.\nThat was a fun day chasing down", "created_utc": 1752841837, "id": "n3t06mb", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3t06mb/", "post_id": "1m2x19h", "score": 11, "stickied": false }, { "author": "kri3v", "awards": 0, "body": "—", "created_utc": 1752833509, "id": "n3sh06a", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sh06a/", "post_id": "1m2x19h", "score": 11, "stickied": false }, { "author": "Powerful-Internal953", "awards": 0, "body": "Not prod. But the guys broke the dev environment running on AKS by pushing recent application version that had spring boot version 3.5.\n\nNobody had a clue why the application didn't connect to the key vault. We had a managed identity setup for the cluster that handled the authentication which was beyond the scope of our application code. But somehow it didn't work. \n\nPeople created a Simple code that just connects to KV and it works. \n\nApparently we had a HTTP_PROXY for a couple of urls and the IMDS endpoint introduced part of msal4j wasn't part of it. There was no documentation whatsoever that covered this new endpoint that was burried in Azure documentation.\n\nClassic microsoft shenanigan I would say.\n\nNeedless to say we figured out in the first 5 minutes it was a problem with key vault connectivity. But there was no information in the logs nor the documentation so it took a painful weekend to go through the azure sdk code base to find the issue.", "created_utc": 1752832760, "id": "n3sflor", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sflor/", "post_id": "1m2x19h", "score": 9, "stickied": false }, { "author": "SomeGuyNamedPaul", "awards": 0, "body": "\"kube proxy? We don't need that.\" *delete*", "created_utc": 1752882474, "id": "n3wqq7w", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3wqq7w/", "post_id": "1m2x19h", "score": 3, "stickied": false }, { "author": "KubeKontrol", "awards": 0, "body": "Certificates expired! Without kubeadm the situation is harder to solve...", "created_utc": 1753188410, "id": "n4ilibt", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n4ilibt/", "post_id": "1m2x19h", "score": 3, "stickied": false }, { "author": "small_e", "awards": 0, "body": "Isn’t that logging on the pod events? 
", "created_utc": 1752828882, "id": "n3s8qsv", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3s8qsv/", "post_id": "1m2x19h", "score": 13, "stickied": false }, { "author": "Former_Machine5978", "awards": 0, "body": "Spent hours debugging a port clash error, where the pod ran just fine and inherited it's config from a config map, but as soon as we made it a service it ignored the config and started trying to run both servers on the pod on the same port.\n\nIt turns out that the server was using viper for config, which has a built in environment variable override for the port config, which just so happened to be exactly the same environment variable as kube creates under the hood when you create a service.", "created_utc": 1752849845, "id": "n3tpt0n", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3tpt0n/", "post_id": "1m2x19h", "score": 3, "stickied": false }, { "author": "Gerkibus", "awards": 0, "body": "When having some networking issues on a single node and reporting it in a trouble ticket, the datacenter seemed to let a newbie handle things ... they rebooted EVERY SINGLE NODE at the exact same time (I think it was around 20 at the time). Caused so much chaos as things were coming back online and pods were bouncing around all over the place that it was easier to just nuke and re-deploy the entire cluster.\n\nThat was not a fun day that day.", "created_utc": 1752861136, "id": "n3uu1mg", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3uu1mg/", "post_id": "1m2x19h", "score": 3, "stickied": false }, { "author": "total_tea", "awards": 0, "body": "A pod worked fine in dev but moving it to prod would fail intermittently. Took a day and it turned out DNS was failing due to certain DNS lookups failing. \n\nThe DNS lookups where failing as certain DNS lookups returned a large amount of DNS entries and the DNS protocol switches over to TCP rather than the usual UDP. \n\nTurns out the library in the OS level libraries in the container had a bug in them. \n\nIt was ridiculous because who expects a container cant do a DNS lookup correctly.", "created_utc": 1752868819, "id": "n3vk5qi", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3vk5qi/", "post_id": "1m2x19h", "score": 3, "stickied": false }, { "author": "coderanger", "awards": 0, "body": "A mutating webhook for Pods built against an older client-go silently dropping the sidecar RestartPolicy resulting in baffling validation errors. About 6 hours. Twice.", "created_utc": 1752832589, "id": "n3sfa6r", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sfa6r/", "post_id": "1m2x19h", "score": 5, "stickied": false }, { "author": "popcorn-03", "awards": 0, "body": "It didnt just destroy it self i needed to restart longhorn because it descieded to just quit on me and i accendentaly deleted the namespace with it as i used a Helm Chart custom resource for it with namespace on top. I thought no worys i habe backups everything fine. But the Namespace just didnt want to delete itself so ist was stuck in termination even after removing content and finalizers it just didnt quit. 
Made me reconsider my homelab needs and i quit kubernetes usage in my homelab.", "created_utc": 1752852205, "id": "n3ty7do", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3ty7do/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "Neat_System_7253", "awards": 0, "body": "ha yep, totally been there. we hear this kinda thing all the time..everything’s green, tests are passing, cluster says it’s healthy… and yet nothing works. maybe DNS is silently failing, or someone changed a secret and didn’t update a reference, or a sidecar’s crashing but not loud enough to trigger anything. it’s maddening.\n\nthat’s actually a big reason teams use testkube (yes I work there). you can run tests *inside* your kubernetes cluster for smoke tests, load tests, sanity checks, whatever and it helps you catch stuff early. like, before it hits staging or worse, production. we’ve seen teams catch broken health checks, messed up ingress configs, weird networking issues, the kind of stuff that takes hours to debug after the fact just by having testkube wired into their workflows.\n\nit’s kinda like giving your cluster its own “wtf detector.” honestly saves people from a lot of late-night panic.", "created_utc": 1752857744, "id": "n3uhsw5", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3uhsw5/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "utunga", "awards": 0, "body": "Ok so.. I was going through setting up a new cluster. One of the earlier things I did was get the nvidia gpu-operator thingy going. Relatively easy install. But I was worried that things 'later' in my install process (mistake! I wasn't thinking kubernetes style) would try to install it again or muck it (specifically the install for a thing called kubeflow) so anyway I got it into my pretty little head to whack this label on my GPUs nodes 'nvidia.com/gpu.deploy.operands=false'\n\nMuch later on I'm like oh dang gpu-operator not working something must've broken let me try a reinstall. maybe I need to redo my containers config blah blah blah.. was tearing my hair out for literally a day and a half trying to figure this out. finally I resort to asking for help from the 'wise person who knows this stuff' and in the process of explaining notice my little note to self about adding that label.\n\nDo'h! Literally added a label that basically says 'dont install the operator on these nodes' and then spent a day and a half trying to work out why the operator wouldn't install ! \n\nArgh. Once I removed that label .. everything started work sweet again.\n\nSo stupid lol 😂", "created_utc": 1752909501, "id": "n3yhcbi", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3yhcbi/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "user26e8qqe", "awards": 0, "body": "Six months after moving from Ubuntu 22 to 24, an unattended upgrade caused the systemd network restart, which dismissed AWS CNI outbound routing rules on ~15% of the nodes across all production regions. Everything looked healthy, but nothing worked. 
\n\nFor fix see https://github.com/kubernetes/kops/issues/17433.\n\nHope it saves you from some trouble!", "created_utc": 1752920653, "id": "n3z12x7", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3z12x7/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "Otherwise_Tailor6342", "awards": 0, "body": "Oh man, my team, along with AWS support spent 36 hrs trying to figure out why token refreshes in apps deployed on our cluster were erroring and causing apps to crash…\n\nturns out that way back when security team insisted that we only pull time from our corporate time servers. Security team then migrated those time servers to a new data center… changed IPs and never told us. Time drift on some of our nodes was over 45 mins caused all kinds of weird stuff!\n\nLesson learned… always setup monitors for NTP Time Drift", "created_utc": 1753063142, "id": "n49pnp5", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n49pnp5/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "Patient_Suspect2358", "awards": 0, "body": "Haha, totally relatable! Amazing how the smallest changes can cause the biggest headaches", "created_utc": 1753125446, "id": "n4e72vj", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n4e72vj/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "buckypimpin", "awards": 0, "body": "how does a person who manages a reasonable sized cluster not first check the statuses a misbehaving pod is throwing\n\nor have tools (like argocd) show the warning/errors immediately.\n\nan inoccrect secret reference fires all sorts of alarms how did you miss all those?", "created_utc": 1752828669, "id": "n3s8deo", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3s8deo/", "post_id": "1m2x19h", "score": 10, "stickied": false }, { "author": "_O_I_O_", "awards": 0, "body": "That’s when you realize the importance of restricting access and automating the process hehe. \n.\n.\nTGIF", "created_utc": 1752861719, "id": "n3uw34o", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3uw34o/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "PlexingtonSteel", "awards": 0, "body": "Didn't really broke a running cluster but wasn't able to bring cilium cluster to live for a long time.\nFirst node and second node were working fine. 
As soon as I joined the third node I got unexplainable network failures (inconsistent network timeouts, coreDNS not reachable, etc.).\n\nFound out that the combination of ciliums UDP encapsulation, vmware virtualization and our linux distro prevented any cluster internal network connectivity.\n\nSince then I need to disable the checksum offload calculation feature via network settings on every k8s VM to make it work.", "created_utc": 1752876418, "id": "n3w8zm3", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3w8zm3/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "awesomeplenty", "awards": 0, "body": "Not really broken but we had 2 clusters running at the same time as active active in case one breaks down, however for the life of us we couldn't figure out why one cluster's pods were starting up way faster than the other consistently, it wasn't a huge difference like one cluster starts in 20 seconds and the other starts at 40 seconds. After weeks of investigation and Aws support tickets, we found out there was a variable to load all env vars on one cluster and the other did not, somehow we didn't even specify this variable on both clusters but only one has it enabled. It's called the enableservielinks. Thanks kubernetes for the hidden feature.", "created_utc": 1752898730, "id": "n3xw81i", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3xw81i/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "-Zb17-", "awards": 0, "body": "I accidentally updated the EKS AWS Auth ConfigMap with malformed values and broke any access to the k8s api relying on IAM authentication (IRSA, all of users’ access, etc.). Turns out, kubelet is also in that list cause all the Nodes just started showing up as NotReady cause they were all failing to authenticate. \n\nLuckily, I had ArgoCD deployed to that cluster and managing all the workloads with vanilla ServiceAccount credentials. So was able to SSH into the EC2 and then into the container to grab them and fix the ConfigMap. Finding the Node was interesting, too. \n\nWas hectic as hell! 
Took", "created_utc": 1752902859, "id": "n3y4xhe", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3y4xhe/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "CarIcy6146", "awards": 0, "body": "How did you not spot this in the pod logs in like 5 min?", "created_utc": 1752934022, "id": "n3zxmtx", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3zxmtx/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "Anantabanana", "awards": 0, "body": "Had a weird one once, with nginx ingress controllers.\nThey have geoip2 enabled and it needs a maxmind key to be able to download databases.\n\nSymptoms were just that in AWS, all nodes connected to the ELB for the ingress were reporting unhealthy.\n\nFound that the ingress, despite having not changed in months, started failing to start and stuck on a restart loop.\n\nTurns out those maxmind keys now have a maximum download limit, and nxing was failing to download the databases, then switched off geoip2.\n\nThe catch is that the nginx log included geoip2 variables (now not found) and failed to start.\n\nNot the most straight forward thing to troubleshoot when all your ingresses are unresponsive.", "created_utc": 1752999880, "id": "n44unat", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n44unat/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "r1z4bb451", "awards": 0, "body": "I am scratching my head.\n\nDon't knows what creeps in when I install CNI or may be it's something in there before CNI. Or my VMs were created with insufficient resources.\n\nI am using latest version of OS, VirtualBox, Kubernetes, and CNI.\n\nThings were still ok when I was using Windows 10 on L0 but Ubuntu 24 LTS has not given me a stable cluster as yes. I ditched Windows 10 on L0 due to frequent BSODs.\n\nNow thinking of trying with Debian 12 on L0.\n\nAny clue, please.", "created_utc": 1753000928, "id": "n44wf9d", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n44wf9d/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "Hot-Entrepreneur2934", "awards": 0, "body": "One of our services wasn't autoscaling. We pushed config every way we would think of, but our cluster was not updating those values. We even manually updated the values but they reverted as part of the next deploy.\n\nThen we realized that the kubernettes file in the repo that we were changing and pushing was being overwritten by a script at deployment time...", "created_utc": 1753131910, "id": "n4etopv", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n4etopv/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "ThatOneGuy4321", "awards": 0, "body": "When I was learning Kubernetes and trying to set up Traefik as an ingress controller, I got stuck and spent an embarrassing number of hours trying to use Traefik to manage certificates on a persistent volume claim. I would get a \"Permission denied\" error in my initContainer no matter what settings I used and it nearly drove me mad. I gave up trying to move my services to k8s for over a year because of it. 
\n\nEventually I figured out that my cloud provider (digital ocean) doesn't support the proper permissions on volume claims that Traefik requires to store certs, and I'd been working on a dead end the whole time. Felt pretty dumb after that. Used cert-manager instead and it worked fine.", "created_utc": 1753146564, "id": "n4g2ecc", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n4g2ecc/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "waitingforcracks", "awards": 0, "body": "Most common issue I have faced and temporarily borked cluster is with validating or mutating webhook and the service/deployment serving the hooks becoming 503. This problem gets exacerbated when you have auto sync enabled via ArgoCD, which immediately reapplies the hooks if you try to delete them for get stuff flowing again.\n\nImagine this\n\n1. Kyverno broke\n2. Kyverno is deployed via ArgoCD and is set to Autosync\n3. ArgoCD UI (argo server) also broke\n 1. But ArgoCD controller is still running hence its doing sync\n 2. ArgoCD has admin login disabled and only login via SSO\n4. Trying to disable argocd auto sync via kubectl edit not working, webhook block\n5. Trying to scale down scale down argocd controller, blocked by webhoook\n\nAlmost any action that we tried to take to delete the webhooks and get back kubectl functionality was blocked.\n\n \nWe did finally manage to unlock the cluster but I'll only tell you how once you give me some suggestions how I could have unblocked it. I'll tell you if we tried that or didn't cross my mind.", "created_utc": 1752908530, "id": "n3yflfi", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3yflfi/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "Ok-Lavishness5655", "awards": 0, "body": "Not managing your Kubernetes trough Ansible or Terraform?", "created_utc": 1752828415, "id": "n3s7xal", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3s7xal/", "post_id": "1m2x19h", "score": -14, "stickied": false }, { "author": "Fruloops", "awards": 0, "body": "Peak reddit", "created_utc": 1752828936, "id": "n3s8u3f", "is_submitter": false, "parent_id": "n3s8iur", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3s8u3f/", "post_id": "1m2x19h", "score": 47, "stickied": false }, { "author": "kri3v", "awards": 0, "body": "Why do you need 80 coredns replicas? This is crazy\n\nFor the sake of comparison we have a couple of 60 nodes clusters with 3 coredns pods, no nodelocalcache, aws, not even close to hit throttling", "created_utc": 1752833281, "id": "n3sgkq8", "is_submitter": false, "parent_id": "n3sazbc", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sgkq8/", "post_id": "1m2x19h", "score": 39, "stickied": false }, { "author": "smarzzz", "awards": 0, "body": "That’s the moment nodelocalcache becomes a necessity.\nI always enjoy DNS issues on k8s. 
With ndots5 it has its own scaling issues..!", "created_utc": 1752832340, "id": "n3setfp", "is_submitter": false, "parent_id": "n3sazbc", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3setfp/", "post_id": "1m2x19h", "score": 13, "stickied": false }, { "author": "BrunkerQueen", "awards": 0, "body": "I don't know what's craziest here, 80 coredns replicas or that AWS runs stateful tracking on your internal network.", "created_utc": 1752843268, "id": "n3t4amk", "is_submitter": false, "parent_id": "n3sazbc", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3t4amk/", "post_id": "1m2x19h", "score": 8, "stickied": false }, { "author": "Le_Vagabond", "awards": 0, "body": "https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/", "created_utc": 1752850777, "id": "n3tt4tg", "is_submitter": false, "parent_id": "n3sazbc", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3tt4tg/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "bwrca", "awards": 0, "body": "Read this in Hagrid's voice", "created_utc": 1752831627, "id": "n3sdi89", "is_submitter": false, "parent_id": "n3sbca3", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sdi89/", "post_id": "1m2x19h", "score": 11, "stickied": false }, { "author": "gorkish", "awards": 0, "body": "I don’t believe in millibytes either", "created_utc": 1752837285, "id": "n3soriv", "is_submitter": false, "parent_id": "n3sbca3", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3soriv/", "post_id": "1m2x19h", "score": 10, "stickied": false }, { "author": "calibrono", "awards": 0, "body": "Next homelab project - run etcd on a raid of floppies.", "created_utc": 1752861206, "id": "n3uuai9", "is_submitter": false, "parent_id": "n3t26zm", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3uuai9/", "post_id": "1m2x19h", "score": 16, "stickied": false }, { "author": "drsupermrcool", "awards": 0, "body": "Yeah it gives me ptsd from my ex - \"If I don't hear from you in 100ms I know you're down at her place\"", "created_utc": 1752858656, "id": "n3ul3hr", "is_submitter": false, "parent_id": "n3t26zm", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3ul3hr/", "post_id": "1m2x19h", "score": 12, "stickied": false }, { "author": "Think_Barracuda6578", "awards": 0, "body": "Yeah. Throw in some applications that use the etcd as a fucking database for storing their CRs while it could be just an object on some pvc, like wtf bro . Leave my etcd alone !", "created_utc": 1752866735, "id": "n3vd58i", "is_submitter": false, "parent_id": "n3t26zm", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3vd58i/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "Think_Barracuda6578", "awards": 0, "body": "Also. And yeah , you can hate me for this, what if… what if kubectl delete node contolrplane will actually also remove that member from the etcd cluster ? 
I know fucking wild ideas", "created_utc": 1752867049, "id": "n3ve77m", "is_submitter": false, "parent_id": "n3t26zm", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3ve77m/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "till", "awards": 0, "body": "I totally forgot about my etcd ptsd. I really love kine (etcd shim with support for sql databases).", "created_utc": 1752880991, "id": "n3wmel4", "is_submitter": false, "parent_id": "n3t26zm", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3wmel4/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "[deleted]", "awards": 0, "body": "[removed]", "created_utc": 1752937754, "id": "n409e1z", "is_submitter": false, "parent_id": "n3tb7c9", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n409e1z/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "PlexingtonSteel", "awards": 0, "body": "We encountered that problem a couple of times. It was maddening. Spent a couple hours finding it the first time.\n\nI even had to fix the kubernetes: internalIP setting into a kyverno rule because RKE updates reseted the CNI settings without notice (now there is a small note when updating).\n\nI even crawled into a rabbit hole of tcpdump into net namespaces. Found out that calico wasn't even trying to use the wrong interface. The traffic just didn't left the correct network interface. No indication why not.\n\nAs a result we avoid calico completely and switched to cilium for every new cluster.", "created_utc": 1752875912, "id": "n3w7g2y", "is_submitter": false, "parent_id": "n3t1jep", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3w7g2y/", "post_id": "1m2x19h", "score": 4, "stickied": false }, { "author": "Le_Vagabond", "awards": 0, "body": "https://addons.mozilla.org/en-US/firefox/addon/em-dash-detector/", "created_utc": 1752850931, "id": "n3ttobq", "is_submitter": false, "parent_id": "n3sh06a", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3ttobq/", "post_id": "1m2x19h", "score": 6, "stickied": false }, { "author": "Powerful-Internal953", "awards": 0, "body": "I like how everyone understood what the problem was. Also how does your IDE not detect it?", "created_utc": 1752895692, "id": "n3xp9u1", "is_submitter": false, "parent_id": "n3sh06a", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3xp9u1/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "jack_of-some-trades", "awards": 0, "body": "Oi, I literally did that yesterday. Deleted the self managed kube-proxy thinking eks would take over. Eks did not. The one addon I was upgrading at the same time is what failed first. So I was looking in the wrong place for a while. 
\nReading more on it, I'm not sure I want AWS managing those addons.", "created_utc": 1752944223, "id": "n40u56h", "is_submitter": false, "parent_id": "n3wqq7w", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n40u56h/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "kri3v", "awards": 0, "body": "Yep, this thread is a low effort LLM generated post", "created_utc": 1752834920, "id": "n3sjpxb", "is_submitter": false, "parent_id": "n3s8qsv", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sjpxb/", "post_id": "1m2x19h", "score": 13, "stickied": false }, { "author": "CarIcy6146", "awards": 0, "body": "Right? This has burned a coworker twice now and it takes all of a few minutes for me to find", "created_utc": 1752934303, "id": "n3zyhi0", "is_submitter": false, "parent_id": "n3s8qsv", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3zyhi0/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "kri3v", "awards": 0, "body": "For real. This feels like a low effort llm generated post\n\nA kubectl events will instantly tell you whats wrong\n\nThe em dashes — are a clear tell", "created_utc": 1752833453, "id": "n3sgwf9", "is_submitter": false, "parent_id": "n3s8deo", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sgwf9/", "post_id": "1m2x19h", "score": 13, "stickied": false }, { "author": "Mr_Dvdo", "awards": 0, "body": "Time to start moving over to Access Entries. 🙃", "created_utc": 1752904248, "id": "n3y7m6x", "is_submitter": false, "parent_id": "n3y4xhe", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3y7m6x/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "DevOps_Lead", "awards": 0, "body": "I faced something similar, but I was using Docker Compose", "created_utc": 1753156277, "id": "n4gshuu", "is_submitter": true, "parent_id": "n4g2ecc", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n4gshuu/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "Eulerious", "awards": 0, "body": "Please tell me you don't deploy resources to Kubernetes with Ansible or Terraform...", "created_utc": 1752829042, "id": "n3s90wp", "is_submitter": false, "parent_id": "n3s7xal", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3s90wp/", "post_id": "1m2x19h", "score": 14, "stickied": false }, { "author": "Local-Cartoonist3723", "awards": 0, "body": "Redditoverflow vibes", "created_utc": 1752829031, "id": "n3s907a", "is_submitter": false, "parent_id": "n3s8u3f", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3s907a/", "post_id": "1m2x19h", "score": 18, "stickied": false }, { "author": "MC101101", "awards": 0, "body": "Haha right ??", "created_utc": 1752829461, "id": "n3s9qws", "is_submitter": false, "parent_id": "n3s8u3f", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3s9qws/", "post_id": "1m2x19h", "score": 3, "stickied": false }, { "author": "BrunkerQueen", "awards": 0, "body": "He's LARPing rootdns infrastructure :p", "created_utc": 1752833519, "id": "n3sh0t7", "is_submitter": false, "parent_id": "n3sgkq8", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sh0t7/", "post_id": 
"1m2x19h", "score": 36, "stickied": false }, { "author": "totomz", "awards": 0, "body": "the coredns replicas are scaled accordingly to the cluster, to spread the requests across the nodes, but in that case it was misconfigured", "created_utc": 1752840985, "id": "n3sxuo8", "is_submitter": false, "parent_id": "n3sgkq8", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sxuo8/", "post_id": "1m2x19h", "score": 5, "stickied": false }, { "author": "totomz", "awards": 0, "body": "I think the 80 replicas were because of nodelocal...but yeah, we got at least 3 big incident due to the dns & ndots", "created_utc": 1752840907, "id": "n3sxn50", "is_submitter": false, "parent_id": "n3setfp", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sxn50/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "TJonesyNinja", "awards": 0, "body": "The stateful tracking here is on AWS vpc dns servers/proxies not tracking the network itself. Pretty standard throttling behavior for a service with uptime guarantees. I do agree the 80 replicas is extremely excessive, if you aren’t doing a daemonset for node local dns.", "created_utc": 1752865538, "id": "n3v955i", "is_submitter": false, "parent_id": "n3t4amk", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3v955i/", "post_id": "1m2x19h", "score": 4, "stickied": false }, { "author": "yebyen", "awards": 0, "body": "Because it's a nonsense unit, but the Kubernetes API believes in Millibytes. And it will fuck up your shit, if you don't pay attention. You know who else doesn't believe in Millibytes? Karpenter, that's who. Yeah, I was loaded up on memory focused instances because Karpenter too thought \"that's a nonsense unit, must mean bytes\"", "created_utc": 1752838898, "id": "n3ssir1", "is_submitter": false, "parent_id": "n3soriv", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3ssir1/", "post_id": "1m2x19h", "score": 11, "stickied": false }, { "author": "bltsponge", "awards": 0, "body": "\"if you don't respond in 100ms I guess I'll just kill myself\" 🫩", "created_utc": 1752862192, "id": "n3uxpw3", "is_submitter": false, "parent_id": "n3ul3hr", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3uxpw3/", "post_id": "1m2x19h", "score": 11, "stickied": false }, { "author": "SyanticRaven", "awards": 0, "body": "I am honestly about to build a production multitenant project with either k3 or rke2 (honestly I'm thinking rke2 but not settled yet).", "created_utc": 1752938862, "id": "n40cz77", "is_submitter": false, "parent_id": "n409e1z", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n40cz77/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "till", "awards": 0, "body": "Is the tooling with Cillium any better? Cillium looks amazing (I am a big fan of ebpf) but I don’t really have prod experience or what to do when things don’t work.\n\nWhen we started, calico seemed more stable. Also the recent acquisition made me think if I really wanted to go down this route.\n\nI think Calico’s response just struck me as odd. 
I even had someone respond in the beginning, but no one offered real insights into how their vxlan worked and then it was closed by one of their founders - “I thought this was done”.\n\nAlso generally not sure what the deal is with either of these CNIs in regard to enterprise v oss.\n\nI’ve also had fun with kube-proxy - iptables v nftables etc.. Wasn’t great either and took a day to troubleshoot but various oss projects (k0s, kube-proxy) rallied and helped.", "created_utc": 1752880792, "id": "n3wltek", "is_submitter": false, "parent_id": "n3w7g2y", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3wltek/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "kri3v", "awards": 0, "body": "Thanks!", "created_utc": 1752852257, "id": "n3tye34", "is_submitter": false, "parent_id": "n3ttobq", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3tye34/", "post_id": "1m2x19h", "score": 4, "stickied": false }, { "author": "throwawayPzaFm", "awards": 0, "body": "The cool thing about Reddit is that despite this being a crappy AI post I still learned a lot from the comments.", "created_utc": 1752910056, "id": "n3yic07", "is_submitter": false, "parent_id": "n3sgwf9", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3yic07/", "post_id": "1m2x19h", "score": 4, "stickied": false }, { "author": "jack_of-some-trades", "awards": 0, "body": "We use some terraform and some straight-up kubectl apply in ci jobs. It was that way when I started, and not enough resources to move to something better.", "created_utc": 1752944493, "id": "n40v0kr", "is_submitter": false, "parent_id": "n3s90wp", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n40v0kr/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "mvaaam", "awards": 0, "body": "That is a thing that people do though. It sucks to be the one to untangle it too", "created_utc": 1752829495, "id": "n3s9t2t", "is_submitter": false, "parent_id": "n3s90wp", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3s9t2t/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "Ok-Lavishness5655", "awards": 0, "body": "Why not? What tools you using?", "created_utc": 1752829432, "id": "n3s9p40", "is_submitter": false, "parent_id": "n3s90wp", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3s9p40/", "post_id": "1m2x19h", "score": 0, "stickied": false }, { "author": "vqrs", "awards": 0, "body": "What's the problem with deploying resources with Terraform?", "created_utc": 1752832169, "id": "n3sehw3", "is_submitter": false, "parent_id": "n3s90wp", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sehw3/", "post_id": "1m2x19h", "score": 0, "stickied": false }, { "author": "loogal", "awards": 0, "body": "I hate that I know exactly what this means despite having never seen it before", "created_utc": 1752832282, "id": "n3sepgx", "is_submitter": false, "parent_id": "n3s907a", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sepgx/", "post_id": "1m2x19h", "score": 8, "stickied": false }, { "author": "waitingforcracks", "awards": 0, "body": "You should probably be running it as DaemonSet then. If you have 80 pods for 90 nodes, then another 10 pods will be meh. 
\nOn the other hand, 90 nodes should definitely not have \\~80 pods, more like 4-5 pods", "created_utc": 1752863252, "id": "n3v1c9g", "is_submitter": false, "parent_id": "n3sxuo8", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3v1c9g/", "post_id": "1m2x19h", "score": 12, "stickied": false }, { "author": "throwawayPzaFm", "awards": 0, "body": "> spread the requests across the nodes\n\nUsing a replicaset for that leads to unpredictable behaviour. DaemonSet.", "created_utc": 1752871751, "id": "n3vu4th", "is_submitter": false, "parent_id": "n3sxuo8", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3vu4th/", "post_id": "1m2x19h", "score": 4, "stickied": false }, { "author": "SyanticRaven", "awards": 0, "body": "I had found this recently with a new client - last team had hit the aws vpc throttle and decided the easiest quick win was each node must have a coredns instance. \n\nWe moved then from 120 coredns isntances to 6 with local dns cache. The main problem is they had burst workloads. Would go from 10 nodes to 1200 in a 20 minute window. \n\nDidnt help they also seemed to set up a prioritised spot for use in multi-hour non disruptable workflows.", "created_utc": 1752938607, "id": "n40c5g0", "is_submitter": false, "parent_id": "n3sxuo8", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n40c5g0/", "post_id": "1m2x19h", "score": 3, "stickied": false }, { "author": "smarzzz", "awards": 0, "body": "Nodelocal is a daemonset", "created_utc": 1752841426, "id": "n3sz1zt", "is_submitter": false, "parent_id": "n3sxn50", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sz1zt/", "post_id": "1m2x19h", "score": 4, "stickied": false }, { "author": "gorkish", "awards": 0, "body": "I understand your desire to reiterate your frustration, though I assure you that it was not lost on me. I have this … gripe with an ambiguity in the PDF specification that caused great pain when different vendors handled it differently. Despite my effort to find what was actually intended and resolve the error in the spec, all I managed to do was get all the major vendors to handle it the same… the standard is still messed up though. Oh well.", "created_utc": 1753327031, "id": "n4u47wi", "is_submitter": false, "parent_id": "n3ssir1", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n4u47wi/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "BrunkerQueen", "awards": 0, "body": "You can disable more features in K3s than in RKE2 which is nice, I'd use the embedded etcd, I've had weird issues with SQLite DB growing because of stuck nonexistent leases. ", "created_utc": 1753790526, "id": "n5s19vr", "is_submitter": false, "parent_id": "n40cz77", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n5s19vr/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "PlexingtonSteel", "awards": 0, "body": "I would say cilium is a bit simpler and the documention is more intuitive for me. Calicos documentation sometimes feels like a jungle. You always have to make sure you are in the right section for onprem docs. It switches easily between onprem and cloud docs without notice. 
And the feature set between these two is a fair bit different.\n\nThe components in case of cilium are only one operator and a single daemonset, plus envoy ds if enabled inside the kube system ns. Calico is a bit more complex with multiple namespaces and different cat related crds.\n\nStability wise we had no complaint with either.\n\nFeature wise: cilium has some great features on paper that can replace many other components, like metallb, ingress, api gateway. But for our environment these integrated features always turned out to be not sufficient (only one ingress / gatewayclass, way less configurable loadbalancer and ingress controller). So we could't replace these parts with cilium.\n\nFor enterprise vs. oss: cilium for example has a great high available egress gateway feature in the enterprise edition, but the pricing, at least for on prem, ist beyond reasonable for a simple kubernetes network driver…\n\nCalico just deploys a deployment as an egress gateway which seems very crude.\n\nCalico has a bit of an advantage in case of ip address management for workloads. You can fine tune that stuff a bit more with calico.\n\nCilium networkpolicies are a bit more capable. For example dns based l7 policies.", "created_utc": 1752895818, "id": "n3xpk89", "is_submitter": false, "parent_id": "n3wltek", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3xpk89/", "post_id": "1m2x19h", "score": 3, "stickied": false }, { "author": "smarzzz", "awards": 0, "body": "ArgoCD", "created_utc": 1752832409, "id": "n3sey1d", "is_submitter": false, "parent_id": "n3s9p40", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sey1d/", "post_id": "1m2x19h", "score": 9, "stickied": false }, { "author": "takeyouraxeandhack", "awards": 0, "body": "...helm", "created_utc": 1752829891, "id": "n3sahk0", "is_submitter": false, "parent_id": "n3s9p40", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sahk0/", "post_id": "1m2x19h", "score": -2, "stickied": false }, { "author": "ok_if_you_say_so", "awards": 0, "body": "I have done this. It's not good. In my experience, the terraform kubernetes providers are for simple stuff like \"create an azure service principal and then stuff a client secret into a kubernetes Secret\". But trying to manage the entire lifecycle of your helm charts or manifests through terraform is not good. The two methodologies just don't jive well together.\n\nI can't point to a single clear \"this is why you should never do it\" but after many years of experience using both tools, I can say for sure I will never try to manage k8s apps via terraform again. It just creates a lot of extra churn and funky behavior. I think largely because both terraform and kubernetes are a \"reconcile loop\" style manager. After switching to argocd + gitops repo, I'm never looking back.\n\nOne thing I do know for sure, even if you do want to manage stuff in k8s via terraform, definitely don't do it in the same workspace where you created the cluster. That for sure causes all kinds of funky cyclical dependency issues.", "created_utc": 1752869447, "id": "n3vma46", "is_submitter": false, "parent_id": "n3sehw3", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3vma46/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "Local-Cartoonist3723", "awards": 0, "body": "“Well actually I am a sr. Redditor and sr. 
Multi-Badge stack overflower so not sure I can relate to what you’re saying. You’re also not adding any valuable commentary, did you check our guidelines?”", "created_utc": 1752839918, "id": "n3sv2bw", "is_submitter": false, "parent_id": "n3sepgx", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sv2bw/", "post_id": "1m2x19h", "score": 5, "stickied": false }, { "author": "Salander27", "awards": 0, "body": "Yeah a daemonset would have been a better option. With the service configured to route to the local pod first.", "created_utc": 1752871573, "id": "n3vtjbj", "is_submitter": false, "parent_id": "n3v1c9g", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3vtjbj/", "post_id": "1m2x19h", "score": 3, "stickied": false }, { "author": "Ok-Lavishness5655", "awards": 0, "body": "ok and there is no helm module for ansible? [https://docs.ansible.com/ansible/latest/collections/kubernetes/core/helm\\_module.html](https://docs.ansible.com/ansible/latest/collections/kubernetes/core/helm_module.html)\n\nYour explanation to why Terraform or Ansible is bad for Kubernetes is not there, so im asking again why not using Ansible or Terraform? Or is it that you just hating?", "created_utc": 1752830120, "id": "n3savq5", "is_submitter": false, "parent_id": "n3sahk0", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3savq5/", "post_id": "1m2x19h", "score": 6, "stickied": false }, { "author": "baronas15", "awards": 0, "body": "... He is asking why ....\n\n...", "created_utc": 1752830699, "id": "n3sbvxn", "is_submitter": false, "parent_id": "n3sahk0", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sbvxn/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "BrunkerQueen", "awards": 0, "body": "I use kubenix to render helm charts, they then get fed back into the kubenix module system as resources which I can override every single parameter on without touching the filthy Helm template language.\n\nThen it spits out a huge list of resources which I map to terranix resources which applies each object one by one (and if the resource has a namespace we depend on that namespace to be created first).\n\nIt isn't fully automated since the Kubernetes provider I'm using (kubectl) doesn't support recreating objects with immutable fields.\n\nBut I can also plug any terraform provider into terranix and use the same deployment method for resources across clouds.\n\nYour way isn't the only way, my way isn't the only way. 
You're interacting with a CRUD API, do it whatever way suits you.\n\nObjectively Helm really sucks however, they should've added Jsonnet and other functional languages rather than relying on string templating doohickeys", "created_utc": 1752834133, "id": "n3si779", "is_submitter": false, "parent_id": "n3sahk0", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3si779/", "post_id": "1m2x19h", "score": 2, "stickied": false }, { "author": "zedd_D1abl0", "awards": 0, "body": "What if I use Terraform to deploy a Helm chart?", "created_utc": 1752830582, "id": "n3sbokb", "is_submitter": false, "parent_id": "n3sahk0", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3sbokb/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "Daffodil_Bulb", "awards": 0, "body": "One concrete example is, terraform will spend 20 minutes deleting and recreating stuff when you just want to modify existing resources.", "created_utc": 1753246739, "id": "n4nrvpv", "is_submitter": false, "parent_id": "n3vma46", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n4nrvpv/", "post_id": "1m2x19h", "score": 1, "stickied": false }, { "author": "loogal", "awards": 0, "body": "I believe this is a duplicate of <insert other similar-ish question for same package 14 versions ago>. Closed.", "created_utc": 1752843937, "id": "n3t6b4y", "is_submitter": false, "parent_id": "n3sv2bw", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3t6b4y/", "post_id": "1m2x19h", "score": 5, "stickied": false }, { "author": "Local-Cartoonist3723", "awards": 0, "body": "Yours is better haha", "created_utc": 1752845241, "id": "n3tadf6", "is_submitter": false, "parent_id": "n3t6b4y", "permalink": "/r/kubernetes/comments/1m2x19h/whats_the_most_ridiculous_reason_your_kubernetes/n3tadf6/", "post_id": "1m2x19h", "score": 3, "stickied": false } ]
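Circling back to the CoreDNS sizing discussion near the top of this thread: a minimal sketch of keeping a handful of DNS replicas spread across nodes instead of running one per node. The replica count, image tag, and labels are illustrative assumptions, and the Corefile/ConfigMap plus the rest of the stock CoreDNS spec are omitted:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
spec:
  replicas: 6                          # illustrative count, not taken from the thread
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway   # prefer spreading, but do not block scheduling
          labelSelector:
            matchLabels:
              k8s-app: kube-dns
      containers:
        - name: coredns
          image: registry.k8s.io/coredns/coredns:v1.11.1
          # args, Corefile volume, probes, and resources omitted for brevity
```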
95
1m2w6v9
finished my first full CI/CD pipeline project (GitHub/ArgoCD/K8s), would love feedback
Hey folks, I recently wrapped up my first end-to-end DevOps lab project and I’d love some feedback on it, both technically and from a "would this help me get hired" perspective. The project is a basic phonebook app (frontend + backend + PostgreSQL), deployed with: * GitHub repo for source and manifests * Argo CD for GitOps-style deployment * Kubernetes cluster (self-hosted on my lab setup) * Separate dev/prod environments * CI pipeline auto-builds container images on push * CD auto-syncs to the cluster via ArgoCD * Secrets are managed cleanly, and services are split logically My background is in Network Security & Infrastructure but I’m aiming to get freelance or full-time work in DevSecOps / Platform / SRE roles, and trying to build projects that reflect what I'd do in a real job (infra as code, clean environments, etc.) What I’d really appreciate: * Feedback on how solid this project is as a portfolio piece * Would you hire someone with this on their GitHub? * What’s missing? Observability? Helm charts? RBAC? More services? * What would you build next after this to stand out? [Here is the repo](https://github.com/Alexbeav/devops-phonebook-demo) Appreciate any guidance or roast!
54
0.98
39
1,752,824,738
Alexbeav
/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/
https://www.reddit.com/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/
true
self.kubernetes
null
kubernetes
false
false
false
0
2025-09-26T10:56:25.647590
[ { "author": "Particular-Pumpkin11", "awards": 0, "body": "I think it is looking pretty good. A preference of mine is to use rendered manifest pattern over making ArgoCD render helm charts: https://akuity.io/blog/the-rendered-manifests-pattern here is a nice article on it 😊", "created_utc": 1752836805, "id": "n3snorb", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3snorb/", "post_id": "1m2w6v9", "score": 12, "stickied": false }, { "author": "Actual_Acanthaceae47", "awards": 0, "body": "Your project is very good. I think to make it more GitOps, avoid hard-coded values like this. \n[https://github.com/Alexbeav/devops-phonebook-demo/blob/5bf690cefa76a4b176c0cfc441c732e06edaaaae/manifests/traefik.yaml#L14-L51](https://github.com/Alexbeav/devops-phonebook-demo/blob/5bf690cefa76a4b176c0cfc441c732e06edaaaae/manifests/traefik.yaml#L14-L51) \nJust use ref to take advantage of Gitops, for example. \n[https://github.com/ngodat0103/home-lab/blob/master/k3s/argocd-app/vaultwarden/argo-app.yaml](https://github.com/ngodat0103/home-lab/blob/master/k3s/argocd-app/vaultwarden/argo-app.yaml)", "created_utc": 1752859235, "id": "n3un6qy", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3un6qy/", "post_id": "1m2w6v9", "score": 4, "stickied": false }, { "author": "Legitimate-Dog-4997", "awards": 0, "body": "Really good work. but from i what i see is a bit too much to maintained \\^\\^\n\non my Home-lab + Work\n\nwe use multiple argocd 1 per cluster ( didn't have choice here .. ) \nwith multiple environment (7 clusters and 9 environments)\n\nand i found a quick and easy solution to maintained visibilty over changed on MR/PR with this tools\n\nyou should check [https://github.com/dag-andersen/argocd-diff-preview](https://github.com/dag-andersen/argocd-diff-preview) \nit's lite and don't have the need to access on cluster", "created_utc": 1752870410, "id": "n3vpk6d", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3vpk6d/", "post_id": "1m2w6v9", "score": 3, "stickied": false }, { "author": "mystic_skittles", "awards": 0, "body": "I like the runbook docs it's looking clean. I didn't see screenshots of the actual app anywhere, I know that the focus is the backend so it's not important but a little demo doc or video would be a nice touch. \n\nI've been interviewing for mid-lvl SRE roles and getting asked questions like \"if your boss told you that you need to cut last year's downtime in half, how would you attempt to go about that?\"\n\nAnd \"what metrics would you monitor to be proactive instead of reactive?\". In other words how do you catch disasters before they happen. And a lot of deep technical questions about k8s network policies / pod networking and CNIs. \n\nJust wanted to give some food for thought. Maybe start deep diving those topics and see if they can be applied to your homelab.", "created_utc": 1753204271, "id": "n4k40im", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n4k40im/", "post_id": "1m2w6v9", "score": 2, "stickied": false }, { "author": "Guilty_Way6830", "awards": 0, "body": "Looks nice! Keep up the good work. 
Can I ask you how much help did you had from AI in creating it ?", "created_utc": 1752962410, "id": "n42eyn7", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n42eyn7/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "WillDabbler", "awards": 0, "body": "As a senior DevOps myself and having done many interviews from both sides, those lab setup are cool but do not replace real life experiences. It can helps you score points for a junior role but do not expect this repo to be a major pivot for your recruitment.\n\nAs a recruiter I will take a more serious look at projects with real business case on the back than any educational projects. \n\nSorry I hate being the party pooper but because you asked if I would hire you with this on your GitHub, I wanted to share my opinion.", "created_utc": 1752859750, "id": "n3up1lb", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3up1lb/", "post_id": "1m2w6v9", "score": -5, "stickied": false }, { "author": "[deleted]", "awards": 0, "body": "ArgoCD has a solution for this inbuilt: source hydrator", "created_utc": 1752842682, "id": "n3t2kld", "is_submitter": false, "parent_id": "n3snorb", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t2kld/", "post_id": "1m2w6v9", "score": 8, "stickied": false }, { "author": "Particular-Pumpkin11", "awards": 0, "body": "I could not see your app credentials secrets are injected. What are you using there?", "created_utc": 1752836884, "id": "n3snv0a", "is_submitter": false, "parent_id": "n3snorb", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3snv0a/", "post_id": "1m2w6v9", "score": 2, "stickied": false }, { "author": "blue-reddit", "awards": 0, "body": "+1 this way you can store your dev and prod value files outside of the helm chart dir", "created_utc": 1752865459, "id": "n3v8vmi", "is_submitter": false, "parent_id": "n3un6qy", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3v8vmi/", "post_id": "1m2w6v9", "score": 3, "stickied": false }, { "author": "Alexbeav", "awards": 0, "body": "Thank you very much!!", "created_utc": 1752865608, "id": "n3v9dmb", "is_submitter": true, "parent_id": "n3un6qy", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3v9dmb/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "Alexbeav", "awards": 0, "body": "Thank you, that's invaluable! I'll add a pic of the app itself, but it's very basic. 
Thanks again", "created_utc": 1753210570, "id": "n4kqyfw", "is_submitter": true, "parent_id": "n4k40im", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n4kqyfw/", "post_id": "1m2w6v9", "score": 2, "stickied": false }, { "author": "Alexbeav", "awards": 0, "body": "around 30% for code to minimize rewriting things I knew how to do well, and around 60% for review/documentation.", "created_utc": 1753000668, "id": "n44vz1m", "is_submitter": true, "parent_id": "n42eyn7", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n44vz1m/", "post_id": "1m2w6v9", "score": 2, "stickied": false }, { "author": "AkiraTheNEET", "awards": 0, "body": "Hey so if you wouldn’t hire them how would they get a real life experience?", "created_utc": 1752899552, "id": "n3xy0ox", "is_submitter": false, "parent_id": "n3up1lb", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3xy0ox/", "post_id": "1m2w6v9", "score": 4, "stickied": false }, { "author": "Alexbeav", "awards": 0, "body": "No I understand, thank you for the feedback. Do you have any examples perhaps I could look at or something that caught your attention?", "created_utc": 1752865585, "id": "n3v9ax0", "is_submitter": true, "parent_id": "n3up1lb", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3v9ax0/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "Particular-Pumpkin11", "awards": 0, "body": "But that does not allow you to catch mistakes in GitOps PRs before it hits dev or prod. Does it?", "created_utc": 1752842983, "id": "n3t3g4c", "is_submitter": false, "parent_id": "n3t2kld", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t3g4c/", "post_id": "1m2w6v9", "score": 3, "stickied": false }, { "author": "Alexbeav", "awards": 0, "body": "> source hydrator\n\nthat's a great idea to include in a future project, thanks!!", "created_utc": 1752849798, "id": "n3tpn1x", "is_submitter": true, "parent_id": "n3t2kld", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3tpn1x/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "Alexbeav", "awards": 0, "body": "App credentials (i.e. database usernames and passwords) are managed securely using SealedSecrets, this ensures that sensitive data is encrypted and safe to store in version control. In this project, SealedSecrets is deployed as part of the project as I wanted to make it as 'standalone' as possible. 
\n\n- Encrypted secrets are defined in sealedsecret-db-dev.yaml and sealedsecret-db-prod.yaml.\n\n- The SealedSecrets controller (deployed via manifests/sealed-secrets-app.yaml) decrypts these at runtime and injects them as standard Kubernetes Secrets.\n\n- The backend deployment consumes these secrets via environment variables, as templated in the Helm chart (charts/myapp/templates/backend-deployment.yaml).", "created_utc": 1752842209, "id": "n3t180h", "is_submitter": true, "parent_id": "n3snv0a", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t180h/", "post_id": "1m2w6v9", "score": 2, "stickied": false }, { "author": "WillDabbler", "awards": 0, "body": "I've recruited and trained juniors with no experiences many times but it has never been because they had a good github repo.", "created_utc": 1752930232, "id": "n3zmwaj", "is_submitter": false, "parent_id": "n3xy0ox", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3zmwaj/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "WillDabbler", "awards": 0, "body": "To me I like it better seeing an advanced configuration on a specific part on the infra rather than many component with no custom conf.\n\nLet's take nginx for example.\n\nEveryone can run a \\`helm install ingress-nginx\\` and setup an ingress controler with no understanding on how it works. But once you work on a real life project you have stuff like layer 4 reverse proxy, http header size, rate limiting and many more parameters to take into account. Those issues never appears on home lab because there's no traffic, no users, no problems. \n\nShowing you've been intensively working with nginx by knowing internal mecanism it much more valuable to my eyes that just run a basic setup everyone can do. \n\nSame goes with any other tools. \n\nAgain don't get me wrong, as a junior it's better having this kind of well polished projects than nothing but the chances it will mak the difference between you and another candidate is near 0. \n\nThose home labs are for learning, not showing.", "created_utc": 1752930071, "id": "n3zmghh", "is_submitter": false, "parent_id": "n3v9ax0", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3zmghh/", "post_id": "1m2w6v9", "score": 2, "stickied": false }, { "author": "[deleted]", "awards": 0, "body": "It does if you use the added feature that uses a separate branch for hydration, allowing your PR flow, however you decide.\n\nSee: https://argo-cd.readthedocs.io/en/latest/user-guide/source-hydrator/#pushing-to-a-staging-branch (only available in 3.1 RC for now)", "created_utc": 1752843133, "id": "n3t3w5n", "is_submitter": false, "parent_id": "n3t3g4c", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t3w5n/", "post_id": "1m2w6v9", "score": 5, "stickied": false }, { "author": "Particular-Pumpkin11", "awards": 0, "body": "There is no manifests/sealedsecret-db-dev.yaml in manifest or am I just blind? 
😂", "created_utc": 1752842363, "id": "n3t1nxj", "is_submitter": false, "parent_id": "n3t180h", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t1nxj/", "post_id": "1m2w6v9", "score": 3, "stickied": false }, { "author": "mystic_skittles", "awards": 0, "body": "Then what made you hire them?", "created_utc": 1753202674, "id": "n4jy3g4", "is_submitter": false, "parent_id": "n3zmwaj", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n4jy3g4/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "Alexbeav", "awards": 0, "body": "I appreciate the response, but you didn't answer my question. Do you have an example of something that is \"showing\" I could look at? Thanks.", "created_utc": 1752930912, "id": "n3zor4j", "is_submitter": true, "parent_id": "n3zmghh", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3zor4j/", "post_id": "1m2w6v9", "score": 2, "stickied": false }, { "author": "Particular-Pumpkin11", "awards": 0, "body": "Oh, that is pretty nice. Going to look into that 🙌", "created_utc": 1752843179, "id": "n3t414e", "is_submitter": false, "parent_id": "n3t3w5n", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t414e/", "post_id": "1m2w6v9", "score": 2, "stickied": false }, { "author": "Particular-Pumpkin11", "awards": 0, "body": "So it supports this pattern? \n\nhttps://framerusercontent.com/images/sxJ9v1Bo2HBpoxeMOGxtFzIBjeg.png", "created_utc": 1752843276, "id": "n3t4bgs", "is_submitter": false, "parent_id": "n3t3w5n", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t4bgs/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "Alexbeav", "awards": 0, "body": "OH WOW! I forgot to include the steps! \n\nThese are my notes, I'll update the readme/setup to add these instructions. Thanks for catching that!\n\n(I'm using placeholder credentials here of course)\n\nHere’s a step-by-step guide to generate and apply real SealedSecrets for the DB credentials:\n\n---\n\n### 1. **Install kubeseal (if not already installed)**\n\n```bash\ncurl -OL \"https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.30.0/kubeseal-0.30.0-linux-amd64.tar.gz\"\ntar -xvzf kubeseal-0.30.0-linux-amd64.tar.gz kubeseal\nsudo install -m 755 kubeseal /usr/local/bin/kubeseal\n```\n\nConnect\n\n```bash\nkubeseal --controller-name=sealed-secrets --controller-namespace=sealed-secrets\n```\n\n\n### 2. **Create a Kubernetes Secret manifest (not applied, just used for sealing)**\n\nExample: `myapp-db-dev-secret.yaml`\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: myapp-db-dev\n namespace: myapp-dev\ntype: Opaque\ndata:\n username: $(echo -n 'myappuser' | base64)\n password: $(echo -n 'myapppassword' | base64)\n```\n\n### 3. 
**Seal the secret using kubeseal**\n\nProd Values:\n\nEncode the values first\n\n```bash\necho -n 'prodUser01' | base64\necho -n 'prodPass456@' | base64\n```\n\n```bash\nnano tmp-prod-secret.json\n```\n\nThen pass them:\n\n```json\n{\n \"apiVersion\": \"v1\",\n \"kind\": \"Secret\",\n \"metadata\": {\n \"name\": \"myapp-db-prod\",\n \"namespace\": \"myapp-prod\"\n },\n \"type\": \"Opaque\",\n \"data\": {\n \"username\": \"cHJvZFVzZXIwMQ==\",\n \"password\": \"cHJvZFBhc3M0NTZA\"\n }\n}\n```\n\n\n```bash\nkubeseal --controller-name=sealed-secrets --controller-namespace=sealed-secrets --format yaml < tmp-prod-secret.json > manifests/sealedsecret-db-prod.yaml\n```\n\n- Repeat for `myapp-db-dev` in `myapp-dev` namespace.\n\n### 4. **Apply the SealedSecret to your cluster**\n\n```bash\nkubectl apply -f manifests/sealedsecret-db-dev.yaml\nkubectl apply -f manifests/sealedsecret-db-prod.yaml\n```\n\n### 5. **Verify the secret is unsealed**\n\n```bash\nkubectl get secret myapp-db-dev -n myapp-dev -o yaml\nkubectl get secret myapp-db-prod -n myapp-prod -o yaml\n```\n\n### 6. **Sync your ArgoCD application**\n\n```bash\nargocd app sync phonebook-dev-app\nargocd app sync phonebook-prod-app\n```", "created_utc": 1752842914, "id": "n3t38ye", "is_submitter": true, "parent_id": "n3t1nxj", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t38ye/", "post_id": "1m2w6v9", "score": 7, "stickied": false }, { "author": "[deleted]", "awards": 0, "body": "Yes, this is exactly what it solves", "created_utc": 1752843326, "id": "n3t4gug", "is_submitter": false, "parent_id": "n3t4bgs", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t4gug/", "post_id": "1m2w6v9", "score": 2, "stickied": false }, { "author": "Particular-Pumpkin11", "awards": 0, "body": "Thanks great 😊", "created_utc": 1752843384, "id": "n3t4n83", "is_submitter": false, "parent_id": "n3t4gug", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t4n83/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "Particular-Pumpkin11", "awards": 0, "body": "But you need to have some mechanism moving the manifests to the sync branch. So it does not solve it all it seems 😊", "created_utc": 1752843503, "id": "n3t5007", "is_submitter": false, "parent_id": "n3t4gug", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t5007/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "[deleted]", "awards": 0, "body": "If you take out the staging branch, the behavior is a fully automated hydration. You mentioned PRs and catching mistakes, that's where argocd relaxes and let's you do the moving by not pushing directly to your sync branch. Am I misunderstanding you?", "created_utc": 1752843859, "id": "n3t62l1", "is_submitter": false, "parent_id": "n3t5007", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t62l1/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "Particular-Pumpkin11", "awards": 0, "body": "No it is correct, it is just not the full pattern. 
You need some action and moving logic 😊", "created_utc": 1752843940, "id": "n3t6bi7", "is_submitter": false, "parent_id": "n3t62l1", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t6bi7/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "Particular-Pumpkin11", "awards": 0, "body": "And I like it, good stuff. I just have some CI logic doing the helm render itself and shipping the rendered manifests to branches 😊", "created_utc": 1752844022, "id": "n3t6kej", "is_submitter": false, "parent_id": "n3t62l1", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t6kej/", "post_id": "1m2w6v9", "score": 1, "stickied": false }, { "author": "[deleted]", "awards": 0, "body": "Could you try explaining what's missing again? I use ArgoCD with kustomize templates. My helm charts are rendered to flat manifests in the source hydration process. I'm genuinely interested in understanding your use case if it's truly not covered already", "created_utc": 1752844252, "id": "n3t79xw", "is_submitter": false, "parent_id": "n3t6bi7", "permalink": "/r/kubernetes/comments/1m2w6v9/finished_my_first_full_cicd_pipeline_project/n3t79xw/", "post_id": "1m2w6v9", "score": 1, "stickied": false } ]
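As a rough illustration of the rendered-manifests idea discussed above (a generic sketch, not the project's actual pipeline): a CI job renders the Helm chart per environment and force-pushes the flat YAML to a branch that Argo CD tracks. The workflow name, branch name, chart path, and values file are all assumptions:

```yaml
# .github/workflows/render.yaml -- illustrative only
name: render-manifests
on:
  push:
    branches: [main]
jobs:
  render-dev:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Render the chart for dev          # helm assumed available on the runner
        run: |
          helm template myapp charts/myapp \
            -f charts/myapp/values-dev.yaml \
            --namespace myapp-dev > /tmp/rendered-dev.yaml
      - name: Push rendered manifests to the branch Argo CD watches
        run: |
          git config user.name "ci-bot"
          git config user.email "ci-bot@example.com"
          git checkout -B rendered/dev
          mkdir -p rendered && cp /tmp/rendered-dev.yaml rendered/dev.yaml
          git add rendered/dev.yaml
          git commit -m "render: dev manifests" || echo "nothing to commit"
          git push --force origin rendered/dev
```

The Argo CD Application for dev would then point at the `rendered/dev` branch as a plain directory source instead of a Helm source, so what gets reviewed in a PR is exactly what gets applied.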
33
1m2vbem
Need help in finding a way to learn kubernetes and docker
Hello guys, I currently work in operations at a cyber security company and have no development background. I really want to switch to cloud security, and I have been told that Kubernetes and Docker are things I really need to get hands-on with, and that I should also pick up some certs. So how do I begin? What are the prerequisites to get into this, and which resources can I use? Please help me out in getting into this side of tech!
0
0.22
5
1,752,821,405
iRogo1
/r/kubernetes/comments/1m2vbem/need_help_in_finding_a_way_to_learn_kubernetes/
https://www.reddit.com/r/kubernetes/comments/1m2vbem/need_help_in_finding_a_way_to_learn_kubernetes/
true
self.kubernetes
null
kubernetes
false
false
false
0
2025-09-26T10:56:26.931098
[ { "author": "Fling_this_to_space", "awards": 0, "body": "Not gonna lie; you are in for a rough time if you we not able to find one of the roughly one million blogs/courses/articles that are easily found with a simple search. \n \nThat said, if you want something like a course to follow along, ask if your company has a company subscription to a place like Pluralsight. \nIf not and you prefer online course, I would shell out the small price for a subscription and start the journy at [https://app.pluralsight.com/library/courses/docker-kubernetes-big-picture/table-of-contents](https://app.pluralsight.com/library/courses/docker-kubernetes-big-picture/table-of-contents)", "created_utc": 1752826255, "id": "n3s459q", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2vbem/need_help_in_finding_a_way_to_learn_kubernetes/n3s459q/", "post_id": "1m2vbem", "score": 9, "stickied": false }, { "author": "weigel23", "awards": 0, "body": "I just finished reading The Kubernetes Book by Nigel Poulton. It’s a great start to get an overview of Kubernetes and it finishes with a few chapters on security.", "created_utc": 1752830683, "id": "n3sbuy6", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2vbem/need_help_in_finding_a_way_to_learn_kubernetes/n3sbuy6/", "post_id": "1m2vbem", "score": 5, "stickied": false }, { "author": "unconceivables", "awards": 0, "body": "You just install it and start. It's free, and all the documentation is free.", "created_utc": 1752876997, "id": "n3waq20", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2vbem/need_help_in_finding_a_way_to_learn_kubernetes/n3waq20/", "post_id": "1m2vbem", "score": 3, "stickied": false } ]
3
1m2u5ba
Immediate or WaitForFirstConsumer - what to use and why?
In an on-premise datacenter, a Hitachi enterprise array is connected via FC SAN to Cisco UCS chassis, and all nodes have storage connectivity. Can someone please help me understand which value to use for volumeBindingMode: Immediate or WaitForFirstConsumer? Any advantages or disadvantages? Thank you.
7
0.89
4
1,752,817,099
Technical-Stress9807
/r/kubernetes/comments/1m2u5ba/immediate_or_waitforfirstconsumer_what_to_use_and/
https://www.reddit.com/r/kubernetes/comments/1m2u5ba/immediate_or_waitforfirstconsumer_what_to_use_and/
true
self.kubernetes
null
kubernetes
false
false
false
0
2025-09-26T10:56:28.066924
[ { "author": "LongerHV", "awards": 0, "body": "`WaitForFirstConsumer` is useful for Multi-AZ clusters wit non-replicated storage. `Immediate` will randomly chose a zone to provision the volume, but you may want an even spread enforced by anti affinity rules on your workloads.", "created_utc": 1752818939, "id": "n3rqrdv", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2u5ba/immediate_or_waitforfirstconsumer_what_to_use_and/n3rqrdv/", "post_id": "1m2u5ba", "score": 16, "stickied": false }, { "author": "vanlong-me", "awards": 0, "body": "At your point, they are the same. Think in multi-AZ environment (like public cloud), when you call to storageclass to create a new pvc, you should ensure that the pvc that you just created in the same AZ with the workernode (to avoid volume affinity conflict) and then workernode can attach this volume, finally mount it into pod", "created_utc": 1752819108, "id": "n3rr32j", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2u5ba/immediate_or_waitforfirstconsumer_what_to_use_and/n3rr32j/", "post_id": "1m2u5ba", "score": 3, "stickied": false }, { "author": "Technical-Stress9807", "awards": 0, "body": "Thank you for the quick response.", "created_utc": 1752819187, "id": "n3rr8gm", "is_submitter": true, "parent_id": "n3rqrdv", "permalink": "/r/kubernetes/comments/1m2u5ba/immediate_or_waitforfirstconsumer_what_to_use_and/n3rr8gm/", "post_id": "1m2u5ba", "score": 4, "stickied": false } ]
3
1m2m0gp
kubriX: Out of the Box Internal Developer Platform (IDP) for Kubernetes
[This post](https://itnext.io/kubrix-your-out-of-the-box-internal-developer-platform-idp-for-kubernetes-ba4c2671e6d1?source=friends_link&sk=d64aecabc267237db5a626049ca27682) by Artem Lajko is a deep dive into [kubriX](https://kubrix.io/) and how kubriX integrates leading open source tools like Argo CD (GitOps), Kargo, and Backstage to deliver a fully functional IDP out of the box.
15
0.89
0
1,752,792,922
wineandcode
/r/kubernetes/comments/1m2m0gp/kubrix_out_of_the_box_internal_developer_platform/
https://www.reddit.com/r/kubernetes/comments/1m2m0gp/kubrix_out_of_the_box_internal_developer_platform/
true
self.kubernetes
null
kubernetes
false
false
false
0
2025-09-26T10:56:29.195913
[]
0
1m2ldnx
Reference Architecture: Kubernetes with Software-Defined Storage for High-Performance Block Workloads
A comprehensive guide to deploying a Kubernetes environment optimized for any workload - from general-purpose applications to high-performance workloads such as databases and AI/ML. Leveraging the combined power of software-defined block storage from Ceph and Lightbits, this architecture ensures robust storage solutions. It covers key aspects such as hardware setup, cluster configuration, storage integration, application deployment, monitoring, and cost optimization. A key advantage of this architecture is that software-defined storage can be added to an existing Kubernetes deployment without re-architecting, enabling a seamless upgrade path to software-defined infrastructure. By following this architecture, organizations can build highly available and scalable Kubernetes platforms to meet the diverse needs of modern applications running in containers, as well as legacy applications running as KubeVirt Virtual Machines (VMs).
0
0.4
0
1,752,791,285
Accurate_Funny6679
/r/kubernetes/comments/1m2ldnx/reference_architecture_kubernetes_with/
https://www.lightbitslabs.com/ra-kubernetes-storage-solutions-with-lightbits-and-supermicro-intel/
false
lightbitslabs.com
null
kubernetes
false
false
false
0
2025-09-26T10:56:30.318207
[]
0
1m2hi9i
first time setup hit an issue and the internet is not helping
I am learning Kubernetes and am working with my company to get training but while I am negotiating that I want to get a far into the process as I can so I am not starting from zero. current set up is 3 ubuntu 24.04 images on Proxmox, with nested virtualization on. to make sure the process worked I installed a 2022 windows server and installed hyper v. before making the change to the set up it would not allow me to install hyper v but after the setting it worked. I am running off of the following instructions [https://www.cherryservers.com/blog/install-kubernetes-ubuntu](https://www.cherryservers.com/blog/install-kubernetes-ubuntu) originally I tried to run this on 3 raspberry pis since I had them but I had issues and I went this route. will try k3s later. I know I can run it as a snap in ubuntu but with all the trouble I had with getting Nextcloud to connect to mounts not within the snap environment I do not want to work through that again. every thing went well until I hit this step. https://preview.redd.it/notj0v78nhdf1.png?width=1808&format=png&auto=webp&s=50385371052347473eacb130371ce6e74303f3b4 this is what I am getting >**k8s-master-node-1**:**/etc/kubernetes**$ sudo kubectl create -f custom-resources.yaml >error: error validating "custom-resources.yaml": error validating data: failed to download openapi: Get "http://localhost:8080/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false I have the file in the folder and it is populated with what looks like it should be the right information so I thought maybe its just one of those flukes so I went to the next step >kubectl get nodes and according the instructions I should be able to see the control plane but this is what I am getting: > **k8s-master-node-1**:**/etc/kubernetes**$ sudo kubectl get nodes >E0717 19:33:01.798315    7736 memcache.go:265\] "Unhandled Error" err="couldn't get current server API group list: Get \\"http://localhost:8080/api?timeout=32s\\": dial tcp 127.0.0.1:8080: connect: connection refused" >The connection to the server localhost:8080 was refused - did you specify the right host or port? up to this point everything ran as the instruction said and when I searched the error code .(I use brave) I got no responses. I know nothing about this other than some of the basic terms and theories and my company is pushing Kubernetes and I am working to learn as much as I can, I will have a boot camp coming in the next few months but I would like to get through as much as possible so that when I do I am learning and not struggling to remember everything. I chose this link as it seemed to be the newest and most direct one I could find. if someone knows another one that is better I am very happy to try a different link. I have a udemy course that I am working through but it looks like it will be a while before doing any kind of installing.
0
0.25
13
1,752,782,027
nspireing
/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/
https://www.reddit.com/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/
true
self.kubernetes
null
kubernetes
false
false
false
0
2025-09-26T10:56:31.446950
[ { "author": "iamkiloman", "awards": 0, "body": "Do not start down the path of using docker and cri-dockerd as your container runtime. That is a dead end. Use containerd or cri-o.", "created_utc": 1752800491, "id": "n3qibm0", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3qibm0/", "post_id": "1m2hi9i", "score": 4, "stickied": false }, { "author": "Double_Intention_641", "awards": 0, "body": "Interesting. Docker wasn't an option as a container runtime for a while, i hadn't realized it was now viable again.\n\nSo looking at your guide - the kubeadm part went ok? Did you join any nodes at this point? Did you remember to copy the .kube config to the right place?", "created_utc": 1752785159, "id": "n3p70rl", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3p70rl/", "post_id": "1m2hi9i", "score": 2, "stickied": false }, { "author": "dutchman76", "awards": 0, "body": "Connection refused would look to me like the kube proxy or API server isn't running.", "created_utc": 1752803000, "id": "n3qp8hg", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3qp8hg/", "post_id": "1m2hi9i", "score": 1, "stickied": false }, { "author": "BraveNewCurrency", "awards": 0, "body": "Don't start on hard mode: Learn K8s using just a single server for now.\n\nThe experience and commands for \"Running apps on K8s\" is basically the same no matter how many nodes you have. Understanding how K8s scatters your apps across nodes isn't relevant to learning all the K8s commands, how containers work, ingress, etc, etc.", "created_utc": 1752813224, "id": "n3rf0vp", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3rf0vp/", "post_id": "1m2hi9i", "score": 1, "stickied": false }, { "author": "nspireing", "awards": 0, "body": "i have been trying to find an Official this is how you should do it or a best practices document. do you know of one. 
i know i am going down a widening rabbit hole but im learning.\n\nedit: I found this link that mentions containerD thoughts?\n\n[https://gist.github.com/NotHarshhaa/854ed5c12fff07acde88faf95b9decff](https://gist.github.com/NotHarshhaa/854ed5c12fff07acde88faf95b9decff)", "created_utc": 1752807691, "id": "n3r1t51", "is_submitter": true, "parent_id": "n3qibm0", "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3r1t51/", "post_id": "1m2hi9i", "score": 1, "stickied": false }, { "author": "nspireing", "awards": 0, "body": "Looking through the terminal history it looks like I did every up to the point of the screen shot, I believe adding the nodes was the next step.", "created_utc": 1752788593, "id": "n3piqss", "is_submitter": true, "parent_id": "n3p70rl", "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3piqss/", "post_id": "1m2hi9i", "score": 1, "stickied": false }, { "author": "nspireing", "awards": 0, "body": "I think I am now looking for a different install guide, I don't see any reference to those in the instructions and I found a few commands to check the status of the different pieces and they do not show at all", "created_utc": 1752808438, "id": "n3r3ou2", "is_submitter": true, "parent_id": "n3qp8hg", "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3r3ou2/", "post_id": "1m2hi9i", "score": 0, "stickied": false }, { "author": "iamkiloman", "awards": 0, "body": "Where are you finding all these random guides? Have you just tried the actual kubernetes docs for kubeadm?\nhttps://kubernetes.io/docs/setup/production-environment/tools/kubeadm/\n\nIf you are more of a `curl | bash` guy, you could try k3s: `curl -sfL https://get.k3s.io | sh -` - see the docs at https://docs.k3s.io/quick-start.\n\nNote, don't try to install one kubernetes distro on a node that you've already put a bunch of other crap on from tinkering around. Start with a fresh, clean node.", "created_utc": 1752810625, "id": "n3r92cq", "is_submitter": false, "parent_id": "n3r1t51", "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3r92cq/", "post_id": "1m2hi9i", "score": 3, "stickied": false }, { "author": "Double_Intention_641", "awards": 0, "body": "Ok. so bare metal install using kubeadm, you should have a ~/.kube/config with content in it.\n\nIf you don't, you need to copy it over. if you do and it's empty or malformed (missing cluster or user information) you need to fix that. Look at the commands for kubeadm, in particular `kubeadm reset` to give you a starting-over point.\n\ncheck /var/log/syslog as well (if you have rsyslog installed) to see what's going on under the hood.\n\nfeel free to post any logs you have as you run stuff. good luck!", "created_utc": 1752789651, "id": "n3pm7qb", "is_submitter": false, "parent_id": "n3piqss", "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3pm7qb/", "post_id": "1m2hi9i", "score": 1, "stickied": false }, { "author": "nspireing", "awards": 0, "body": "Guides from web searches, generaly try to find things that condense the process i have trouble when guides take 1,000 words to explain what could be explained in 20 to 100. Also official docs didn’t appear in my searches, im used to support docs behind paywalls. 
And yea the plan is to kill the vm and start from scratch.", "created_utc": 1752812919, "id": "n3recn3", "is_submitter": true, "parent_id": "n3r92cq", "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3recn3/", "post_id": "1m2hi9i", "score": -3, "stickied": false }, { "author": "nspireing", "awards": 0, "body": "Thanks I’m heading out but will go down that path and see what I can accomplish. Will report back when I have something", "created_utc": 1752792624, "id": "n3pvdvc", "is_submitter": true, "parent_id": "n3pm7qb", "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3pvdvc/", "post_id": "1m2hi9i", "score": 1, "stickied": false }, { "author": "nspireing", "awards": 0, "body": "I am fairly sure at this point the best option is to find a different set of instructions.", "created_utc": 1752808467, "id": "n3r3rh2", "is_submitter": true, "parent_id": "n3pm7qb", "permalink": "/r/kubernetes/comments/1m2hi9i/first_time_set_up_hit_an_issues_and_internet_is/n3r3rh2/", "post_id": "1m2hi9i", "score": 1, "stickied": false } ]
12
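For what it's worth, the `localhost:8080 connection refused` message in the post above usually just means kubectl has no kubeconfig and is falling back to the unauthenticated local port. Assuming `kubeadm init` completed successfully, the standard fix printed by kubeadm itself is:

```bash
# run as the regular user, not via sudo
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# plain kubectl (no sudo) should now reach the API server
kubectl get nodes

# if it still fails, confirm the control plane actually came up
sudo systemctl status kubelet
sudo crictl ps   # expect kube-apiserver, etcd, scheduler, and controller-manager containers
```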
1m2h2ia
BrowserStation is an open source alternative to Browserbase.
We built BrowserStation, a Kubernetes-native framework for running sandboxed Chrome browsers in pods using a Ray + sidecar pattern. Each pod runs a Ray actor and a headless Chrome container with CDP exposed via WebSocket proxy. It works with LangChain, CrewAI, and other agent tools, and is easy to deploy on EKS, GKE, or local Kind. Would love feedback from the community repo here: [https://github.com/operolabs/browserstation](https://github.com/operolabs/browserstation) and more info [here](https://www.linkedin.com/posts/opero-labs_were-releasing-browserstation-an-open-source-activity-7351669927063244800-xLMy?utm_source=share&utm_medium=member_desktop&rcm=ACoAAFDzZ6IBDLXTSDxj_IzSX_0_2MHwPnmZ2dk).
40
0.91
2
1,752,780,998
Pleasant_Syllabub591
/r/kubernetes/comments/1m2h2ia/browserstation_is_an_open_source_alternative_to/
https://www.reddit.com/r/kubernetes/comments/1m2h2ia/browserstation_is_an_open_source_alternative_to/
true
self.kubernetes
null
kubernetes
false
false
false
0
2025-09-26T10:56:32.623639
[ { "author": "feroun", "awards": 0, "body": "Tried it out. Works great but a bit slow in deploying", "created_utc": 1752782135, "id": "n3owcbw", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2h2ia/browserstation_is_an_open_source_alternative_to/n3owcbw/", "post_id": "1m2h2ia", "score": 1, "stickied": false }, { "author": "Last-Specialist-1191", "awards": 0, "body": "really cool approach combining kubernetes ray and chrome sidecars curious how you handle auth across pods", "created_utc": 1752783384, "id": "n3p0poq", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2h2ia/browserstation_is_an_open_source_alternative_to/n3p0poq/", "post_id": "1m2h2ia", "score": 1, "stickied": false } ]
2
1m2ddqj
Scaling service to handle 20x capacity within 10-15 seconds
Hi everyone! This post is going to be a bit long, but bear with me. Our setup: 1. EKS cluster (300-350 Nodes M5.2xlarge and M5.4xlarge) (There are 6 ASGs 1 per zone per type for 3 zones) 2. ISTIO as a service mesh (side car pattern) 3. Two entry points to the cluster, one ALB at abcdef(dot)com and other ALB at api(dot)abcdef(dot)com 4. Cluster autoscaler configured to scale the ASGs based on demand. 5. Prometheus for metric collection, KEDA for scaling pods. 6. Pod startup time 10sec (including pulling image, and health checks) HPA Configuration (KEDA): 1. CPU - 80% 2. Memory - 60% 3. Custom Metric - Request Per Minute We have a service which is used by customers to stream data to our applications, usually the service is handling about 50-60K requests per minute in the peak hours and 10-15K requests other times. The service exposes a webhook endpoint which is specific to a user, for streaming data to our application user can hit that endpoint which will return a data hook id which can be used to stream the data. user initially hits POST https://api.abcdef.com/v1/hooks with his auth token this api will return a data hook id which he can use to stream the data at https://api.abcdef.com/v1/hooks/<hook-id>/data. Users can request for multiple hook ids to run a concurrent stream (something like multi-part upload but for json data). Each concurrent hook is called a connection. Users can post multiple JSON records to each connection it can be done in batches (or pages) of size not more than 1 mb. The service validates the schema, and for all the valid pages it creates a S3 document and posts a message to kafka with the document id so that the page can be processed. Invalid pages are stored in a different S3 bucket and can be retrieved by the users by posting to https://api.abcdef.com/v1/hooks/<hook-id>/errors . Now coming to the problem, We recently onboarded an enterprise who are running batch streaming jobs randomly at night IST, and due to those batch jobs the requests per minute are going from 15-20k per minute to beyond 200K per minute (in a very sudden spike of 30 seconds). These jobs last for about 5-8 minutes. What they are doing is requesting 50-100 concurrent connections with each connection posting around \~1200 pages (or 500 mb) per minute. Since we have only reactive scaling in place, our application takes about 45-80secs to scale up to handle the traffic during which about 10-12% of the requests for customer requests are getting dropped due to being timed out. As a temporary solution we have separated this user to a completely different deployment with 5 pods (enough to handle 50k requests per minute) so that it does not affect other users. Now we are trying to find out how to accommodate this type of traffic in our scaling infrastructure. We want to scale very quickly to handle 20x the load. We have looked into the following options, 1. Warm-up pools (maintaining 25-30% extra capacity than required) - Increases costing 2. Reducing Keda and Prometheus polling time to 5 secs each (currently 30s each) - increases the overall strain on the system for metric collection I have also read about proactive scaling but unable to understand how to implement it for such and unpredictable load. If anyone has dealt with similar scaling issues or has any leads on where to look for solutions please help with ideas. Thank you in advance. TLDR: - need to scale a stateless application to 20x capacity within seconds of load hitting the system. 
Edit: Thank you all for the suggestions. We went ahead with the following measures for now, which resolved our problems to a large extent. 1. Asked the customer to limit their concurrent traffic (now they are using 25 connections over a span of 45 mins). 2. Reduced the polling frequency of Prometheus and KEDA and added buffer capacity to the cluster (with this we were able to scale 2x pods in 45-90 secs). 3. The development team will be adding a rate limit on the number of concurrent connections a user can create. 4. We worked on reducing the Docker image size (from 400mb to 58mb), which reduces the scale-up time. 5. Added scale up/down stabilisation so that the pods don’t frequently scale up and down. 6. Finally, a long-term change that we were able to convince management of - instead of validating and uploading the data instantaneously, the application will save the streamed data first; only once the connection is closed will it validate and upload the data to S3 (this will greatly increase the throughput of each pod, as the traffic is not consistent throughout the day).
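For readers who want to see what a setup like the one described here looks like as a manifest, a rough KEDA sketch scaling on a Prometheus requests-per-minute query with a 5s polling interval and scale-down stabilisation; the deployment name, Prometheus address, query, and thresholds are illustrative assumptions, not the actual production values:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: stream-ingest
spec:
  scaleTargetRef:
    name: stream-ingest              # assumed deployment name
  minReplicaCount: 5
  maxReplicaCount: 120
  pollingInterval: 5                 # seconds between metric checks
  cooldownPeriod: 300
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleUp:
          stabilizationWindowSeconds: 0     # react to the spike immediately
          policies:
            - type: Percent
              value: 300                    # allow up to 3x replicas per period
              periodSeconds: 15
        scaleDown:
          stabilizationWindowSeconds: 300   # avoid flapping once the burst ends
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090   # assumed address
        query: sum(rate(http_requests_total{app="stream-ingest"}[1m])) * 60
        threshold: "10000"           # target requests per minute per replica (illustrative)
```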
62
0.94
63
1,752,772,547
delusional-engineer
/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/
https://www.reddit.com/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/
true
self.kubernetes
null
kubernetes
false
false
false
0
2025-09-26T10:56:33.760336
[ { "author": "iamkiloman", "awards": 0, "body": "Have you considered putting rate limits on your API? Rather than figuring out how to instantly scale to handle arbitrary bursts in load, put backpressure on the client by rate limiting the incoming requests. As you scale up the backend at whatever rate your infrastructure can actually handle, you can increase the limits to match.", "created_utc": 1752773244, "id": "n3o0xj0", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3o0xj0/", "post_id": "1m2ddqj", "score": 60, "stickied": false }, { "author": "TomBombadildozer", "awards": 0, "body": "> cluster autoscaler\n\nAssuming you have no flexibility on the requirements that others have addressed, here's your first target. If you need to scale up capacity to handle new pods, there's no chance you'll make it in the time requirement with CA and ASGs. Kick that shit to the curb ASAP.\n\nMove everything to Karpenter and use Bottlerocket nodes. In my environments (Karpenter, Bottlerocket, AWS VPC CNI plus Cilium), nodes reliably boot in 10 seconds, which is already most of your budget.\n\nForget CPU and memory for your scaling metrics and use RPM and/or latency. You should be scaling on application KPIs. Resource consumption doesn't matter—you either fit inside the resources you've allocated, or you don't. If you're worried about resource costs, tune that independently.", "created_utc": 1752781372, "id": "n3otog2", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3otog2/", "post_id": "1m2ddqj", "score": 22, "stickied": false }, { "author": "burunkul", "awards": 0, "body": "Have you tried Karpenter? It provisions nodes faster than the Cluster Autoscaler.", "created_utc": 1752779483, "id": "n3on63i", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3on63i/", "post_id": "1m2ddqj", "score": 15, "stickied": false }, { "author": "Armestam", "awards": 0, "body": "I think you need to replace your API with a request queue. You can scale on the queue length instead. This will let you grab lots of the requests while your system scales. There will be a latency penalty on the first requests but you can tune to either catch up or just accept higher latency and finish a little after. \n\nThe other option, you said they are batch processing at night. Is this at the same time every night? Why don’t you scale up based on the wall clock time?", "created_utc": 1752795741, "id": "n3q4t70", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3q4t70/", "post_id": "1m2ddqj", "score": 10, "stickied": false }, { "author": "Zackorrigan", "awards": 0, "body": "Are you using the keda httpaddon ?\n\nI’m wondering if you could set the requestRate to 1 and set the scaler on the hooks path as prefix. 
That way the scaler should create one pod per hook.", "created_utc": 1752775843, "id": "n3oae6l", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3oae6l/", "post_id": "1m2ddqj", "score": 4, "stickied": false }, { "author": "psavva", "awards": 0, "body": "I would still go back to the enterprise client and ask.\nIf you don't ask, you will not get...\n\nIt may be a simple answer from them saying \"yeah sure, it won't make a difference to us...\"\n\nMy advice, is first understand your clients' needs, then decide on the solution...", "created_utc": 1752779276, "id": "n3omg6i", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3omg6i/", "post_id": "1m2ddqj", "score": 8, "stickied": false }, { "author": "james-dev89", "awards": 0, "body": "Curious to see what others thing of this.\n\nWe had this exact problem, what we did was a combination of using HPA + queues\n\nWhen our application starts up, it needs to load data into memory, that process initially takes about 2 seconds which we were able to reduce down to 1 second.\n\nWhen the utilization was getting close to the limit set by the Kubernetes HPA, more replicas will be created.\n\nAlso, request that could not be processed were queued some fell into the DLQ so we don't loose them.\n\nAlso, we tuned the HPA to kick in early and spin up more replicas so as they traffic start to grow we don't want too long before we have more replicas up.\n\nAnother thing we did was pre-scaling based on trends, knowing that we receive 10x traffic in a time range, we increased in minReplicas.\n\nIt's still a work in progress but curious to see how others solved this issue.\n\n\n\nAlso, don't know if this is useful but also look into Pod Disruption Budget, for us, at some Point Pods started crashing while scaling up until we added a PDB\n\nOne more thing, don't just treat this as a spinning up more Pods to handle Scale, find ways to improve the the whole system. For example creating a new DB with read replicas helped us a lot to handle the scale.", "created_utc": 1752776583, "id": "n3oczs0", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3oczs0/", "post_id": "1m2ddqj", "score": 3, "stickied": false }, { "author": "burunkul", "awards": 0, "body": "Why are you using m5 instances instead of a newer generation, like m6-m8?", "created_utc": 1752776113, "id": "n3obc9f", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3obc9f/", "post_id": "1m2ddqj", "score": 2, "stickied": false }, { "author": "sionescu", "awards": 0, "body": "The proactive solution is to ask the customer to agree on a time window where they're going to issue those calls and pre-scale the pools. The shorter the time window, the better. Agree on an SLO, meaning that you can only guarantee 99%+ availability in that time window, otherwise they'll get lots of 503. Put WAF in front of the API to ensure they don't bring down the service for other customers, or even give them a dedicated API endpoint. 
A customer like this is indistinguishable from a DDOS attack.\n\nIf they don't agree on a specific time window, you need queueing while the autoscaler does its job, but then you're adding complexity.", "created_utc": 1752803599, "id": "n3qqvjl", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3qqvjl/", "post_id": "1m2ddqj", "score": 2, "stickied": false }, { "author": "veryvivek", "awards": 0, "body": "If (very big if) you can move from http to let’s say Kafka. Then you can process all jobs asynchronously and not worry about instant scaling of apps. Just Kafka cluster. It would be huge architecture change but very fast provisioning of nodes will no longer be an issue.", "created_utc": 1752831792, "id": "n3sdsyx", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3sdsyx/", "post_id": "1m2ddqj", "score": 2, "stickied": false }, { "author": "Dependent-Coyote2383", "awards": 0, "body": "how about having a more decoupled ingest system ? a veeeery light streaming api, which can scale up very fast, but is only responsible to post the data, raw, as fast as possible, to a processing queue in kafka ?", "created_utc": 1752838911, "id": "n3ssjvb", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3ssjvb/", "post_id": "1m2ddqj", "score": 2, "stickied": false }, { "author": "Tzctredd", "awards": 0, "body": "Use AI. 😬\n\nI'm half joking here.", "created_utc": 1752858721, "id": "n3ulbs9", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3ulbs9/", "post_id": "1m2ddqj", "score": 2, "stickied": false }, { "author": "DancingBestDoneDrunk", "awards": 0, "body": "Look at AI scaling. By that i mean look at tools that can look at patterns for when scaling should be done up front. Cloudwatch had this feature AFAIK, so it should be easy to trigger this at a regular interval.\n\n\nIts not THE solution to your problem, the thread has already mentioned a few ones (Carpenter, Bottlerocket etc).", "created_utc": 1752861741, "id": "n3uw5tb", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3uw5tb/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "mrtes", "awards": 0, "body": "Short term solution? Use the number of active hooks to scale (expose that as a metric you can consume with HPA) and aggressively rate limit your hooks api to give you enough time to adjust. \n\nYou could even limit the number of concurrent hooks for a specific customer.\n\nNo matter how fast your autoscaling mechanism is, it will always be one against many. \nI would consider this more a design problem that you can then mitigate with some sound technology choices.", "created_utc": 1752910684, "id": "n3yjf96", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3yjf96/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "guibirow", "awards": 0, "body": "You are looking for a technical solution to a business/process problem that could be solved without a technical solution. \n\nLike others mentioned, asking the customer should be the first option, you want to have a great relationship with them. 
I do work with many enterprise customers and they are usually open to these conversations. \nIf you don't let them know about their impact on your solution, your company will be seen as providing a bad service, and the problem will be worse. They might be open to fix it if you provide reasonable alternatives. \n\nWe also have in our contracts a clause stating that they have to notify us in advance when they expect to send spikes of load higher than usual, this will give enough time to prepare and will protect the business in case they use it to justify a breach of SLA. \n\nIf you talk to the customer and they don't cooperate, you should talk to stakeholders internally to discuss the mitigations options. Many business will be just okay to overprovision the clusters and absorb the costs.", "created_utc": 1752922010, "id": "n3z3nha", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3z3nha/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "Diablo-x-", "awards": 0, "body": "Why not schedule the scaling based on peak hours instead ?", "created_utc": 1752926637, "id": "n3zds8r", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3zds8r/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "M3talstorm", "awards": 0, "body": "You can schedule scaling in KEDA, almost like a cron, tell the customer to do their thing at x-y time at night and then scale up to what ever is needed just before those hours.", "created_utc": 1752956786, "id": "n41xund", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n41xund/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "markmsmith", "awards": 0, "body": "Do you HAVE to validate the docs synchronously, as they're being uploaded? \n\n\nYou could potentially sidestep the whole rapid-scaling issue by having the hooks endpoint return a [pre-signed S3 upload url](https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html) that will only let them upload to the specified bucket + key (which could be something like customer id/date/hook id).  The uploads then go straight to S3 (which I'm pretty sure they'd struggle to max out) and you can have the bucket upload event feed to your Kafka queue to notify it's ready for verification and processing. \n\n\nYou then can tune the scaling of the processing pods based on the queue depth, and offer the business much more flexibility to trade-off processing rate vs costs, since it won't be synchronous on the customer's upload path any more.", "created_utc": 1752984210, "id": "n440f7o", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n440f7o/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "dreamszz88", "awards": 0, "body": "Btw I like your setup, good architecture. Nothing wrong with it.\n\nJust iterating a few very useful comments made and then add my own 2 cents:\n* Rate limit, perhaps at the outside using a level 7 WAF. You can add logic there _per customer_\n* Scale on KEDA drop the others, keep it simple\n* Use warm node pools, these startup, boot and then sleep. Much faster when needed\n* Separate them into their own \"tenant\" so they won't interfere with other customers. 
Since it's also traffic, do consider the network impact this one customer will have on the others\n* Since it's batch jobs, these are highly predictable, so you can scale up prior to the customer sending traffic ⭐⭐⭐ you can even arrange this for them, if they object to rate limits, for a fee of course, since they already bypass the normal limits intentionally", "created_utc": 1753020788, "id": "n462zg9", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n462zg9/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "Formal-Pilot-9565", "awards": 0, "body": "Sounds to me like the api is a simple transport api for messages, \nso why not allow the caller to bundle multiple documents (say 100 or more) per api hit to boost the throughput?", "created_utc": 1753112785, "id": "n4cxmfj", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n4cxmfj/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "notospez", "awards": 0, "body": "Do you have an account manager at AWS? If so, see if they can get you in touch with the SaaS Factory team. They will be able to help you with design patterns such as rate limiting, and your management might be more responsive to \"external experts\" saying the same thing you've been telling them.", "created_utc": 1753130066, "id": "n4en8bi", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n4en8bi/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "kellven", "awards": 0, "body": "Sounds like you have a customer problem, not a technology problem.", "created_utc": 1752812425, "id": "n3rd94g", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3rd94g/", "post_id": "1m2ddqj", "score": 0, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "We do have a rate limit (2000 requests per connection) but to bypass that they are creating more than 50 connections concurrently. \n\nAnd since this is the first enterprise client we have onboarded, management is reluctant to ask them to change their methods.", "created_utc": 1752773567, "id": "n3o24sn", "is_submitter": true, "parent_id": "n3o0xj0", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3o24sn/", "post_id": "1m2ddqj", "score": 12, "stickied": false }, { "author": "azjunglist05", "awards": 0, "body": "I totally agree with this. The moment I saw using a cluster auto scaler with ASGs I wondered why they weren’t using Karpenter? It’s so fast at reacting to unscheduled pods. 
It’s hands down the best autoscaler, granted it does require a bit of time to get used to some of its quirks.", "created_utc": 1752815836, "id": "n3rklwr", "is_submitter": false, "parent_id": "n3otog2", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3rklwr/", "post_id": "1m2ddqj", "score": 5, "stickied": false }, { "author": "Smashingeddie", "awards": 0, "body": "10 seconds from node claim to pods scheduling?", "created_utc": 1752798082, "id": "n3qbid5", "is_submitter": false, "parent_id": "n3otog2", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3qbid5/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "Iamwho-", "awards": 0, "body": "Second this solution. It is always a good practice to scale before the traffic hits. If you see req/sec increase on the load balancer you can start scaling rather than waiting for CPU and memory to spike. You can configure it to scale up faster and scale down slower to keep the app going. I had a hard time a long time ago keeping the site going; once the pods fail from heavy traffic it is hard to recover after a point unless all the traffic is disabled for a bit.", "created_utc": 1752915605, "id": "n3ys504", "is_submitter": false, "parent_id": "n3otog2", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3ys504/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "Not yet, will try to look into it.", "created_utc": 1752779915, "id": "n3ooo6h", "is_submitter": true, "parent_id": "n3on63i", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3ooo6h/", "post_id": "1m2ddqj", "score": 2, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "Thank you for this, I was also thinking along the same lines but these changes come under the developer teams' purview. Will surely recommend to management.", "created_utc": 1753024040, "id": "n46d0q7", "is_submitter": true, "parent_id": "n3q4t70", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n46d0q7/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "We are using the prometheus scaler as of now.\n Haven’t tried this, will look into it.", "created_utc": 1752778778, "id": "n3okpee", "is_submitter": true, "parent_id": "n3oae6l", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3okpee/", "post_id": "1m2ddqj", "score": 3, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "Might not be my decision to go to the client. Management is reluctant since this is our first big customer.\n\nAs for the need, this service is basically used to connect the client’s ERP with our logistics and analytics system. Currently the customer is trying to import all of their order and shipment data from netsuite to our data-lake.", "created_utc": 1752780022, "id": "n3op1fc", "is_submitter": true, "parent_id": "n3omg6i", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3op1fc/", "post_id": "1m2ddqj", "score": 4, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "Thank you for your suggestions, we have adopted a lot in the last year. 
We do have PDBs in place, and to prevent over-utilising a pod we are trying to scale up at 7000 req per min while a single pod can handle upwards of 12000 rpm.\n\nAs for the other parts, we recently implemented kafka queues to process these requests and de-coupled the process into two parts: one handles the ingestion and the other one handles the processing. Are there any other points you can suggest to improve this?\n\nHow did you tune HPA to kick in early? \nWhat tool or method did you use to set up pre-scaling? As we are growing we are also seeing patterns with a few other customers whose traffic is hitting every 15 or 30 mins. For now our HPA is able to handle those spikes but we want to be ready for greater spikes.", "created_utc": 1752779325, "id": "n3omma0", "is_submitter": true, "parent_id": "n3oczs0", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3omma0/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "We set up the cluster around 3 years back and have been carrying forward the same configurations. Is there any benefit of using m6-m8 over m5?", "created_utc": 1752778992, "id": "n3olgf0", "is_submitter": true, "parent_id": "n3obc9f", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3olgf0/", "post_id": "1m2ddqj", "score": 3, "stickied": false }, { "author": "dreamszz88", "awards": 0, "body": "These are definitely worth the update! Especially compared to the m5 generation.\n\nAlso, get reserved instances for 70% of your predictable workload, use spot instances where possible and on-demand for the rest to reduce the bill. Getting annual RIs will let you update to newer hardware where it makes sense or enjoy a price benefit where it's needed. Not everything needs to be bleeding edge as long as it completes in time", "created_utc": 1753021091, "id": "n463w7r", "is_submitter": false, "parent_id": "n3obc9f", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n463w7r/", "post_id": "1m2ddqj", "score": 2, "stickied": false }, { "author": "Dr__Pangloss", "awards": 0, "body": "Why are you using such anemic instances?\nDo the documents ever fail validation?", "created_utc": 1752776381, "id": "n3oca3m", "is_submitter": false, "parent_id": "n3obc9f", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3oca3m/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "Yes, we went for a similar solution: instead of synchronously receiving and validating the data, we will receive first and only once the connection is closed will we validate and upload the data.", "created_utc": 1753024218, "id": "n46dl4s", "is_submitter": true, "parent_id": "n3ssjvb", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n46dl4s/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "They can, like I mentioned the api can handle up to 1 MB per request.", "created_utc": 1753113287, "id": "n4czf5e", "is_submitter": true, "parent_id": "n4cxmfj", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n4czf5e/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "iamkiloman", "awards": 0, "body": "So there's no limit on concurrent connections? 
Seems like an oversight.", "created_utc": 1752773652, "id": "n3o2g42", "is_submitter": false, "parent_id": "n3o24sn", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3o2g42/", "post_id": "1m2ddqj", "score": 29, "stickied": false }, { "author": "haywire", "awards": 0, "body": "Shouldn’t you use the enterprise bux to set them up their own cluster that they can spam to high heaven and bill them for the costs of the cluster? Or just have them run their own cluster, then it’s their problem.", "created_utc": 1752776741, "id": "n3odjlx", "is_submitter": false, "parent_id": "n3o24sn", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3odjlx/", "post_id": "1m2ddqj", "score": 8, "stickied": false }, { "author": "DandyPandy", "awards": 0, "body": "That sounds like abusive behavior if they’re circumventing the rate limits. This is a case where I would push back and tell the account team they need to work out a solution with the customer that doesn’t break the system.", "created_utc": 1752783427, "id": "n3p0v2y", "is_submitter": false, "parent_id": "n3o24sn", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3p0v2y/", "post_id": "1m2ddqj", "score": 6, "stickied": false }, { "author": "sionescu", "awards": 0, "body": "> We do have a rate limit (2000 requests per connection) but to by pass that they are creating more than 50 connections concurrently. \n\nYou need to have some sort of customer ID in the request and configure WAF to do global rate limiting, independent of connections.", "created_utc": 1752803720, "id": "n3qr7if", "is_submitter": false, "parent_id": "n3o24sn", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3qr7if/", "post_id": "1m2ddqj", "score": 4, "stickied": false }, { "author": "sogun123", "awards": 0, "body": "If you are unable to rate limit the front channel, think about limit internally. Especially when using Kafka, it should be doable. I imagine queing everything instead of direct reply and starting per customer (or some other partition) workers from keda. Then they can launch whatever they want - if they load too much requests they will wait until it is done. You can also split the api - unlimited frontend for queued batch processing and more limited one for immediate responses.", "created_utc": 1752820842, "id": "n3rucws", "is_submitter": false, "parent_id": "n3o24sn", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3rucws/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "AccomplishedSugar490", "awards": 0, "body": "If they’re doing batch runs they can’t legitimately expect to consume all your bandwidth to minimise how long the batch takes to run. I suggest, first corporate or not, talk to them to temper their expectations or face setting a precedent that will cost you dearly with this corporate and other you hope to bring on board.", "created_utc": 1752826661, "id": "n3s4ux7", "is_submitter": false, "parent_id": "n3o24sn", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3s4ux7/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "nijave", "awards": 0, "body": "\\>And since this is the first enterprise client we have onboarded management is reluctant to ask them to change their methods.\n\nNeed to work with sales regardless to make sure they're pricing the service right. 
Maybe you're okay taking a loss to acquire customers right now but it's also really common for sales to sell things without realizing you're taking a loss because big customers costs aren't linear.\n\nAlso good to know what's in the (sales/deal) pipeline so you can plan ahead and the customer doesn't get a poor initial experience.", "created_utc": 1753195990, "id": "n4ja3ba", "is_submitter": false, "parent_id": "n3o24sn", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n4ja3ba/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "Arkoprabho", "awards": 0, "body": "What have been the quirks that you’ve come across?", "created_utc": 1752848053, "id": "n3tjk68", "is_submitter": false, "parent_id": "n3rklwr", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3tjk68/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "Grand_Musician_1260", "awards": 0, "body": "Probably 10 seconds just to provision a new node before it even joins the cluster or something, 10 seconds to get pods scheduled on the new node would be insane.", "created_utc": 1752808704, "id": "n3r4cwm", "is_submitter": false, "parent_id": "n3qbid5", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3r4cwm/", "post_id": "1m2ddqj", "score": 2, "stickied": false }, { "author": "suddenly_kitties", "awards": 0, "body": "Karpenter with EC2 Fleet instead of CAS and ASGs, Keda's HTTP scaler add-on (faster triggers than via Prometheus), Bottlerocket AMIs for faster boot, a bit more resource overhead (via evictable, low-priority pause pods) and you should be good.", "created_utc": 1752794494, "id": "n3q12eu", "is_submitter": false, "parent_id": "n3ooo6h", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3q12eu/", "post_id": "1m2ddqj", "score": 6, "stickied": false }, { "author": "ok_if_you_say_so", "awards": 0, "body": "Part of your job as a professional engineer is to help instruct the business when what they want isn't technically feasible.\n\nIf they're willing to throw unlimited dollars at it, just never scale down. Or give them their own dedicated cluster. 
But if there is pressure to meet the need without throwing ridiculous sums of money at it, that means a conversation needs to happen and it's the job of engineers to help inform the business about this need", "created_utc": 1752781471, "id": "n3ou0rr", "is_submitter": false, "parent_id": "n3op1fc", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3ou0rr/", "post_id": "1m2ddqj", "score": 8, "stickied": false }, { "author": "james-dev89", "awards": 0, "body": "This is a general guideline, your specific situation may require adjustments or may not be as exact as this.\n\n\n\nWe've setup a cronjob to scale the HPA based on some specific time period, i think this can be useful for you if you know traffic will spike every 15 - 30 mins.\n\nfor example, so you can configure it to run every 12 mins or so.\n\ni think KEDA can do this, not sure\n\n\n\nHow did we scale the HPA to kick in early:\n\nWe used a combination of memory & CPU utilization for scaling up the replica counts.\n\n\n\nOne thing we found was that our application was improperly using too much CPU, we optimized some Javascript functions (this is pretty common in some applications), basically, we reduced the application memory & CPU usage, then we set the the HPA averageUtilization lower.\n\nWe reduced the averageUtilization from 75% to 60%, we did some test on this to determine that as traffic starts growing, at 60% the Pods were able to scale up on time to meet the demand, obviously you don't want this to be too low or too high, this was based on some stress test, so before those Pods reach 100%, we already have more Pods that can handle the traffic.\n\n \nDefinitely look into Karpenter like someone said, that'll help you a lot", "created_utc": 1752781061, "id": "n3osmin", "is_submitter": false, "parent_id": "n3omma0", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3osmin/", "post_id": "1m2ddqj", "score": 3, "stickied": false }, { "author": "burunkul", "awards": 0, "body": "Better performance at the same cost — or even cheaper with Graviton instances.", "created_utc": 1752779251, "id": "n3omd3e", "is_submitter": false, "parent_id": "n3olgf0", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3omd3e/", "post_id": "1m2ddqj", "score": 8, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "We are using the savings plan, and for test/development envs we are using 50-50 reserved and spot instance mixture.", "created_utc": 1753024115, "id": "n46d9ab", "is_submitter": true, "parent_id": "n463w7r", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n46d9ab/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "We are currently doing at 0.7% error rate. Few of the errors can be auto-resolved by our application while others require customers to fix and start a retry.", "created_utc": 1752779629, "id": "n3onofz", "is_submitter": true, "parent_id": "n3oca3m", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3onofz/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "Dependent-Coyote2383", "awards": 0, "body": "I've done that for more than 10 years. an api that only take the data, save on disk (on filesystem for me, upgraded to S3 now), and send a UUID for the job process to the client immediately. 
The data is processed by async workers on the rabbitmq server. If the client wants the status of the processing or the processed data back, it can ask for it whenever it wants, with the initial uuid of the task.\n\nWith that, you can scale up in seconds (in your case when the client first POSTs to the hook). In the meantime, while the client makes the second batch of requests, the pods are ready.", "created_utc": 1753089715, "id": "n4b86rk", "is_submitter": false, "parent_id": "n46dl4s", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n4b86rk/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "Formal-Pilot-9565", "awards": 0, "body": "ok.\nI think you might be better off switching to SFTP for the transport.\nEither you could pull from the customer's site or expose an SFTP per customer, for them to upload to.", "created_utc": 1753114089, "id": "n4d284e", "is_submitter": false, "parent_id": "n4czf5e", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n4d284e/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "yup! since most of our existing customers were using 5-6 concurrent connections at max, we never put a limit on that.", "created_utc": 1752773711, "id": "n3o2nyo", "is_submitter": true, "parent_id": "n3o2g42", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3o2nyo/", "post_id": "1m2ddqj", "score": 5, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "Since this is one of our first clients of this size we haven’t yet looked into provisioning private clouds for customers.\n\nBut thank you for the idea, will try to put it up with my management.", "created_utc": 1752778851, "id": "n3okylk", "is_submitter": true, "parent_id": "n3odjlx", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3okylk/", "post_id": "1m2ddqj", "score": 5, "stickied": false }, { "author": "azjunglist05", "awards": 0, "body": "- If the CRD for NodePools gets deleted or pruned during an upgrade while the controller is up, the controller interprets that as there no longer being any nodepools, so it immediately starts removing nodes 🙃\n\n- How disruption budgets work takes a bit of time to tweak so that you’re ensuring that there are always enough nodes during peak business hours\n\n- Ensuring one node always remains for a given pool requires that you deploy a dummy pod to it so Karpenter doesn’t reconcile it as empty or underutilized", "created_utc": 1752848806, "id": "n3tm5an", "is_submitter": false, "parent_id": "n3tjk68", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3tm5an/", "post_id": "1m2ddqj", "score": 2, "stickied": false }, { "author": "dreamszz88", "awards": 0, "body": "Alright that's great. 
I would try to, when you need to renew, switch that around:\n* RIs in prod for 70% of the workload that is predictable \n* Savings plan for the remainder because you know what you'll use, just not when\n* Savings plan for dev/test because of the freedom to try different instance types\n* Spot for everything else unless it's stateful", "created_utc": 1753026372, "id": "n46kmzx", "is_submitter": false, "parent_id": "n46d9ab", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n46kmzx/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "delusional-engineer", "awards": 0, "body": "We have this option as well, but not every customer supports this, that’s why we have other connectors like http, agent upload etc.", "created_utc": 1753116538, "id": "n4dawyq", "is_submitter": true, "parent_id": "n4d284e", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n4dawyq/", "post_id": "1m2ddqj", "score": 1, "stickied": false }, { "author": "[deleted]", "awards": 0, "body": "Classic noisy neighbor. You just slow them down.\n\nEnvoy can be configured to limit the number of new conns per event loop and also the number of requests before a connection is terminated.\n\nThere's a plethora of other options, but at the end of the day your customer-facing folks need to be forward about the fact that they aren't paying enough money to keep infra live to occasionally be thrashed by one customer", "created_utc": 1752809885, "id": "n3r79r3", "is_submitter": false, "parent_id": "n3o2nyo", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3r79r3/", "post_id": "1m2ddqj", "score": 10, "stickied": false }, { "author": "sionescu", "awards": 0, "body": "You always need to do per-customer rate limiting, if only because a poorly configured client can easily DOS a service by retrying too quickly (and creating a new connection each time). The classical case of that is running curl in a loop.", "created_utc": 1752803830, "id": "n3qrih3", "is_submitter": false, "parent_id": "n3okylk", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3qrih3/", "post_id": "1m2ddqj", "score": 3, "stickied": false }, { "author": "Arkoprabho", "awards": 0, "body": "1. I hope that's an edge case and not something you've had to deal with on every upgrade.\n\n2. Will PDBs and topology spread constraints help with this?\n\n3. Yeah. I have been trying to find a way to specify minimum CPU/memory specs. Similar to the limit spec. To keep a node warm.", "created_utc": 1752877921, "id": "n3wdg5r", "is_submitter": false, "parent_id": "n3tm5an", "permalink": "/r/kubernetes/comments/1m2ddqj/scaling_service_to_handle_20x_capacity_within/n3wdg5r/", "post_id": "1m2ddqj", "score": 1, "stickied": false } ]
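Several comments in the thread above point at the same combination: keep the Prometheus-based KEDA trigger the OP already uses for reactive scaling, and add a cron trigger to pre-scale ahead of a known batch window. Below is a minimal sketch of such a ScaledObject; the Deployment name, schedule, query and thresholds are invented placeholders, not the OP's actual configuration.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: ingest-api-scaler
spec:
  scaleTargetRef:
    name: ingest-api                 # hypothetical Deployment that serves the hook traffic
  minReplicaCount: 4
  maxReplicaCount: 60
  triggers:
    - type: cron                     # hold a higher replica floor during the agreed batch window
      metadata:
        timezone: Etc/UTC
        start: "45 1 * * *"
        end: "0 4 * * *"
        desiredReplicas: "40"
    - type: prometheus               # reactive scaling on request rate the rest of the time
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090
        query: sum(rate(http_requests_total{service="ingest-api"}[2m])) * 60
        threshold: "7000"            # target requests per minute per replica, per the numbers in the thread
```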
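The "warm node pools" and "evictable, low-priority pause pods" ideas raised above are commonly implemented as a placeholder Deployment with a low PriorityClass: the pause pods hold spare capacity (forcing Karpenter or the cluster autoscaler to keep nodes ready), and they are preempted the moment real pods need the room. A rough sketch, with made-up sizing:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -10                            # below every real workload, so these pods are evicted first
globalDefault: false
description: Placeholder pods that keep warm capacity in the cluster.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capacity-reservation          # hypothetical name; replicas and requests are examples only
spec:
  replicas: 5
  selector:
    matchLabels:
      app: capacity-reservation
  template:
    metadata:
      labels:
        app: capacity-reservation
    spec:
      priorityClassName: overprovisioning
      terminationGracePeriodSeconds: 0
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "1"
              memory: 2Gi
```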
61
1m29jlh
Anemos – Open source, single binary CLI tool to manage Kubernetes manifests using JavaScript and TypeScript
Hello Reddit, I am Yusuf from [Ohayocorp](https://ohayocorp.com). I have been developing a package manager for Kubernetes and I am excited to share it with you all. Currently, the go-to package manager for Kubernetes is Helm. Helm has many shortcomings and people have been looking for alternatives for a long time. There are actually several alternatives that have emerged, but none has gained significant traction to replace Helm. So, you might ask what makes Anemos different? Anemos uses JavaScript/TypeScript to define and manage your Kubernetes manifests. It is a single-binary tool that is written in Go and uses the Goja runtime (its Sobek fork, to be pedantic) to execute JavaScript/TypeScript code. It supports templating via JavaScript template literals. It also allows you to use an object-oriented approach for type safety and a better IDE experience. As a third option, it provides APIs for direct YAML node manipulation. You can mix and match these approaches in any way you like. Anemos allows you to define manifests for all your applications in a single project. You can also easily manage different environments like development, staging, and production in the same project. This brings centralized configuration management and makes it easier to maintain consistency across applications and environments. Another key feature of Anemos is its ability to modify generated manifests, whether they are generated by your own code or by third-party packages. No need to wait for maintainers to add a feature or fix a bug. It also allows you to modify and inspect your manifests in bulk, such as adding some labels to all your manifests, replacing your ingresses with OpenShift routes, or raising an error if a workload is missing a security context field. Anemos also provides an easy way to use Helm charts in your projects, allowing you to leverage your existing charts while still benefiting from Anemos's features. You can migrate your Helm charts to Anemos at your own pace, without rewriting everything from scratch in one go. What Anemos currently lacks to be a complete solution is applying the manifests to a Kubernetes cluster. I have this on my roadmap and plan to implement it soon. I would appreciate any feedback, suggestions, or contributions from the community to help make Anemos better.
15
0.89
4
1,752,763,749
NotAnAverageMan
/r/kubernetes/comments/1m29jlh/anemos_open_source_single_binary_cli_tool_to/
https://github.com/ohayocorp/anemos
false
github.com
null
kubernetes
false
false
false
0
2025-09-26T10:56:35.186213
[ { "author": "robinvanderknaap", "awards": 0, "body": "This looks a lot like [cdk8s.io](https://cdk8s.io/).", "created_utc": 1752998323, "id": "n44ry0k", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m29jlh/anemos_open_source_single_binary_cli_tool_to/n44ry0k/", "post_id": "1m29jlh", "score": 2, "stickied": false }, { "author": "NotAnAverageMan", "awards": 0, "body": "Yes, they both have a similar approach to the problem, but differ in a few important ways.\n\n* Anemos doesn't require a dev environment setup. It is a single binary that you can download and use.\n* cdk8s supports multiple languages and I think that this makes it hard to create a cohesive package sharing mechanism. I don't know an easy way to use a package written in Python to be reused in Go.\n* cdk8s is much too object oriented. It requires to use data structures everywhere, while using templates is easier and more readable in many cases. Anemos is more YAML oriented and it supports templating, object oriented, and YAML node based approaches.", "created_utc": 1753040533, "id": "n47venf", "is_submitter": true, "parent_id": "n44ry0k", "permalink": "/r/kubernetes/comments/1m29jlh/anemos_open_source_single_binary_cli_tool_to/n47venf/", "post_id": "1m29jlh", "score": 2, "stickied": false }, { "author": "robinvanderknaap", "awards": 0, "body": "Thanx for your answer. I'm currently investigating if cdk8s could be a good fit for us. I will checkout anemos as well, looks interesting.", "created_utc": 1753370475, "id": "n4wxwso", "is_submitter": false, "parent_id": "n47venf", "permalink": "/r/kubernetes/comments/1m29jlh/anemos_open_source_single_binary_cli_tool_to/n4wxwso/", "post_id": "1m29jlh", "score": 2, "stickied": false }, { "author": "NotAnAverageMan", "awards": 0, "body": "Thanks! Happy to help if you need anything.", "created_utc": 1753433926, "id": "n51yvl5", "is_submitter": true, "parent_id": "n4wxwso", "permalink": "/r/kubernetes/comments/1m29jlh/anemos_open_source_single_binary_cli_tool_to/n51yvl5/", "post_id": "1m29jlh", "score": 1, "stickied": false } ]
4
1m28kww
Upcoming changes to the Bitnami catalog. Broadcom introduces Bitnami Secure Images for production-ready containerized applications
41
0.97
28
1,752,761,452
Medical_Principle836
/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/
https://news.broadcom.com/app-dev/broadcom-introduces-bitnami-secure-images-for-production-ready-containerized-applications
false
news.broadcom.com
null
kubernetes
false
false
false
0
2025-09-26T10:56:36.333316
[ { "author": "trepz", "awards": 0, "body": "Oh thank you Broadcom for doing things like these right in the middle of summer, expecting companies to panick and subscribe when they count the amount of ImagePullBackOff they'll get.\n\nNow it's a urgent ticket for my team to: 1) advice developers to not rely on bitnami images anymore 2) new kyverno rules 3) find alternatives to the one we currently have deployed\n\nThank you thank you thank you", "created_utc": 1752837970, "id": "n3sqbz8", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n3sqbz8/", "post_id": "1m28kww", "score": 22, "stickied": false }, { "author": "SomethingAboutUsers", "awards": 0, "body": "Wait Broadcom runs Bitnami now? How did I not know that? Damn.", "created_utc": 1752773319, "id": "n3o17j0", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n3o17j0/", "post_id": "1m28kww", "score": 18, "stickied": false }, { "author": "bvierra", "awards": 0, "body": "in b4 Bitnami goes closed source and is only offered to the top 1% of the userbase at a 5000% markup", "created_utc": 1752771349, "id": "n3ntwaf", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n3ntwaf/", "post_id": "1m28kww", "score": 17, "stickied": false }, { "author": "Eisbaer811", "awards": 0, "body": "Post title is misleading. \nShould read \"Bitnami paywalls all images and helm charts\"\n\nI for one don't want to work with only the main branch and the \"latest\" image tag. \n \nDespicable actions from Broadcom, enshittifying one more product", "created_utc": 1753086371, "id": "n4b2jd8", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n4b2jd8/", "post_id": "1m28kww", "score": 10, "stickied": false }, { "author": "ABotelho23", "awards": 0, "body": "Stay the hell away from Bitnami images.", "created_utc": 1752982847, "id": "n43x9s6", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n43x9s6/", "post_id": "1m28kww", "score": 6, "stickied": false }, { "author": "srekkas", "awards": 0, "body": "Yeah, does they do not benefit from all users and developers as they contribute?\n\nStupid B\\*\\*\\*cum", "created_utc": 1753104819, "id": "n4c72xn", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n4c72xn/", "post_id": "1m28kww", "score": 3, "stickied": false }, { "author": "Numblesix", "awards": 0, "body": "Greeeeeeat, is there any alternative to Sealed? \nSure you can use eso but that’s not as simple as deploying a sealed secrets controller :(", "created_utc": 1752878860, "id": "n3wg5sm", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n3wg5sm/", "post_id": "1m28kww", "score": 2, "stickied": false }, { "author": "Double_Intention_641", "awards": 0, "body": "Just stumbled across this today, their helm charts now link to https://github.com/bitnami/containers/issues/83267\n\nPretty rotten. 
Definitely time to look at alternatives.", "created_utc": 1753456368, "id": "n53lg5r", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n53lg5r/", "post_id": "1m28kww", "score": 2, "stickied": false }, { "author": "whataboutplants", "awards": 0, "body": "Any recommendations for replacements of bitnami/php-fpm?", "created_utc": 1753702709, "id": "n5lb6tw", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n5lb6tw/", "post_id": "1m28kww", "score": 2, "stickied": false }, { "author": "kamikazer", "awards": 0, "body": "any alternative to rabbitmq-cluster-operator?", "created_utc": 1753784852, "id": "n5roomg", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n5roomg/", "post_id": "1m28kww", "score": 2, "stickied": false }, { "author": "SeraphBlade2010", "awards": 0, "body": "Will sealed secrets be affected? and what about the helm charts?", "created_utc": 1753795814, "id": "n5sg7so", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n5sg7so/", "post_id": "1m28kww", "score": 1, "stickied": false }, { "author": "Impressive_Maize436", "awards": 0, "body": "if you’re looking for drop-in Bitnami replacements, RapidFort’s curated near-zero CVE images are an option! [https://www.rapidfort.com/blog/bitnami-goes-behind-paywall-rapidforts-curated-near-zero-cve-images-offer-superior-alternative](https://www.rapidfort.com/blog/bitnami-goes-behind-paywall-rapidforts-curated-near-zero-cve-images-offer-superior-alternative)", "created_utc": 1757568531, "id": "ndl3xlc", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/ndl3xlc/", "post_id": "1m28kww", "score": 1, "stickied": false }, { "author": "trepz", "awards": 0, "body": "I just wanted to add that we reached out to their sales dept and the quote for us was 62k/year. Amazing.", "created_utc": 1753280170, "id": "n4ptd4n", "is_submitter": false, "parent_id": "n3sqbz8", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n4ptd4n/", "post_id": "1m28kww", "score": 4, "stickied": false }, { "author": "spooge_mcnubbins", "awards": 0, "body": "No kidding! This is bullshit. Thankfully, I've already moved away from Sealed Secrets, but my MariaDB Galera cluster is built around Bitnami. 
That's going to be a pain to move away from.", "created_utc": 1752849617, "id": "n3tp053", "is_submitter": false, "parent_id": "n3sqbz8", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n3tp053/", "post_id": "1m28kww", "score": 4, "stickied": false }, { "author": "DrButttt", "awards": 0, "body": "Vmware bought them in 2019 and then broadcom bought vmware.", "created_utc": 1752821410, "id": "n3rvfdg", "is_submitter": false, "parent_id": "n3o17j0", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n3rvfdg/", "post_id": "1m28kww", "score": 10, "stickied": false }, { "author": "rmslashusr", "awards": 0, "body": "That’s almost exactly what’s happening here", "created_utc": 1752862318, "id": "n3uy5cv", "is_submitter": false, "parent_id": "n3ntwaf", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n3uy5cv/", "post_id": "1m28kww", "score": 9, "stickied": false }, { "author": "onedr0p", "awards": 0, "body": "https://github.com/isindir/sops-secrets-operator", "created_utc": 1752924164, "id": "n3z84gg", "is_submitter": false, "parent_id": "n3wg5sm", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n3z84gg/", "post_id": "1m28kww", "score": 2, "stickied": false }, { "author": "eazylaykzy", "awards": 0, "body": "And Redis in HA (sentinel mode). \n\nThank you", "created_utc": 1753795487, "id": "n5sf7iq", "is_submitter": false, "parent_id": "n5roomg", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n5sf7iq/", "post_id": "1m28kww", "score": 1, "stickied": false }, { "author": "eazylaykzy", "awards": 0, "body": "RabbitMQ is own by VMWare 🤦🏽‍♂️", "created_utc": 1753795809, "id": "n5sg7a7", "is_submitter": false, "parent_id": "n5roomg", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n5sg7a7/", "post_id": "1m28kww", "score": 1, "stickied": false }, { "author": "shkarface", "awards": 0, "body": "Sealed secrets and minideb won't be affected by this", "created_utc": 1753796415, "id": "n5si20t", "is_submitter": false, "parent_id": "n5sg7so", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n5si20t/", "post_id": "1m28kww", "score": 1, "stickied": false }, { "author": "mompelz", "awards": 0, "body": "Recreate a cluster with mariadb-operator, that's anyway more reliable.", "created_utc": 1752963201, "id": "n42h9f3", "is_submitter": false, "parent_id": "n3tp053", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n42h9f3/", "post_id": "1m28kww", "score": 3, "stickied": false }, { "author": "SomethingAboutUsers", "awards": 0, "body": "I think I knew both of those things separately but for some dumbass reason didn't put them together in this context.\n\nIamnotasmartman.jpg", "created_utc": 1752843157, "id": "n3t3yr5", "is_submitter": false, "parent_id": "n3rvfdg", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n3t3yr5/", "post_id": "1m28kww", "score": 7, "stickied": false }, { "author": "SeraphBlade2010", "awards": 0, "body": "any source I can check this on?", "created_utc": 1753796510, "id": "n5sicud", "is_submitter": false, "parent_id": "n5si20t", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n5sicud/", "post_id": "1m28kww", "score": 2, "stickied": false }, { "author": 
"mmontes11", "awards": 0, "body": "Indeed. Here the required steps:\n\nhttps://github.com/mariadb-operator/mariadb-operator/blob/main/docs/logical_backup.md#migrating-an-external-mariadb-to-a-mariadb-running-in-kubernetes", "created_utc": 1753033116, "id": "n4777zd", "is_submitter": false, "parent_id": "n42h9f3", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n4777zd/", "post_id": "1m28kww", "score": 3, "stickied": false }, { "author": "dogukanarkan", "awards": 0, "body": "You can see at the bottom. Search \\`sealed\\` in page. \n[https://github.com/bitnami/containers/issues/83267](https://github.com/bitnami/containers/issues/83267)", "created_utc": 1753923686, "id": "n6318f5", "is_submitter": false, "parent_id": "n5sicud", "permalink": "/r/kubernetes/comments/1m28kww/upcoming_changes_to_the_bitnami_catalog_broadcom/n6318f5/", "post_id": "1m28kww", "score": 1, "stickied": false } ]
25
1m28d6c
Looking for deployment tool to deploy helm charts
I am part of a team working out the deployment toolchain for our inhouse software. There are several products, each of which will be running as a collection of microservices in kubernetes. So in the end, there will be many kubernetes clusters, running tons of microservices. Each microservice's artifacts are uploaded as docker images + helm charts to a central artifact storage (Sonatype Nexus) and will be deployed from there. I am tasked with the design of a deployment pattern which allows non-developers to deploy our software, in a convenient and flexible way. It will \_most likely\_ boil down to not using CLI tools, but some kind of browser based HMI, depending on what is available on the market, and what can/must be implemented by us, which pretty much limits the possibilities unfortunately. Now I am curious what existing tools there are, which cover my needs, as I feel that I can't be the first one trying to offer enterprise-level easy-to-use deployment tools. I already checked for example [https://landscape.cncf.io/](https://landscape.cncf.io/), but upon a first glance, no tool satisfies my needs. What I need, in a nutshell: * deploy all helm charts (= microservices) of a product together * each helm chart must have the correct version, so some kind of bundling must be used (e.g what umbrella charts/helmsman/helmfile do) * it must be possible to start/stop/restart individual microservices also, either by scaling down/up replicas, or uninstalling/redeploying them * it must be possible to restart all microservices (can be a loop of the previous requirement) All of this in the most user friendly way, if possible, with some kind of HMI, which in the best case also provides a REST API to trigger actions so it can be integrated into legacy tools we already use / must use. We can't go the CI/CD route, as we have a decoupled development and deployment processes because of legal reasons. We can't use gitlab pipelines or GitOps to do the job for us. We need to manually trigger deployments after the software has passed large scale acceptance tests by different departments in the company. So basically the workflow would be like: 1. development team uploads all microservices to the Nexus artifact storage 2. development team generates some kind of manifest, containing all services and their corresponding versions, e.g. a helmsman file, umbrella chart, custom YAML, whatever. the manifest also transports the current product release version, either as filename, or contained in the file (e.g. my-product-v1.3.5) 3. development team signals that "my-product-v1.3.5" can now be installed and provides the manifest (e.g. also upload to Nexus) 4. operational team uses tool X to install "my-product-v1.3.5", by downloading the manifest, feeding it into tool X, which in turn does \_n\_ times \`helm install service-n --version \[version of service n contained in manifest\]\` 5. software is successfully deployed In addition, stop/start/restart must be possible, but this will probably be really easy to achieve, since most tools seem to cover this. I am aware that it is not recommended practice to deploy all microservices of a microservices application at once (= deployment monolith). However this is one of my current constraints I can't neglect, but some time in the future, microservices will be deployed individually. Does a tool exist which covers the above functionality? Otherwise it would be rather simple to implement something on our own, e.g. 
by implementing a golang service which contains a webserver + HMI, and uses the helm go library + k8s go library to perform actions on the cluster. However, I would like to avoid reinventing the wheel, and I would like to keep the custom development effort low, because I favour standard tools which already exist. So how do enterprises deploy to kubernetes nowadays, if they can't use GitOps/CI/CD and don't want to use the CLI to deploy helm charts? Does this use case even exist, or are we in a niche where no solution already exists? Thanks in advance for your thoughts, ideas & comments.
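For the version manifest described in steps 2-4 above, a helmfile-style release file is one concrete shape it could take: the development team pins every chart to the version shipped with the product release, and the operations side applies (or rolls back) the whole set in one command, which is easy to wrap behind a small web front end or REST trigger. The repository URL, service names and versions below are invented placeholders.

```yaml
# helmfile.yaml for a hypothetical "my-product-v1.3.5" release bundle
repositories:
  - name: internal
    url: https://nexus.example.com/repository/helm-hosted   # placeholder Nexus Helm repository

releases:
  - name: orders-service
    namespace: my-product
    chart: internal/orders-service
    version: 1.3.5                                          # pinned per-service chart version
  - name: billing-service
    namespace: my-product
    chart: internal/billing-service
    version: 2.0.1
```

Applying the bundle is then a single `helmfile apply`, and stopping or restarting an individual microservice can stay a plain scale-down/scale-up of that service's Deployment.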
2
0.67
36
1,752,760,939
s71011
/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/
https://www.reddit.com/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/
true
self.kubernetes
null
kubernetes
false
false
false
0
2025-09-26T10:56:37.751632
[ { "author": "dacydergoth", "awards": 0, "body": "ArgoCD + App of Apps pattern and ApplicationSets\n\nBut as you pointed out, you're doing it wrong", "created_utc": 1752761590, "id": "n3muv5p", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3muv5p/", "post_id": "1m28d6c", "score": 14, "stickied": false }, { "author": "MoTTTToM", "awards": 0, "body": "I was going to suggest GitOps until reading your second last paragraph. Which aspect of GitOps rules this out as an option?", "created_utc": 1752761562, "id": "n3muron", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3muron/", "post_id": "1m28d6c", "score": 7, "stickied": false }, { "author": "SiurbliuMeistrs", "awards": 0, "body": "I guess Rancher could be used as a GUI to deploy apps (those are Helm charts usually) and has proper RBAC. Or use code executor like Rundeck to present options, dropdowns, targets etc to execute any code including k8s commands which also has good RBAC and Git versioning to make its job definitions IaC.", "created_utc": 1752776669, "id": "n3odajq", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3odajq/", "post_id": "1m28d6c", "score": 3, "stickied": false }, { "author": "[deleted]", "awards": 0, "body": "[removed]", "created_utc": 1752783147, "id": "n3ozvgy", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3ozvgy/", "post_id": "1m28d6c", "score": 3, "stickied": false }, { "author": "GeorgeRaven", "awards": 0, "body": "I ... I'm ... I'm sorry.\n\nThis sounds like hell, it also sounds like some decision makers are living in a different universe to the rest of us.\n\nIf you need a non-technical button to deploy apps, that's impossible, unless they come pre-tested, configured, and are bulletproof. Otherwise they will require someone who knows what they are doing to make some form of change to make them work or fix bugs that the helm chart creators etc (or whatever packaging method) did not ordain.\n\nThe best bet is something like backstage to get a non-techie some web-based template to fill out which automated the process of creating a pr to a git repo. Then have that repo gitops like normal, no complex custom code needed to deploy charts etc when those tools already exist. \n\nYou will need a catalogue ready-made of things that are installable for them to pick from. Honestly even that is nightmare but it sounds like what is going on here.\n\nIf it's too sensitive for public saas git hosting, then host that too. I can't imagine doing kubernetes without gitops. That is a disaster waiting to happen, it's already complex enough. If you ABSOLUTELY MUST raw dog it, god speed, make sure to take plenty of k8s etcd and volume backups.\n\nIdeally deployment would happen by specialists, who gitops everything and know what they are doing. 
Expecting anything in k8s to be a button to deploy is just pure fantasy without ungodly resources to test every permutation of everything, and then some of the disaster scenarios.", "created_utc": 1752783445, "id": "n3p0xcc", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3p0xcc/", "post_id": "1m28d6c", "score": 3, "stickied": false }, { "author": "rumblpak", "awards": 0, "body": "While I fail to see how this can’t be done with renovatebot + fluxcd, it sounds like what you want is spinnaker. It’s the management approved solution for what you’re describing. It’s awful and you’ll hate your life. Good luck. ", "created_utc": 1752788214, "id": "n3phgt4", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3phgt4/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "Apochotodorus", "awards": 0, "body": "If I’m not mistaken, what you're describing sounds a lot like an [engineering platform](https://platformengineering.org/).\nFor the frontend, you might want to check out tools like [backstage](https://backstage.io/) or [port](https://www.port.io/) — they provide a user-friendly interface for developers to interact with infrastructure and deployment workflows.\nFor the backend — especially for orchestrating the deployment of your Helm charts — tools like [orbits](https://orbits.do) (disclaimer: I work there) or [kratix](https://kratix.io) can help. These platforms let you define the logic behind deployments, write ordered deployments, handle version synchronization across clusters, and automate security patch rollouts.\nThis kind of setup gives you a clear separation between the frontend (where teams trigger deployments) and the backend (which manages how and when things actually get deployed).", "created_utc": 1752827864, "id": "n3s6yen", "is_submitter": false, "parent_id": null, "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3s6yen/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "vantasmer", "awards": 0, "body": "Read that whole post to conclude the same thing. Properly set up Argo (or akuity) with app of apps is all this admin needs. 
", "created_utc": 1752785757, "id": "n3p953d", "is_submitter": false, "parent_id": "n3muv5p", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3p953d/", "post_id": "1m28d6c", "score": 2, "stickied": false }, { "author": "BortLReynolds", "awards": 0, "body": "Yeah I don't get that either, nothing about having decoupled deployments says you can't still use GitOps.", "created_utc": 1752764343, "id": "n3n4ncg", "is_submitter": false, "parent_id": "n3muron", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3n4ncg/", "post_id": "1m28d6c", "score": 7, "stickied": false }, { "author": "CircularCircumstance", "awards": 0, "body": "Maybe just needs to insert into that workflow an [Operator](https://sdk.operatorframework.io/docs/building-operators/helm/tutorial/) to CRUD the Helm charts?", "created_utc": 1752764447, "id": "n3n50wy", "is_submitter": false, "parent_id": "n3muron", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3n50wy/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "s71011", "awards": 0, "body": "The operators will not be able to use Git in the first place, unfortunately. We‘re unfortunately on the level of“i need a button i can press to install the software“. This is outside of my control.", "created_utc": 1752769237, "id": "n3nm89e", "is_submitter": true, "parent_id": "n3muron", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3nm89e/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "s71011", "awards": 0, "body": "Thanks, will check it out!", "created_utc": 1752782756, "id": "n3oyib4", "is_submitter": true, "parent_id": "n3odajq", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3oyib4/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "UndercoverRowbot", "awards": 0, "body": "Came here to say use Rundeck - I have a massive response typed out but it won't let me post it. \nI'll try again later but Rundeck is a great way to abstract teams like you are trying to do", "created_utc": 1752844321, "id": "n3t7hln", "is_submitter": false, "parent_id": "n3odajq", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3t7hln/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "s71011", "awards": 0, "body": "Thanks! Will check it out!", "created_utc": 1752822511, "id": "n3rxh1c", "is_submitter": true, "parent_id": "n3ozvgy", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3rxh1c/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "s71011", "awards": 0, "body": "Thanks!", "created_utc": 1752822714, "id": "n3rxu56", "is_submitter": true, "parent_id": "n3p0xcc", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3rxu56/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "s71011", "awards": 0, "body": "Also see my other comment: [https://www.reddit.com/r/kubernetes/comments/1m28d6c/comment/n3sbezz/?utm\\_source=share&utm\\_medium=web3x&utm\\_name=web3xcss&utm\\_term=1&utm\\_content=share\\_button](https://www.reddit.com/r/kubernetes/comments/1m28d6c/comment/n3sbezz/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)\n\nYou said \"pre-tested\", \"configured\", and \"bulletproof\". 
This is exactly what we are doing, in the end even for legal reasons. There are whole departments in the company responsible for acceptance-testing the software, fulfilling all legal requirements, and ensuring exactly those \"pre-tested\", \"configured\" and \"bulletproof\" conditions. We are not the average web-shop which sells shoes online.", "created_utc": 1752830628, "id": "n3sbrfe", "is_submitter": true, "parent_id": "n3p0xcc", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3sbrfe/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "s71011", "awards": 0, "body": "What would the workflow look like when using renovate and flux?", "created_utc": 1752822562, "id": "n3rxkae", "is_submitter": true, "parent_id": "n3phgt4", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3rxkae/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "s71011", "awards": 0, "body": "Thanks!", "created_utc": 1752830643, "id": "n3sbsff", "is_submitter": true, "parent_id": "n3s6yen", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3sbsff/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "s71011", "awards": 0, "body": "If I am not totally mistaken, the only difference between GitOps and a manual helm install is, in the end, the trigger which leads to a deployment. In the former, you push something to a git repo (in the case of ArgoCD, the updated Application/ApplicationSet YAML manifest containing the new version), and then ArgoCD takes care of deploying the changes, while the latter does the same, but triggered by a manual \\`helm install\\` or similar means through other tools. \nSo really, the advantage of GitOps lies in the full CI/CD workflow, where everything is automated from commit to deploy.\n\nBut as I pointed out, we can't do this. We're in a high-availability environment of critical state (state as in country, not status) services/infrastructure, and nothing may be done automatically; everything must happen explicitly and manually, with easy-to-use means of rolling back and troubleshooting. Furthermore, as pointed out, triggering releases through git commits is just not viable, as the operators will not be able to use git in the first place.\n\nI am already using ArgoCD in other places, and I am really happy with it. However, for the use case I have described, I don't see any real benefits of using GitOps. Rather, I would have to tweak and work around a lot to make GitOps work in this scenario, which is all cons in my book.\n\nFor example, there is no easy way for an operator to roll back the software to the previous version if using ArgoCD. They would need to know how to either edit the ApplicationSet manifest in the cluster, or have access to git and know how to operate git, both of which will not be the case.\n\nIf I were, e.g., using umbrella charts, the whole software including microservices could be installed with 1 command, and rolled back with 1 command. 
So really what it comes down to is providing a nice UI, which supports this by offering a list of available versions, and allowing the operator to do the \\`helm install\\` in a user-friendly, non-CLI way.\n\nI am curious how you came to the conclusion that I am \"doing it wrong\", given all the constraints (HA critical infrastructure, self-hosted datacenter locked out from the internet, required controlled rollout of things and not using automated deployments, ...) I have pointed out. \nAnd I mean that as a question, as I am clearly looking for a \"still-proper way\" of doing things, knowing that I can't take the usual \"best-practice\" approach your average web-shop could, leveraging all the standard tools you guys already mentioned.", "created_utc": 1752830428, "id": "n3sbezz", "is_submitter": true, "parent_id": "n3p953d", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3sbezz/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "Horror_Description87", "awards": 0, "body": "Fluxcd?", "created_utc": 1752766733, "id": "n3nd8gs", "is_submitter": false, "parent_id": "n3n50wy", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3nd8gs/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "rockettmann", "awards": 0, "body": "What about ArgoCD using manual sync + app of apps?", "created_utc": 1752806155, "id": "n3qxte2", "is_submitter": false, "parent_id": "n3nm89e", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3qxte2/", "post_id": "1m28d6c", "score": 5, "stickied": false }, { "author": "MoTTTToM", "awards": 0, "body": "A tool that meets your spec, with slight adjustments, would be cool. What the operator needs is a button to press. If everyone else is comfortable with git processes, then we merge the new/changed manifest to the production branch/repo (with the appropriate reviews and approvals) and it becomes available for deployment (instead of uploading the manifests to Nexus in step 3). The operator logs into the tool GUI, which discovers the new or changed artefact, and associated change management info. Then, during the approved change window, the operator could trigger the appropriate kustomization generation and commit of changed manifests by checking off the required deployables and pressing the deploy button. Buttons for undeploy and rollback would also be required. I can imagine how this would be possible on top of flux, probably ArgoCD.", "created_utc": 1752850401, "id": "n3trs8g", "is_submitter": false, "parent_id": "n3nm89e", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3trs8g/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "ciacco22", "awards": 0, "body": "What if I were to tell you “merge” is a button? 😄", "created_utc": 1752987871, "id": "n448d74", "is_submitter": false, "parent_id": "n3nm89e", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n448d74/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "UndercoverRowbot", "awards": 0, "body": "We've been using it for years, long before Kubernetes was a thing, to deploy docker containers to multiple environments. We essentially have a DevOps Rundeck which houses all the devops jobs - these jobs are used to create environments and then deploy a rundeck to each environment. \n\nThe actual jobs are all YAML that sits in a git repo. 
When deploying the Rundeck we generate the jobs it requires (specific to the environment's variables) and use the Rundeck API to load all the jobs from the git repo. \n\nRundeck jobs are incredibly flexible and in our environment it boils down to simply executing a shell script in the environment to create the docker container. Since this is all templated we can generate hundreds of jobs in seconds by feeding in a few parameters. Each Container/service has a create, delete, stop, start, and restart job. \n \nWe've now started using it in Kubernetes and you can follow the same pattern by generating jobs for helm deploy. \n\n1/n", "created_utc": 1752844410, "id": "n3t7rhb", "is_submitter": false, "parent_id": "n3t7hln", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3t7rhb/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "rumblpak", "awards": 0, "body": "In a nutshell, renovate detects when a new release is available and creates a pull request in github (or insert your git of choice). You then place controls requiring approvers to merge the pull request, and once it is merged, let flux handle the installation to the cluster. It sounds difficult but it can be set up in an afternoon and requires basically 0 maintenance. ", "created_utc": 1752840443, "id": "n3swezw", "is_submitter": false, "parent_id": "n3rxkae", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3swezw/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "vantasmer", "awards": 0, "body": "Argo literally has a UI where you can input the helm params and values. It also has a built-in rollback mechanism. \n\nIt shows you each resource that is deployed as part of the application and can scale deployments/sts directly from there as well as view logs and exec into pods.\n\nI’m not saying you’re wrong or that your infrastructure isn’t complicated. Just don’t blame the tool. It sounds like you want to build a new tool so by all means, do that. \nNotice how no one mentioned GitOps in this specific thread yet you assume that’s the only way to use Argo. ", "created_utc": 1752839578, "id": "n3su77y", "is_submitter": false, "parent_id": "n3sbezz", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3su77y/", "post_id": "1m28d6c", "score": 3, "stickied": false }, { "author": "dacydergoth", "awards": 0, "body": "Set application autosync to off. Optionally set upgrade windows on the projects. Then use ArgoCD to inspect the changes before sync - impact analysis is the big win here. Then just manually hit the sync button when approved. We do all this in our manual approval process.", "created_utc": 1752842123, "id": "n3t0z6l", "is_submitter": false, "parent_id": "n3sbezz", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3t0z6l/", "post_id": "1m28d6c", "score": 3, "stickied": false }, { "author": "Lordvader89a", "awards": 0, "body": "You can either have a separate repository where you manually adjust the versions, which you said you don't want\n\nYou can also create a CI pipeline that sends a webhook to ArgoCD which triggers the deployment\n\nYou could also add webhooks on commit into the repo to trigger a \"refresh\" in ArgoCD, i.e. resulting in the applications not being up-to-date. Then, when all the review processes have passed, click on the \"sync\" button inside the ArgoCD UI. 
Add to that using app of apps and you can sync all microservices at once. Ofc this is still somewhat GitOps, but not automatic.", "created_utc": 1752860065, "id": "n3uq62n", "is_submitter": false, "parent_id": "n3sbezz", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3uq62n/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "s71011", "awards": 0, "body": "Please see my other comment: [https://www.reddit.com/r/kubernetes/comments/1m28d6c/comment/n3sbezz/?utm\\_source=share&utm\\_medium=web3x&utm\\_name=web3xcss&utm\\_term=1&utm\\_content=share\\_button](https://www.reddit.com/r/kubernetes/comments/1m28d6c/comment/n3sbezz/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)", "created_utc": 1752830470, "id": "n3sbhlt", "is_submitter": true, "parent_id": "n3qxte2", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3sbhlt/", "post_id": "1m28d6c", "score": 0, "stickied": false }, { "author": "UndercoverRowbot", "awards": 0, "body": "Rundeck can read job variables from an endpoint so we have created small wrapper services that return gitlab branches, or container versions from a registry. These variables are then used in the helm deploy jobs.\n\nIn your setup you can wrap step 2 so that rundeck can read the required variables from the manifest. Essentially you have a REST endpoint that will return the available versions in a JSON list. This is populated in a rundeck dropdown menu. \nThe selected value is then used in the next variable dropdown which again calls a REST endpoint to get the value for the required variable. \n\nExample \n`GET /manifests/prod`\n\nWill return `{[\"my-product-v1.3.4\",\"my-product-v1.3.5\"]}`\n\nThen if you want to access the variable for container version you will use this in the next rundeck variable as \n`GET /manifests/${option.manifests.value}/container_version`\n\nWhich will return `{[\"1.3.5-abcdef\"]}`\n\nYour Rundeck job then uses these variables in the shell script to deploy to the environment as:\n\n    helm upgrade --install service-name source-helm-chart-url --set version=@option.version@\n\nDoing it this way will stop the Ops team from having to download a manifest. Rundeck becomes the manifest in a sense because it will read the values and present them to the deployment team.\n\nIt's very flexible, so if there are credentials required, these can be stored in the rundeck vault or it can use hashicorp vault. And since it is all just shell scripts it is as flexible as your ability to write scripts.\n\nThe only catch here is that rundeck (in our case) needs a bastion of sorts to run the script on. It can SSH to a server if you set it up with SSH keys so that is a non-issue for us - we have a bastion server in each environment or in the case of docker it ssh's to its own host and runs the commands. \n \nRundeck can be linked to Active Directory, or Azure AD, or any other SSO I believe. This allows you to set up fine-grained access control - who can run? who can edit? who can delete? etc. And the logs can be used for auditing as well.\n\nBonus points - Rundeck has so many options, you can ssh to a host and set the executor as python so you can even write python scripts and it will execute python code on the host if shell is not your preference. 
\n\nTL;DR - use Rundeck for deployments, git for jobs as code, small wrappers for job variables, SSO linked to RBAC.\n\nHope this helps", "created_utc": 1752844422, "id": "n3t7srv", "is_submitter": false, "parent_id": "n3t7rhb", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3t7srv/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "rockettmann", "awards": 0, "body": "Yeah, either there's info that's being left out or OP is misunderstanding. Argo would work here, even if it's not 100% gitops.", "created_utc": 1752840312, "id": "n3sw2s2", "is_submitter": false, "parent_id": "n3su77y", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3sw2s2/", "post_id": "1m28d6c", "score": 2, "stickied": false }, { "author": "s71011", "awards": 0, "body": "How would you use ArgoCD w/o using Gitops? E.g. what would be the workflow? As I said, I am already using ArgoCD in other areas, and I know the GUI. But as far as I am aware, there is no way of updating an app w/o changing the YAML, which makes it GitOps again, but I could be mistaken. \n\nAlso, I am not trying to blame tools nor do I want to make a custom tool; as stated in the OP, this is what I want to avoid.\n\nUsing an umbrella chart, which bundles all microservices, in combination with ArgoCD sounds like a good plan: it a) eliminates the need for shipping a release manifest, because it’s in Nexus already as an umbrella chart, b) allows a standard helm install because it's just a helm chart, and thus integrates everywhere.\nYet I am unsure how updating the umbrella chart version in ArgoCD would work outside of YAML updating, same for rollback. But I admit that I maybe just have to dig a bit deeper into the ArgoCD docs, as I seem to have a lack of knowledge here.", "created_utc": 1752840105, "id": "n3svjnc", "is_submitter": true, "parent_id": "n3su77y", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3svjnc/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "s71011", "awards": 0, "body": "Thanks, will check it out!", "created_utc": 1752862680, "id": "n3uzds4", "is_submitter": true, "parent_id": "n3t7srv", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3uzds4/", "post_id": "1m28d6c", "score": 1, "stickied": false }, { "author": "guhcampos", "awards": 0, "body": "Just use Argo without auto-sync. It will pick up the changes from git but not update the apps in prod; then a human can hit the sync button and they will be applied.", "created_utc": 1752853939, "id": "n3u4fm2", "is_submitter": false, "parent_id": "n3svjnc", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n3u4fm2/", "post_id": "1m28d6c", "score": 3, "stickied": false }, { "author": "rockettmann", "awards": 0, "body": "OP could even point it to OCI. \n\nCreate an app in Argo that pulls the chart from OCI. \n\nEnd users can update the version and manually sync.", "created_utc": 1752974948, "id": "n43dis5", "is_submitter": false, "parent_id": "n3u4fm2", "permalink": "/r/kubernetes/comments/1m28d6c/looking_for_deployment_tool_to_deploy_helm_charts/n43dis5/", "post_id": "1m28d6c", "score": 2, "stickied": false } ]
35
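
Several commenters in the thread above converge on the same pattern for the "button to press" requirement: an Argo CD Application that points at the umbrella chart in an OCI registry, with auto-sync left off so nothing changes until an operator presses Sync in the UI (or runs `argocd app sync`). A minimal sketch of what such a manifest might look like follows; the registry URL, chart name and namespaces are hypothetical, and it assumes the registry has been added to Argo CD as a Helm repository with `enableOCI: true`.

    # Sketch only: names and URLs below are invented for illustration.
    kubectl apply -f - <<'EOF'
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-product                         # hypothetical application name
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: nexus.example.internal/helm   # hypothetical OCI Helm registry
        chart: my-product-umbrella             # hypothetical umbrella chart
        targetRevision: 1.3.5                  # bumping this is the "release"
        helm:
          releaseName: my-product
      destination:
        server: https://kubernetes.default.svc
        namespace: my-product
      # no syncPolicy.automated: the change is only applied on a manual Sync
    EOF

Rollback in this model is Argo CD's History and Rollback action, or simply syncing back to the previous targetRevision.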
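
For the renovate + flux question, the workflow described in the thread boils down to a HelmRelease whose chart version lives in git: Renovate opens a pull request that bumps the version, reviewers approve and merge, and Flux reconciles the merged change into the cluster. A rough sketch under assumed names (an internal chart repository, Flux already bootstrapped against the git repo, and a made-up file path) could look like this:

    # Sketch only: repository URL, chart name and file path are hypothetical.
    cat > clusters/prod/my-product.yaml <<'EOF'
    apiVersion: source.toolkit.fluxcd.io/v1
    kind: HelmRepository
    metadata:
      name: nexus
      namespace: flux-system
    spec:
      interval: 10m
      url: https://nexus.example.internal/repository/helm
    ---
    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: my-product
      namespace: my-product
    spec:
      interval: 10m
      chart:
        spec:
          chart: my-product-umbrella
          version: "1.3.5"          # the line a Renovate pull request bumps
          sourceRef:
            kind: HelmRepository
            name: nexus
            namespace: flux-system
    EOF
    git add clusters/prod/my-product.yaml && git commit -m "my-product 1.3.5"

This assumes a reasonably recent Flux release; older installs expose these kinds under the beta API versions.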
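
For the Rundeck route, the job body is ultimately just a shell script: Rundeck exposes the selected job options as RD_OPTION_* environment variables, the dropdowns are fed by the small wrapper endpoints described in the thread, and the "install with 1 command / roll back with 1 command" property the submitter asks for is plain Helm. Endpoint, chart and release names below are invented for illustration:

    #!/usr/bin/env bash
    # Sketch of a Rundeck job step: the version comes from a job option dropdown
    # (fed by a wrapper endpoint), the rest is standard Helm against an internal
    # OCI chart registry.
    set -euo pipefail

    # Rundeck exposes job options as environment variables named RD_OPTION_<NAME>.
    VERSION="${RD_OPTION_VERSION:?missing version}"

    # The dropdown itself could be populated from the wrapper service, e.g.:
    #   curl -s https://wrapper.example.internal/manifests/prod

    helm upgrade --install my-product \
      oci://nexus.example.internal/helm/my-product-umbrella \
      --version "${VERSION}" \
      --namespace my-product --create-namespace --wait

    # Rolling back is the matching one-liner:
    #   helm rollback my-product <revision> --namespace my-product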