
[Feature Request] Running kubent as a webservice #218

Open
mikhailadvani opened this issue Sep 23, 2021 · 10 comments
@mikhailadvani

In a multi-tenant Kubernetes cluster where tenants do not have cluster-admin privileges, sharing the kubent output is not straightforward. Below is my proposal to tackle this:

  1. Run kubent as a webservice with a relatively privileged serviceAccount - this can be configured by cluster admins
  2. Expose the information using a Kubernetes Service and/or Ingress, the same way other tools are exposed
  3. Refresh the deprecation info every n minutes without any restarts
  4. Publish Helm charts to make it easy to deploy in any cluster

Further extension: expose the information in the form of Prometheus metrics so that alerts can be created from them.

I will be more than happy to contribute to this if there is agreement on the idea.

@ghost

ghost commented Sep 23, 2021

See the Docker image.

@ghost

ghost commented Sep 23, 2021

I'd probably advise running it as a sidecar.

@ghost

ghost commented Sep 23, 2021

We are unlikely to add all of these features, as they are a bit beyond what kubent was designed for. Nothing stops you from wrapping a service around our Docker image.

@mikhailadvani
Author

I thought about it, but personally I don't like the sidecar pattern, primarily because of the orchestration of the periodic refresh and error handling. I am more inclined towards a separate project, like a kubent-prometheus-exporter, which runs a goroutine periodically to update a list of metrics in memory that are then served by a webserver in Prometheus format. A bit of refactoring in the main project here would go a long way towards making the entire computation callable in a single method call without duplication, but an os.Exec("kubent", ...) would suffice as a PoC. I would definitely like to get opinions on this approach.
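For illustration, here is a rough, untested sketch of what such a PoC exporter could look like. The metric name, label set, kubent flags, and JSON field names below are my assumptions, not the project's actual API:

package main

import (
    "encoding/json"
    "log"
    "net/http"
    "os/exec"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// finding mirrors a subset of kubent's JSON output; the field names are assumed.
type finding struct {
    Name       string `json:"Name"`
    Namespace  string `json:"Namespace"`
    Kind       string `json:"Kind"`
    ApiVersion string `json:"ApiVersion"`
}

// deprecated is the in-memory metric refreshed by the background goroutine.
var deprecated = prometheus.NewGaugeVec(prometheus.GaugeOpts{
    Name: "kubent_deprecated_api_resource", // hypothetical metric name
    Help: "Resources using deprecated APIs, as reported by kubent.",
}, []string{"kind", "namespace", "name", "api_version"})

// refresh shells out to kubent (the os.Exec-style PoC) and rebuilds the gauge.
func refresh() {
    out, err := exec.Command("kubent", "-o", "json").Output()
    if err != nil {
        log.Printf("kubent run failed: %v", err)
        return
    }
    var findings []finding
    if err := json.Unmarshal(out, &findings); err != nil {
        log.Printf("cannot parse kubent output: %v", err)
        return
    }
    deprecated.Reset()
    for _, f := range findings {
        deprecated.WithLabelValues(f.Kind, f.Namespace, f.Name, f.ApiVersion).Set(1)
    }
}

func main() {
    prometheus.MustRegister(deprecated)
    go func() {
        for {
            refresh()
            time.Sleep(10 * time.Minute) // the "every n minutes" refresh
        }
    }()
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":8080", nil))
}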

@stepanstipl
Contributor

Hi @mikhailadvani, thanks for the input. I think turning this into a "proper" webservice, as in real-time listening for API changes, processing them as a "stream", and exposing the results via a web service, is a bit beyond the current scope of the tool.

What I would see as an easy implementation of what I think you're trying to achieve is a CronJob periodically executing the tool and saving its output somewhere (it might be a simple persistent volume). The tool should already run nicely from within the cluster. I think it would make sense to add an example of this, i.e. a deployment manifest that creates a relevant service account with limited read-only permissions, plus the CronJob itself.

As to exposing this - the tool already supports JSON, and perhaps an easy way would be to use something like a json_exporter. It seems flexible enough and might work even without any transformation step. I would suggest grouping findings by resource kind as a label.

As to Helm charts - I would start with plain YAML manifests first and keep it simple. Once that's done, we can discuss supporting various distribution tools (Helm, Kustomize, Kpt, Terraform...), but that might be a bit beyond the scope.

TL;DR: I think it would be interesting to add this as a deployment example. As I see it, all the building blocks are available, and I would follow the UNIX philosophy of using several small tools and composing them together to achieve this.

Let me know if that would accomplish what you're after. 😄

@mikhailadvani
Author

I tried it out but am facing authentication issues because there is no proper kube config/context present. I have hacked around it for now using the following:

kubectl config set-cluster local --server=https://kubernetes.default.svc.cluster.local  --certificate-authority /run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl config set-credentials local --token=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-context local --cluster=local --user=local
kubectl config use-context local

Is there a better way that I am missing?

@stepanstipl
Contributor

I believe you're right @mikhailadvani, and thanks for sharing the workaround 👍. The in-cluster "no-config" mode doesn't work at the moment; I've created issue #223 to support this out of the box. The expectation is that if you're inside a pod, you should not need to specify any config; in fact, you should not even need kubectl to be present.
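For reference, a minimal client-go sketch of what that in-cluster "no-config" path could look like (illustrative only, not kubent's actual code):

package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // Builds the config from the pod's service account token and CA
    // (/var/run/secrets/kubernetes.io/serviceaccount) plus the
    // KUBERNETES_SERVICE_HOST/PORT env vars injected into every pod,
    // so no kubeconfig and no kubectl binary are needed.
    cfg, err := rest.InClusterConfig()
    if err != nil {
        log.Fatalf("not running inside a cluster: %v", err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("connected, cluster has %d nodes\n", len(nodes.Items))
}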

@ghost ghost assigned stepanstipl Oct 22, 2021
@moertel

moertel commented Apr 21, 2022

@mikhailadvani did you ever make any progress on this proposal? I'm asking because I had the same idea of exposing API deprecations as Prometheus metrics and my search of "somebody must have done this already" led me here.

@mikhailadvani
Author

I had done a dirty implementation that executed kubent in a shell through a wrapper webservice using os.Exec("kubent", ...), but I never got around to doing it the right way. That would have meant changing this project so that the entire computation becomes a single public method I could call from my webservice.

@milanholubstratox

Hi, for those interested - #302 introduces a webservice (Prometheus Pushgateway) that is fed with Prometheus metrics produced by the CronJob. As a bonus, a Grafana dashboard is included! It assumes an existing Prometheus is present in the cluster and configured to scrape the Pushgateway.
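For anyone skimming, the general pattern is roughly the following. This is only a hypothetical sketch, not the actual code from #302; the metric name, value, and Pushgateway URL are assumptions:

package main

import (
    "log"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/push"
)

func main() {
    // Gauge produced by the CronJob run; in reality the value would come
    // from parsing kubent's JSON output.
    findings := prometheus.NewGauge(prometheus.GaugeOpts{
        Name: "kubent_deprecated_apis_total", // hypothetical metric name
        Help: "Number of deprecated API usages found in the last kubent run.",
    })
    findings.Set(42) // placeholder value

    // Push to a Pushgateway that Prometheus is already configured to scrape.
    pusher := push.New("http://pushgateway:9091", "kubent") // assumed in-cluster URL and job name
    if err := pusher.Collector(findings).Push(); err != nil {
        log.Fatalf("push to Pushgateway failed: %v", err)
    }
}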

@github-actions github-actions bot added the stale label Dec 11, 2022
@doitintl doitintl deleted a comment from github-actions bot Dec 15, 2022