Tried to create a cluster on RHEL 8 using rootless Podman. Added the following workarounds:

- Downgraded to v5.5.0, because v5.6.0+ was failing even earlier in startup (perhaps there's been a regression there).
- Added k3s args based on workarounds listed in other issues.
The serverlb container seems to be stuck retrying the creation of the initial nginx config. I also noticed that server-0 fails to start because it cannot create a cgroup: `"Failed to create cgroup" err="mkdir /sys/fs/cgroup/cpuset/kubepods: permission denied" cgroupName=[kubepods]`.
It seems to me the real problem is the cgroup failure, but the command output never complains about k3d-k3s-default-server-0 itself. I'm aware there are known issues (perhaps outright lack of support) with rootless + cgroup v1 on the k3s side. Can someone confirm whether that's the case here, or whether the issue is something else?
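For context on the cgroup side: the `cpuset` path in that error suggests the host is still on the legacy cgroup v1 hierarchy, which RHEL 8 uses by default. Below is a sketch of the cgroup v2 + delegation setup that rootless k3s is documented to need. These are not the commands from this report, just what I understand the fix to look like (the drop-in file name `delegate.conf` is my own choice):

```shell
# NOTE: hypothetical sketch, not taken from this report.
# k3s rootless needs cgroup v2 with controllers delegated to the user session.

# 1. Check which cgroup version podman sees (prints "v1" or "v2"):
podman info --format '{{.Host.CgroupsVersion}}'

# 2. Boot the host with the unified cgroup v2 hierarchy
#    (RHEL 8 defaults to v1); takes effect after a reboot:
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"

# 3. Delegate the controllers k3s needs to user sessions
#    (the drop-in file name "delegate.conf" is arbitrary):
sudo mkdir -p /etc/systemd/system/user@.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=cpu cpuset io memory pids
EOF
sudo systemctl daemon-reload
```

If the host stays on cgroup v1, I'd expect exactly the `mkdir /sys/fs/cgroup/cpuset/kubepods: permission denied` failure above, since an unprivileged user can't create cgroups there.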
ERRO[0012] Failed Cluster Start: Failed to add one or more helper nodes: Node k3d-k3s-default-serverlb failed to get ready: error waiting for log line `start worker processes` from node 'k3d-k3s-default-serverlb': stopped returning log lines: node k3d-k3s-default-serverlb is running=true in status=running
[2024-05-01T19:54:32+0000] creating initial nginx config (try 3/3)
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO Backend set to file
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO Starting confd
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO Backend source(s) set to /etc/confd/values.yaml
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Loading template resources from confdir /etc/confd
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Found template: /etc/confd/conf.d/nginx.toml
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Loading template resource from /etc/confd/conf.d/nginx.toml
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Retrieving keys from store
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Key prefix set to /
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Key Map: map[string]string{"/ports/6443.tcp/0":"k3d-k3s-default-server-0", "/settings/workerConnections":"1024"}
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Got the following map from store: map[/ports/6443.tcp/0:k3d-k3s-default-server-0 /settings/workerConnections:1024]
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Using source template /etc/confd/templates/nginx.tmpl
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Compiling source template /etc/confd/templates/nginx.tmpl
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Comparing candidate config to /etc/nginx/nginx.conf
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO /etc/nginx/nginx.conf has md5sum fc7ae7839fba2feb2145900a0b6abf8d should be 38d71f35f3941d5788f417f69e5b77fb
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO Target config /etc/nginx/nginx.conf out of sync
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Overwriting target config /etc/nginx/nginx.conf
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO Target config /etc/nginx/nginx.conf has been updated
docker logs k3d-k3s-default-server-0:
time="2024-05-01T19:43:32Z" level=info msg="Starting k3s v1.26.4+k3s1 (8d0255af)"
time="2024-05-01T19:43:32Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2024-05-01T19:43:32Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2024-05-01T19:43:32Z" level=info msg="Database tables and indexes are up to date"
time="2024-05-01T19:43:32Z" level=info msg="Kine available at unix://kine.sock"
time="2024-05-01T19:43:32Z" level=info msg="Bootstrap key locked for initial create"
time="2024-05-01T19:43:32Z" level=info msg="generated self-signed CA certificate CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32.396037604 +0000 UTC notAfter=2034-04-29 19:43:32.396037604 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:apiserver,O=system:masters signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="generated self-signed CA certificate CN=k3s-server-ca@1714592612: notBefore=2024-05-01 19:43:32.398889409 +0000 UTC notAfter=2034-04-29 19:43:32.398889409 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="generated self-signed CA certificate CN=k3s-request-header-ca@1714592612: notBefore=2024-05-01 19:43:32.399589047 +0000 UTC notAfter=2034-04-29 19:43:32.399589047 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="generated self-signed CA certificate CN=etcd-server-ca@1714592612: notBefore=2024-05-01 19:43:32.400250862 +0000 UTC notAfter=2034-04-29 19:43:32.400250862 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="generated self-signed CA certificate CN=etcd-peer-ca@1714592612: notBefore=2024-05-01 19:43:32.401137983 +0000 UTC notAfter=2034-04-29 19:43:32.401137983 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="Saving cluster bootstrap data to datastore"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=warning msg="dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request"
time="2024-05-01T19:43:32Z" level=info msg="Active TLS secret / (ver=) (count 12): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-10.89.0.2:10.89.0.2 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-k3d-k3s-default-server-0:k3d-k3s-default-server-0 listener.cattle.io/cn-k3d-k3s-default-serverlb:k3d-k3s-default-serverlb listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=CA33009334309847B79E87274A21518F06D8997B]"
time="2024-05-01T19:43:32Z" level=info msg="Bootstrap key lock is held"
time="2024-05-01T19:43:32Z" level=info msg="Tunnel server egress proxy mode: agent"
time="2024-05-01T19:43:32Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
time="2024-05-01T19:43:32Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
time="2024-05-01T19:43:32Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259"
time="2024-05-01T19:43:32Z" level=info msg="Waiting for API server to become available"
time="2024-05-01T19:43:32Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
W0501 19:43:32.781821 7 feature_gate.go:241] Setting GA feature gate JobTrackingWithFinalizers=true. It will be removed in a future release.
time="2024-05-01T19:43:32Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false"
time="2024-05-01T19:43:32Z" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token"
I0501 19:43:32.782828 7 server.go:569] external host was not specified, using 10.89.0.2
time="2024-05-01T19:43:32Z" level=info msg="To join server node to cluster: k3s server -s https://10.89.0.2:6443 -t ${SERVER_NODE_TOKEN}"
time="2024-05-01T19:43:32Z" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
time="2024-05-01T19:43:32Z" level=info msg="To join agent node to cluster: k3s agent -s https://10.89.0.2:6443 -t ${AGENT_NODE_TOKEN}"
I0501 19:43:32.783218 7 server.go:171] Version: v1.26.4+k3s1
I0501 19:43:32.783238 7 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
time="2024-05-01T19:43:32Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2024-05-01T19:43:32Z" level=info msg="Run: k3s kubectl"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="Module overlay was already loaded"
time="2024-05-01T19:43:32Z" level=info msg="Module nf_conntrack was already loaded"
time="2024-05-01T19:43:32Z" level=warning msg="Failed to load kernel module br_netfilter with modprobe"
time="2024-05-01T19:43:32Z" level=warning msg="Failed to load kernel module iptable_nat with modprobe"
time="2024-05-01T19:43:32Z" level=warning msg="Failed to load kernel module iptable_filter with modprobe"
time="2024-05-01T19:43:32Z" level=info msg="Set sysctl 'net/bridge/bridge-nf-call-iptables' to 1"
time="2024-05-01T19:43:32Z" level=error msg="Failed to set sysctl: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory"
time="2024-05-01T19:43:32Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2024-05-01T19:43:32Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2024-05-01T19:43:32Z" level=warning msg="cgroup v2 controllers are not delegated for rootless. Disabling cgroup."
time="2024-05-01T19:43:32Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2024-05-01T19:43:32Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
I0501 19:43:33.029753 7 shared_informer.go:270] Waiting for caches to sync for node_authorizer
I0501 19:43:33.031145 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0501 19:43:33.031159 7 plugins.go:161] Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
W0501 19:43:33.046259 7 genericapiserver.go:660] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0501 19:43:33.046954 7 instance.go:277] Using reconciler: lease
I0501 19:43:33.161283 7 instance.go:621] API group "internal.apiserver.k8s.io" is not enabled, skipping.
I0501 19:43:33.208997 7 instance.go:621] API group "resource.k8s.io" is not enabled, skipping.
W0501 19:43:33.302992 7 genericapiserver.go:660] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.303012 7 genericapiserver.go:660] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.304494 7 genericapiserver.go:660] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.307952 7 genericapiserver.go:660] Skipping API autoscaling/v2beta1 because it has no resources.
W0501 19:43:33.307972 7 genericapiserver.go:660] Skipping API autoscaling/v2beta2 because it has no resources.
W0501 19:43:33.311243 7 genericapiserver.go:660] Skipping API batch/v1beta1 because it has no resources.
W0501 19:43:33.312961 7 genericapiserver.go:660] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.314519 7 genericapiserver.go:660] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.314561 7 genericapiserver.go:660] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.318801 7 genericapiserver.go:660] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.318814 7 genericapiserver.go:660] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.320212 7 genericapiserver.go:660] Skipping API node.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.320227 7 genericapiserver.go:660] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.320254 7 genericapiserver.go:660] Skipping API policy/v1beta1 because it has no resources.
W0501 19:43:33.324254 7 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.324270 7 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.325664 7 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.325679 7 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.334161 7 genericapiserver.go:660] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.338390 7 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.338408 7 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.342231 7 genericapiserver.go:660] Skipping API apps/v1beta2 because it has no resources.
W0501 19:43:33.342248 7 genericapiserver.go:660] Skipping API apps/v1beta1 because it has no resources.
W0501 19:43:33.344113 7 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.344130 7 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.345655 7 genericapiserver.go:660] Skipping API events.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.355899 7 genericapiserver.go:660] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
time="2024-05-01T19:43:33Z" level=info msg="containerd is now running"
time="2024-05-01T19:43:33Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2024-05-01T19:43:33Z" level=warning msg="Disabling CPU quotas due to missing cpu controller or cpu.cfs_period_us"
time="2024-05-01T19:43:33Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=KubeletInUserNamespace=true --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2024-05-01T19:43:33Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2024-05-01T19:43:33Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
I0501 19:43:33.917748 7 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0501 19:43:33.917818 7 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0501 19:43:33.917833 7 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
I0501 19:43:33.917872 7 secure_serving.go:210] Serving securely on 127.0.0.1:6444
I0501 19:43:33.917954 7 available_controller.go:494] Starting AvailableConditionController
I0501 19:43:33.917972 7 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0501 19:43:33.917978 7 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0501 19:43:33.918029 7 autoregister_controller.go:141] Starting autoregister controller
I0501 19:43:33.918037 7 cache.go:32] Waiting for caches to sync for autoregister controller
I0501 19:43:33.917957 7 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0501 19:43:33.918057 7 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0501 19:43:33.918058 7 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
I0501 19:43:33.918086 7 crdregistration_controller.go:111] Starting crd-autoregister controller
I0501 19:43:33.918095 7 shared_informer.go:270] Waiting for caches to sync for crd-autoregister
I0501 19:43:33.917959 7 apf_controller.go:361] Starting API Priority and Fairness config controller
I0501 19:43:33.918110 7 customresource_discovery_controller.go:288] Starting DiscoveryController
I0501 19:43:33.918125 7 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0501 19:43:33.918128 7 controller.go:80] Starting OpenAPI V3 AggregationController
I0501 19:43:33.918131 7 shared_informer.go:270] Waiting for caches to sync for cluster_authentication_trust_controller
I0501 19:43:33.918158 7 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0501 19:43:33.918243 7 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0501 19:43:33.918251 7 controller.go:85] Starting OpenAPI controller
I0501 19:43:33.918273 7 controller.go:85] Starting OpenAPI V3 controller
I0501 19:43:33.918284 7 naming_controller.go:291] Starting NamingConditionController
I0501 19:43:33.918294 7 establishing_controller.go:76] Starting EstablishingController
I0501 19:43:33.918405 7 controller.go:83] Starting OpenAPI AggregationController
I0501 19:43:33.918425 7 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0501 19:43:33.918438 7 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0501 19:43:33.918467 7 controller.go:121] Starting legacy_token_tracking_controller
I0501 19:43:33.918475 7 shared_informer.go:270] Waiting for caches to sync for configmaps
I0501 19:43:33.918470 7 crd_finalizer.go:266] Starting CRDFinalizer
I0501 19:43:33.918722 7 gc_controller.go:78] Starting apiserver lease garbage collector
I0501 19:43:33.953987 7 controller.go:615] quota admission added evaluator for: namespaces
E0501 19:43:33.959155 7 controller.go:156] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocate IP 10.43.0.1: cannot allocate resources of type serviceipallocations at this time
I0501 19:43:34.002892 7 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0501 19:43:34.018221 7 shared_informer.go:277] Caches are synced for crd-autoregister
I0501 19:43:34.018242 7 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0501 19:43:34.018223 7 cache.go:39] Caches are synced for autoregister controller
I0501 19:43:34.018233 7 cache.go:39] Caches are synced for AvailableConditionController controller
I0501 19:43:34.018366 7 shared_informer.go:277] Caches are synced for cluster_authentication_trust_controller
I0501 19:43:34.018489 7 apf_controller.go:366] Running API Priority and Fairness config worker
I0501 19:43:34.018501 7 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0501 19:43:34.018673 7 shared_informer.go:277] Caches are synced for configmaps
I0501 19:43:34.029821 7 shared_informer.go:277] Caches are synced for node_authorizer
I0501 19:43:34.743532 7 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0501 19:43:34.922263 7 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0501 19:43:34.924753 7 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0501 19:43:34.924777 7 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0501 19:43:35.175310 7 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0501 19:43:35.196864 7 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0501 19:43:35.265907 7 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.43.0.1]
W0501 19:43:35.269065 7 lease.go:251] Resetting endpoints for master service "kubernetes" to [10.89.0.2]
I0501 19:43:35.269635 7 controller.go:615] quota admission added evaluator for: endpoints
I0501 19:43:35.272209 7 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
time="2024-05-01T19:43:35Z" level=info msg="Waiting for cloud-controller-manager privileges to become available"
W0501 19:43:35.785514 7 feature_gate.go:241] Setting GA feature gate JobTrackingWithFinalizers=true. It will be removed in a future release.
time="2024-05-01T19:43:35Z" level=info msg="Kube API server is now running"
time="2024-05-01T19:43:35Z" level=info msg="ETCD server is now running"
time="2024-05-01T19:43:35Z" level=info msg="k3s is up and running"
time="2024-05-01T19:43:35Z" level=info msg="Applying CRD addons.k3s.cattle.io"
time="2024-05-01T19:43:35Z" level=info msg="Applying CRD helmcharts.helm.cattle.io"
time="2024-05-01T19:43:35Z" level=info msg="Applying CRD helmchartconfigs.helm.cattle.io"
time="2024-05-01T19:43:35Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
I0501 19:43:35.836632 7 server.go:197] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
I0501 19:43:35.837954 7 server.go:407] "Kubelet version" kubeletVersion="v1.26.4+k3s1"
I0501 19:43:35.837968 7 server.go:409] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0501 19:43:35.838721 7 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
E0501 19:43:35.848669 7 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
W0501 19:43:35.849607 7 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I0501 19:43:35.850172 7 server.go:654] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
I0501 19:43:35.850661 7 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0501 19:43:35.850712 7 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/k3s SystemCgroupsName: KubeletCgroupsName:/k3s KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
I0501 19:43:35.850732 7 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0501 19:43:35.850740 7 container_manager_linux.go:308] "Creating device plugin manager"
I0501 19:43:35.850822 7 state_mem.go:36] "Initialized new in-memory state store"
I0501 19:43:36.052672 7 server.go:770] "Failed to ApplyOOMScoreAdj" err="write /proc/self/oom_score_adj: permission denied"
I0501 19:43:36.054374 7 kubelet.go:398] "Attempting to sync node with API server"
I0501 19:43:36.054393 7 kubelet.go:286] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I0501 19:43:36.054415 7 kubelet.go:297] "Adding apiserver pod source"
I0501 19:43:36.054428 7 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I0501 19:43:36.055724 7 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="v1.6.19-k3s1" apiVersion="v1"
W0501 19:43:36.055884 7 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
E0501 19:43:36.056283 7 server.go:1170] "Failed to set rlimit on max file handles" err="operation not permitted"
I0501 19:43:36.056297 7 server.go:1181] "Started kubelet"
I0501 19:43:36.056320 7 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
E0501 19:43:36.056547 7 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
E0501 19:43:36.056571 7 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I0501 19:43:36.057206 7 server.go:451] "Adding debug handlers to kubelet server"
I0501 19:43:36.057843 7 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I0501 19:43:36.057886 7 volume_manager.go:293] "Starting Kubelet Volume Manager"
I0501 19:43:36.057915 7 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
E0501 19:43:36.060995 7 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"k3d-k3s-default-server-0\" not found" node="k3d-k3s-default-server-0"
I0501 19:43:36.067382 7 cpu_manager.go:214] "Starting CPU manager" policy="none"
I0501 19:43:36.067393 7 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
I0501 19:43:36.067408 7 state_mem.go:36] "Initialized new in-memory state store"
I0501 19:43:36.069521 7 policy_none.go:49] "None policy: Start"
I0501 19:43:36.070103 7 memory_manager.go:169] "Starting memorymanager" policy="None"
I0501 19:43:36.070122 7 state_mem.go:35] "Initializing new in-memory state store"
E0501 19:43:36.072807 7 node_container_manager_linux.go:61] "Failed to create cgroup" err="mkdir /sys/fs/cgroup/cpuset/kubepods: permission denied" cgroupName=[kubepods]
E0501 19:43:36.072816 7 kubelet.go:1466] "Failed to start ContainerManager" err="mkdir /sys/fs/cgroup/cpuset/kubepods: permission denied"
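The kubelet errors above ("mkdir /sys/fs/cgroup/cpuset/kubepods: permission denied") are what you get when a rootless kubelet tries to create cgroups on a legacy v1 hierarchy, where unprivileged users cannot create subtrees. A quick diagnostic sketch (not part of k3d, just standard host checks) to confirm which hierarchy the host is on and, if it is v2, whether controllers are delegated to the user session:

```shell
# cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1
stat -fc %T /sys/fs/cgroup/

# On cgroup v2 only: list the controllers delegated to the current user session.
# Rootless k3s needs at least cpu, cpuset, memory and pids here.
# (Guarded so it just prints a note on cgroup v1 hosts, where this path is absent.)
cat "/sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers" \
  2>/dev/null || echo "no delegated user-session cgroup (cgroup v1, or no delegation)"
```

On this host the `k3d runtime-info` output further down reports `cgroupversion: "1"`, which matches the failure mode.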
Which OS & Architecture
❯ k3d runtime-info
arch: amd64
cgroupdriver: cgroupfs
cgroupversion: "1"
endpoint: /run/user/605833/podman/podman.sock
filesystem: extfs
infoname: sjc-ads-6761
name: docker
os: '"rhel"'
ostype: linux
version: 4.4.1
Which version of docker
❯ podman version
Client: Podman Engine
Version: 4.4.1
API Version: 4.4.1
Go Version: go1.19.6
Built: Thu Jun 15 07:39:56 2023
OS/Arch: linux/amd64
Which version of k3d
❯ k3d version
k3d version v5.5.0
k3s version v1.26.4-k3s1 (default)
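For reference, the "k3s args added based on workarounds listed on other issues" were along these lines. This is a reconstruction of the commonly suggested rootless flags (flag names taken from the k3d/k3s docs), not necessarily the exact command used:

```shell
# Sketch of commonly suggested rootless workarounds; the "@server:*" node
# filter is k3d v5 syntax for applying an arg to all server nodes.
k3d cluster create \
  --k3s-arg "--kubelet-arg=feature-gates=KubeletInUserNamespace=true@server:*" \
  --k3s-arg "--kube-proxy-arg=conntrack-max-per-core=0@server:*"
```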
What did you do
Tried to create a cluster on RHEL 8 using rootless podman. Added the following workarounds:
- Downgraded to v5.5.0 because v5.6.0+ was failing to start even earlier (perhaps there's been a regression there).
- Added k3s args based on workarounds listed in other issues.

The serverlb seems to be stuck retrying `creating initial nginx config`. I also noticed that server-0 fails to start because it can't create its cgroup:

"Failed to create cgroup" err="mkdir /sys/fs/cgroup/cpuset/kubepods: permission denied" cgroupName=[kubepods]

It seems to me the real problem is the cgroup issue, but the command output doesn't complain about k3d-k3s-default-server-0 at all. I am aware that there are some issues (perhaps lack of support) with rootless + cgroup v1 on the k3s side. Can someone confirm whether that is the case here, or whether the issue is something else?

Screenshots or terminal output
docker logs k3d-k3s-default-serverlb
docker logs k3d-k3s-default-server-0
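Since the host is on cgroup v1, the fix suggested by the k3s rootless and rootless-containers documentation is to boot RHEL 8 into the unified cgroup v2 hierarchy and delegate controllers to user sessions. A sketch of that host-level change (assumes switching the hierarchy is acceptable on this machine; requires a reboot):

```shell
# Boot RHEL 8 with the unified cgroup v2 hierarchy:
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"

# Delegate the controllers rootless k3s needs to user sessions:
sudo mkdir -p /etc/systemd/system/user@.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=cpu cpuset io memory pids
EOF
sudo systemctl daemon-reload
# ...then reboot and re-check /sys/fs/cgroup (should report cgroup2fs).
```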