
[BUG] Cluster create fails while using rootless podman #1439

Open
arshanh opened this issue May 1, 2024 · 1 comment
Labels
bug Something isn't working

Comments


arshanh commented May 1, 2024

What did you do

Tried to create a cluster on RHEL 8 using rootless podman. I applied the following workarounds and noted these observations:

  • Downgraded to v5.5.0 because v5.6.0+ was failing to start even earlier (perhaps there has been a regression here).
  • Added k3s args based on the workarounds listed in other issues (see the command below).
  • The serverlb container seems to be stuck retrying the creation of its initial nginx config.
  • I also noticed that server-0 fails to start because the kubelet cannot create its cgroup: "Failed to create cgroup" err="mkdir /sys/fs/cgroup/cpuset/kubepods: permission denied" cgroupName=[kubepods].

It seems to me the real problem is the cgroup issue, but I don't see the command output complaining about k3d-k3s-default-server-0 at all. I am aware that there are some issues (perhaps a lack of support) with rootless + cgroup v1 on the k3s side. Can someone confirm whether that is the case here, or whether the issue is something else?

❯ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock DOCKER_SOCK=$XDG_RUNTIME_DIR/podman/podman.sock http_proxy='' https_proxy='' HTTP_PROXY='' HTTPS_PROXY=''
❯ k3d --verbose --trace cluster create \
  --k3s-arg "--kube-proxy-arg=conntrack-max-per-core=0@server:*" \
  --k3s-arg "--kube-proxy-arg=conntrack-max-per-core=0@agent:*" \
  --k3s-arg '--kubelet-arg=feature-gates=KubeletInUserNamespace=true@server:*'
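
For reference, the same k3s args can also be kept in a k3d config file instead of being passed on the command line. This is only a sketch, assuming the k3d.io/v1alpha5 simple-config schema that also shows up in the trace output below; the file name k3d-rootless.yaml is just an example:

apiVersion: k3d.io/v1alpha5
kind: Simple
servers: 1
agents: 0
options:
  k3s:
    extraArgs:
    - arg: --kube-proxy-arg=conntrack-max-per-core=0
      nodeFilters:
      - server:*
      - agent:*
    - arg: --kubelet-arg=feature-gates=KubeletInUserNamespace=true
      nodeFilters:
      - server:*

Used as `k3d cluster create test --config k3d-rootless.yaml`, this should behave the same as the flags above.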

Screenshots or terminal output

ERRO[0012] Failed Cluster Start: Failed to add one or more helper nodes: Node k3d-k3s-default-serverlb failed to get ready: error waiting for log line `start worker processes` from node 'k3d-k3s-default-serverlb': stopped returning log lines: node k3d-k3s-default-serverlb is running=true in status=running 
❯ k3d --verbose --trace cluster create test \
  --k3s-arg "--kube-proxy-arg=conntrack-max-per-core=0@server:*" \
  --k3s-arg "--kube-proxy-arg=conntrack-max-per-core=0@agent:*" \
  --k3s-arg '--kubelet-arg=feature-gates=KubeletInUserNamespace=true@server:*'
DEBU[0000] DOCKER_SOCK=/run/user/605833/podman/podman.sock 
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/run/user/605833/podman/podman.sock Version:4.4.1 OSType:linux OS:"rhel" Arch:amd64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs InfoName:sjc-ads-6761} 
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs:
  - --kube-proxy-arg=conntrack-max-per-core=0@server:*
  - --kube-proxy-arg=conntrack-max-per-core=0@agent:*
  - --kubelet-arg=feature-gates=KubeletInUserNamespace=true@server:*
  ports: []
  registries:
    create: ""
  runtime-labels: []
  runtime-ulimits: []
  volumes: []
hostaliases: [] 
DEBU[0000] Configuration:
agents: 0
image: docker.io/rancher/k3s:v1.26.4-k3s1
network: ""
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 0s
    wait: true
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    hostpidmode: false
    serversmemory: ""
registries:
  config: ""
  use: []
servers: 1
subnet: ""
token: "" 
TRAC[0000] Trying to read config apiVersion='k3d.io/v1alpha5', kind='simple' 
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha5} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.26.4-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[] Ulimits:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
========================== 
TRAC[0000] VolumeFilterMap: map[]                       
TRAC[0000] PortFilterMap: map[]                         
TRAC[0000] K3sNodeLabelFilterMap: map[]                 
TRAC[0000] RuntimeLabelFilterMap: map[]                 
TRAC[0000] EnvFilterMap: map[]                          
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha5} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:34633} Image:docker.io/rancher/k3s:v1.26.4-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[{Arg:--kubelet-arg=feature-gates=KubeletInUserNamespace=true NodeFilters:[server:*]} {Arg:--kube-proxy-arg=conntrack-max-per-core=0 NodeFilters:[server:* agent:*]}] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[] Ulimits:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
========================== 
DEBU[0000] generated loadbalancer config:
ports:
  6443.tcp:
  - k3d-test-server-0
settings:
  workerConnections: 1024 
TRAC[0000] Filtering 2 nodes by [server:*]              
TRAC[0000] Filtered 1 nodes (filter: [server:*])        
TRAC[0000] Filtering 2 nodes by [server:* agent:*]      
TRAC[0000] Filtered 1 nodes (filter: [server:* agent:*]) 
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:test Network:{Name:k3d-test ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc000482680 0xc000482820] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000391900 ServerLoadBalancer:0xc0002c9380 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== ===== 
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server 
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-test'                   
INFO[0000] Created image volume k3d-test-images         
TRAC[0000] Using Registries: []                         
TRAC[0000] 
===== Creating Cluster =====

Runtime:
{}

Cluster:
&{Name:test Network:{Name:k3d-test ID:b3350fdc391cece405d40cf2e0877795f2b22d6becd3820ef767a7e8c2422721 External:false IPAM:{IPPrefix:10.89.0.0/24 IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc000482680 0xc000482820] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000391900 ServerLoadBalancer:0xc0002c9380 ImageVolume:k3d-test-images Volumes:[k3d-test-images]}

ClusterCreatOpts:
&{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d k3d.cluster.imageVolume:k3d-test-images k3d.cluster.network:k3d-test k3d.cluster.network.external:false k3d.cluster.network.id:b3350fdc391cece405d40cf2e0877795f2b22d6becd3820ef767a7e8c2422721 k3d.cluster.network.iprange:10.89.0.0/24] GlobalEnv:[] HostAliases:[] Registries:{Create:<nil> Use:[] Config:<nil>}}

============================
         
TRAC[0000] Docker Machine not specified via DOCKER_MACHINE_NAME env var 
TRAC[0000] [Docker] Not using docker-machine            
DEBU[0000] [Docker] DockerHost: '' (unix:///run/user/605833/podman/podman.sock) 
INFO[0000] Starting new tools node...                   
DEBU[0000] DOCKER_SOCK=/run/user/605833/podman/podman.sock 
DEBU[0000] DOCKER_SOCK=/run/user/605833/podman/podman.sock 
DEBU[0000] DOCKER_SOCK=/run/user/605833/podman/podman.sock 
TRAC[0000] Creating node from spec
&{Name:k3d-test-tools Role:noRole Image:ghcr.io/k3d-io/k3d-tools:5.5.0 Volumes:[k3d-test-images:/k3d/images /run/user/605833/podman/podman.sock:/run/user/605833/podman/podman.sock] Env:[] Cmd:[] Args:[noop] Ports:map[] Restart:false Created: HostPidMode:false RuntimeLabels:map[app:k3d k3d.cluster:test k3d.version:v5.5.0] RuntimeUlimits:[] K3sNodeLabels:map[] Networks:[k3d-test] ExtraHosts:[host.k3d.internal:host-gateway] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]} 
TRAC[0000] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-test-tools Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[noop] Healthcheck:<nil> ArgsEscaped:false Image:ghcr.io/k3d-io/k3d-tools:5.5.0 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:test k3d.role:noRole k3d.version:v5.5.0] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-test-images:/k3d/images /run/user/605833/podman/podman.sock:/run/user/605833/podman/podman.sock] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode:bridge PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] ConsoleSize:[0 0] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[host.k3d.internal:host-gateway] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00018740a} NetworkingConfig:{EndpointsConfig:map[k3d-test:0xc00062c000]}} 
DEBU[0000] Created container k3d-test-tools (ID: 75918995237bdcc711982a7824e9a64d4f9dacc3412641a6b33260dd6af3f0fd) 
DEBU[0000] Node k3d-test-tools Start Time: 2024-05-01 13:05:22.456633031 -0700 PDT m=+0.297961145 
TRAC[0000] Starting node 'k3d-test-tools'               
INFO[0000] Starting Node 'k3d-test-tools'               
DEBU[0000] Truncated 2024-05-01 20:05:22.742047022 +0000 UTC to 2024-05-01 20:05:22 +0000 UTC 
INFO[0001] Creating node 'k3d-test-server-0'            
DEBU[0001] DOCKER_SOCK=/run/user/605833/podman/podman.sock 
TRAC[0001] Creating node from spec
&{Name:k3d-test-server-0 Role:server Image:docker.io/rancher/k3s:v1.26.4-k3s1 Volumes:[k3d-test-images:/k3d/images] Env:[K3S_TOKEN=fBBUpqgXgZrpZCsifSWa] Cmd:[] Args:[--kubelet-arg=feature-gates=KubeletInUserNamespace=true --kube-proxy-arg=conntrack-max-per-core=0] Ports:map[] Restart:true Created: HostPidMode:false RuntimeLabels:map[app:k3d k3d.cluster:test k3d.cluster.imageVolume:k3d-test-images k3d.cluster.network:k3d-test k3d.cluster.network.external:false k3d.cluster.network.id:b3350fdc391cece405d40cf2e0877795f2b22d6becd3820ef767a7e8c2422721 k3d.cluster.network.iprange:10.89.0.0/24 k3d.cluster.token:fBBUpqgXgZrpZCsifSWa k3d.cluster.url:https://k3d-test-server-0:6443 k3d.server.loadbalancer:k3d-test-serverlb] RuntimeUlimits:[] K3sNodeLabels:map[] Networks:[k3d-test] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc000391900} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]} 
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-test-server-0 Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_TOKEN=fBBUpqgXgZrpZCsifSWa K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[server --kubelet-arg=feature-gates=KubeletInUserNamespace=true --kube-proxy-arg=conntrack-max-per-core=0 --tls-san 0.0.0.0 --tls-san k3d-test-serverlb] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3s:v1.26.4-k3s1 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:test k3d.cluster.imageVolume:k3d-test-images k3d.cluster.network:k3d-test k3d.cluster.network.external:false k3d.cluster.network.id:b3350fdc391cece405d40cf2e0877795f2b22d6becd3820ef767a7e8c2422721 k3d.cluster.network.iprange:10.89.0.0/24 k3d.cluster.token:fBBUpqgXgZrpZCsifSWa k3d.cluster.url:https://k3d-test-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:34633 k3d.server.loadbalancer:k3d-test-serverlb k3d.version:v5.5.0] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-test-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode:bridge PortBindings:map[] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] ConsoleSize:[0 0] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00021764a} NetworkingConfig:{EndpointsConfig:map[k3d-test:0xc00062c180]}} 
DEBU[0001] Created container k3d-test-server-0 (ID: b7c45e4dbaa53753d4bbf0d1038fb9f20e02d17a676b993645a57331ba111426) 
DEBU[0001] Created node 'k3d-test-server-0'             
INFO[0001] Creating LoadBalancer 'k3d-test-serverlb'    
DEBU[0001] DOCKER_SOCK=/run/user/605833/podman/podman.sock 
TRAC[0001] Creating node from spec
&{Name:k3d-test-serverlb Role:loadbalancer Image:ghcr.io/k3d-io/k3d-proxy:5.5.0 Volumes:[k3d-test-images:/k3d/images] Env:[] Cmd:[] Args:[] Ports:map[6443:[{HostIP:0.0.0.0 HostPort:34633}]] Restart:true Created: HostPidMode:false RuntimeLabels:map[app:k3d k3d.cluster:test k3d.cluster.imageVolume:k3d-test-images k3d.cluster.network:k3d-test k3d.cluster.network.external:false k3d.cluster.network.id:b3350fdc391cece405d40cf2e0877795f2b22d6becd3820ef767a7e8c2422721 k3d.cluster.network.iprange:10.89.0.0/24 k3d.cluster.token:fBBUpqgXgZrpZCsifSWa k3d.cluster.url:https://k3d-test-server-0:6443 k3d.role:loadbalancer k3d.server.loadbalancer:k3d-test-serverlb k3d.version:v5.5.0] RuntimeUlimits:[] K3sNodeLabels:map[] Networks:[k3d-test] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[{Stage:preStart Action:{Runtime:{} Content:[112 111 114 116 115 58 10 32 32 54 52 52 51 46 116 99 112 58 10 32 32 45 32 107 51 100 45 116 101 115 116 45 115 101 114 118 101 114 45 48 10 115 101 116 116 105 110 103 115 58 10 32 32 119 111 114 107 101 114 67 111 110 110 101 99 116 105 111 110 115 58 32 49 48 50 52 10] Dest:/etc/confd/values.yaml Mode:-rwxr--r-- Description:Write Loadbalancer Configuration}}]} 
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-test-serverlb Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[6443:{}] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[] Healthcheck:<nil> ArgsEscaped:false Image:ghcr.io/k3d-io/k3d-proxy:5.5.0 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:test k3d.cluster.imageVolume:k3d-test-images k3d.cluster.network:k3d-test k3d.cluster.network.external:false k3d.cluster.network.id:b3350fdc391cece405d40cf2e0877795f2b22d6becd3820ef767a7e8c2422721 k3d.cluster.network.iprange:10.89.0.0/24 k3d.cluster.token:fBBUpqgXgZrpZCsifSWa k3d.cluster.url:https://k3d-test-server-0:6443 k3d.role:loadbalancer k3d.server.loadbalancer:k3d-test-serverlb k3d.version:v5.5.0] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-test-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode:bridge PortBindings:map[6443:[{HostIP:0.0.0.0 HostPort:34633}]] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] ConsoleSize:[0 0] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc000217fff} NetworkingConfig:{EndpointsConfig:map[k3d-test:0xc00062c240]}} 
DEBU[0001] Created container k3d-test-serverlb (ID: 14ad4e5def0fb33d0ef2ab1e5b88527077e957d567dc29981f83af3cc6fb3634) 
DEBU[0001] Created loadbalancer 'k3d-test-serverlb'     
DEBU[0001] DOCKER_SOCK=/run/user/605833/podman/podman.sock 
INFO[0001] Using the k3d-tools node to gather environment information 
TRAC[0001] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-test-tools 
DEBU[0001] no netlabel present on container /k3d-test-tools 
DEBU[0001] failed to get IP for container /k3d-test-tools as we couldn't find the cluster network 
DEBU[0001] Deleting node k3d-test-tools ...             
TRAC[0001] [Docker] Deleted Container k3d-test-tools    
DEBU[0001] DOCKER_SOCK=/run/user/605833/podman/podman.sock 
TRAC[0001] GOOS: linux / Runtime OS: linux ("rhel")     
INFO[0001] HostIP: using network gateway 10.89.0.1 address 
INFO[0001] Starting cluster 'test'                      
INFO[0001] Starting servers...                          
DEBU[0001] DOCKER_SOCK=/run/user/605833/podman/podman.sock 
DEBU[0001] No fix enabled.                              
DEBU[0001] Node k3d-test-server-0 Start Time: 2024-05-01 13:05:23.8111218 -0700 PDT m=+1.652449925 
TRAC[0001] Starting node 'k3d-test-server-0'            
INFO[0001] Starting Node 'k3d-test-server-0'            
DEBU[0001] Truncated 2024-05-01 20:05:24.079547196 +0000 UTC to 2024-05-01 20:05:24 +0000 UTC 
DEBU[0001] Waiting for node k3d-test-server-0 to get ready (Log: 'k3s is up and running') 
TRAC[0001] NodeWaitForLogMessage: Node 'k3d-test-server-0' waiting for log message 'k3s is up and running' since '2024-05-01 20:05:24 +0000 UTC' 
TRAC[0005] Non-fatal last log line in node k3d-test-server-0: time="2024-05-01T20:05:28Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd" 
ERRO[0005] Failed Cluster Start: Failed to start server k3d-test-server-0: Node k3d-test-server-0 failed to get ready: error waiting for log line `k3s is up and running` from node 'k3d-test-server-0': stopped returning log lines: node k3d-test-server-0 is running=true in status=running 
ERRO[0005] Failed to create cluster >>> Rolling Back    
INFO[0005] Deleting cluster 'test'                      
TRAC[0005] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-test-serverlb 
TRAC[0005] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-test-server-0 
DEBU[0005] Cluster Details: &{Name:test Network:{Name:k3d-test ID:b3350fdc391cece405d40cf2e0877795f2b22d6becd3820ef767a7e8c2422721 External:false IPAM:{IPPrefix:10.89.0.0/24 IPsUsed:[] Managed:false} Members:[]} Token:fBBUpqgXgZrpZCsifSWa Nodes:[0xc000482680 0xc000482820] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000391900 ServerLoadBalancer:0xc0002c9380 ImageVolume:k3d-test-images Volumes:[k3d-test-images]} 
DEBU[0005] Deleting node k3d-test-serverlb ...          
TRAC[0005] [Docker] Deleted Container k3d-test-serverlb 
DEBU[0005] Deleting node k3d-test-server-0 ...          
TRAC[0006] [Docker] Deleted Container k3d-test-server-0 
INFO[0006] Deleting cluster network 'k3d-test'          
INFO[0006] Deleting 1 attached volumes...               
DEBU[0006] Deleting volume k3d-test-images...           
FATA[0006] Cluster creation FAILED, all changes have been rolled back! 

docker logs k3d-k3s-default-serverlb

[2024-05-01T19:54:32+0000] creating initial nginx config (try 3/3)
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO Backend set to file
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO Starting confd
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO Backend source(s) set to /etc/confd/values.yaml
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Loading template resources from confdir /etc/confd
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Found template: /etc/confd/conf.d/nginx.toml
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Loading template resource from /etc/confd/conf.d/nginx.toml
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Retrieving keys from store
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Key prefix set to /
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Key Map: map[string]string{"/ports/6443.tcp/0":"k3d-k3s-default-server-0", "/settings/workerConnections":"1024"}
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Got the following map from store: map[/ports/6443.tcp/0:k3d-k3s-default-server-0 /settings/workerConnections:1024]
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Using source template /etc/confd/templates/nginx.tmpl
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Compiling source template /etc/confd/templates/nginx.tmpl
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Comparing candidate config to /etc/nginx/nginx.conf
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO /etc/nginx/nginx.conf has md5sum fc7ae7839fba2feb2145900a0b6abf8d should be 38d71f35f3941d5788f417f69e5b77fb
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO Target config /etc/nginx/nginx.conf out of sync
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: DEBUG Overwriting target config /etc/nginx/nginx.conf
2024-05-01T19:54:32Z k3d-k3s-default-serverlb confd[24]: INFO Target config /etc/nginx/nginx.conf has been updated

docker logs k3d-k3s-default-server-0:

time="2024-05-01T19:43:32Z" level=info msg="Starting k3s v1.26.4+k3s1 (8d0255af)"
time="2024-05-01T19:43:32Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2024-05-01T19:43:32Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2024-05-01T19:43:32Z" level=info msg="Database tables and indexes are up to date"
time="2024-05-01T19:43:32Z" level=info msg="Kine available at unix://kine.sock"
time="2024-05-01T19:43:32Z" level=info msg="Bootstrap key locked for initial create"
time="2024-05-01T19:43:32Z" level=info msg="generated self-signed CA certificate CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32.396037604 +0000 UTC notAfter=2034-04-29 19:43:32.396037604 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:apiserver,O=system:masters signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="generated self-signed CA certificate CN=k3s-server-ca@1714592612: notBefore=2024-05-01 19:43:32.398889409 +0000 UTC notAfter=2034-04-29 19:43:32.398889409 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="generated self-signed CA certificate CN=k3s-request-header-ca@1714592612: notBefore=2024-05-01 19:43:32.399589047 +0000 UTC notAfter=2034-04-29 19:43:32.399589047 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="generated self-signed CA certificate CN=etcd-server-ca@1714592612: notBefore=2024-05-01 19:43:32.400250862 +0000 UTC notAfter=2034-04-29 19:43:32.400250862 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="generated self-signed CA certificate CN=etcd-peer-ca@1714592612: notBefore=2024-05-01 19:43:32.401137983 +0000 UTC notAfter=2034-04-29 19:43:32.401137983 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="Saving cluster bootstrap data to datastore"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=warning msg="dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request"
time="2024-05-01T19:43:32Z" level=info msg="Active TLS secret / (ver=) (count 12): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-10.89.0.2:10.89.0.2 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-k3d-k3s-default-server-0:k3d-k3s-default-server-0 listener.cattle.io/cn-k3d-k3s-default-serverlb:k3d-k3s-default-serverlb listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=CA33009334309847B79E87274A21518F06D8997B]"
time="2024-05-01T19:43:32Z" level=info msg="Bootstrap key lock is held"
time="2024-05-01T19:43:32Z" level=info msg="Tunnel server egress proxy mode: agent"
time="2024-05-01T19:43:32Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
time="2024-05-01T19:43:32Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
time="2024-05-01T19:43:32Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259"
time="2024-05-01T19:43:32Z" level=info msg="Waiting for API server to become available"
time="2024-05-01T19:43:32Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
W0501 19:43:32.781821       7 feature_gate.go:241] Setting GA feature gate JobTrackingWithFinalizers=true. It will be removed in a future release.
time="2024-05-01T19:43:32Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false"
time="2024-05-01T19:43:32Z" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token"
I0501 19:43:32.782828       7 server.go:569] external host was not specified, using 10.89.0.2
time="2024-05-01T19:43:32Z" level=info msg="To join server node to cluster: k3s server -s https://10.89.0.2:6443 -t ${SERVER_NODE_TOKEN}"
time="2024-05-01T19:43:32Z" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
time="2024-05-01T19:43:32Z" level=info msg="To join agent node to cluster: k3s agent -s https://10.89.0.2:6443 -t ${AGENT_NODE_TOKEN}"
I0501 19:43:32.783218       7 server.go:171] Version: v1.26.4+k3s1
I0501 19:43:32.783238       7 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
time="2024-05-01T19:43:32Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2024-05-01T19:43:32Z" level=info msg="Run: k3s kubectl"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1714592612: notBefore=2024-05-01 19:43:32 +0000 UTC notAfter=2025-05-01 19:43:32 +0000 UTC"
time="2024-05-01T19:43:32Z" level=info msg="Module overlay was already loaded"
time="2024-05-01T19:43:32Z" level=info msg="Module nf_conntrack was already loaded"
time="2024-05-01T19:43:32Z" level=warning msg="Failed to load kernel module br_netfilter with modprobe"
time="2024-05-01T19:43:32Z" level=warning msg="Failed to load kernel module iptable_nat with modprobe"
time="2024-05-01T19:43:32Z" level=warning msg="Failed to load kernel module iptable_filter with modprobe"
time="2024-05-01T19:43:32Z" level=info msg="Set sysctl 'net/bridge/bridge-nf-call-iptables' to 1"
time="2024-05-01T19:43:32Z" level=error msg="Failed to set sysctl: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory"
time="2024-05-01T19:43:32Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2024-05-01T19:43:32Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2024-05-01T19:43:32Z" level=warning msg="cgroup v2 controllers are not delegated for rootless. Disabling cgroup."
time="2024-05-01T19:43:32Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2024-05-01T19:43:32Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
I0501 19:43:33.029753       7 shared_informer.go:270] Waiting for caches to sync for node_authorizer
I0501 19:43:33.031145       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0501 19:43:33.031159       7 plugins.go:161] Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
W0501 19:43:33.046259       7 genericapiserver.go:660] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0501 19:43:33.046954       7 instance.go:277] Using reconciler: lease
I0501 19:43:33.161283       7 instance.go:621] API group "internal.apiserver.k8s.io" is not enabled, skipping.
I0501 19:43:33.208997       7 instance.go:621] API group "resource.k8s.io" is not enabled, skipping.
W0501 19:43:33.302992       7 genericapiserver.go:660] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.303012       7 genericapiserver.go:660] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.304494       7 genericapiserver.go:660] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.307952       7 genericapiserver.go:660] Skipping API autoscaling/v2beta1 because it has no resources.
W0501 19:43:33.307972       7 genericapiserver.go:660] Skipping API autoscaling/v2beta2 because it has no resources.
W0501 19:43:33.311243       7 genericapiserver.go:660] Skipping API batch/v1beta1 because it has no resources.
W0501 19:43:33.312961       7 genericapiserver.go:660] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.314519       7 genericapiserver.go:660] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.314561       7 genericapiserver.go:660] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.318801       7 genericapiserver.go:660] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.318814       7 genericapiserver.go:660] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.320212       7 genericapiserver.go:660] Skipping API node.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.320227       7 genericapiserver.go:660] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.320254       7 genericapiserver.go:660] Skipping API policy/v1beta1 because it has no resources.
W0501 19:43:33.324254       7 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.324270       7 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.325664       7 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.325679       7 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.334161       7 genericapiserver.go:660] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.338390       7 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.338408       7 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.342231       7 genericapiserver.go:660] Skipping API apps/v1beta2 because it has no resources.
W0501 19:43:33.342248       7 genericapiserver.go:660] Skipping API apps/v1beta1 because it has no resources.
W0501 19:43:33.344113       7 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.344130       7 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
W0501 19:43:33.345655       7 genericapiserver.go:660] Skipping API events.k8s.io/v1beta1 because it has no resources.
W0501 19:43:33.355899       7 genericapiserver.go:660] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
time="2024-05-01T19:43:33Z" level=info msg="containerd is now running"
time="2024-05-01T19:43:33Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2024-05-01T19:43:33Z" level=warning msg="Disabling CPU quotas due to missing cpu controller or cpu.cfs_period_us"
time="2024-05-01T19:43:33Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=KubeletInUserNamespace=true --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2024-05-01T19:43:33Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2024-05-01T19:43:33Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
I0501 19:43:33.917748       7 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0501 19:43:33.917818       7 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0501 19:43:33.917833       7 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
I0501 19:43:33.917872       7 secure_serving.go:210] Serving securely on 127.0.0.1:6444
I0501 19:43:33.917954       7 available_controller.go:494] Starting AvailableConditionController
I0501 19:43:33.917972       7 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0501 19:43:33.917978       7 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0501 19:43:33.918029       7 autoregister_controller.go:141] Starting autoregister controller
I0501 19:43:33.918037       7 cache.go:32] Waiting for caches to sync for autoregister controller
I0501 19:43:33.917957       7 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0501 19:43:33.918057       7 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0501 19:43:33.918058       7 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
I0501 19:43:33.918086       7 crdregistration_controller.go:111] Starting crd-autoregister controller
I0501 19:43:33.918095       7 shared_informer.go:270] Waiting for caches to sync for crd-autoregister
I0501 19:43:33.917959       7 apf_controller.go:361] Starting API Priority and Fairness config controller
I0501 19:43:33.918110       7 customresource_discovery_controller.go:288] Starting DiscoveryController
I0501 19:43:33.918125       7 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0501 19:43:33.918128       7 controller.go:80] Starting OpenAPI V3 AggregationController
I0501 19:43:33.918131       7 shared_informer.go:270] Waiting for caches to sync for cluster_authentication_trust_controller
I0501 19:43:33.918158       7 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0501 19:43:33.918243       7 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0501 19:43:33.918251       7 controller.go:85] Starting OpenAPI controller
I0501 19:43:33.918273       7 controller.go:85] Starting OpenAPI V3 controller
I0501 19:43:33.918284       7 naming_controller.go:291] Starting NamingConditionController
I0501 19:43:33.918294       7 establishing_controller.go:76] Starting EstablishingController
I0501 19:43:33.918405       7 controller.go:83] Starting OpenAPI AggregationController
I0501 19:43:33.918425       7 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0501 19:43:33.918438       7 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0501 19:43:33.918467       7 controller.go:121] Starting legacy_token_tracking_controller
I0501 19:43:33.918475       7 shared_informer.go:270] Waiting for caches to sync for configmaps
I0501 19:43:33.918470       7 crd_finalizer.go:266] Starting CRDFinalizer
I0501 19:43:33.918722       7 gc_controller.go:78] Starting apiserver lease garbage collector
I0501 19:43:33.953987       7 controller.go:615] quota admission added evaluator for: namespaces
E0501 19:43:33.959155       7 controller.go:156] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocate IP 10.43.0.1: cannot allocate resources of type serviceipallocations at this time
I0501 19:43:34.002892       7 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0501 19:43:34.018221       7 shared_informer.go:277] Caches are synced for crd-autoregister
I0501 19:43:34.018242       7 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0501 19:43:34.018223       7 cache.go:39] Caches are synced for autoregister controller
I0501 19:43:34.018233       7 cache.go:39] Caches are synced for AvailableConditionController controller
I0501 19:43:34.018366       7 shared_informer.go:277] Caches are synced for cluster_authentication_trust_controller
I0501 19:43:34.018489       7 apf_controller.go:366] Running API Priority and Fairness config worker
I0501 19:43:34.018501       7 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0501 19:43:34.018673       7 shared_informer.go:277] Caches are synced for configmaps
I0501 19:43:34.029821       7 shared_informer.go:277] Caches are synced for node_authorizer
I0501 19:43:34.743532       7 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0501 19:43:34.922263       7 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0501 19:43:34.924753       7 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0501 19:43:34.924777       7 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0501 19:43:35.175310       7 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0501 19:43:35.196864       7 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0501 19:43:35.265907       7 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.43.0.1]
W0501 19:43:35.269065       7 lease.go:251] Resetting endpoints for master service "kubernetes" to [10.89.0.2]
I0501 19:43:35.269635       7 controller.go:615] quota admission added evaluator for: endpoints
I0501 19:43:35.272209       7 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
time="2024-05-01T19:43:35Z" level=info msg="Waiting for cloud-controller-manager privileges to become available"
W0501 19:43:35.785514       7 feature_gate.go:241] Setting GA feature gate JobTrackingWithFinalizers=true. It will be removed in a future release.
time="2024-05-01T19:43:35Z" level=info msg="Kube API server is now running"
time="2024-05-01T19:43:35Z" level=info msg="ETCD server is now running"
time="2024-05-01T19:43:35Z" level=info msg="k3s is up and running"
time="2024-05-01T19:43:35Z" level=info msg="Applying CRD addons.k3s.cattle.io"
time="2024-05-01T19:43:35Z" level=info msg="Applying CRD helmcharts.helm.cattle.io"
time="2024-05-01T19:43:35Z" level=info msg="Applying CRD helmchartconfigs.helm.cattle.io"
time="2024-05-01T19:43:35Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
I0501 19:43:35.836632       7 server.go:197] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
I0501 19:43:35.837954       7 server.go:407] "Kubelet version" kubeletVersion="v1.26.4+k3s1"
I0501 19:43:35.837968       7 server.go:409] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0501 19:43:35.838721       7 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
E0501 19:43:35.848669       7 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
W0501 19:43:35.849607       7 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I0501 19:43:35.850172       7 server.go:654] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I0501 19:43:35.850661       7 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0501 19:43:35.850712       7 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/k3s SystemCgroupsName: KubeletCgroupsName:/k3s KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
I0501 19:43:35.850732       7 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0501 19:43:35.850740       7 container_manager_linux.go:308] "Creating device plugin manager"
I0501 19:43:35.850822       7 state_mem.go:36] "Initialized new in-memory state store"
I0501 19:43:36.052672       7 server.go:770] "Failed to ApplyOOMScoreAdj" err="write /proc/self/oom_score_adj: permission denied"
I0501 19:43:36.054374       7 kubelet.go:398] "Attempting to sync node with API server"
I0501 19:43:36.054393       7 kubelet.go:286] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I0501 19:43:36.054415       7 kubelet.go:297] "Adding apiserver pod source"
I0501 19:43:36.054428       7 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I0501 19:43:36.055724       7 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="v1.6.19-k3s1" apiVersion="v1"
W0501 19:43:36.055884       7 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
E0501 19:43:36.056283       7 server.go:1170] "Failed to set rlimit on max file handles" err="operation not permitted"
I0501 19:43:36.056297       7 server.go:1181] "Started kubelet"
I0501 19:43:36.056320       7 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
E0501 19:43:36.056547       7 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
E0501 19:43:36.056571       7 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I0501 19:43:36.057206       7 server.go:451] "Adding debug handlers to kubelet server"
I0501 19:43:36.057843       7 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I0501 19:43:36.057886       7 volume_manager.go:293] "Starting Kubelet Volume Manager"
I0501 19:43:36.057915       7 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
E0501 19:43:36.060995       7 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"k3d-k3s-default-server-0\" not found" node="k3d-k3s-default-server-0"
I0501 19:43:36.067382       7 cpu_manager.go:214] "Starting CPU manager" policy="none"
I0501 19:43:36.067393       7 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
I0501 19:43:36.067408       7 state_mem.go:36] "Initialized new in-memory state store"
I0501 19:43:36.069521       7 policy_none.go:49] "None policy: Start"
I0501 19:43:36.070103       7 memory_manager.go:169] "Starting memorymanager" policy="None"
I0501 19:43:36.070122       7 state_mem.go:35] "Initializing new in-memory state store"
E0501 19:43:36.072807       7 node_container_manager_linux.go:61] "Failed to create cgroup" err="mkdir /sys/fs/cgroup/cpuset/kubepods: permission denied" cgroupName=[kubepods]
E0501 19:43:36.072816       7 kubelet.go:1466] "Failed to start ContainerManager" err="mkdir /sys/fs/cgroup/cpuset/kubepods: permission denied"
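
The last two lines are the "Failed to create cgroup" error mentioned above, and they follow the earlier warning "cgroup v2 controllers are not delegated for rootless. Disabling cgroup." even though this host reports cgroup v1 (CgroupVersion:1). As far as I know, rootless k3s expects cgroup v2 with controllers delegated to the user's systemd instance; on a cgroup v2 host the usual delegation drop-in looks roughly like the sketch below (taken from the upstream rootless documentation, untested here and not applicable to cgroup v1):

# /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=cpu cpuset io memory pids

# apply with:
#   sudo systemctl daemon-reload
# then log out and back in so the user manager picks up the delegation

On cgroup v1 there is no equivalent delegation mechanism, which would explain the "permission denied" when the kubelet tries to create /sys/fs/cgroup/cpuset/kubepods.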

Which OS & Architecture

❯ k3d runtime-info
arch: amd64
cgroupdriver: cgroupfs
cgroupversion: "1"
endpoint: /run/user/605833/podman/podman.sock
filesystem: extfs
infoname: sjc-ads-6761
name: docker
os: '"rhel"'
ostype: linux
version: 4.4.1
❯ podman version
Client:       Podman Engine
Version:      4.4.1
API Version:  4.4.1
Go Version:   go1.19.6
Built:        Thu Jun 15 07:39:56 2023
OS/Arch:      linux/amd64

Which version of k3d

❯ k3d version
k3d version v5.5.0
k3s version v1.26.4-k3s1 (default)

Which version of docker

❯ podman info
host:
  arch: amd64
  buildahVersion: 1.29.0
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.6-1.module+el8.8.0+18098+9b44df5f.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: 8c4ab5a095127ecc96ef8a9c885e0e1b14aeb11b'
  cpuUtilization:
    idlePercent: 92.94
    systemPercent: 0.76
    userPercent: 6.3
  cpus: 16
  distribution:
    distribution: '"rhel"'
    version: "8.8"
  eventLogger: file
  hostname: sjc-ads-6761
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 25
      size: 1
    - container_id: 1
      host_id: 4080000000
      size: 60000000
    uidmap:
    - container_id: 0
      host_id: 605833
      size: 1
    - container_id: 1
      host_id: 4140000000
      size: 60000000
  kernel: 4.18.0-477.15.1.el8_8.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 4451016704
  memTotal: 33406013440
  networkBackend: cni
  ociRuntime:
    name: runc
    package: runc-1.1.4-1.module+el8.8.0+18060+3f21f2cc.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.1.4
      spec: 1.0.2-dev
      go: go1.19.4
      libseccomp: 2.5.2
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/605833/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_SYS_CHROOT,CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /bin/slirp4netns
    package: slirp4netns-1.2.0-2.module+el8.8.0+18060+3f21f2cc.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 15763599360
  swapTotal: 34359721984
  uptime: 4807h 5m 0.00s (Approximately 200.29 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /users/arhashem/.config/containers/storage.conf
  containerStore:
    number: 19
    paused: 0
    running: 10
    stopped: 9
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /users/arhashem/.local/share/containers/storage
  graphRootAllocated: 831824330752
  graphRootUsed: 705473302528
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 25
  runRoot: /run/user/605833/containers
  transientStore: false
  volumePath: /users/arhashem/.local/share/containers/storage/volumes
version:
  APIVersion: 4.4.1
  Built: 1686839996
  BuiltTime: Thu Jun 15 07:39:56 2023
  GitCommit: ""
  GoVersion: go1.19.6
  Os: linux
  OsArch: linux/amd64
  Version: 4.4.1

Cub0n commented May 8, 2024

Same problem here:

K3D

k3d version v5.6.3
k3s version v1.28.8-k3s1 (default)

Podman

host:
  arch: arm
  buildahVersion: 1.28.2
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_armhf
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpus: 4
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  kernel: 6.1.0-21-armmp
  linkmode: dynamic
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_1.8.1-1+deb12u1_armhf
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c3
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_armhf
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
plugins:
  authorization: null
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs_1.10-1_armhf
      Version: |-
        fusermount3 version: 3.14.0
        fuse-overlayfs: version 1.10
        FUSE library version 3.14.0
        using FUSE kernel interface version 7.31
    overlay.mountopt: nodev
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
version:
  APIVersion: 4.3.1
  Built: 0
  BuiltTime: Thu Jan 1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.19.8
  Os: linux
  OsArch: linux/arm
  Version: 4.3.1
