Hi!
I have read discussion #473 and the troubleshooting guide. It looks like the play fails at the step where the worker node tries to join a cluster running on the same host.
The error message:
TASK [k3s_agent : Manage k3s service] ***************************************************************************************************************************************
fatal: [launch-single]: FAILED! => {"changed": false, "msg": "Unable to start service k3s-node: Job for k3s-node.service failed because the control process exited with error code.\nSee \"systemctl status k3s-node.service\" and \"journalctl -xeu k3s-node.service\" for details.\n"}
Journalctl has the following messages:
Feb 10 11:50:29 launch-single k3s[9182]: time="2026-02-10T11:50:29Z" level=info msg="Starting k3s agent v1.33.6+k3s1 (b5847677)"
Feb 10 11:50:29 launch-single k3s[9182]: time="2026-02-10T11:50:29Z" level=warning msg="Error starting load balancer: listen tcp 127.0.0.1:6444: bind: address already in us>
Feb 10 11:50:29 launch-single k3s[9182]: time="2026-02-10T11:50:29Z" level=fatal msg="Error: listen tcp 127.0.0.1:6444: bind: address already in use"
Feb 10 11:50:29 launch-single systemd[1]: k3s-node.service: Main process exited, code=exited, status=1/FAILURE
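The bind error happens because the host is listed under both `[master]` and `[node]`, so the play starts a k3s server and a k3s agent on the same machine, and both want 127.0.0.1:6444. A minimal sketch (not the project's code, group names assumed from the hosts.ini below) that detects this overlap by parsing the inventory:

```python
import configparser
from io import StringIO

# Hypothetical inventory matching the hosts.ini in this issue: the same
# host appears in both [master] and [node].
INVENTORY = """
[master]
launch-single

[node]
launch-single

[k3s_cluster:children]
master
node
"""

# Ansible inventories are INI-like with bare hostnames as keys,
# so parse with allow_no_value=True.
parser = configparser.ConfigParser(allow_no_value=True)
parser.read_file(StringIO(INVENTORY))

masters = set(parser["master"])
nodes = set(parser["node"])

# Any host in both groups would run server and agent side by side,
# which is what produces the "address already in use" failure above.
overlap = masters & nodes
print(sorted(overlap))  # → ['launch-single']
```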
hosts.ini:
[master]
launch-single
[node]
launch-single
[k3s_cluster:children]
master
node
UPDATE: The other discussion #290 has made single-node installation clearer: the node should be listed under [master] only. It is definitely worth a note in the project's README.md.
Here is the corrected hosts.ini:
[master]
launch-single
[k3s_cluster:children]
master
Please close the issue.