PRD: Auth Config File Permissions Issue
Problem Statement
The kube-dc-manager pod crashes with a permission-denied error when it tries to write to /etc/kube-auth-conf.yaml inside the container. That path is a hostPath mount of /etc/rancher/auth-conf.yaml, which is created with restrictive permissions.
Current Behavior
Pod crashes with:
kube auth file '/etc/kube-auth-conf.yaml' is not writable: open /etc/kube-auth-conf.yaml: permission denied
Root Cause
- File permissions: /etc/rancher/auth-conf.yaml is created with 644 (rw-r--r--), owned by root
- Container security: kube-dc-manager runs as a non-root user (runAsNonRoot: true)
- HostPath mount: the volume mounted from the host preserves the original file permissions
# Deployment configuration
securityContext:
  runAsNonRoot: true
volumeMounts:
  - mountPath: /etc/kube-auth-conf.yaml
    name: auth-config
    subPath: auth-conf.yaml
volumes:
  - hostPath:
      path: /etc/rancher
    name: auth-config
Current Workaround
Manual permission fix on all master nodes:
for ip in 213.111.154.233 213.111.154.229 213.111.154.223; do
ssh root@$ip 'chmod 666 /etc/rancher/auth-conf.yaml'
done
Issues with workaround:
- Reverts after file recreation
- Must be applied to new nodes manually
- Not persistent across reinstalls
Requirements
Must Have
- kube-dc-manager can write the auth configuration
- Solution survives node reboots
- Works on new nodes automatically
- Maintains security best practices
Should Have
- Integrated into RKE2 bootstrap scripts
- No manual intervention required
Could Have
- Alternative storage mechanism (ConfigMap/Secret)
- In-cluster auth config management
Proposed Solutions
Option A: Fix in RKE2 Bootstrap Scripts
Add permission fix to install-server.sh and install-agent.sh:
# Create auth-conf.yaml with correct permissions
mkdir -p /etc/rancher
touch /etc/rancher/auth-conf.yaml
chmod 666 /etc/rancher/auth-conf.yaml
# Initialize with empty config
cat > /etc/rancher/auth-conf.yaml <<EOF
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt: []
EOF
Pros:
- Automatic on new nodes
- Simple implementation
- No deployment changes needed
Cons:
- Requires bootstrap script modification
- World-writable file on host
Option B: Use EmptyDir with Init Container
Replace the hostPath volume with an emptyDir and use an init container to copy the initial config:
initContainers:
  - name: auth-config-init
    image: busybox
    command: ['sh', '-c', 'cp /host-config/auth-conf.yaml /config/ && chmod 666 /config/auth-conf.yaml']
    volumeMounts:
      - name: host-config
        mountPath: /host-config
        readOnly: true
      - name: auth-config
        mountPath: /config
volumes:
  - name: auth-config
    emptyDir: {}
  - name: host-config
    hostPath:
      path: /etc/rancher
Pros:
- No host permission changes
- Works with existing security context
- Self-contained solution
Cons:
- Config changes lost when the pod is rescheduled (needs a sync mechanism back to the host)
- More complex deployment
Option C: Use Kubernetes Secret Instead
Store the auth configuration in a Kubernetes Secret and sync it to the host via a DaemonSet:
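A minimal sketch of what the sync DaemonSet could look like (all names here - kube-dc-auth-sync, the kube-dc namespace, the kube-dc-auth-conf Secret - are illustrative assumptions, not existing resources):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-dc-auth-sync
  namespace: kube-dc
spec:
  selector:
    matchLabels:
      app: kube-dc-auth-sync
  template:
    metadata:
      labels:
        app: kube-dc-auth-sync
    spec:
      # The API server reads the file on control-plane nodes, so tolerate their taint
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: sync
          image: busybox
          # Periodically copy the Secret-projected file onto the host path
          command: ['sh', '-c', 'while true; do cp /secret/auth-conf.yaml /host/auth-conf.yaml; sleep 30; done']
          volumeMounts:
            - name: auth-secret
              mountPath: /secret
              readOnly: true
            - name: host-dir
              mountPath: /host
      volumes:
        - name: auth-secret
          secret:
            secretName: kube-dc-auth-conf
        - name: host-dir
          hostPath:
            path: /etc/rancher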
Pros:
- Kubernetes-native
- Proper RBAC controls
- Audit trail
Cons:
- Requires architecture change
- Additional DaemonSet
Option D: Change Deployment Security Context
Add fsGroup to allow group write access:
securityContext:
  runAsNonRoot: true
  fsGroup: 1000
Combined with a group-ownership change on the host file.
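The host-side half of this option would look like the following (a sketch; GID 1000 is the assumed fsGroup from above and must exist on each node). Note that the kubelet does not apply fsGroup ownership changes to hostPath volumes, which is why the change must be made on the host:
# Run on each master node; the GID must match the deployment's fsGroup
chgrp 1000 /etc/rancher/auth-conf.yaml
chmod 664 /etc/rancher/auth-conf.yaml  # rw-rw-r--: group-writable, not world-writable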
Pros:
- Minimal change
- More secure than world-writable
Cons:
- Still requires host-side change
- Host file's group must match the container's fsGroup
Recommended Solution
Option A: Fix in RKE2 Bootstrap Scripts - Simple and effective.
Add to /home/voa/projects/kube-dc.cloud/scripts/rke2/install-server.sh:
# Ensure auth config file is writable for kube-dc-manager
mkdir -p /etc/rancher
if [ ! -f /etc/rancher/auth-conf.yaml ]; then
  cat > /etc/rancher/auth-conf.yaml <<EOF
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt: []
EOF
fi
chmod 666 /etc/rancher/auth-conf.yaml
Affected Files
- /home/voa/projects/kube-dc.cloud/scripts/rke2/install-server.sh
- /home/voa/projects/kube-dc.cloud/scripts/rke2/install-agent.sh
- Kube-DC Helm chart (manager deployment)
Implementation Steps
- Add auth-conf.yaml creation to install-server.sh
- Add auth-conf.yaml creation to install-agent.sh
- Document the requirement in the deployment guide
- Test on a fresh cluster deployment
- Verify kube-dc-manager starts without manual intervention
Success Criteria
- Fresh cluster deployment: kube-dc-manager pod running without errors
- Node reboot: permissions persist
- New node addition: auth config automatically configured
- No manual chmod commands required
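A quick verification sketch (the namespace and label selector are assumptions about the Kube-DC Helm chart; the IPs are the master nodes from the workaround above):
# Manager pod should be Running with no restarts
kubectl get pods -n kube-dc -l app=kube-dc-manager

# Permissions should hold on every master after reboot or reinstall
for ip in 213.111.154.233 213.111.154.229 213.111.154.223; do
  ssh root@$ip 'ls -l /etc/rancher/auth-conf.yaml'
done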
Priority
High - Blocks kube-dc-manager operation, currently requires manual fix
Security Considerations
- A file with 666 permissions is world-readable and world-writable
- The file contains OIDC authentication configuration
- Consider restricting to group-writable with a matching fsGroup
- The host file should not contain secrets (those live in Kubernetes Secrets)
Kubernetes Native Alternatives Research
As of Kubernetes 1.30+ (January 2026):
The StructuredAuthenticationConfiguration feature (beta) provides:
- ✅ Dynamic reload - API server watches file, no restart needed on change
- ✅ Multiple OIDC providers - Can configure multiple JWT authenticators
- ✅ CEL expressions - For claim validation and mapping
However, authentication config is still file-based:
- ❌ Not a CRD - No Kubernetes-native declarative resource
- ❌ Not ConfigMap/Secret - Cannot be managed via kubectl
- ❌ Host filesystem required - --authentication-config points to a local file
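For reference, a structured authentication config with one JWT authenticator and a CEL claim mapping looks roughly like this (the issuer URL and claim names are placeholders, not our actual OIDC setup):
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
  - issuer:
      url: https://oidc.example.com/realms/kube-dc  # placeholder issuer
      audiences:
        - kubernetes
    claimMappings:
      username:
        # CEL expression building the Kubernetes username from a token claim
        expression: "'oidc:' + claims.preferred_username"
      groups:
        claim: groups
        prefix: "oidc:"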
Conclusion: File-based configuration with writable permissions remains necessary.
The Kubernetes community has not yet moved to a fully declarative in-cluster
authentication configuration model. Our chmod 666 fix is the correct approach
for the current Kubernetes architecture.
Reference: KEP-3331: Structured Authentication Config