
I have a successful install of OpenShift Origin 3.9 with 2 masters, 2 etcd, 2 infra, and 2 compute nodes. I am unable to log in via the web console, but logging in via the CLI works fine (oc login -u system:admin).

I have already run "oc adm policy add-cluster-role-to-user cluster-admin system", but there was no change.

Running "oc get users" returns "No resources found". I have htpasswd authentication set up. Creating the system:admin account worked without issue, but any other users I create do not show up in "oc get users". It is almost as if nothing is being read from the htpasswd file. I can add users to the htpasswd file manually, but logging in with those credentials fails in both the CLI and the web console.
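
The way I was adding users is roughly the following (username and password here are placeholders, not my real credentials):

```shell
# Add (or update) a user in the htpasswd file. This must be done on
# EVERY master, since each API server reads its own local copy of the
# file at login time. 'alice' / 'changeme' are example values.
sudo htpasswd -b /etc/origin/master/htpasswd alice changeme

# If the htpasswd utility is not installed, openssl can generate an
# equivalent APR1-format entry, which the provider accepts:
echo "alice:$(openssl passwd -apr1 changeme)" | sudo tee -a /etc/origin/master/htpasswd

# The user and identity objects only appear after the first login:
oc login -u alice
```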

Some details:

[root@master1 master]# oc get identity
No resources found.

[root@master1 master]# oc get user
No resources found.

When I try to create a new user, it is created without being linked to any identity provider:

[root@master1 master]# oc create user test1
user "test1" created
[root@master1 master]# oc get users
NAME      UID                                    FULL NAME   IDENTITIES
test1     c5352b4a-92b0-11e8-99d1-42010a8e0003               
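
From what I can tell from the docs, a user created this way can also be associated with the provider manually. A sketch (unverified on my cluster) using the provider name from my config:

```shell
# Manually link user 'test1' to the htpasswd_auth provider. The identity
# name is <provider_name>:<user_name_within_the_provider>.
oc create identity htpasswd_auth:test1
oc create useridentitymapping htpasswd_auth:test1 test1

# 'oc get user test1' should then show the identity in the IDENTITIES column.
```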

master-config.yaml identity config:

oauthConfig:
  assetPublicURL: https://X.X.X.X:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: add
    name: htpasswd_auth
    provider:
      apiVersion: v1
      file: /etc/origin/master/htpasswd
      kind: HTPasswdPasswordIdentityProvider
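
Given this config, one hypothetical diagnostic (hostnames are examples) is to check that the file the provider points at actually exists, and is identical, on every master:

```shell
# Compare the htpasswd file across masters. A missing or divergent copy
# means logins will succeed or fail depending on which API server the
# load balancer happens to route the request to.
for h in master1 master2; do
  echo "== $h =="
  ssh "$h" md5sum /etc/origin/master/htpasswd
done
```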

Below is my ansible config:

# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=timebrk
openshift_deployment_type=origin
ansible_become=yes

# Cloud Provider Configuration
openshift_cloudprovider_kind=gce
openshift_gcp_project=emerald-ivy-211414
openshift_gcp_prefix=453007126348
openshift_gcp_multizone=False 

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
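
One thing I am not sure about is whether this variable alone makes the installer place the htpasswd file on the masters. The inventory examples also document two related variables for that, sketched below with placeholder values:

```ini
# Option 1: copy a locally prepared htpasswd file to every master
#openshift_master_htpasswd_file=/path/to/local/htpasswd

# Option 2: have the installer create the file from inline users
# (values are pre-hashed password placeholders, not real hashes)
#openshift_master_htpasswd_users={'user1': '<apr1-hash>', 'user2': '<apr1-hash>'}
```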

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=X.X.X.X
openshift_master_cluster_public_hostname=X.X.X.X

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.c.emerald-ivy-211414.internal openshift_ip=X.X.X.X
master2.c.emerald-ivy-211414.internal openshift_ip=X.X.X.X

# host group for etcd
[etcd]
etcd1.c.emerald-ivy-211414.internal
etcd2.c.emerald-ivy-211414.internal

# Specify load balancer host
[lb]
lb.c.emerald-ivy-211414.internal openshift_ip=X.X.X.X openshift_public_ip=X.X.X.X

# host group for nodes, includes region info
[nodes]
master[1:2].c.emerald-ivy-211414.internal
node1.c.emerald-ivy-211414.internal openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.c.emerald-ivy-211414.internal openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.c.emerald-ivy-211414.internal openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.c.emerald-ivy-211414.internal openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

1 Answer


It turns out the htpasswd file was not present on my master2 node for some reason. Once I copied it over from master1, I was able to log in to the web console using the system:admin credentials.

I still don't know why the password file was not synced across the master nodes, but my original issue is resolved.
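
The copy itself was a one-liner along these lines (sketch; adjust the hostname to your inventory):

```shell
# Push master1's copy to master2 so both API servers authenticate
# against the same file.
scp /etc/origin/master/htpasswd \
    master2.c.emerald-ivy-211414.internal:/etc/origin/master/htpasswd
```

Longer term, I suspect letting the installer manage the file (openshift_master_htpasswd_file in the inventory) would keep the masters in sync automatically.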