0 votes

I'm new to OpenShift and Kubernetes. I have installed an OpenShift v3.11.0+bf985b1-463 cluster on CentOS 7. Both prerequisites.yml and deploy_cluster.yml ran with a successful status. I have updated htpasswd and granted the cluster-admin role to my user:

htpasswd -b ${HTPASSWD_PATH}/htpasswd $OKD_USERNAME ${OKD_PASSWORD}
oc adm policy add-cluster-role-to-user cluster-admin $OKD_USERNAME
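
For reference, these htpasswd entries only take effect if the master is configured with an HTPasswd identity provider. The relevant stanza in /etc/origin/master/master-config.yaml looks roughly like this (a sketch; the provider name htpasswd_auth is an example and may differ in your install):

oauthConfig:
  identityProviders:
  - name: htpasswd_auth            # example name; identities are prefixed with this
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/origin/master/htpasswd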

I have also created the user and identity with the commands below:

oc create user bob
oc create identity ldap_provider:bob
oc create useridentitymapping ldap_provider:bob bob
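
Note that the ldap_provider prefix in the identity name must match the name of an identity provider actually configured in master-config.yaml; a mismatch between the identity a user is mapped to and the provider used at login can cause login failures. The created objects can be inspected with the standard commands:

oc get user bob
oc get identity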

When I try to log in with oc login -u bob -p password it says:

Login failed (401 Unauthorized) Verify you have provided correct credentials.

But I am able to log in with oc login -u system:admin.

For your information: the OKD deploy_cluster.yml ran successfully, but the pod below is in an error state. Could that be causing the problem? (Output of oc get pods:)

[screenshot: oc get pods output showing the registry-console pod in Error state]

Please suggest how I can fix this issue. Thank you.

UPDATE: I ran deploy_cluster.yml once again and the login issue is solved; I am able to log in. But the playbook now fails with the error below.

This phase can be restarted by running: playbooks/openshift-logging/config.yml
Node logging-es-data-master-ioblern6 in cluster logging-es was unable to rollout. Please see documentation regarding recovering during a rolling cluster restart

In the OpenShift console the logging pod has the event below. [screenshot: event on the logging-es-data-master pod]
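
(The same events can also be listed from the CLI; assuming the default logging namespace openshift-logging:)

oc get events -n openshift-logging --sort-by='.lastTimestamp'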

But all the servers have enough memory; more than 65% is free on each. The Ansible version is 2.6.5.

1 master node: 4 CPU, 16 GB RAM, 50 GB HDD

2 worker nodes and 1 infra node: 4 CPU, 16 GB RAM, 20 GB HDD

Any reason you installed OCP v3.11? The current version is v4.5.7... - titou10
My management asked for this version only. And I have one more doubt: according to the OKD documentation for v3.11, the latest version is v3.11.272, but I installed just a few days back and the version is v3.11.0+bf985b1-463. How can I upgrade to the latest version? - Kavinithees
Please check the resources on the worker node. - Dashrath Mundkar
I have checked, and I have added my configuration details in the UPDATE section. @DashrathMundkar - Kavinithees

1 Answer

1 vote

To create a new user, try to follow these steps:

1. On each master node, create the password entry in the htpasswd file:

$ htpasswd -b </path/to/htpasswd> <user_name> <password>

$ htpasswd -b /etc/origin/master/htpasswd myUser myPassword

2. On each master node, restart the master API and master controllers (a quick verification is shown after step 4):

$ master-restart controllers && master-restart api
or
$ /usr/local/bin/master-restart api && /usr/local/bin/master-restart controllers

3. Apply the needed roles:

$ oc adm policy add-cluster-role-to-user cluster-admin myUser

4. Log in as myUser:

$ oc login -u myUser -p myPassword 
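
To verify that the restart and the login took effect (in 3.11 the master API and controllers run as static pods in the kube-system namespace):

$ oc get pods -n kube-system
$ oc whoami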

Running deploy_cluster.yml again after configuring the htpasswd file forced a restart of the master controllers and API, which is why you were then able to log in as your new user.

About the other problem: the registry-console and logging-es-data-master pods are not running because you cannot run deploy_cluster.yml again when your cluster is already up and running, so you have to uninstall OKD and then run the playbook again. This happens because the SDN is already working and all your nodes already own all the needed certificates.

$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml

and then again

$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

More detailed information is here.
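
For reference, the htpasswd-related part of the inventory that deploy_cluster.yml consumes looks roughly like this (a sketch; the provider name, user, and hash are placeholders):

[OSEv3:vars]
# HTPasswd identity provider (OKD 3.11 inventory syntax)
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Optional: seed users; values must be htpasswd-hashed passwords
openshift_master_htpasswd_users={'myUser': '<hashed_password>'}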

If, after all this procedure, the logging-es-data-master pod still does not run, uninstall the logging component with:

$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true

and then uninstall the whole OKD cluster and install it again.

If your cluster is already working and you cannot perform the installation again, try to uninstall and reinstall only the logging component:

$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=True
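
After the reinstall you can watch the logging pods come back up (openshift-logging is the default namespace):

$ oc get pods -n openshift-logging -w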

RH detailed instructions are here.