I've created clusters using the kops command. For each cluster I have to create a hosted zone and add its nameservers to the DNS provider. To create a hosted zone, I created a subdomain of the hosted zone in AWS (example.com) using the following command:
ID=$(uuidgen) && aws route53 create-hosted-zone --name subdomain1.example.com --caller-reference $ID | jq .DelegationSet.NameServers
The nameservers returned by the above command are then placed in a newly created file, subdomain1.json, with the following content:
{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "subdomain1.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-1.awsdns-1.co.uk" },
          { "Value": "ns-2.awsdns-2.org" },
          { "Value": "ns-3.awsdns-3.com" },
          { "Value": "ns-4.awsdns-4.net" }
        ]
      }
    }
  ]
}
To get the parent-zone-id, I've used the following command:
aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="example.com.") | .Id'
To apply the subdomain NS records to the parent hosted zone:
aws route53 change-resource-record-sets --hosted-zone-id <parent-zone-id> --change-batch file://subdomain1.json
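Incidentally, the subdomain1.json file above can be generated directly from the create-hosted-zone output with jq instead of pasting the nameservers in by hand. A sketch (the nameserver values here are placeholders standing in for whatever the earlier command printed):

```shell
# In practice this array comes from the create-hosted-zone command, e.g.:
#   aws route53 create-hosted-zone --name subdomain1.example.com --caller-reference $ID \
#     | jq .DelegationSet.NameServers > subdomain1-ns.json
# Placeholder values are written here for illustration:
printf '%s' '["ns-1.awsdns-1.co.uk","ns-2.awsdns-2.org","ns-3.awsdns-3.com","ns-4.awsdns-4.net"]' > subdomain1-ns.json

# Wrap the nameserver array into a Route53 change batch
jq '{
  Comment: "Create a subdomain NS record in the parent domain",
  Changes: [{
    Action: "CREATE",
    ResourceRecordSet: {
      Name: "subdomain1.example.com",
      Type: "NS",
      TTL: 300,
      ResourceRecords: map({Value: .})
    }
  }]
}' subdomain1-ns.json > subdomain1.json
```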
Then I created a cluster using the kops command:
kops create cluster --name=subdomain1.example.com --master-count=1 --master-zones ap-southeast-1a --node-count=1 --zones=ap-southeast-1a --authorization=rbac --state=s3://example.com --kubernetes-version=1.11.0 --yes
I'm able to create the cluster, validate it, and get its nodes. Using the same procedure, I created one more cluster (subdomain2.example.com).
I've set aliases for the two clusters using these commands:
kubectl config set-context subdomain1 --cluster=subdomain1.example.com --user=subdomain1.example.com
kubectl config set-context subdomain2 --cluster=subdomain2.example.com --user=subdomain2.example.com
To set up federation between these two clusters, I've used these commands:
kubectl config use-context subdomain1
kubectl create clusterrolebinding admin-to-cluster-admin-binding --clusterrole=cluster-admin --user=admin
kubefed init interstellar --host-cluster-context=subdomain1 --dns-provider=aws-route53 --dns-zone-name=example.com
The kubefed init command should complete by reporting that the federation control plane is up. But for me it just keeps showing "waiting for the federation control plane to come up..." and never finishes. What might be the error?
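To narrow down why the control plane never comes up, inspecting its components in the host cluster usually points at the cause. A sketch of the checks (assuming the default kubefed namespace, federation-system; these need access to the live host cluster):

```shell
# See which control plane components are stuck (Pending pods, unbound PVCs, etc.)
kubectl --context=subdomain1 get pods,svc,pvc -n federation-system

# Describe pods to surface scheduling or volume-mount errors in the events
kubectl --context=subdomain1 describe pods -n federation-system

# kubefed init provisions a PersistentVolumeClaim for etcd; without a default
# StorageClass the apiserver pod stays Pending forever
kubectl --context=subdomain1 get storageclass

# The federation apiserver is exposed through a LoadBalancer service by default;
# if the ELB never gets an external address, kubefed keeps waiting
kubectl --context=subdomain1 get svc interstellar-apiserver -n federation-system
```

Two common culprits with this symptom are a missing default StorageClass (work around it with kubefed init's --etcd-persistent-storage=false flag) and a LoadBalancer service that never receives an address.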
I followed this tutorial to create the two clusters:
https://gist.github.com/arun-gupta/02f534c7720c8e9c9a875681b430441a