
I have several AWS EC2 instances, and on them I have a Rancher instance deployed. On Rancher I've deployed a website using Kubernetes, with Istio handling the networking, and I am able to log in at http://portal.website.com:31380. I also have AWS Route 53 to get the URL working, and nginx as a load balancer across the EC2 instances.

But I want to be able to log in with just http://portal.website.com, i.e. without the port. Is there a way for me to do this?

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: portal-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ingress
spec:
  hosts:
  - "*"
  gateways:
  - portal-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    rewrite:
      uri: "/"
    route:
    - destination:
        host: portal
        port:
          number: 80
    websocketUpgrade: true
---
apiVersion: v1
kind: Service
metadata:
  name: portal
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: portal
  type: ClusterIP

Edit: I am accessing this on 31380 because it is set up to use a NodePort (https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). The Istio docs say: "If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port."

Here is the output of kubectl get svc istio-ingressgateway -n istio-system

NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
istio-ingressgateway   NodePort   10.43.200.101   <none>        15020:30051/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30419/TCP,15030:30306/TCP,15031:31130/TCP,15032:32720/TCP,15443:30361/TCP   3h27m

Just curious... where did the port number 31380 come from? Is there a NodePort Service? – sachin
As @sachin mentioned, why do you have to use the port number? How did you configure your ingress gateway? Is it a load balancer or a node port? What is the output from kubectl get svc istio-ingressgateway -n istio-system? – Jakub
@sachin I edited the bottom of my post. – Mike K.
@jt97 I edited the bottom of my post. – Mike K.
@MikeK. So you don't get a load balancer that points to the Istio ingress gateway when installing Istio? Can't you change the svc istio-ingressgateway type to LoadBalancer? – sachin

1 Answer


As you mentioned, the Istio documentation says:

If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port.


If we take a look at the Kubernetes documentation about NodePort:

If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field.
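
For example, you can read the node port that was allocated for port 80 straight from that field. A quick check, assuming the default istio-ingressgateway Service in the istio-system namespace, as in your output:

kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'

With your Service this would print 31380, matching the 80:31380/TCP entry in PORT(S).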

So if your ingress gateway is a NodePort Service, then you have to use http://portal.website.com:31380.

If you want to use http://portal.website.com, you would have to change it to LoadBalancer.
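
A minimal sketch of that change, assuming you are fine with patching the running Service in place rather than changing your Istio install options:

kubectl -n istio-system patch svc istio-ingressgateway \
  -p '{"spec": {"type": "LoadBalancer"}}'

On AWS this should make the cloud provider provision a load balancer and publish its hostname under EXTERNAL-IP, so port 80 works without :31380.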

As @sachin mentioned, if you use a cloud provider like AWS, you can configure Istio with an AWS load balancer by adding the appropriate annotations.

On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service's .status.loadBalancer field.
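
For example, with the AWS cloud provider you can steer what kind of load balancer gets created by annotating the Service; the NLB annotation below is one common choice, not the only one, and the exact set of supported annotations depends on your Kubernetes version:

kubectl -n istio-system annotate svc istio-ingressgateway \
  service.beta.kubernetes.io/aws-load-balancer-type=nlb

After that, you would point your Route 53 record at the load balancer's DNS name rather than at the EC2 instances directly.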

I see you use AWS, so you can read more about it in the links below:


If it's on premises, then you could take a look at MetalLB:

MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.

Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.

Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.

MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible.
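
A minimal sketch of a MetalLB layer 2 setup, using the ConfigMap format MetalLB uses at the time of writing; metallb-system is the namespace MetalLB installs into, and the address range is a placeholder you would replace with free IPs from your own network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # Placeholder range; use unused addresses from your own subnet.
      - 192.168.1.240-192.168.1.250

Once this is applied, switching istio-ingressgateway to type LoadBalancer makes MetalLB assign it an external IP from the pool.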

You can read more about it in the link below: