
I have a local Kubernetes install based on Docker Desktop. I have a Kubernetes Service of type ClusterIP in front of 3 Pods. I notice when looking at the container logs that the same Pod is always hit.

Is this the default behaviour of ClusterIP? If so, how will the other Pods ever be used, and what is the point of having them behind a ClusterIP Service?

The other option is to use a LoadBalancer type; however, I want the Service to only be accessible from within the cluster.

Is there a way to make the LoadBalancer internal?

If anyone can advise, that would be much appreciated.

UPDATE:

I have tried using a LoadBalancer type and the same Pod is hit every time as well.

Here is my config:

apiVersion: v1
kind: Namespace
metadata:
  name: dropshippingplatform
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: organisationservice-deployment
  namespace: dropshippingplatform
spec:
  selector:
    matchLabels:
      app: organisationservice-pod
  replicas: 3
  template:
    metadata:
      labels:
        app: organisationservice-pod
    spec:
      containers:
      - name: organisationservice-container
        image: organisationservice:v1.0.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: organisationservice-service
  namespace: dropshippingplatform
spec:
  selector:
    app: organisationservice-pod
  ports:
    - protocol: TCP
      port: 81
      targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway-deployment
  namespace: dropshippingplatform
spec:
  selector:
    matchLabels:
      app: apigateway-pod
  template:
    metadata:
      labels:
        app: apigateway-pod
    spec:
      containers:
      - name: apigateway-container
        image: apigateway:v1.0.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apigateway-service
  namespace: dropshippingplatform
spec:
  selector:
    app: apigateway-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

Here is my Ocelot configuration:

{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/organisations",
      "DownstreamScheme": "http",
      "ServiceName": "organisationservice-service",
      "ServiceNamespace": "dropshippingplatform",
      "UpstreamPathTemplate": "/APIGateway/Organisations",
      "UpstreamHttpMethod": [ "Get" ],
      "Key": "Login"
    },
    {
      "DownstreamPathTemplate": "/weatherforecast",
      "DownstreamScheme": "http",
      "ServiceName": "organisationservice-service",
      "ServiceNamespace": "dropshippingplatform",
      "UpstreamPathTemplate": "/APIGateway/WeatherForecast",
      "UpstreamHttpMethod": [ "Get" ],
      "Key": "WeatherForecast"
    }
  ],
  "Aggregates": [
    {
      "RouteKeys": [
        "Login",
        "WeatherForecast"
      ],
      "UpstreamPathTemplate": "/APIGateway/Organisations/Login"
    },
    {
      "RouteKeys": [
        "Login",
        "WeatherForecast"
      ],
      "UpstreamPathTemplate": "/APIGateway/Organisations/TestAggregator",
      "Aggregator": "TestAggregator"
    }
  ],
  "GlobalConfiguration": {
    "ServiceDiscoveryProvider": {
      "Namespace": "default",
      "Type": "KubernetesServiceDiscoveryProvider"
    }
  }
}

To isolate the issue, I created a load balancer in front of the Kubernetes Service in question and called the Service directly from the client. The same Pod is hit every time, which tells me it's to do with Kubernetes and not the Ocelot API Gateway.
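
For anyone wanting to reproduce the check, one option is a curl loop from a throwaway Pod in the same namespace. This is a sketch, assuming the response identifies which Pod served it (e.g. by echoing its hostname); the image and path are illustrative:

# Run a one-off Pod and call the Service repeatedly. Each curl opens a
# fresh connection, so kube-proxy is free to pick any endpoint per request.
kubectl run curl-test -n dropshippingplatform --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  sh -c 'for i in 1 2 3 4 5; do curl -s http://organisationservice-service:81/api/organisations; echo; done'

This distinction matters because kube-proxy balances per connection, not per request: a client that keeps a connection alive will pin to one Pod even with Session Affinity set to None.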

Here is the output of kubectl describe svc organisationservice-service -n dropshippingplatform:

Name:              organisationservice-service
Namespace:         dropshippingplatform
Labels:            <none>
Annotations:       <none>
Selector:          app=organisationservice-pod
Type:              ClusterIP
IP:                X.X.X.119
Port:              <unset>  81/TCP
TargetPort:        80/TCP
Endpoints:         X.X.X.163:80,X.X.X.165:80,X.X.X.166:80
Session Affinity:  None
Events:            <none>
Comments:

Are there many different clients? Any keep-alive connections involved? What kind of gateways? – Jonas

How are you connecting to the service? Where is the client running? What is the client? – David Maze

There is only one client, which is also running on the local machine. However, the Service I'm having issues with is not called by the client directly, but by an API Gateway Service which is detailed in the configuration. This API Gateway is running Ocelot. I will add the Ocelot configuration as well. – Sach K

Hello @SachK, could you add more details about your service? What is the result of the commands kubectl describe svc <my-service> | grep -i end and kubectl get ep <my-service>? You should check the endpoints. Please add this info to the question. – Mikołaj Głodziak

@Mikołaj Głodziak: Added the output. – Sach K

1 Answer


I solved it. It turned out that the Ocelot API Gateway was the issue. I added this to the Ocelot configuration:

"LoadBalancerOptions": {
        "Type": "RoundRobin"
      },

Now it distributes the traffic equally across the Pods.
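
For reference, LoadBalancerOptions lives inside each route object, so it needs to be added per route. Here is a sketch of the first route from my config with the option applied (everything except the new block is unchanged from the question):

{
  "DownstreamPathTemplate": "/api/organisations",
  "DownstreamScheme": "http",
  "ServiceName": "organisationservice-service",
  "ServiceNamespace": "dropshippingplatform",
  "UpstreamPathTemplate": "/APIGateway/Organisations",
  "UpstreamHttpMethod": [ "Get" ],
  "Key": "Login",
  "LoadBalancerOptions": {
    "Type": "RoundRobin"
  }
}

Without an explicit load balancer, Ocelot falls back to its NoLoadBalancer behaviour and keeps selecting the same downstream host from the endpoints the Kubernetes service discovery provider returns, which would explain why a single Pod took all the traffic.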