1 vote

I have created a Cassandra stateful/headless cluster on AWS, and it works fine inside the cluster. The only problem is that I am not able to access it from outside the cluster. I have tried most of the suggestions from the Kubernetes documentation and StackOverflow answers, but I still have not been able to solve it.

I have a working security group in AWS. Here are my Service and StatefulSet YAML files.

apiVersion: v1
kind: Service
metadata:
  name: cassandra
spec:
  externalTrafficPolicy: Local
  ports:
  - nodePort: 30000
    port: 30000
    protocol: TCP
    targetPort: 9042
  selector:
    app: cassandra
  type: NodePort
---
apiVersion: "apps/v1"
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 2
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        role: cassandra
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - env:
            - name: MAX_HEAP_SIZE
              value: 1024M
            - name: HEAP_NEWSIZE
              value: 1024M
            - name: CASSANDRA_SEEDS
              value: "cassandra-0.cassandra.default.svc.cluster.local"
            - name: CASSANDRA_CLUSTER_NAME
              value: "SetuCassandra"
            - name: CASSANDRA_DC
              value: "DC1-SetuCassandra"
            - name: CASSANDRA_RACK
              value: "Rack1-SetuCassandra"
            - name: CASSANDRA_SEED_PROVIDER
              value: io.k8s.cassandra.KubernetesSeedProvider
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          image: library/cassandra:3.11
          name: cassandra
          volumeMounts:
            - mountPath: /cassandra-storage
              name: cassandra-storage
          ports:
            - containerPort: 9042
              name: cql
  volumeClaimTemplates:
  - metadata:
      name: cassandra-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 320Gi

I appreciate any help on this.

Comment: Please provide more details about the networking - from where can you not access the service? Can you SSH to a Kubernetes node and check whether you can reach Cassandra using the node's IP and the NodePort (30000)? - Jakub Bujny

2 Answers

1 vote

There are not enough details about the AWS security groups, but my guess is that the security group(s) attached to your cluster's nodes are not allowing traffic from the security groups or IP addresses of the other cluster (i.e. wherever you are connecting from). Something like this:

[image: example AWS security group inbound rule]
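
A minimal CloudFormation-style sketch of the kind of inbound rule this answer describes; the logical name, VPC ID, and client CIDR below are placeholders, not values taken from the question:

# Hypothetical security group for the worker nodes: allows external clients
# to reach the Cassandra NodePort (30000) exposed by the Service.
Resources:
  CassandraNodePortSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow external CQL traffic via the Cassandra NodePort
      VpcId: vpc-0123456789abcdef0        # placeholder - the cluster's VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 30000                 # must match the Service's nodePort
          ToPort: 30000
          CidrIp: 203.0.113.0/24          # placeholder - the network you connect from

The equivalent rule can also be added to an existing node security group from the AWS console instead of CloudFormation.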

1 vote

The headless Service created for a StatefulSet is not meant to be accessed by the users of the service. Its main intent, as per my understanding, is intra-STS communication between the pods of the given StatefulSet (so they can form a cluster among themselves). For instance, if you have a 3-node MongoDB cluster (as an STS), mongodb-0 will want to exchange clustering info/data with mongodb-1 and mongodb-2.

If you want to access this service as a user, you are not interested in (and do not care about) mongodb-0/1/2 individually; you care about it as a service. The typical approach is to create a headful (regular) Service, possibly of type NodePort if required, and access that.

Basically, create two Services: one headless (and reference it from the StatefulSet's serviceName) and the other a regular Service. The pod selector can be the same for both, as sketched below.
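
A minimal sketch of that two-Service layout, reusing the app: cassandra selector from the question; the Service names and the node port are illustrative, not prescribed:

# Headless Service - set as the StatefulSet's serviceName; used only for
# pod-to-pod discovery (e.g. cassandra-0.cassandra.default.svc.cluster.local).
apiVersion: v1
kind: Service
metadata:
  name: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
    name: cql
  selector:
    app: cassandra
---
# Regular ("headful") Service - what clients actually connect to; exposed
# outside the cluster via a NodePort.
apiVersion: v1
kind: Service
metadata:
  name: cassandra-external
spec:
  type: NodePort
  ports:
  - port: 9042
    targetPort: 9042
    nodePort: 30000
  selector:
    app: cassandra

External clients would then connect to any worker node's IP on port 30000 (subject to the security group rules from the other answer), while the StatefulSet keeps using the headless cassandra Service for its stable pod DNS names.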