0 votes

I am trying to run a bitcoin node on kubernetes. My stateful set is as follows:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: bitcoin-stateful
  namespace: dev
spec:
  serviceName: bitcoinrpc-dev-service
  replicas: 1
  selector:
    matchLabels:
      app: bitcoin-node
  template:
    metadata:
      labels:
        app: bitcoin-node
    spec:
      containers:
      - name: bitcoin-node-mainnet
        image: myimage:v0.13.2-addrindex
        imagePullPolicy: Always
        ports:
        - containerPort: 8332 
        volumeMounts:
        - name: bitcoin-chaindata
          mountPath: /root/.bitcoin
        livenessProbe:
          exec:
            command:
            -  bitcoin-cli
            -  getinfo
          initialDelaySeconds: 60 # wait this period after starting the first time
          periodSeconds: 15  # polling interval
          timeoutSeconds: 15 # expect a response within this period
        readinessProbe: 
          exec:
            command:
            -  bitcoin-cli
            -  getinfo
          initialDelaySeconds: 60 # wait this period after starting the first time
          periodSeconds: 15    # polling interval
          timeoutSeconds: 15   # expect a response within this period
        command: ["/bin/bash"]
        args: ["-c","service ntp start && \
                    bitcoind -printtoconsole -conf=/root/.bitcoin/bitcoin.conf -reindex-chainstate -datadir=/root/.bitcoin/ -daemon=0 -bind=0.0.0.0"]

Since the bitcoin node doesn't serve HTTP GET requests and only answers POST requests, I am using the bitcoin-cli command for the liveness and readiness probes.
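For reference, any probe of the RPC interface has to go through an HTTP POST. A minimal sketch of such a check (the URL, user, and password below are placeholders, not values from the question) might look like:

```python
# Sketch: probe bitcoind's JSON-RPC interface, which only answers HTTP POST.
# The URL and credentials below are placeholders for illustration.
import base64
import json
import urllib.request

def rpc_probe(url="http://127.0.0.1:8332/", user="rpcuser", password="rpcpass"):
    """Return True if the node answers a getblockcount RPC call."""
    body = json.dumps({"jsonrpc": "1.0", "id": "probe",
                       "method": "getblockcount", "params": []}).encode()
    cred = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url, data=body, headers={
        "Content-Type": "text/plain",
        "Authorization": "Basic " + cred,
    })
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, or HTTP error status
        return False
```

Inside the container the exec probe with bitcoin-cli is simpler; this just illustrates why a plain HTTP GET against port 8332 is not a usable health check.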

My service is as follows:

kind: Service
apiVersion: v1
metadata:
  name: bitcoinrpc-dev-service
  namespace: dev
spec:
  selector:
    app: bitcoin-node
  ports:
  - name: mainnet
    protocol: TCP
    port: 80
    targetPort: 8332

When I describe the pods, they are running ok and all the health checks seem to be ok.

However, I am also using ingress controller with the following config:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: dev-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "dev-ingress"
spec:
  rules:
  - host: bitcoin.something.net
    http:
      paths:
      - path: /rpc
        backend:
          serviceName: bitcoinrpc-dev-service
          servicePort: 80

The health checks on the L7 load balancer seem to be failing. The tests are automatically configured in the following manner.

[screenshot of the automatically configured GCE health check]

However, these tests are not the same as the ones configured in the readiness probe. I tried deleting and recreating the ingress, but it still behaves the same way.

I have the following questions:

1. Should I modify/delete this health check manually?
2. Even though the health check is failing (because it is wrongly configured), the containers and the ingress are up, so should I still be able to access the service over HTTP?

2 Answers

0 votes

What is missing is that you are performing the liveness and readiness probes as exec commands, so you need to define the pod with both an exec liveness probe and an exec readiness probe. Here and here it is described how to do it.

Another thing: to receive traffic through the GCE L7 load balancer controller you need at least one Kubernetes Service of type NodePort (this is the endpoint for your Ingress). Your Service is not configured this way, therefore you will not be able to access it.

The health check in the picture is for the default backend (your MIG uses it to check the health of the nodes); that means it is a health check for your nodes, not for the container.
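A NodePort version of the Service from the question could look like this (the explicit type is the only change; Kubernetes allocates the node port automatically):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: bitcoinrpc-dev-service
  namespace: dev
spec:
  type: NodePort          # required for the GCE ingress controller
  selector:
    app: bitcoin-node
  ports:
  - name: mainnet
    protocol: TCP
    port: 80
    targetPort: 8332
```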

0 votes
  1. No, you don't have to delete the health check; it will be recreated automatically even if you delete it.
  2. No, you won't be able to access the service until the health checks pass, because in GKE traffic is routed using NEGs, which depend on health checks to know where they can route traffic.

One possible solution could be to add a basic HTTP handler to your application that returns 200; this can be used as the health check endpoint.
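A minimal sketch of such a handler, as a standalone Python sidecar (the port is a placeholder; in the pod it would run next to bitcoind and be the target of an httpGet probe):

```python
# Minimal health endpoint sketch: answers 200 OK on GET, which is what the
# auto-created GCE health check expects on "/". Port 8080 is a placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Always report healthy; a real sidecar could first run a cheap
        # bitcoin-cli call and answer 503 if it fails.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep probe traffic out of the logs

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```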

Other possible options include:

  1. Creating a Service of type NodePort and using a load balancer to route traffic on the given port to the node pool/instance groups as the backend service, rather than using NEGs
  2. Creating the Service with type LoadBalancer. This is the easiest option, but you need to ensure that the load balancer IP is protected using best security practices like IAP, firewall rules, etc.
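For the second option, a sketch of such a Service (the name bitcoinrpc-dev-lb is hypothetical; note this exposes the RPC port externally, so firewall rules are essential):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: bitcoinrpc-dev-lb
  namespace: dev
spec:
  type: LoadBalancer     # provisions an external L4 load balancer on GKE
  selector:
    app: bitcoin-node
  ports:
  - name: mainnet
    protocol: TCP
    port: 8332
    targetPort: 8332
```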