
I was testing a Kubernetes setup with AWS EKS on Fargate and ran into an issue during container startup.

It is a Java application that uses Hibernate. It fails to connect to the MySQL server on startup with a "Communications link failure" error. The database server is running properly on AWS RDS, and the Docker image runs as expected locally.

I wonder if this is caused by the MySQL port 3306 not being configured properly on the container/node/service. Could you spot what the issue is? Please don't hesitate to point out any misconfiguration, thank you very much.
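
For reference, one quick way to check raw reachability of the database from inside the cluster (a sketch, assuming the test namespace; <AWS RDS endpoint> and <user> are placeholders) is to run a throwaway MySQL client pod and try to connect on port 3306. On Fargate the pod may take a minute or two to be scheduled.

# One-off client pod in the test namespace; a password prompt means the network path is open,
# while a timeout/connection error points at security groups or routing.
kubectl run -n test mysql-client --rm -it --restart=Never --image=mysql:8.0 -- \
  mysql -h <AWS RDS endpoint> -P 3306 -u <user> -p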


Pod startup log

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
:: Spring Boot ::        (v2.3.1.RELEASE)

2020-08-13 11:39:39.930  INFO 1 --- [           main] com.example.demo.DemoApplication         : The following profiles are active: prod
2020-08-13 11:39:58.536  INFO 1 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFERRED mode.
...
......
2020-08-13 11:41:27.606 ERROR 1 --- [         task-1] com.zaxxer.hikari.pool.HikariPool        : HikariPool-1 - Exception during pool initialization.

com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
    at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
    at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
    at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
    at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
    at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:197) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
    at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-3.4.5.jar!/:na]
    at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358) ~[HikariCP-3.4.5.jar!/:na]
    at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-3.4.5.jar!/:na]
    at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477) [HikariCP-3.4.5.jar!/:na]
    at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560) [HikariCP-3.4.5.jar!/:na]
    at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) [HikariCP-3.4.5.jar!/:na]
    at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) [HikariCP-3.4.5.jar!/:na]
    at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcEnvironmentInitiator.java:180) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:68) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:35) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.initiateService(StandardServiceRegistryImpl.java:101) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.service.internal.AbstractServiceRegistryImpl.createService(AbstractServiceRegistryImpl.java:263) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:237) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.id.factory.internal.DefaultIdentifierGeneratorFactory.injectServices(DefaultIdentifierGeneratorFactory.java:152) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.service.internal.AbstractServiceRegistryImpl.injectDependencies(AbstractServiceRegistryImpl.java:286) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:243) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.boot.internal.InFlightMetadataCollectorImpl.<init>(InFlightMetadataCollectorImpl.java:176) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.complete(MetadataBuildingProcess.java:118) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.metadata(EntityManagerFactoryBuilderImpl.java:1224) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1255) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
    at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:58) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
    at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:365) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
    at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:391) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_212]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_212]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_212]
    at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_212]
...
......

Service

patricks-mbp:test patrick$ kubectl get services -n test
NAME   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
test   NodePort   10.100.160.22   <none>        80:31176/TCP   4h57m

service.yaml

kind: Service
apiVersion: v1
metadata:
  name: test
  namespace: test
spec:
  selector:
    app: test
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Deployment

patricks-mbp:test patrick$ kubectl get deployments -n test
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
test   0/1     1            0           4h42m

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  strategy: {}
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: <image location>
        ports:
          - containerPort: 8080
        resources: {}

Pods

patricks-mbp:test patrick$ kubectl get pods -n test
NAME                   READY   STATUS    RESTARTS   AGE
test-8648f7959-4gdvm   1/1     Running   6          21m

patricks-mbp:test patrick$ kubectl describe pod test-8648f7959-4gdvm -n test
Name:                 test-8648f7959-4gdvm
Namespace:            test
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 fargate-ip-192-168-123-170.ec2.internal/192.168.123.170
Start Time:           Thu, 13 Aug 2020 21:29:07 +1000
Labels:               app=test
                    eks.amazonaws.com/fargate-profile=fp-1a0330f1
                    pod-template-hash=8648f7959
Annotations:          kubernetes.io/psp: eks.privileged
Status:               Running
IP:                   192.168.123.170
IPs:
IP:           192.168.123.170
Controlled By:  ReplicaSet/test-8648f7959
Containers:
  test:
    Container ID:   containerd://a1517a13d66274e1d7f8efcea950d0fe3d944d1f7208d057494e208223a895a7
    Image:          <image location>
    Image ID:       <image ID>
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 13 Aug 2020 21:48:07 +1000
      Finished:     Thu, 13 Aug 2020 21:50:28 +1000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 13 Aug 2020 21:43:04 +1000
      Finished:     Thu, 13 Aug 2020 21:45:22 +1000
    Ready:          False
    Restart Count:  6
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5hdzd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-5hdzd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5hdzd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type     Reason     Age                 From                                              Message
----     ------     ----                ----                                              -------
Normal   Scheduled  <unknown>           fargate-scheduler                                 Successfully assigned test/test-8648f7959-4gdvm to fargate-ip-192-168-123-170.ec2.internal
Normal   Pulling    21m                 kubelet, fargate-ip-192-168-123-170.ec2.internal  Pulling image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2"
Normal   Pulled     21m                 kubelet, fargate-ip-192-168-123-170.ec2.internal  Successfully pulled image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2"
Normal   Created    11m (x5 over 21m)   kubelet, fargate-ip-192-168-123-170.ec2.internal  Created container test
Normal   Started    11m (x5 over 21m)   kubelet, fargate-ip-192-168-123-170.ec2.internal  Started container test
Normal   Pulled     11m (x4 over 19m)   kubelet, fargate-ip-192-168-123-170.ec2.internal  Container image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2" already present on machine
Warning  BackOff    11s (x27 over 17m)  kubelet, fargate-ip-192-168-123-170.ec2.internal  Back-off restarting failed container

Ingress

patricks-mbp:~ patrick$ kubectl describe ing -n test test
Name:             test
Namespace:        test
Address:          <ALB public address>
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /   test:80 (192.168.72.15:8080)
Annotations:
  alb.ingress.kubernetes.io/scheme:                  internet-facing
  alb.ingress.kubernetes.io/target-type:             ip
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internet-facing","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"name":"test","namespace":"test"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"test","servicePort":80},"path":"/"}]}}]}}

  kubernetes.io/ingress.class:  alb
Events:                         <none>

ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: test
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: test
              servicePort: 80

AWS ALB ingress controller

Permissions for the ALB ingress controller to communicate with the cluster -> similar to https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/rbac-role.yaml

Creation of the ingress controller, which uses the ALB -> similar to https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/alb-ingress-controller.yaml
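
To rule out problems on the controller side, the referenced manifests can be applied and the controller checked roughly like this (a sketch; the label selector is an assumption based on the v1.1.8 example manifest, and the controller manifest typically needs the cluster name set before applying):

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/rbac-role.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/alb-ingress-controller.yaml

# Verify the controller pod is running and check its logs for reconcile errors.
kubectl -n kube-system get pods -l app.kubernetes.io/name=alb-ingress-controller
kubectl -n kube-system logs -l app.kubernetes.io/name=alb-ingress-controller --tail=50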

What does the JDBC connection URL look like? Have you opened the security group in RDS to allow communication? - Arghya Sadhu
@ArghyaSadhu jdbc:mysql://<AWS RDS endpoint>:3306/test - Patrick C.
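
Given that URL, one thing worth checking (a sketch with the AWS CLI; the DB identifier test-db and group ID sg-0123abcd are hypothetical placeholders) is which security groups the RDS instance uses and whether their inbound rules allow 3306 from the Fargate pods:

# List the VPC security groups attached to the RDS instance.
aws rds describe-db-instances --db-instance-identifier test-db \
  --query "DBInstances[0].VpcSecurityGroups[].VpcSecurityGroupId" --output text

# Inspect the inbound rules of one of those groups.
aws ec2 describe-security-groups --group-ids sg-0123abcd \
  --query "SecurityGroups[0].IpPermissions"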

1 Answer


To allow a pod running on Fargate to connect to RDS, you need to open the security group.

  1. Find the security group ID of your Fargate service.
  2. In your RDS security group's inbound rules, put that Fargate security group ID in the source field instead of a CIDR, and allow port 3306 (see the sketch after these steps).
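
A minimal sketch of step 2 with the AWS CLI, assuming a hypothetical cluster name test-cluster and hypothetical IDs sg-0123abcd (the RDS security group) and sg-0456efgh (the security group attached to the Fargate pods, which on EKS Fargate is typically the cluster security group):

# Find the cluster security group that Fargate pods run with.
aws eks describe-cluster --name test-cluster \
  --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" --output text

# Allow inbound MySQL (TCP 3306) on the RDS security group from that security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123abcd \
  --protocol tcp --port 3306 \
  --source-group sg-0456efgh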