
I am a newbie to k8s and I am trying to deploy a private Docker registry in Kubernetes.

The problem is that whenever I push a heavy image (~1 GB) via docker push, the command eventually returns EOF.

I believe the issue has to do with the Kubernetes NGINX ingress controller.

I will provide some useful information below; if you need more, do not hesitate to ask:

Docker push (to the internal k8s Docker registry) fails:

[root@bastion ~]# docker push docker-registry.apps.kube.lab/example:stable
The push refers to a repository [docker-registry.apps.kube.lab/example]
c0acde035881: Pushed 
f6d2683cee8b: Pushed 
00b1a6ab6acd: Retrying in 1 second 
28c41b4dd660: Pushed 
36957997ca7a: Pushed 
5c4d527d6b3a: Pushed 
a933681cf349: Pushing [==================================================>] 520.4 MB
f49d20b92dc8: Retrying in 20 seconds 
fe342cfe5c83: Retrying in 15 seconds 
630e4f1da707: Retrying in 13 seconds 
9780f6d83e45: Waiting 
EOF

Ingress definition:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: docker-registry
  namespace: docker-registry
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "86400"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "86400"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "86400"
spec:    
  rules: 
  - host: docker-registry.apps.kube.lab
    http:
      paths:
      - backend:
          serviceName: docker-registry
          servicePort: 5000
        path: /  

Docker registry configuration (/etc/docker/registry/config.yml):

version: 0.1
log: 
  level: info
  formatter: json
  fields:
    service: registry
storage:
  redirect:
    disable: true
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  host: docker-registry.apps.kube.lab
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3

Docker registry logs:

{"go.version":"go1.11.2","http.request.host":"docker-registry.apps.kube.lab","http.request.id":"c079b639-0e8a-4a27-96fa-44c4c0182ff7","http.request.method":"HEAD","http.request.remoteaddr":"10.233.70.0","http.request.uri":"/v2/example/blobs/sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee","http.request.useragent":"docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \\(linux\\))","level":"debug","msg":"authorizing request","time":"2020-11-07T14:43:22.893626513Z","vars.digest":"sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee","vars.name":"example"}
{"go.version":"go1.11.2","http.request.host":"docker-registry.apps.kube.lab","http.request.id":"c079b639-0e8a-4a27-96fa-44c4c0182ff7","http.request.method":"HEAD","http.request.remoteaddr":"10.233.70.0","http.request.uri":"/v2/example/blobs/sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee","http.request.useragent":"docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \\(linux\\))","level":"debug","msg":"GetBlob","time":"2020-11-07T14:43:22.893751065Z","vars.digest":"sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee","vars.name":"example"}
{"go.version":"go1.11.2","http.request.host":"docker-registry.apps.kube.lab","http.request.id":"c079b639-0e8a-4a27-96fa-44c4c0182ff7","http.request.method":"HEAD","http.request.remoteaddr":"10.233.70.0","http.request.uri":"/v2/example/blobs/sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee","http.request.useragent":"docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \\(linux\\))","level":"debug","msg":"filesystem.GetContent(\"/docker/registry/v2/repositories/example/_layers/sha256/751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee/link\")","time":"2020-11-07T14:43:22.893942372Z","trace.duration":74122,"trace.file":"/go/src/github.com/docker/distribution/registry/storage/driver/base/base.go","trace.func":"github.com/docker/distribution/registry/storage/driver/base.(*Base).GetContent","trace.id":"11e24830-7d16-404a-90bc-8a738cab84ea","trace.line":95,"vars.digest":"sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee","vars.name":"example"}
{"err.code":"blob unknown","err.detail":"sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee","err.message":"blob unknown to registry","go.version":"go1.11.2","http.request.host":"docker-registry.apps.kube.lab","http.request.id":"c079b639-0e8a-4a27-96fa-44c4c0182ff7","http.request.method":"HEAD","http.request.remoteaddr":"10.233.70.0","http.request.uri":"/v2/example/blobs/sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee","http.request.useragent":"docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \\(linux\\))","http.response.contenttype":"application/json; charset=utf-8","http.response.duration":"1.88607ms","http.response.status":404,"http.response.written":157,"level":"error","msg":"response completed with error","time":"2020-11-07T14:43:22.894147954Z","vars.digest":"sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee","vars.name":"example"}
10.233.105.66 - - [07/Nov/2020:14:43:22 +0000] "HEAD /v2/example/blobs/sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee HTTP/1.1" 404 157 "" "docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \\(linux\\))"

I believe the issue has to do with the ingress controller, because when the EOF error shows up there is something odd in the ingress-controller logs:

10.233.70.0 - - [07/Nov/2020:14:43:41 +0000] "PUT /v2/example/blobs/uploads/dab984a8-7e71-4481-91fb-af53c7790a20?_state=usMX2WH24Veunay0ozOF-RMZIUMNTFSC8MSPbMcxz-B7Ik5hbWUiOiJleGFtcGxlIiwiVVVJRCI6ImRhYjk4NGE4LTdlNzEtNDQ4MS05MWZiLWFmNTNjNzc5MGEyMCIsIk9mZnNldCI6NzgxMTczNywiU3RhcnRlZEF0IjoiMjAyMC0xMS0wN1QxNDo0MzoyOFoifQ%3D%3D&digest=sha256%3A101c41d0463bc77661fb3343235b16d536a92d2efb687046164d413e51bd4fc4 HTTP/1.1" 201 0 "-" "docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \x5C(linux\x5C))" 606 0.026 [docker-registry-docker-registry-5000] [] 10.233.70.84:5000 0 0.026 201 06304ff584d252812dff016374be73ae
172.16.1.123 - - [07/Nov/2020:14:43:42 +0000] "HEAD /v2/example/blobs/sha256:101c41d0463bc77661fb3343235b16d536a92d2efb687046164d413e51bd4fc4 HTTP/1.1" 200 0 "-" "docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \x5C(linux\x5C))" 299 0.006 [docker-registry-docker-registry-5000] [] 10.233.70.84:5000 0 0.006 200 a5a93c7b7f4644139fcb0697d3e5e43f
I1107 14:44:05.285478       6 main.go:184] "Received SIGTERM, shutting down"
I1107 14:44:05.285517       6 nginx.go:365] "Shutting down controller queues"
I1107 14:44:06.294533       6 status.go:132] "removing value from ingress status" address=[172.16.1.123]
I1107 14:44:06.306793       6 status.go:277] "updating Ingress status" namespace="kube-system" ingress="example-ingress" currentValue=[{IP:172.16.1.123 Hostname:}] newValue=[]
I1107 14:44:06.307650       6 status.go:277] "updating Ingress status" namespace="kubernetes-dashboard" ingress="dashboard" currentValue=[{IP:172.16.1.123 Hostname:}] newValue=[]
I1107 14:44:06.880987       6 status.go:277] "updating Ingress status" namespace="test-nfs" ingress="example-nginx" currentValue=[{IP:172.16.1.123 Hostname:}] newValue=[]
I1107 14:44:07.872659       6 status.go:277] "updating Ingress status" namespace="test-ingress" ingress="example-ingress" currentValue=[{IP:172.16.1.123 Hostname:}] newValue=[]
I1107 14:44:08.505295       6 queue.go:78] "queue has been shutdown, failed to enqueue" key="&ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},}"
I1107 14:44:08.713579       6 status.go:277] "updating Ingress status" namespace="docker-registry" ingress="docker-registry" currentValue=[{IP:172.16.1.123 Hostname:}] newValue=[]
I1107 14:44:09.772593       6 nginx.go:373] "Stopping admission controller"
I1107 14:44:09.772697       6 nginx.go:381] "Stopping NGINX process"
E1107 14:44:09.773208       6 nginx.go:314] "Error listening for TLS connections" err="http: Server closed"
2020/11/07 14:44:09 [notice] 114#114: signal process started
10.233.70.0 - - [07/Nov/2020:14:44:16 +0000] "PATCH /v2/example/blobs/uploads/adbe3173-9928-4eb5-97bb-7893970f032a?_state=nEr2ip9eoLNCTe8KQ6Ck7k3C8oS9IY7AnBOi1_f5mSl7Ik5hbWUiOiJleGFtcGxlIiwiVVVJRCI6ImFkYmUzMTczLTk5MjgtNGViNS05N2JiLTc4OTM5NzBmMDMyYSIsIk9mZnNldCI6MCwiU3RhcnRlZEF0IjoiMjAyMC0xMS0wN1QxNDo0MzoyOC45ODY3MTQwNTlaIn0%3D HTTP/1.1" 202 0 "-" "docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \x5C(linux\x5C))" 50408825 46.568 [docker-registry-docker-registry-5000] [] 10.233.70.84:5000 0 14.339 202 55d9cab4f915f54e5c130321db4dc8fc
10.233.70.0 - - [07/Nov/2020:14:44:19 +0000] "PATCH /v2/example/blobs/uploads/63d4a54a-cdfd-434b-ae63-dc434dcb15f9?_state=9UK7MRYJYST--u7BAUFTonCdPzt_EO2KyfJblVroBxd7Ik5hbWUiOiJleGFtcGxlIiwiVVVJRCI6IjYzZDRhNTRhLWNkZmQtNDM0Yi1hZTYzLWRjNDM0ZGNiMTVmOSIsIk9mZnNldCI6MCwiU3RhcnRlZEF0IjoiMjAyMC0xMS0wN1QxNDo0MzoyMy40MjIwMDI4NThaIn0%3D HTTP/1.1" 202 0 "-" "docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \x5C(linux\x5C))" 51842691 55.400 [docker-registry-docker-registry-5000] [] 10.233.70.84:5000 0 18.504 202 1f1de1ae89caa8540b6fd13ea5b165ab
10.233.70.0 - - [07/Nov/2020:14:44:50 +0000] "PATCH /v2/example/blobs/uploads/0c97923d-ed9f-4599-8a50-f2c21cfe85fe?_state=WmIRW_3owlin1zo4Ms98UwaMGf1D975vUuzbk1JWRuN7Ik5hbWUiOiJleGFtcGxlIiwiVVVJRCI6IjBjOTc5MjNkLWVkOWYtNDU5OS04YTUwLWYyYzIxY2ZlODVmZSIsIk9mZnNldCI6MCwiU3RhcnRlZEF0IjoiMjAyMC0xMS0wN1QxNDo0MzoyMC41ODA5MjUyNDlaIn0%3D HTTP/1.1" 202 0 "-" "docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \x5C(linux\x5C))" 192310965 89.937 [docker-registry-docker-registry-5000] [] 10.233.70.84:5000 0 22.847 202 d8971d2f543e936c2f805d5b257f1130
I1107 14:44:50.832669       6 nginx.go:394] "NGINX process has stopped"
I1107 14:44:50.832703       6 main.go:192] "Handled quit, awaiting Pod deletion"
I1107 14:45:00.832892       6 main.go:195] "Exiting" code=0
[root@bastion registry]# 

After that happens, the ingress-controller pod becomes not ready, and after a few seconds it is ready again.

Is this related to a config reload of the Kubernetes NGINX ingress controller? If so, do I have to add any special variable to nginx.conf?
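
From what I understand, these global NGINX settings normally go into the controller's ConfigMap rather than nginx.conf itself. A minimal sketch of what I think that would look like (the ConfigMap name and namespace are guesses based on a default ingress-nginx install, and worker-shutdown-timeout is the knob I suspect matters during reloads, since old workers are killed once it expires):

# Sketch only: ConfigMap name/namespace assume a default ingress-nginx deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-body-size: "0"             # no global request-size limit
  worker-shutdown-timeout: "300s"  # give in-flight uploads time to finish across reloads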

Any help is welcome! Kind regards!

EDIT

The moment the EOF appears, the ingress-nginx controller crashes and its pod becomes not ready.

[root@bastion ~]# kubectl get po 
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-lbmd6        0/1     Completed   0          5d4h
ingress-nginx-admission-patch-btv27         0/1     Completed   0          5d4h
ingress-nginx-controller-7dcc8d6478-n8dkx   0/1     Running     3          15m

 Warning  Unhealthy  29s (x8 over 2m39s)   kubelet                   Liveness probe failed: Get http://10.233.70.100:10254/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

As a consequence, none of my applications are reachable:

[root@bastion ~]# curl http://hello-worrld.apps.kube.lab
Hello, world!
Version: 1.0.0
Hostname: web-6785d44d5-4r5q5
[root@bastion ~]# date
sáb nov  7 18:58:16 -01 2020

[root@bastion ~]# curl http://hello-worrld.apps.kube.lab
curl: (52) Empty reply from server
[root@bastion ~]# date
sáb nov  7 18:58:53 -01 2020

Is the issue related to NGINX performance? If so, which ingress-nginx options would you recommend tweaking?
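
For reference, the per-Ingress tweak I am considering (taken from the ingress-nginx annotation docs as I understand them; I have not yet confirmed it helps) is to disable request buffering, so large uploads stream straight to the registry instead of being buffered by the controller:

  annotations:
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"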

Did the annotation nginx.ingress.kubernetes.io/proxy-body-size solve your issue? – Mr.KoopaKiller
It only solves error 413. The issue is still there: nginx experiences a memory peak and dies when uploading a heavy image. I think it is a memory issue. – José Ángel Morena Simón
Did you try with a smaller image to see if the behaviour is the same? Try to upload a small image to verify it. – Mr.KoopaKiller

1 Answer


You should try another Docker registry to confirm the problem is actually caused by the ingress. It does not make sense for an ingress to fail just because of an image's size.

You can try JFrog Container Registry (JCR), which is free: deploy JCR into your Kubernetes cluster and expose it via a LoadBalancer (external IP) or an Ingress.
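
For example, a rough install sketch with Helm (the chart name comes from JFrog's public chart repository; the values key for the service type is an assumption and may differ between chart versions, so check the chart's own values file):

helm repo add jfrog https://charts.jfrog.io
helm repo update

# Expose the bundled reverse proxy through a LoadBalancer instead of an Ingress.
# The nginx.service.type key is assumed from the artifactory-jcr chart defaults.
helm install jcr jfrog/artifactory-jcr \
  --namespace jcr --create-namespace \
  --set nginx.service.type=LoadBalancer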

That way you can verify whether it really is an ingress issue: push a Docker image through the LoadBalancer (external IP), and if that works while the push through the Ingress fails, you know the problem is specifically caused by your ingress.
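
For instance, something along these lines (the IP is a placeholder for whatever external IP the LoadBalancer Service gets; if the registry is plain HTTP you will also need to add that address to Docker's insecure-registries first):

# Placeholder IP: substitute the external IP assigned to the LoadBalancer Service.
docker tag example:stable 172.16.1.200/example:stable
docker push 172.16.1.200/example:stable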

JFrog JCR is also available as a Helm chart on ChartCenter.