Good day everyone! Everything runs on Kubernetes on AWS (EKS).

I am trying to run an nginx proxy in a container that forwards requests to an internal service. The flow is as follows:

alb ingress controller -> service nginx -> nginx pod -> my service -> myservice pod

The error from error.log in the nginx pod:

    2019/04/10 06:07:31 [error] 6#6: *116 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.43.108, server: myservice.host.com, request: "GET / HTTP/1.1", upstream: "http://172.31.9.18:80/", host: "172.31.8.88"

The direct path does work: alb ingress controller -> my service -> myservice pod

I created the Ingress following this walkthrough, and it works fine: https://kubernetes-sigs.imtqy.com/aws-alb-ingress-controller/guide/walkthrough/echoserver/

My configs:


myservice-ingress.yaml

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: myservice-ingress
      namespace: myservice
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip
        alb.ingress.kubernetes.io/subnets: subnet-xxxxxxxx,subnet-xxxxxxxx
        alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}]'
    spec:
      rules:
        - host: myservice.host.com
          http:
            paths:
              - path: /*
                backend:
                  serviceName: nginx-svc
                  servicePort: 80

myservice-service.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      namespace: myservice
      name: myservice-svc
      #name: nginx-svc
    spec:
      ports:
        - name: myservice
          protocol: TCP
          port: 80
          targetPort: 8080
      selector:
        app: myservice
        stack: master
      clusterIP: None

nginx-service.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
      namespace: myservice
    spec:
      ports:
        - port: 80
          protocol: TCP
          targetPort: 80
      selector:
        app: myservice-nginx
      clusterIP: None

nginx-deployment.yaml:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: myservice-nginx
      namespace: myservice
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: myservice-nginx
        spec:
          containers:
            - image: cshlovjah/nginx
              imagePullPolicy: Always
              name: myservice-nginx
              ports:
                - name: liveness-port
                  containerPort: 80
                  hostPort: 80
              volumeMounts:
                - name: nginx-config
                  mountPath: /etc/nginx/conf.d/default.conf
                  readOnly: true
                  subPath: default.conf
          volumes:
            - name: nginx-config
              configMap:
                name: myservice-config
                items:
                  - key: default
                    path: default.conf

default.conf:

    upstream myservice_service {
        server myservice-svc;
        keepalive 64;
    }

    server {
        listen 80;
        server_name myservice.host.com;

        #location / {
        #    root /usr/share/nginx/html;
        #    index index.html index.htm;
        #}

        location / {
            add_header Access-Control-Allow-Origin *;
            proxy_pass http://myservice_service;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            proxy_http_version 1.1;
            proxy_cache_key sfs$request_uri$scheme;
            proxy_read_timeout 86400s;
        }

        location /healtz {
            alias /usr/share/nginx/html;
            index healtz.html healtz.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }

        access_log /var/log/nginx/access_myservice.log;
        error_log /var/log/nginx/error_myservice.log;
    }
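A side note, not from the original post: nginx resolves the `server myservice-svc;` hostname only once at startup, so if the backing pod is rescheduled and gets a new IP, the cached address goes stale until nginx is reloaded. One common workaround (a sketch only; `172.20.0.10` is a placeholder for your cluster DNS ClusterIP, and the `:8080` port assumes you connect to the container port directly) is to use a `resolver` plus a variable in `proxy_pass`, which forces re-resolution at request time:

```nginx
server {
    listen 80;
    server_name myservice.host.com;

    # Placeholder: use your cluster's kube-dns/CoreDNS Service IP here
    # (e.g. from: kubectl get svc -n kube-system kube-dns)
    resolver 172.20.0.10 valid=10s;

    location / {
        # A variable in proxy_pass makes nginx re-resolve the DNS name
        # per the resolver's TTL instead of caching it at startup.
        set $upstream http://myservice-svc.myservice.svc.cluster.local:8080;
        proxy_pass $upstream;
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
    }
}
```

Note that with a variable in `proxy_pass` the named `upstream` block (and its `keepalive`) is bypassed, so this is a trade-off, not a drop-in replacement.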

myservice-deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: myservice
      name: myservice
    spec:
      selector:
        matchLabels:
          app: myservice
          stack: master
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: myservice
            stack: master
        spec:
          containers:
            - image: cshlovjah/kubia
              name: myservice
              ports:
                - containerPort: 8080
                  name: myservice

Thanks.

1 Answer

Problem solved! I looked at the endpoints:

    $ kubectl get endpoints -n myservice
    NAME            ENDPOINTS          AGE
    myservice-svc   172.31.9.18:8080   100m
    nginx-svc       172.31.45.33:80    102m

Because myservice-svc is a headless service (clusterIP: None), its DNS name resolves directly to the pod IP, so the Service's port mapping (80 -> 8080) never applies. nginx was connecting to port 80 on the pod, where nothing listens; it has to target the container port explicitly in default.conf:

    upstream myservice_service {
        server myservice-svc:8080;
        keepalive 64;
    }
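An alternative fix, if you don't specifically need a headless service, would be to drop `clusterIP: None` so the Service gets a virtual IP and kube-proxy performs the 80 -> 8080 translation; then `server myservice-svc;` works unchanged. A sketch of that variant (same names as in the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  namespace: myservice
  name: myservice-svc
spec:
  # No clusterIP: None here -- the Service gets a ClusterIP, and
  # connections to port 80 are forwarded to the pods' port 8080.
  ports:
    - name: myservice
      protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: myservice
    stack: master
```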