Our service proxies requests to the partners in one place, thereby reducing the workload for the partners themselves. Requests arrive at addresses of the form `<partner_tag>.domain.local`.
To route them, Nginx had a map where `<partner_tag>` was matched to the partner's address; the address was taken from the map, and `proxy_pass` was made to that address. Here is the map with which we parse the domain and select the upstream from the list:

```nginx
### take the prefix from the domain name: <tag>.domain.local
map $http_host $upstream_prefix {
    default 0;
    "~^([^\.]+)\." $1;
}

### select the required address by prefix
map $upstream_prefix $upstream_address {
    include snippet.d/upstreams_map;
    default http://127.0.0.1:8080;
}

### set the upstream_host variable based on upstream_address
map $upstream_address $upstream_host {
    default 0;
    "~^https?://([^:]+)" $1;
}
```
The `snippet.d/upstreams_map` file looks like this:

```nginx
"one" "http://one.domain.net";
"two" "https://two.domain.org";
```
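To make the chain of lookups concrete, here is a small stand-alone bash sketch (the sample values are taken from the map file above) of what the three maps compute for an incoming Host header:

```shell
#!/usr/bin/env bash
http_host="one.domain.local"

# map $http_host $upstream_prefix  —  "~^([^\.]+)\." $1;
[[ ${http_host} =~ ^([^.]+)\. ]] && upstream_prefix=${BASH_REMATCH[1]}

# map $upstream_prefix $upstream_address — the include from snippet.d/upstreams_map
case ${upstream_prefix} in
  one) upstream_address="http://one.domain.net" ;;
  two) upstream_address="https://two.domain.org" ;;
  *)   upstream_address="http://127.0.0.1:8080" ;;
esac

# map $upstream_address $upstream_host — "~^https?://([^:]+)" $1;
[[ ${upstream_address} =~ ^https?://([^:]+) ]] && upstream_host=${BASH_REMATCH[1]}

echo "${upstream_prefix} ${upstream_address} ${upstream_host}"
```

For `Host: one.domain.local` this yields prefix `one`, address `http://one.domain.net`, and upstream host `one.domain.net`.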
And the `server{}` block:

```nginx
server {
    listen 80;

    location / {
        proxy_http_version 1.1;
        proxy_pass $upstream_address$request_uri;
        proxy_set_header Host $upstream_host;
        proxy_set_header X-Forwarded-For "";
        proxy_set_header X-Forwarded-Port "";
        proxy_set_header X-Forwarded-Proto "";
    }
}

# service for error handling and logging
server {
    listen 127.0.0.1:8080;

    location / {
        return 400;
    }

    location /ngx_status/ {
        stub_status;
    }
}
```
This works, but with a drawback: the connection to the partner is not kept alive, it is opened anew for each request and closed immediately after the response is completed. Even if we set `proxy_http_version 1.1`, nothing will change without an `upstream` block with the `keepalive` directive. So we leave the map keeping the "tag" "upstream_name" pairs, and add one more map for parsing the scheme:

```nginx
### take the prefix from the domain name: <tag>.domain.local
map $http_host $upstream_prefix {
    default 0;
    "~^([^\.]+)\." $1;
}

### select the required address by prefix
map $upstream_prefix $upstream_address {
    include snippet.d/upstreams_map;
    default http://127.0.0.1:8080;
}

### set the upstream_host variable based on upstream_address
map $upstream_address $upstream_host {
    default 0;
    "~^https?://([^:]+)" $1;
}

### add scheme parsing, so that partners that should be reached over https
### get https, and the rest get http
map $upstream_address $upstream_scheme {
    default "http://";
    "~(https?://)" $1;
}
```
We declare upstreams with the tag names:

```nginx
upstream one {
    keepalive 64;
    server one.domain.com;
}

upstream two {
    keepalive 64;
    server two.domain.net;
}
```
And the `server{}` block now becomes:

```nginx
server {
    listen 80;

    location / {
        proxy_http_version 1.1;
        proxy_pass $upstream_scheme$upstream_prefix$request_uri;
        proxy_set_header Host $upstream_host;
        proxy_set_header X-Forwarded-For "";
        proxy_set_header X-Forwarded-Port "";
        proxy_set_header X-Forwarded-Proto "";
    }
}

# service for error handling and logging
server {
    listen 127.0.0.1:8080;

    location / {
        return 400;
    }

    location /ngx_status/ {
        stub_status;
    }
}
```
Thanks to the `keepalive` directive in the upstream blocks, and since we set `proxy_http_version 1.1`, we now have a pool of persistent connections, and everything works as it should.

One catch remains: Nginx resolves the upstream server names only at start or reload, so when a partner's address changes, Nginx will not switch the upstream to the new addresses and let traffic through to them. In general, that is also a solution of sorts: throw an nginx reload into cron every 5 minutes and continue drinking tea.
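That workaround can be sketched as a cron drop-in (a hypothetical file; the nginx binary path and `cron.d` support depend on the distribution):

```crontab
# /etc/cron.d/nginx-reload — pick up new upstream addresses by reloading nginx
*/5 * * * * root /usr/sbin/nginx -s reload
```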
But Haproxy offers a nicer way: it can specify dns resolvers and configure the dns cache. Haproxy will thus update the dns cache when the entries in it have expired, and replace the addresses for an upstream in the event that they have changed:

```haproxy
frontend http
    bind *:80
    http-request del-header X-Forwarded-For
    http-request del-header X-Forwarded-Port
    http-request del-header X-Forwarded-Proto
    capture request header Host len 32
    capture request header Referer len 128
    capture request header User-Agent len 128

    acl host_present hdr(host) -m len gt 0

    use_backend %[req.hdr(host),lower,field(1,'.')] if host_present
    default_backend default

resolvers dns
    hold valid 1s
    timeout retry 100ms
    nameserver dns1 1.1.1.1:53

backend one
    http-request set-header Host one.domain.com
    server one--one.domain.com one.domain.com:80 resolvers dns check

backend two
    http-request set-header Host two.domain.net
    server two--two.domain.net two.domain.net:443 resolvers dns check ssl verify none check-sni two.domain.net sni str(two.domain.net)
```
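The `use_backend %[req.hdr(host),lower,field(1,'.')]` rule picks the backend whose name equals the first dot-separated field of the lowercased Host header. Roughly, in shell terms (a sketch with a made-up Host value):

```shell
#!/usr/bin/env bash
# Mimic haproxy's converter chain: req.hdr(host),lower,field(1,'.')
host="One.Domain.Local"   # hypothetical incoming Host header
host="${host,,}"          # lower (bash 4+ lowercasing)
backend="${host%%.*}"     # field(1,'.') — first dot-separated field
echo "${backend}"
```

So `One.Domain.Local` is routed to the backend named `one`, which is exactly the tag from the old nginx map.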
The nginx map already kept the "tag" "upstream" pairs, so I decided to take it as a basis, parse it, and generate the haproxy backends from these values:

```shell
#!/usr/bin/env bash

haproxy_backend_map_file=./root/etc/haproxy/snippet.d/name_domain_map
haproxy_backends_file=./root/etc/haproxy/99_backends.cfg
nginx_map_file=./nginx_map

while getopts 'n:b:m:' OPT; do
    case ${OPT} in
        n) nginx_map_file=${OPTARG} ;;
        b) haproxy_backends_file=${OPTARG} ;;
        m) haproxy_backend_map_file=${OPTARG} ;;
        *)
            echo "Usage: ${0} -n [nginx_map_file] -b [haproxy_backends_file] -m [haproxy_backend_map_file]"
            exit
            ;;
    esac
done

function write_backend(){
    local tag=$1
    local domain=$2
    local port=$3
    local server_options="resolvers dns check"
    [ -n "${4}" ] && local ssl_options="ssl verify none check-sni ${domain} sni str(${domain})"
    [ -n "${4}" ] && server_options+=" ${ssl_options}"
    cat >> ${haproxy_backends_file} <<EOF

backend ${tag}
    http-request set-header Host ${domain}
    server ${tag}--${domain} ${domain}:${port} ${server_options}
EOF
}

:> ${haproxy_backends_file}
:> ${haproxy_backend_map_file}

while read tag addr; do
    tag=${tag//\"/}
    addr=${addr//\"/}      # strip the quotes from the address too, otherwise
                           # the scheme below comes out as '"https' and never matches
    [ -z "${tag}" ] && continue
    [ "${tag:0:1}" == "#" ] && continue
    IFS=":" read scheme domain port <<<"${addr//;/}"
    unset IFS
    domain=${domain//\//}
    case ${scheme} in
        http)
            port=${port:-80}
            write_backend ${tag} ${domain} ${port}
            ;;
        https)
            port=${port:-443}
            write_backend ${tag} ${domain} ${port} 1
            ;;
    esac
done < <(sort -V ${nginx_map_file})
```
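For illustration, here is the parsing step of the generator in isolation: one line of the nginx map split into tag, scheme, domain, and port (a stand-alone sketch; the sample line mirrors `snippet.d/upstreams_map` above):

```shell
#!/usr/bin/env bash
line='"two" "https://two.domain.org";'

read tag addr <<<"${line}"
tag=${tag//\"/}                        # "two" -> two
addr=${addr//\"/}                      # strip quotes from the address as well
IFS=":" read scheme domain port <<<"${addr//;/}"
unset IFS
domain=${domain//\//}                  # //two.domain.org -> two.domain.org
port=${port:-443}                      # default port for the https branch

result="${tag} ${scheme} ${domain} ${port}"
echo "${result}"
```

This prints `two https two.domain.org 443`, which is exactly what `write_backend` receives for the https branch.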
Source: https://habr.com/ru/post/436992/