nginx Official Documentation Translation -- Load Balancing

Reads: 1486 Published: 2016-07-09 22:59:36

Author: zzl005 Tags: nginx 朱忠来005

Introduction

Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations.

It is possible to use nginx as a very efficient HTTP load balancer to distribute traffic to several application servers and to improve performance, scalability and reliability of web applications with nginx.


Load balancing methods

The following load balancing mechanisms (or methods) are supported in nginx:

- round-robin: requests to the application servers are distributed in a round-robin fashion (the default),
- least-connected: the next request is assigned to the server with the least number of active connections,
- ip-hash: a hash function based on the client's IP address determines which server should receive the next request.

Default load balancing configuration

The simplest configuration for load balancing with nginx may look like the following:

http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}


In the example above, there are 3 instances of the same application running on srv1-srv3. When the load balancing method is not specifically configured, it defaults to round-robin. All requests are proxied to the server group myapp1, and nginx applies HTTP load balancing to distribute the requests.

Reverse proxy implementation in nginx includes load balancing for HTTP, HTTPS, FastCGI, uwsgi, SCGI, and memcached.

To configure load balancing for HTTPS instead of HTTP, just use “https” as the protocol.

When setting up load balancing for FastCGI, uwsgi, SCGI, or memcached, use fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives respectively.
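For example, a minimal HTTPS sketch (the certificate paths and upstream contents below are illustrative placeholders, not from the original article):

```nginx
upstream myapp1 {
    server srv1.example.com;
    server srv2.example.com;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/cert.pem;   # placeholder path
    ssl_certificate_key /etc/nginx/cert.key;   # placeholder path

    location / {
        # "https" as the scheme tells nginx to use TLS toward the upstreams
        proxy_pass https://myapp1;
    }
}
```

For a FastCGI backend the pass directive changes accordingly, e.g. `fastcgi_pass myapp1;` together with `include fastcgi_params;` inside the location block.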


Least-connected load balancing

Another load balancing discipline is least-connected. Least-connected allows controlling the load on application instances more fairly in a situation when some of the requests take longer to complete.

With the least-connected load balancing, nginx will try not to overload a busy application server with excessive requests, distributing the new requests to a less busy server instead.

Least-connected load balancing in nginx is activated when the least_conn directive is used as part of the server group configuration:

    upstream myapp1 {
        least_conn;
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }


Session persistence

Translator's note:

Round-robin and least-connected load balancing were described above; what drawback do these two methods share?

Please note that with round-robin or least-connected load balancing, each subsequent client’s request can be potentially distributed to a different server. There is no guarantee that the same client will be always directed to the same server.

If there is the need to tie a client to a particular application server — in other words, make the client’s session “sticky” or “persistent” in terms of always trying to select a particular server — the ip-hash load balancing mechanism can be used.


To configure ip-hash load balancing, just add the ip_hash directive to the server (upstream) group configuration:

upstream myapp1 {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}


Weighted load balancing

It is also possible to influence nginx load balancing algorithms even further by using server weights.

In the examples above, the server weights are not configured which means that all specified servers are treated as equally qualified for a particular load balancing method.

With the round-robin in particular it also means a more or less equal distribution of requests across the servers — provided there are enough requests, and when the requests are processed in a uniform manner and completed fast enough.


When the weight parameter is specified for a server, the weight is accounted as part of the load balancing decision.

    upstream myapp1 {
        server srv1.example.com weight=3;
        server srv2.example.com;
        server srv3.example.com;
    }

With this configuration, every 5 new requests will be distributed across the application instances as the following: 3 requests will be directed to srv1, one request will go to srv2, and another one — to srv3.

It is similarly possible to use weights with the least-connected and ip-hash load balancing in the recent versions of nginx.
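As a sketch (not shown in the original article), combining a method directive with weights might look like:

```nginx
upstream myapp1 {
    least_conn;                       # ip_hash accepts weights similarly
    server srv1.example.com weight=3; # receives roughly 3x the connections
    server srv2.example.com;
    server srv3.example.com;
}
```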


Health checks

Reverse proxy implementation in nginx includes in-band (or passive) server health checks. If the response from a particular server fails with an error, nginx will mark this server as failed, and will try to avoid selecting this server for subsequent inbound requests for a while.

The max_fails directive sets the number of consecutive unsuccessful attempts to communicate with the server that should happen during fail_timeout. By default, max_fails is set to 1. When it is set to 0, health checks are disabled for this server. The fail_timeout parameter also defines how long the server will be marked as failed. After fail_timeout interval following the server failure, nginx will start to gracefully probe the server with the live client’s requests. If the probes have been successful, the server is marked as a live one.
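A sketch of these parameters (the values below are illustrative, not recommendations from the original):

```nginx
upstream myapp1 {
    # srv1 is marked failed after 3 consecutive errors within 30s,
    # and is probed again after the same 30s fail_timeout window
    server srv1.example.com max_fails=3 fail_timeout=30s;
    server srv2.example.com max_fails=3 fail_timeout=30s;
    # max_fails=0 disables passive health checking for this server
    server srv3.example.com max_fails=0;
}
```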


Further information

In addition, there are more directives and parameters that control server load balancing in nginx, e.g. proxy_next_upstream, backup, down, and keepalive. For more information please check our reference documentation.
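A brief sketch of some of these directives (an illustrative fragment, not from the original article):

```nginx
upstream myapp1 {
    server srv1.example.com;
    server srv2.example.com down;    # manually marked unavailable
    server srv3.example.com backup;  # used only when primary servers fail
    keepalive 16;                    # cache up to 16 idle upstream connections
}

server {
    listen 80;

    location / {
        proxy_pass http://myapp1;
        # try the next server on connection errors or timeouts
        proxy_next_upstream error timeout;
        # upstream keepalive requires HTTP/1.1 and a cleared Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```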

Last but not least, application load balancing, application health checks, activity monitoring and on-the-fly reconfiguration of server groups are available as part of our paid NGINX Plus subscriptions.

The following articles describe load balancing with NGINX Plus in more detail:

Recommended related articles: