OS: Ubuntu 12.04, Apache

Goal: to have two servers (VPS) serve one site, i.e. they share a common database, and if one server goes down, the second keeps the site up. (The database does not have to live on a separate server.)

As I understand it, there are several ways to set this up:

  1. Both servers share the load equally (50/50)
  2. One server does all the work, and when it fails, the second takes over

Please tell me which approach to choose. Or maybe I'm wrong and there is a third way.

PS: I hope it's not a problem that the SSL certificate is tied to the domain; will it work across different servers?

Thank you very much!

    3 answers

    Comrade skykub has described something monstrous built on Solaris. You can do it much more simply.

    You take three VPSes. One becomes the frontend running nginx, which proxies to the backends; the domain and the SSL certificate are attached to the frontend. On the backends you run web servers that listen for requests from the frontend. One backend holds the master database, the other holds a slave in hot-standby mode. While everything is healthy, the frontend spreads requests across both backends: the web servers answer in turn, and both talk to the same database, the master.

    When a backend dies, you disable proxying to it on the frontend. If the backend with the current master died, you promote the slave to master; if the backend with the slave died, you simply bring it back up. Once everything works again, you let the frontend spread the load across both servers.

    All of this can be automated, but it's hard. Simple solutions in the style of "rolled something out, copy-pasted some configs, and it all works like clockwork" unfortunately don't exist, though maybe someone is working in that direction.
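The frontend part of the scheme above can be sketched in nginx. This is a minimal sketch, not the answerer's actual config; the backend addresses, port, domain, and certificate paths are all made up:

```nginx
# Frontend nginx: proxy to the two backends. Marking a dead backend
# "down" (and reloading nginx) is the manual failover step described above.
upstream backends {
    server 10.0.0.2:8080;           # backend 1 (runs the master DB)
    server 10.0.0.3:8080;           # backend 2 (runs the slave DB)
    # server 10.0.0.3:8080 down;    # uncomment to take a backend out
}

server {
    listen 443 ssl;
    server_name example.com;                  # placeholder domain
    ssl_certificate     /etc/ssl/site.crt;    # the cert lives only on the frontend,
    ssl_certificate_key /etc/ssl/site.key;    # which answers the PS in the question

    location / {
        proxy_pass http://backends;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With no extra directives, nginx distributes requests round-robin across the `upstream` servers, which matches the "web servers respond in turn" behavior described above.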

    • What I described is exactly a cluster that moves services from node to node. You can do it by hand even more simply, but that is precisely the task. And by the way, it doesn't have to be Solaris; AIX works too ;) (well, if you have a couple of IBM boxes). And load sharing is essentially a real-time cluster, which is even harder. - skykub
    • A cluster is a cluster (more than one server running in parallel). Don't confuse the concepts: what you need is a "cloud"! - areshin

    You are overcomplicating things: see nginx Load Balance Example. Welcome to the 21st century :)
    Nginx can balance the load itself and, accordingly, will not send anything to a dead node. As for the database, it was described correctly above: hot standby is your choice (easy to google).
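Hot standby with MySQL (the era's default on Ubuntu 12.04) is plain master-slave replication. A minimal sketch of the two `my.cnf` fragments; the server IDs, log paths, and the database name `mysite` are made-up examples:

```ini
# --- master: /etc/mysql/my.cnf ---
[mysqld]
server-id    = 1
log_bin      = /var/log/mysql/mysql-bin.log
binlog_do_db = mysite

# --- slave: /etc/mysql/my.cnf ---
[mysqld]
server-id = 2
relay_log = /var/log/mysql/mysql-relay-bin.log
read_only = 1
```

After restarting both servers, you point the slave at the master with `CHANGE MASTER TO ...` and `START SLAVE;`. Promoting the slave on failover amounts to `STOP SLAVE;` and removing `read_only`, which is the "make a master from the slave" step from the other answer.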

    If you want exactly 2 servers, you can solve this either in hardware (which won't work here) or with CARP.

    The Common Address Redundancy Protocol (CARP) lets several hosts share the same IP address, which can be used for failover and balancing. Each host keeps its own separate IP address in addition to the shared one.

    Unfortunately I haven't set it up myself - there was no need :(

    UPD: There is a good write-up on Habr: "A fault-tolerant system based on MySQL replication and the CARP network protocol".
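CARP itself is a BSD kernel feature; on Ubuntu the usual route is the userland port `ucarp` (packaged in Ubuntu 12.04). A hedged sketch of running it, not a tested setup; all addresses, the password, and the script paths are made up:

```shell
# Run on both nodes (each with its own --srcip). The node that wins the
# CARP election holds the virtual IP 192.0.2.10 and thus serves the site;
# --vhid and --pass must match on both nodes.
ucarp --interface=eth0 --srcip=192.0.2.11 \
      --vhid=1 --pass=secret --addr=192.0.2.10 \
      --upscript=/etc/ucarp/vip-up.sh \
      --downscript=/etc/ucarp/vip-down.sh
```

The up/down scripts typically just add or remove the virtual IP, e.g. `ip addr add 192.0.2.10/24 dev eth0` in `vip-up.sh`, so failover is only the address moving; the database still needs its own replication as described above.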

      One server does all the work, and when it fails, the second takes over

      1. You install the same OS on both nodes (for example, Solaris)
      2. You install cluster software (for example, Veritas)
      3. On one node you bring up a service group SQLgrp, where the database will run. In the same group you mount an external disk where the database actually lives; the mount point must exist on both nodes.
      4. On the other node you bring up a group Apachegrp, where Apache and friends run. There you also mount an external disk holding the users' home directories, mail, ftp, and www; again, the mount point must exist on both nodes.

      The disk arrays must be independent of the nodes (shared storage), otherwise when one node fails over to the other there will be nothing to mount. Users and groups must be synchronized between nodes by ID (UID, GID), otherwise there will be permission conflicts when services move from node to node.
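The UID/GID requirement is easy to check mechanically. A small sketch; the file names and sample entries below are made up, and in practice you would compare each node's real `/etc/passwd`:

```shell
# Create sample copies of each node's passwd entries (hypothetical data).
cat > node1.passwd <<'EOF'
www-data:x:33:33
mysql:x:105:113
EOF
cat > node2.passwd <<'EOF'
www-data:x:33:33
mysql:x:105:113
EOF

# Compare name/UID/GID triples; any difference means permission
# conflicts when a service group moves to the other node.
awk -F: '{print $1, $3, $4}' node1.passwd > node1.ids
awk -F: '{print $1, $3, $4}' node2.passwd > node2.ids
if cmp -s node1.ids node2.ids; then
    echo "UID/GID maps match"
else
    echo "MISMATCH: align IDs before relying on failover"
fi
```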

      IMHO this is expensive and painful, and worth considering only for really serious projects.

      • > IMHO this is expensive and painful, and worth considering only for really serious projects. Of course, if you use solutions like these, then yes. It's just that since your grandfather's days as an admin, the web has gotten much cheaper, clustering included. - zb'
      • No need to insult me. My grandfather was a locomotive driver, and I work on such systems now: just yesterday I was servicing a two-node cluster on a PrimePower 650, and I know the prices. The answer matches the question. - skykub
      • Where is the insult? You know the prices of hardware that is very strange for these purposes; do you know the main problem, which is genuinely hard to solve unless you have your own subnet announced over BGP? Setting up fallback to different data centers with switchover within a couple of minutes. No Solaris or other late-20th-century delights will help there. Centralized fault-tolerant services do help, but then you start depending on them, and they fall over too. - zb'