
What is a Load Balancer?

In this article we will introduce you to the Load Balancer and the most common types of load distribution.

Load Balancer literally means load distributor. Two types of distribution are most commonly used:

  • at the TCP/IP level (Layer 4 of the OSI model);
  • at the application level (Layer 7 of the OSI model), as illustrated in the sketch below.
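
The article keeps this distinction general, but purely as an illustration, a minimal HAProxy-style sketch of the two approaches could look like the following. HAProxy itself, the frontend/backend names and the addresses 10.0.0.11 and 10.0.0.12 are assumptions for the example, not something the article prescribes:

# Layer 4: the balancer forwards raw TCP connections and does not inspect HTTP
frontend web_tcp
    mode tcp
    bind *:443
    default_backend nodes_tcp

backend nodes_tcp
    mode tcp
    balance roundrobin
    server node1 10.0.0.11:443 check
    server node2 10.0.0.12:443 check

# Layer 7: the balancer terminates HTTP and can route on the request content
frontend web_http
    mode http
    bind *:80
    default_backend nodes_http

backend nodes_http
    mode http
    balance roundrobin
    server node1 10.0.0.11:80 check
    server node2 10.0.0.12:80 check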

Load balancing becomes necessary when a website grows to the point where a single server is no longer enough for the load generated by the site. The website is then hosted on at least two servers, with the load distributed between them.

Round Robin

The simplest form of Load Balancer is to set two or more A records for a domain. In this way, your site can be loaded from two different servers, with the DNS server playing the role of the Load Balancer. This type of load distribution is called Round Robin.

How to create a Round Robin Load Balancer?

Each domain has a basic A record (mapping domain.com -> IP address) of the type:

domain.com A

Sample record:

goodexample.eu.    120    IN    A    12.345.67.8

To create a Round Robin Load Balancer, you need a second server containing the same content as the main one, but with a different public IP address, for example 12.345.67.9. Then create a second A record for the domain, but with this other IP address.

Example:

goodexample.eu.    120    IN    A    12.345.67.9

In this way, not just one IP address is responsible for the domain, as with most sites, but two or more. Accordingly, when checking the DNS records, you will get the following result:

goodexample.eu.    120    IN    A    12.345.67.8
goodexample.eu.    120    IN    A    12.345.67.9

This is enough to have a simple load distribution between two or more servers.
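
To confirm that both records are being served, you can query the domain yourself, just as the article does for ebay.com further below. Using the article's example domain and addresses, the check might look like this:

dig +short goodexample.eu
12.345.67.8
12.345.67.9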

When is it used most often?

  • When you don't want to invest extra money in hardware;
  • When you don't have the knowledge to set up a hardware Load Balancer;
  • When the traffic is extremely heavy and a combination of hardware Load Balancer and DNS load balancing is required.

Here is an example of this principle, using ebay.com:

In a Linux terminal, run the dig command (or nslookup in the Command Prompt on Windows). We receive the following information:

dig ebay.com
ebay.com.        3231    IN    A    66.211.175.229
ebay.com.        3231    IN    A    66.211.172.37
ebay.com.        3231    IN    A    216.113.181.253

What do we understand from the above lines? The first time we load the site, we will open ebay.com from a server with an IP address of, for example, 66.211.175.229. After nearly 60 minutes (the number 3231 shows the time, in seconds, for which the information that ebay.com has the address 66.211.175.229 will remain cached), reloading the site will open it from the server with IP address 66.211.172.37. Thus, each visitor loads the site from a different server, and this reduces the load on each of them.

Why is this principle rarely used and not always applicable?

This is for several reasons:

  • We do not have the freedom to choose which visitor loads the website from which server, and when. This means that, in the case of ebay.com, the load is not distributed 33.33% to each of the servers. One server may be much busier than the others.
  • If one server stops working for some reason, we cannot stop the traffic to it. This is due to the caching time. In this case, if the server with IP address 66.211.175.229 fails, the site will not load for up to 60 minutes, until the cached information is refreshed, even though it remains reachable on the other servers. As can be seen from this example, Round Robin is not very suitable for important sites, because a stopped server means lost traffic and potential customers who fail to load the website. We do not have the freedom to redirect that traffic to another running server, because the client's computer has cached the IP address of the broken server.
  • If the site is updated, we must update the information on each of the servers (see the example after this list).
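
As a small illustration of that last point (the tool, the user and the paths are assumptions; the article does not prescribe any particular method), keeping the content of the second server in sync with the main one could be done with something like:

rsync -a --delete /var/www/goodexample.eu/ user@12.345.67.9:/var/www/goodexample.eu/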

All these disadvantages lead to the need for a different type of Load Balancer. In other words: creating a hardware cluster.

Hardware cluster

Generally speaking, a hardware cluster consists of at least three interconnected servers: a Load Balancer, which distributes the load between two or more Nodes. By Node we mean a server that performs functions identical to those of the other servers in the cluster infrastructure. For example: a Load Balancer (server) connected via a switch (10/100 Mb/s Ethernet switch) to five Node servers with a shared network disk array (Networked Disk Storage).

From this example it is clear that when one of the Node servers stops working, we can automatically remove it from the list with a rule in the Load Balancer, and this change takes effect instantly, so no traffic or customers are lost, as there will be no user who cannot load the site.
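
The article does not tie this to any particular software, but purely as an illustration, such a rule in an HAProxy-style Load Balancer could look like the sketch below. HAProxy itself, the backend name site_nodes, the Node addresses 10.0.0.11-13 and the check timings are assumptions for the example:

backend site_nodes
    balance roundrobin
    # "check" makes the Load Balancer probe each Node regularly; a Node that
    # fails its checks is taken out of rotation automatically and put back
    # once it recovers, so no visitor is sent to a broken server
    server node1 10.0.0.11:80 check inter 2s fall 3 rise 2
    server node2 10.0.0.12:80 check inter 2s fall 3 rise 2
    server node3 10.0.0.13:80 check inter 2s fall 3 rise 2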

What are the advantages of this type of Load Balancing?

  • The Load Balancer server itself experiences almost no load. This is due to the fact that it does not process information, but only redirects requests to the servers behind it.
  • Ability to add countless Node machines.
  • Ability to decide which visitor is forwarded to which Node. The load can be distributed on a Round Robin basis (visitors are sent to each Node in turn), on a 50/50 principle for even load distribution, or by giving priority to servers with better hardware capabilities, as sketched after this list.
  • The servers can be combined with a single file system, which makes it easier to administer the sites located on them.
  • Failover capability - if a server in the system stops working, it can be removed from the list without disrupting the operation of the sites on the network.
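
As an illustration of the priority option mentioned in the list above (again only a sketch; HAProxy, the weights and the addresses are assumptions rather than something the article prescribes), a Node with better hardware can simply be given a larger weight:

backend site_nodes
    balance roundrobin
    # node1 has stronger hardware, so it receives roughly twice as many requests
    server node1 10.0.0.11:80 weight 200 check
    server node2 10.0.0.12:80 weight 100 check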
