
Load Balancing and Virtualization

Load balancing is the technology that distributes service requests across a pool of resources. It can be implemented in either software or hardware. As an optimization technique, load balancing helps prevent congestion, increase utilization and throughput, and reduce latency and response time.

Without load balancing, administering a cloud computing system would be extremely difficult. Through managed redirection, load balancing provides the redundancy needed to make an inherently unreliable system reliable, and when combined with a failover mechanism it also provides fault tolerance. Server farms, computer clusters, and high-availability applications almost always incorporate load balancing.

A load-balancing system can use various mechanisms to direct service requests. In its most basic form, the load balancer listens on a network port for incoming requests. When a request arrives from a client (the service requester), the load balancer uses a scheduling algorithm to decide which resource should handle it.
 
Common scheduling algorithms include round robin and weighted round robin, fastest response time, least connections, and weighted least connections; modern load balancers also frequently support custom assignments based on additional factors.
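
As a concrete illustration, here is a minimal Python sketch of a weighted round-robin scheduler. The server names and weights are hypothetical, chosen only for the example.

    import itertools

    # Hypothetical backend pool: each server name maps to a weight, and a
    # heavier server receives proportionally more of the incoming requests.
    SERVERS = {"app-1": 3, "app-2": 1}

    def weighted_round_robin(pool):
        """Cycle through server names in proportion to their weights."""
        expanded = [name for name, weight in pool.items() for _ in range(weight)]
        return itertools.cycle(expanded)

    scheduler = weighted_round_robin(SERVERS)
    for _ in range(8):
        print(next(scheduler))  # app-1 is chosen three times for every app-2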

More sophisticated load balancers act as workload managers. To assign work to individual resources, they evaluate factors such as each resource's current utilization, response time, work queue length, connection latency and capacity, and more.
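
To make this concrete, the small Python sketch below scores each candidate resource on a few such metrics and picks the lowest-scoring one. The metric names, weights, and sample values are assumptions made purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        utilization: float       # current load, 0.0 to 1.0
        active_connections: int  # length of the work queue
        avg_response_ms: float   # recent average response time
        capacity: int            # relative capacity rating

    def score(r: Resource) -> float:
        # Lower is better: penalize load, queue depth relative to capacity,
        # and latency. The weights here are purely illustrative.
        return (100 * r.utilization
                + r.active_connections / max(r.capacity, 1)
                + 0.5 * r.avg_response_ms)

    def pick(pool):
        """Assign the next task to the least-loaded resource by score."""
        return min(pool, key=score)

    pool = [Resource("app-1", 0.60, 120, 35.0, 4),
            Resource("app-2", 0.35, 80, 50.0, 2)]
    print(pick(pool).name)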
 
Load balancers also provide a range of additional functions: polling resources for health, bringing standby servers online through priority activation, weighting workloads according to resource capacity to achieve asymmetric loading, compressing HTTP traffic, offloading and buffering TCP connections, handling security and authentication, and shaping packets through content filtering and priority queuing.
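
As a sketch of the health-polling idea alone, the snippet below checks a hypothetical /healthz endpoint on each backend using only the Python standard library; a real load balancer does this continuously and far more efficiently.

    import urllib.request
    import urllib.error

    # Hypothetical health-check endpoints for two backends.
    BACKENDS = {
        "app-1": "http://10.0.0.11/healthz",
        "app-2": "http://10.0.0.12/healthz",
    }

    def healthy_backends(backends, timeout=2.0):
        """Return names of backends whose health endpoint answers HTTP 200."""
        alive = []
        for name, url in backends.items():
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    if resp.status == 200:
                        alive.append(name)
            except (urllib.error.URLError, OSError):
                pass  # any failure counts as unhealthy; leave it out of rotation
        return alive

    print(healthy_backends(BACKENDS))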
 
An Application Delivery Controller (ADC) is a combination load balancer and application server placed between a firewall or router and a server farm that provides Web services. An ADC is assigned a virtual IP address (VIP), which it maps onto a pool of servers according to application-specific criteria.
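
The sketch below illustrates the VIP idea: one virtual address fronts two hypothetical pools, an application-level rule (here, the URL prefix) selects the pool, and a simple round robin selects the real server. All addresses, pool names, and rules are invented for illustration.

    import itertools

    VIP = "203.0.113.10"  # hypothetical virtual IP address presented to clients

    # Hypothetical pools of real servers sitting behind the VIP.
    POOLS = {
        "api": itertools.cycle(["10.0.0.21", "10.0.0.22"]),
        "web": itertools.cycle(["10.0.0.11", "10.0.0.12"]),
    }

    def route(path):
        """Choose a pool by an application-specific rule, then round-robin."""
        pool = POOLS["api"] if path.startswith("/api/") else POOLS["web"]
        return next(pool)

    for path in ("/api/orders", "/index.html", "/api/users"):
        print(f"{VIP} {path} -> {route(path)}")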



Happy Exploring!
