Cloud computing and the global spread of the Internet have allowed enterprises to expand their markets and customer bases. The scalability and elasticity of cloud computing help enterprises grow their computing systems in step with business requirements. This flexibility reduces the need for up-front capital investment in equipment that may only be needed in the future.
It also allows enterprises to make provisioning decisions for compute and storage at a much finer granularity. If there is peak demand for one or two days, you can create additional servers in the cloud; when demand subsides, you can release those resources. Elasticity in compute and storage is an indispensable element of quality of service, but it is not the only factor. From the user's point of view, the quality of an application is determined in large part by its responsiveness. Applications that feel slow are problematic, and using them can cause user dissatisfaction and lost revenue. A one-second page load delay results in:
- An 11% reduction in page views
- A 16% decrease in customer satisfaction
- A 7% loss in revenue
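As a rough illustration, the quoted percentages can be applied to a baseline to estimate the cost of a one-second delay. The baseline figures below are made-up example numbers, not data from the source:

```python
# Hypothetical illustration: applying the quoted percentages to an
# assumed baseline (page views and revenue figures are invented).
def delay_impact(page_views, revenue):
    """Estimate the effect of a one-second page-load delay."""
    return {
        "page_views": page_views * (1 - 0.11),  # 11% fewer page views
        "revenue": revenue * (1 - 0.07),        # 7% revenue loss
    }

impact = delay_impact(page_views=100_000, revenue=50_000.0)
```

For a site with 100,000 daily page views and $50,000 in daily revenue, the estimate works out to roughly 89,000 page views and $46,500 in revenue.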
Reduce application response time
The responsiveness of an application is affected by many factors: how the code is written, how the database is designed, bandwidth, and network latency. One way to improve application performance is to optimize the application code. This includes:
- Choosing more efficient algorithms
- Code analysis to identify time-consuming functions
- Rewriting database queries so that less data is returned
- Optimizing the database architecture: creating additional indexes and taking other measures to reduce the number of I/O operations the database performs
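For the second item, identifying time-consuming functions, Python ships a full profiler (`cProfile`); the following is only a minimal home-grown sketch of the same idea, accumulating elapsed time per function so the hot spot stands out. All function names here are hypothetical:

```python
import time
from functools import wraps

# Cumulative wall-clock time recorded per decorated function.
timings = {}

def timed(fn):
    """Decorator that adds each call's elapsed time to `timings`."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings[fn.__name__] = timings.get(fn.__name__, 0.0) + (
                time.perf_counter() - start)
    return wrapper

@timed
def slow_report():
    time.sleep(0.05)  # stands in for an expensive database query

@timed
def fast_lookup():
    return sum(range(100))

slow_report()
fast_lookup()
# `timings` now shows slow_report dominating, marking it as the
# candidate for optimization.
```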
Improvements at the application level can, in some cases, significantly boost performance. However, such improvements can be expensive and take longer than other approaches.
Cloud computing also lets enterprises take the well-known, but sometimes questionable, approach of "throwing more hardware at the problem." It may turn out to be faster to scale the servers the application runs on than to revise and fix the code.
Vertical scaling means deploying the application on a server with more cores and faster storage devices. Applications designed to serve distributed loads can instead be scaled horizontally: additional servers are added to a load-balancing cluster, and the load balancer distributes the work across the larger pool.
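The horizontal-scaling idea can be sketched with a round-robin balancer that cycles requests across the pool; the server names are hypothetical, and a production load balancer would of course add health checks and weighting:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin distribution across a horizontally scaled pool."""

    def __init__(self, servers):
        self._pool = cycle(servers)

    def next_server(self):
        # Each request goes to the next server in rotation,
        # wrapping around after the last one.
        return next(self._pool)

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
order = [lb.next_server() for _ in range(4)]
# order -> ["app-1", "app-2", "app-3", "app-1"]
```

Adding capacity is then just a matter of putting another server into the list the balancer cycles over.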
Both of these approaches help increase performance, assuming there are no bottlenecks outside the servers (for example, time spent on I/O operations in the storage array). If I/O performance is the problem, moving to a faster storage technology can help.
Although optimizing the application code and database architecture often increases the server’s throughput, these improvements do not always reduce the response time of the application. You cannot reduce network latency, that is, the time it takes to send data between two network devices, by improving the algorithms on the server or optimizing database queries.
The term "cloud acceleration" refers to cloud computing technologies that improve the overall responsiveness of an application by reducing the time it takes to deliver content to the end user. We will not go into the technical details, but note that cloud acceleration is typically implemented in combination with content delivery networks (CDNs), which distribute content around the globe and reduce network traffic through specialized optimizations. Implementing cloud acceleration raises several main concerns:
- Scalability and geographic reach
- Consolidation of services
- Redundancy
To implement a cloud acceleration solution successfully, you need to examine each of these concerns.
Scalability and geographic reach: Physical and technical factors limit what a network can do; no amount of engineering will change the laws of physics to increase signal propagation speed. Although an organization can improve its own network equipment, its business still depends on the infrastructure of the various Internet service providers (ISPs) around the world. A CDN compensates for these network limits by maintaining copies of data around the globe and, when responding to a user's request, serving it from the closest resource over the best path between the endpoints. For example, a client in Amsterdam can receive content from a data center in Paris, whereas a client in Shanghai will receive the same content from a data center in Singapore.
Enterprises can deploy and maintain their own data centers, or place their infrastructure in provider facilities (co-location), to cover the whole world. Such a deployment must be dense enough to ensure global reachability and the ability to respond to requests from customers, employees, and partners wherever they are. In addition, each data center must contain enough scalable equipment to cope with the peak load it may encounter.
Redundancy: Another consideration is redundancy. Equipment breaks down, software fails, and network connections are lost. If a data center fails, the other data centers around the world must reconfigure themselves to serve the traffic that the failed site was handling.
In addition, redundancy means keeping up-to-date copies of the content. Replication procedures must ensure the timely distribution of content among all the data sites.
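The failover behavior described above can be sketched as routing over whichever sites currently pass their health checks. The site names and health flags are hypothetical; a real system would probe health continuously rather than read it from a table:

```python
# Hypothetical site health table: True = up, False = down.
SITES = {"eu-west": True, "ap-south": True, "us-east": False}

def serving_sites(sites):
    """Return the sites that should currently receive traffic."""
    return [name for name, healthy in sites.items() if healthy]

def route(request_id, sites):
    """Spread requests over the healthy sites; fail loudly if none remain."""
    healthy = serving_sites(sites)
    if not healthy:
        raise RuntimeError("no healthy sites available")
    return healthy[request_id % len(healthy)]
```

When `us-east` fails, its traffic is automatically absorbed by `eu-west` and `ap-south`; when it recovers, flipping its flag back to `True` returns it to rotation.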