The Basics Of AWS Cloud

Whether you provide training supplies for the technical industry or advice on dog care, an understanding of Amazon Web Services (AWS) architecture will help you understand how to scale your infrastructure. In this article you will learn the basics of how an AWS data center is organized, how instances are designed and launched, and how load balancing works.

Data Centers

AWS data centers are clusters of servers grouped into Regions. Each Region contains multiple Availability Zones (AZs). AWS data centers are spread around the world and strategically located to meet customer demand.

Each Region provides computing power through a service called EC2. EC2, which stands for Elastic Compute Cloud, allows users to rent virtual machines on which they can run applications.

EC2 – What's Inside?

Not all EC2 instances are made the same way. To match computing power to the application as closely as possible, instances are categorized by their capabilities.

Instances fall into five families: general purpose, compute optimized, memory optimized, accelerated computing, and storage optimized. They break down as follows:

  • General Purpose (A, T and M types) – These instances provide a balance of compute, memory and networking resources, and are designed for a variety of workloads.
  • Compute Optimized (C types) – These instances are well suited to workloads that need high-performance processors, such as simulation and machine learning.
  • Memory Optimized (R and X types) – Memory-optimized instances suit workloads that process large data sets in memory.
  • Accelerated Computing (P and G types) – Accelerated computing instances are intended for specialized applications, such as graphics processing.
  • Storage Optimized (I, D and H types) – Storage-optimized instances suit workloads that require high read and write access to very large data sets on local storage.
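As a rough sketch of the family groupings above, the mapping from an instance type's leading letter to its workload category could be expressed like this. The helper function and dictionary are illustrative only, not an AWS API:

```python
# Hypothetical lookup mapping EC2 instance-family letters to the
# workload categories described above. Illustrative, not an AWS API.
INSTANCE_FAMILIES = {
    "general_purpose": {"A", "T", "M"},
    "compute_optimized": {"C"},
    "memory_optimized": {"R", "X"},
    "accelerated_computing": {"P", "G"},
    "storage_optimized": {"I", "D", "H"},
}

def family_of(instance_type: str) -> str:
    """Return the workload category for a type like 'm5.large'."""
    letter = instance_type[0].upper()
    for family, letters in INSTANCE_FAMILIES.items():
        if letter in letters:
            return family
    raise ValueError(f"unknown instance family: {instance_type}")
```

For example, `family_of("m5.large")` returns `"general_purpose"` and `family_of("c5.xlarge")` returns `"compute_optimized"`.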

Launch Types

The launch type you need depends on the workload. Instances can be On-Demand, Reserved, Spot or Dedicated. They break down as follows:

  • On-Demand Instances – On-Demand Instances suit short, irregular workloads. On-Demand offers predictable pricing and is the easiest way to start computing.
  • Reserved Instances – Reserved Instances require a commitment of at least one year. Convertible Reserved Instances suit long-running workloads where the instance type may change over time (e.g. m4.large this week, t5.large in three months). Scheduled Reserved Instances run in a recurring time window – for example, every Saturday for 12 hours when large amounts of data need to be processed.
  • Spot Instances – Spot Instances are particularly useful when cheap compute capacity is needed. Users bid on spare computing capacity at a discount of up to 90% compared to On-Demand prices. Spot Instances suit batch jobs, data analysis and image processing. They are not suitable for critical workloads or databases.
  • Dedicated Instances – These instances guarantee that no other customer shares your hardware. You get full control over instance placement and visibility into the physical cores of the hardware. This is the most expensive launch type. It benefits applications with complex licensing models or strict regulatory requirements.
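To get an intuition for how the launch types differ in price, here is a toy cost comparison. The hourly rate and the reserved discount below are hypothetical placeholders; only the up-to-90% Spot figure comes from the list above, and real prices vary by region and instance type:

```python
# Toy cost comparison across launch types. The on-demand rate and
# reserved discount are assumed placeholders, not real AWS prices;
# the 90% spot discount is the ceiling mentioned in the article.
ON_DEMAND_HOURLY = 0.10    # assumed USD/hour
RESERVED_DISCOUNT = 0.40   # assumed saving for a 1-year commitment
SPOT_DISCOUNT = 0.90       # up to 90% off on-demand

def monthly_cost(hours: float, launch_type: str) -> float:
    """Estimate a month's bill for one instance under a launch type."""
    rate = ON_DEMAND_HOURLY
    if launch_type == "reserved":
        rate *= 1 - RESERVED_DISCOUNT
    elif launch_type == "spot":
        rate *= 1 - SPOT_DISCOUNT
    return round(hours * rate, 2)
```

Running a full month (roughly 730 hours) on these assumed rates, `monthly_cost(730, "on_demand")` gives 73.0 while `monthly_cost(730, "spot")` gives 7.3, which is why Spot is so attractive for interruptible batch work.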

Scaling and Load Balancing

For most business applications, a single instance cannot meet the demands of the application, so additional servers must be provisioned. Instead of creating multiple static instances, it is better to adjust server capacity dynamically. There are two ways to do this: horizontal scaling and vertical scaling.

Vertical Scaling

Vertical scaling increases the computing power of a single instance. For example, if your application runs on an m2.micro, scaling it vertically means running it on an m2.large instead. Vertical scaling is very common for database workloads, where capacity needs grow steadily. It has limits, which are usually imposed by the hardware available.
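A minimal sketch of what "one size up" means, using a hypothetical size ladder (real instance families do not all offer every size):

```python
# Vertical scaling sketch: moving an instance one size up within the
# same family. The size ladder is illustrative, not an AWS API.
SIZES = ["micro", "small", "medium", "large", "xlarge", "2xlarge"]

def scale_up(instance_type: str) -> str:
    """Return the next size up, e.g. 'm2.micro' -> 'm2.small'."""
    family, size = instance_type.split(".")
    index = SIZES.index(size)
    if index + 1 >= len(SIZES):
        # This is the hardware limit the article mentions: at some
        # point there is no bigger machine to move to.
        raise ValueError("already at the largest size")
    return f"{family}.{SIZES[index + 1]}"
```

The `ValueError` branch is the point of the sketch: vertical scaling eventually hits a ceiling, which is when horizontal scaling becomes necessary.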

Horizontal Scaling

Horizontal scaling increases the number of instances supporting the application. It is very common for web applications and makes it easy to scale with demand. Horizontal scaling pairs well with high availability: deliberately distributing servers across multiple Availability Zones or Regions meets demand and reduces the risk of data loss.
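The idea of distributing instances across zones can be sketched as a simple round-robin placement. The zone names used below are just example identifiers:

```python
# Sketch of spreading instances evenly across Availability Zones for
# high availability. Zone names are hypothetical examples.
def spread_across_zones(instance_count: int, zones: list[str]) -> dict[str, int]:
    """Assign instances to zones as evenly as possible, round-robin."""
    placement = {zone: 0 for zone in zones}
    for i in range(instance_count):
        placement[zones[i % len(zones)]] += 1
    return placement
```

With five instances and three zones, no zone holds more than two instances, so losing one zone leaves most of the fleet running.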

Elastic Load Balancing

Load balancers are the magic behind scaling EC2 instances. An Elastic Load Balancer is a server that forwards traffic to multiple EC2 instances as needed. It also performs health checks on each instance to verify that it can receive traffic; if an instance fails its check, traffic is redirected to another instance. This adds a layer of fault tolerance behind the load balancer. There are three types of Elastic Load Balancers: Classic Load Balancers, Application Load Balancers and Network Load Balancers.
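A minimal sketch of the routing-plus-health-check behavior described above. Here the health status is set directly on the object; a real ELB probes each instance over the network:

```python
# Toy load balancer that routes round-robin and skips unhealthy
# targets. Illustrative only; a real ELB performs network health
# checks rather than reading a flag.
class LoadBalancer:
    def __init__(self, targets: list[str]):
        self.targets = targets
        self.healthy = {t: True for t in targets}
        self._next = 0

    def route(self) -> str:
        """Return the next healthy target, round-robin."""
        for _ in range(len(self.targets)):
            target = self.targets[self._next]
            self._next = (self._next + 1) % len(self.targets)
            if self.healthy[target]:
                return target
        raise RuntimeError("no healthy targets available")
```

If one of three instances fails its health check, traffic simply alternates between the remaining two, which is the fault tolerance described above.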

Classic Load Balancer

The Classic Load Balancer was introduced in 2009 and supports the HTTP, HTTPS and TCP protocols. It is no longer recommended, because the Application Load Balancer and Network Load Balancer offer more capabilities.

Application Load Balancer

The Application Load Balancer (ALB) supports HTTP, HTTPS and WebSocket. The ALB is ideally suited for microservices and container-based applications such as Docker and Amazon ECS. It exposes a fixed hostname, and the backend EC2 instances see traffic coming from the load balancer's private IP address.

Network Load Balancer

The Network Load Balancer (NLB) supports TCP, TLS (secure TCP) and UDP. It operates with lower latency than an Application Load Balancer. Its main use cases are extreme performance and TCP or UDP traffic.

Auto Scaling Groups

In practice, user load rises and falls significantly, so servers must be added and removed quickly. An Auto Scaling Group scales by adding or removing EC2 instances to match the required load. It defines the minimum and maximum number of instances to run.
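The min/max behavior can be sketched as a sizing function: pick enough instances for the current load, then clamp to the group's bounds. The capacity figures used here are hypothetical:

```python
import math

# Sketch of an Auto Scaling Group's sizing decision: enough instances
# for the current load, clamped to the group's min/max. The capacity
# numbers in the usage example are hypothetical.
def desired_capacity(load: float, per_instance_capacity: float,
                     min_size: int, max_size: int) -> int:
    """Return the instance count for a load, within [min_size, max_size]."""
    needed = math.ceil(load / per_instance_capacity)
    return max(min_size, min(max_size, needed))
```

For instance, with instances that each handle 100 requests per second and a group bounded between 2 and 8 instances, a load of 450 req/s yields 5 instances, a spike to 950 req/s is capped at 8, and a quiet period at 50 req/s still keeps the 2-instance floor running.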


Whether you specialize in software development or are responsible for business development, a basic understanding of AWS principles will help your organization weigh the potential costs and benefits of different types of infrastructure. Insight into AWS also gives you a foundation in cloud architecture, regardless of which cloud provider you choose.
