
AWS Cloud Practitioner notes

AWS Certified Cloud Practitioner:


What is cloud computing?

Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform with pay-as-you-go pricing.

You can provision exactly the right type and size of computing resources you need, and you can access as many resources as you need almost instantly: servers, storage, databases, and application services.

Amazon Web Services owns and maintains the network-connected hardware required for these services, while you provision and use what you need via a web application.


*****************************************

Deployment models of Cloud:


Private Cloud:


Cloud services used by a single organisation, not exposed to the public. Complete control.

Good for security of sensitive applications and for meeting specific business needs.

**********************

Public Cloud:

Cloud resources owned and operated by a third party; the cloud service provider delivers resources over the internet.

**********************

Hybrid Cloud:

Keep some servers on premises and extend some capabilities to the cloud. You keep control over sensitive assets in your private infrastructure.


***************


Cloud computing types:

On-premises:

Everything we need to manage ourselves:

APPLICATION

DATA

RUNTIME

OPERATING SYSTEM

VIRTUALIZATION

SERVER

STORAGE

NETWORKING


*************************


IaaS (Infrastructure as a Service):

we manage:

APPLICATION

DATA

RUNTIME

OPERATING SYSTEM


cloud provider manages:

VIRTUALIZATION

SERVER

STORAGE

NETWORKING

Ex: Amazon EC2, Google Compute Engine.


***********************


PaaS (Platform as a Service):

we manage:

APPLICATION

DATA


cloud provider manages:

RUNTIME

OPERATING SYSTEM

VIRTUALIZATION

SERVER

STORAGE

NETWORKING

Ex: Google App Engine, Heroku.


**************************


SaaS (Software as a Service):


everything below is managed by the cloud provider:

APPLICATION

DATA

RUNTIME

OPERATING SYSTEM

VIRTUALIZATION

SERVER

STORAGE

NETWORKING

Ex: Google Apps.



********************************


Regions:


AWS has regions all around the world.

names look like us-east-1, eu-west-3.

How to choose a region?

compliance with data governance and legal requirements.

proximity to customers for reduced latency.

available services within the region (not all services exist in all regions).

pricing varies from region to region and is transparent on the service pricing page.

*************************


Availability zones:


Each region has multiple availability zones: minimum 3, maximum 6 (usually 3).

region: ap-southeast-2

zones:

ap-southeast-2a

ap-southeast-2b

ap-southeast-2c

Each zone is physically separate from the others, so they are isolated from disasters, yet they are connected to each other with high-bandwidth, low-latency networking.


*************************

IAM:

Identity and Access Management, a global service.

The root account is created by default; it should not be used or shared.

users are people within your organisation, and they can be grouped.

groups can only contain users, not other groups.

users don't have to belong to a group, and a user can belong to multiple groups.

*****************************

IAM permissions:

users or groups can be assigned JSON documents called policies.

These policies define the permissions of the users.

In AWS, you apply the least privilege principle: don't give more permissions than a user needs.

How to create a new user in the AWS console? (a CLI sketch follows the steps)

go to IAM

go to Users

click on Create user

give a username and password

add the user to the groups you want
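A rough CLI equivalent of these console steps, as a hedged sketch (the user name "jane", the password, and the group "developers" are made-up placeholders):

  # create the user
  aws iam create-user --user-name jane

  # give the user a console password (forced to reset at first sign-in)
  aws iam create-login-profile --user-name jane --password 'Initial-Pass-123!' --password-reset-required

  # add the user to an existing group
  aws iam add-user-to-group --user-name jane --group-name developers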


***************************************


IAM policies:


policies can be attached to groups.

once a policy is attached to a group, all members of the group are affected.

an inline policy can be attached to a single user.

every policy defines a set of permissions; through the policies assigned to them (directly or via groups), users get the permissions they need to perform their tasks.

****************************


IAM Policy structure:

a policy consists of a Version, an Id, and one or more Statements.

a statement consists of a statement id (Sid), Effect (Allow/Deny), Principal, Action, Resource, and an optional Condition.

Action lists the API actions the policy allows or denies, like s3:GetObject, s3:PutObject etc., which grant the corresponding permissions to the users.
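For illustration, a minimal policy document might look like the sketch below (the Id, Sid, and bucket name are made up; Principal is omitted because in a policy attached to a user or group the principal is implied):

  {
    "Version": "2012-10-17",
    "Id": "ExamplePolicy",
    "Statement": [
      {
        "Sid": "AllowS3ObjectAccess",
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-example-bucket/*"
      }
    ]
  }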

*****************************


IAM MFA:


Multi-factor authentication in AWS can be done through virtual MFA apps such as Google Authenticator or Authy.

How can users access AWS?

1. AWS Management Console

2. AWS Command Line Interface (CLI)

3. AWS SDK: Software Development Kit.


***********************


CLI:

• Tool that helps you interact with AWS services using commands in your command-line shell

• Direct access to the public APIs of AWS services.

• you can develop scripts to manage your resources

• it's open source

• Alternative to the AWS Management Console.
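A few hedged examples of CLI usage (the region is a placeholder; aws configure must be run once to set credentials):

  aws configure                                    # set access key, secret key, default region
  aws s3 ls                                        # list your S3 buckets
  aws ec2 describe-instances --region us-east-1    # list EC2 instances in one region
  aws iam list-users                               # list IAM users in the account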

****************************

CloudShell:

CloudShell is a browser-based shell that you can use to manage AWS services with the command-line interface and a range of preinstalled development tools.

************************

IAM roles:


some AWS services will need to perform actions on your behalf.

to do so, we assign permissions to AWS services with IAM roles.

common roles:

EC2 instance roles

Lambda function roles

roles for CloudFormation.

we can create our own roles and assign them to an EC2 instance (virtual server).


********************************


EC2:

It stands for Elastic Compute Cloud; it is Infrastructure as a Service.

it mainly consists of:

• renting virtual machines (EC2)

• storing data on virtual drives (EBS)

• distributing load across machines (ELB)

• scaling the services using an auto-scaling group (ASG)

*****************************


EC2 sizing and configuration options:

• operating system: Linux, Windows, or macOS

• how much compute power: cores and CPU

• memory (RAM)

• storage space (network-attached or hardware)

• network card

• firewall rules

• bootstrap script


**********************************


EC2 instance types:

every instance type has a different CPU and memory count.

storage and network performance also differ across instance types.

• ex: the t2.micro instance type has 1 vCPU, 1 GiB of memory, EBS-only storage, and low-to-moderate network performance.


***********************************

EC2 user data:

• it is possible to bootstrap our instances using an EC2 user data script.

• bootstrapping means launching commands when the machine starts.

• that script only runs once, at the instance's first start.

• EC2 user data is used to automate boot tasks such as:

installing updates, installing software, downloading common files... (see the sketch below)
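For example, a user data script along the lines of the sketch below (assuming an Amazon Linux AMI, where yum and systemctl are available) runs as root once at first boot:

  #!/bin/bash
  # update packages and install the Apache web server
  yum update -y
  yum install -y httpd
  # start the web server now and on every boot
  systemctl start httpd
  systemctl enable httpd
  # serve a simple homepage
  echo "Hello World from $(hostname -f)" > /var/www/html/index.html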

*******************************


How to create an EC2 instance?

• go to EC2

• Instances

• Launch instances

• give the instance a name

• select the OS: it can be Amazon Linux, macOS, Ubuntu, Windows...

• select the architecture (64-bit)

• select the instance type

• create a key pair if you want to use SSH to connect to the server once it is built; select the key pair type as RSA

• select storage

• go to Advanced settings > User data and paste the commands there to install httpd, start httpd, enable httpd, and put "Hello World" text into /var/www/html/index.html

user data is similar to a startup script in GCP.

the commands written in the user data will be executed automatically when the server first boots up.
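The same launch can be scripted; a hedged sketch (the AMI id, key pair, and security group id are placeholders you would substitute with your own):

  aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --user-data file://userdata.sh \
    --count 1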

*******************************


Types of EC2 instances:


• general purpose

• compute optimised: used for

batch processing workloads

media transcoding

high performance web servers

high performance computing

scientific modeling and machine learning

• memory optimized

• storage optimized: used for

high-frequency online transaction processing (OLTP) systems

relational and NoSQL databases

data warehousing applications

distributed file systems

• high performance computing

****************************


Security Groups:

• Security groups are the fundamental of network security in AWS.

• they control how traffic is allowed into or out of our EC2 instances.

• security groups only contain allow rules.

• security group rules can reference IP addresses or other security groups.

• they act as a firewall on the instance.

• they regulate access to ports.

• they control inbound network traffic.

• they control outbound network traffic.

• a security group can be attached to multiple instances.

• it's good to maintain one separate security group just for SSH access.

• if your application is not accessible (times out), it's usually a security group issue.

ports to know: (a CLI sketch follows this list)

• 22: SSH (Secure Shell)

• 21: FTP (File Transfer Protocol)

• 22: SFTP (file transfer over SSH)

• 80: HTTP

• 443: HTTPS

• 3389: RDP (Remote Desktop Protocol)
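As a sketch, opening two of these ports on a security group from the CLI (the group id is a placeholder; 0.0.0.0/0 means "from anywhere", which is fine for a lab but too open for production SSH):

  # allow SSH (port 22) from anywhere
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 0.0.0.0/0

  # allow HTTP (port 80) from anywhere
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0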


*********************************

EBS volume:

• Elastic Block Store is a network drive you can attach to your instances while they run.

• it allows your instances to persist data, even after their termination.

• they can only be mounted to one instance at a time.

• they are bound to a specific availability zone.

• think of them as a network USB stick.

• free tier: 30 GB of free EBS storage of type SSD or magnetic per month.

• it's a network drive, not a physical drive: it uses the network to communicate with the instance, which means there might be a bit of latency.

• it can be detached from one instance and attached to another quickly.

• it's locked to an availability zone.

• to move an EBS volume from one zone to another, you need to snapshot it.

• you get billed for the provisioned capacity.

• you can increase the capacity of the drive over time.

• by default, when an instance is terminated, its root EBS volume is deleted.

• by default, other attached EBS volumes are not deleted when the instance is terminated.

• a snapshot can be taken from an EBS volume.

• a new EBS volume can be created from that snapshot; the new volume can be created in a different availability zone as well.

• once a snapshot is taken, it can be copied to any region.

• when you delete a snapshot, it can go to the AWS Recycle Bin (if a retention rule is set up), and it can be recovered from the Recycle Bin. (a CLI sketch of the snapshot workflow follows)
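The snapshot workflow described above, sketched with placeholder ids, zones, and regions:

  # snapshot an existing volume
  aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "backup before move"

  # copy the snapshot to another region (--region is the destination)
  aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0 --region eu-west-1

  # create a new volume from the snapshot in a different availability zone
  aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1b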


**********************************

AMI Overview:

Amazon Machine Image.

similar to a template in VMware.

• it is a customization of an EC2 instance.

• you get to add your own software, configuration, operating system, monitoring.

• faster boot time, because your software is pre-packaged.

• AMIs are built for a specific region.

• you can launch EC2 instances from an AWS-provided public AMI,

• otherwise we can create our own AMI,

• or get an AMI created by others from the AWS Marketplace.
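Creating and copying your own AMI can be sketched as follows (instance id, AMI id, name, and regions are placeholders):

  # create an AMI from an existing, configured instance
  aws ec2 create-image --instance-id i-0123456789abcdef0 --name "my-web-server-ami"

  # AMIs are region-scoped, but they can be copied to another region
  aws ec2 copy-image --source-region us-east-1 --source-image-id ami-0abcdef1234567890 \
    --name "my-web-server-ami" --region eu-west-1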


******************************

EC2 Instance Store:

• EBS volumes are network drives with good but limited performance.

• if you need a high-performance hardware disk, use EC2 Instance Store.

• better I/O performance.

• EC2 Instance Store volumes lose their storage if the instance is stopped (ephemeral).

• good for buffer / cache / temporary content.

• risk of data loss if hardware fails.

• backups and replication are your responsibility.


***************************

Elastic File System: EFS

• it is a managed NFS (network file system) that can be mounted on hundreds of instances.

• EFS works with Linux instances.

• highly scalable, highly available, expensive.

• pay per use, no capacity planning.

****************************


EFS IA:

Elastic File System Infrequent Access:

• storage class that is cost-optimized for files not accessed every day.

• up to 92% lower cost compared to EFS Standard.

• EFS will automatically move your files to EFS-IA based on the last time they were accessed.

• enable EFS-IA with a lifecycle policy, e.g. move files that have not been accessed for 60 days to EFS-IA.

********************************


Scalability:

• scalability means that an application/system can handle greater loads by adapting.


*******************************

vertical scalability:

• vertical scalability means increasing the size of the instance.

• for example, an application runs on a t2.micro;

• scaling that application vertically means running it on a t2.large.

• vertical scalability is very common for non-distributed systems, such as databases.

• there is a limit to vertical scalability (hardware limit).

********************************


horizontal scalability:

• it means increasing the number of instances for your application.

• horizontal scaling implies a distributed system.

• this is very common for web applications and modern applications.

• it's easy to scale horizontally thanks to cloud offerings such as Amazon EC2.


******************************

high availability:

• it usually goes hand in hand with horizontal scaling.

• high availability means running your application in at least 2 availability zones.

• the goal of high availability is to survive a data center loss.


************************

Scalability, elasticity, agility:

• Scalability: ability to accommodate a larger load by making the hardware stronger or by adding nodes.

• Elasticity: once a system is scalable, elasticity means there is some auto-scaling so the system can scale based on the load. This is cloud-friendly: pay per use.

• Agility: new IT resources are only a click away, which means you reduce the time to make those resources available to your developers from weeks to just minutes.

******************************


Load balancing:

• load balancers are servers that forward internet traffic to multiple servers downstream.

• they spread the load across multiple downstream servers.

• seamlessly handle failures of downstream instances.

• do regular health checks on the servers.

• provide SSL termination (HTTPS) for your websites.

4 kinds of load balancers offered by AWS:

• Application Load Balancer (HTTP/HTTPS)

• Network Load Balancer (TCP)

• Gateway Load Balancer

• Classic Load Balancer

*****************************

Auto Scaling group:

⚫ in real life load on your websites and application can change.

⚫ in the cloud, you can create and get rid of the servers very quickly.

• The goal of hte Auto scaling group is to scale out to match increased load.

• scale in to match the decreased load.

• Ensure we have minimum and maximum number of machines running

• Automatically register new instances to load balancer.

• Replace unhealthy instances.


******************************


Auto Scaling group- scaling strategies:

Manual scaling: update the size of the Auto Scaling group manually.

Dynamic scaling: respond to changing demand, e.g.:

• CPU > 70%: add 2 units

• CPU < 30%: remove 1 unit

Predictive scaling:

• uses machine learning to predict future traffic ahead of time.

• automatically provisions the right number of EC2 instances in advance. (a sketch of a dynamic scaling policy follows)
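As one hedged example of dynamic scaling, a target tracking policy that keeps average CPU around 50% (the ASG name, policy name, and file name are placeholders):

  # target tracking configuration: keep average CPU near 50%
  cat > cpu-target.json <<'EOF'
  {
    "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
    "TargetValue": 50.0
  }
  EOF

  aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-asg \
    --policy-name keep-cpu-near-50 \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration file://cpu-target.json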

*****************************

Amazon S3 (stores data in buckets):

use cases:

• backup and storage

• disaster recovery

• archive

• hybrid cloud storage

• application hosting

• media hosting

• data lakes and big data analytics

• software delivery

• static website


***************************

Amazon S3 buckets:

• Amazon S3 allows people to store objects (files) in buckets (directories).

• buckets must have a globally unique name, across all regions and accounts.

• buckets are defined at the region level.

• S3 looks like a global service, but buckets are created in a specific region.

• buckets have naming conventions that need to be followed.

• we can think of buckets as directories and objects as files.

*************************

Amazon S3 -Objects:

• every object has a key;

the key is the full path to the object

• ex: s3://mybucket/myfile.txt

• there is no real concept of directories within buckets, but the UI will trick you into thinking otherwise.

• the maximum object size is 5 TB.

**********************************


Amazon S3: Static website Hosting:


S3 can host static websites and make them accessible on the internet.


• the website URL depends on the region.


• if you get a 403 Forbidden error, make sure the bucket policy allows public reads.


steps for static website hosting: (a CLI sketch follows the steps)


• Amazon S3 > Buckets

• open the bucket you created.

• Properties

• Static website hosting

• Enable

• specify the index document as index.html (this will be the homepage of your website)

• upload the index.html file to your bucket.
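A hedged CLI version of these steps (the bucket name is a placeholder and must be globally unique; the bucket's public-access settings and bucket policy must still allow public reads):

  aws s3 mb s3://my-unique-demo-bucket               # create the bucket
  aws s3 cp index.html s3://my-unique-demo-bucket/   # upload the homepage
  aws s3 website s3://my-unique-demo-bucket/ --index-document index.html   # enable static hosting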

*********************************


Amazon S3 - Versioning:

you can version your files in Amazon S3

it is enabled at the bucket level

overwriting the same key will create a new version

it is best practice to version your buckets:

protects against unintended deletes (ability to restore a version)

easy rollback to a previous version


Notes:

any file that is not versioned prior to enabling versioning will have version "null".

suspending versioning does not delete the previous versions.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

How to enable versioning in buckets?

  • s3
  • buckets
  • create a bucket
  • properties
  • bucket versioning
  • enable

once versioning is enabled, the bucket will show the versions of files as they are changed. (a CLI sketch follows)

ex: if I make changes to the index.html file and upload it to the bucket, the bucket will hold 2 versions of the file; the second version is the updated file.
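The same can be done from the CLI (the bucket name is a placeholder):

  # turn versioning on for an existing bucket
  aws s3api put-bucket-versioning \
    --bucket my-unique-demo-bucket \
    --versioning-configuration Status=Enabled

  # list every version of every object in the bucket
  aws s3api list-object-versions --bucket my-unique-demo-bucket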

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>


Amazon S3 - Replication:

  • must enable versioning in the source and destination buckets
  • Cross-Region Replication (CRR)
  • Same-Region Replication (SRR)
  • buckets can be in different AWS accounts
  • copying is asynchronous
  • must give proper IAM permissions to S3
use cases:
  • CRR: compliance, lower-latency access, replication across accounts
  • SRR: log aggregation, live replication between production and test accounts.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

S3 storage classes:
every file that is uploaded to a bucket has a specific storage class.

  • Standard: general purpose
99.99 percent availability
used for frequently accessed data
low latency and high throughput
can sustain 2 concurrent facility failures
use cases: big data analytics, mobile and gaming applications, content distribution
  • Standard-IA (Infrequent Access)
for data that is less frequently accessed but requires rapid access when needed
lower cost than Standard
99.9 percent availability
use cases: disaster recovery, backups
  • One Zone-IA
very high durability (99.999999999%) in a single AZ
99.5 percent availability
use cases: storing secondary backup copies of on-premises data, or data you can recreate
  • Glacier Instant Retrieval
low-cost object storage meant for archiving/backups
price for storage + object retrieval cost
millisecond retrieval, great for data accessed once a quarter
minimum storage duration of 90 days
  • Glacier Flexible Retrieval
minimum storage duration of 90 days
expedited (1 to 5 minutes), standard (3 to 5 hours), bulk (5 to 12 hours, free)
  • Glacier Deep Archive
minimum storage duration of 180 days
retrieval: 12 hours for standard, 48 hours for bulk
  • Intelligent-Tiering
small monthly monitoring and auto-tiering fee
moves objects automatically between access tiers based on usage
there are no retrieval charges

objects can also be moved between classes manually or using S3 lifecycle configurations.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

S3 durability and availability : 

Durability:
  • high durability of objects across multiple AZs
  • if you store 10,000,000 objects with Amazon S3, you can on average expect to lose a single object once every 10,000 years
  • same for all storage classes
Availability:
  • measures how readily available the service is
  • varies depending on the storage class
  • S3 Standard has 99.99% availability = not available for about 53 minutes a year
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Databases in AWS: 

  • storing data on a disk can have its limits.
  • sometimes, you want to store data in a database.
  • you can structure the data.
  • you can build indexes to efficiently query/search through the data.
  • you can define relationships between your datasets.
  • databases are optimized for a purpose and come with different features, shapes, and constraints.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Relational databases : 

  • looks like an Excel spreadsheet.
  • contains rows and columns.
  • can use the SQL language to perform queries/lookups.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Non-relational databases:
  • NoSQL = non-relational databases
  • NoSQL databases are purpose-built for specific data models and have flexible schemas for building modern applications.
  • benefits:
  • flexibility
  • scalability
  • high performance
  • highly functional
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Amazon RDS overview:
  • RDS stands for Relational Database Service
  • it's a managed DB service for databases that use SQL as a query language
  • ex: Postgres
  • MySQL
  • MariaDB
  • Oracle
  • Microsoft SQL Server
  • Aurora (AWS proprietary database)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Advantages of RDS vs deploying DB on EC2 : 

  • RDS is a managed service:
  • automated provisioning and OS patching
  • continuous backups and restore to a specific timestamp
  • monitoring dashboards
  • read replicas for improved read performance
  • maintenance windows for upgrades
  • storage is backed by EBS
  • we can't SSH into the underlying instances, because RDS is a managed service. (a launch sketch follows)
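Launching a managed database can be sketched as below (the identifier, credentials, and sizes are placeholders; don't hard-code real passwords in scripts):

  aws rds create-db-instance \
    --db-instance-identifier my-demo-db \
    --engine mysql \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password 'ChangeMe-123'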
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>.

Amazon Aurora:
  • Aurora is a proprietary technology from AWS
  • it is not open source
  • PostgreSQL and MySQL are both supported as Aurora DB
  • Aurora is cloud-optimized and claims a 5x performance improvement over MySQL on RDS,
  • and 3x the performance of PostgreSQL on RDS
  • Aurora storage automatically grows in increments of 10 GB, up to 128 TB
  • Aurora costs more than RDS
  • not in the free tier
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Amazon Aurora Serverless:

  • automated database instantiation and auto-scaling based on actual usage
  • PostgreSQL and MySQL are both supported as Aurora Serverless DB
  • no capacity planning is needed
  • least management overhead
  • pay per second; can be more cost-effective
  • use cases: good for infrequent, intermittent, or unpredictable workloads
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>..

DynamoDB:
  • fully managed, highly available with replication across 3 AZs
  • NoSQL database: not a relational database
  • scales to massive workloads; distributed serverless database
  • millions of requests per second
  • fast and consistent performance
  • single-digit millisecond latency
  • integrated with IAM for security, authorization, and administration
  • low cost and auto-scaling capabilities
  • Standard and Infrequent Access (IA) table classes

  • in DynamoDB, we directly create tables; we don't have to create a database, as it is already there (serverless). (a table sketch follows)
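A sketch of creating a table and writing an item (the table name, key, and item are made up; PAY_PER_REQUEST billing avoids capacity planning):

  aws dynamodb create-table \
    --table-name Users \
    --attribute-definitions AttributeName=user_id,AttributeType=S \
    --key-schema AttributeName=user_id,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

  aws dynamodb put-item \
    --table-name Users \
    --item '{"user_id": {"S": "u-001"}, "name": {"S": "Jane"}}'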
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>...
Global tables in DynamoDB:

  • make a DynamoDB table accessible with low latency in multiple regions
  • active-active replication (read/write access to the table in every region where it is replicated)

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>..

Redshift overview:

  • Redshift is based on PostgreSQL, but it's not used for OLTP
  • it's OLAP: online analytical processing
  • load data once every hour, not every second
  • 10x better performance than other data warehouses; scales to PBs of data
  • columnar storage of data
  • massively parallel query execution, highly available
  • pay as you go based on the instances provisioned
  • has a SQL interface for performing queries
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>.

Redshift Serverless:
  • automatically provisions and scales the data warehouse's underlying capacity
  • run analytics workloads without managing the data warehouse infrastructure
  • pay only for what you use
  • use cases: reporting, dashboarding applications, real-time analytics
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Amazon EMR:

  • stands for Elastic MapReduce
  • it helps create Hadoop clusters to analyze and process vast amounts of data
  • the clusters can be made of hundreds of EC2 instances
  • also supports Apache Spark, Presto, Flink
  • takes care of all the provisioning and configuration
  • auto-scaling and integrated with spot instances
  • use cases: machine learning, data processing, big data
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>......

Amazon Athena:

  • serverless query service to perform analytics against S3 objects
  • uses the standard SQL language to query the files
  • supports CSV, JSON, ORC
  • pricing: $5.00 per TB of data scanned
  • use compressed or columnar data for cost savings
  • use cases: business intelligence, analytics, reporting, analyzing ELB logs (a query sketch follows)
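Running a query can be sketched as below (the database, table, and results bucket are placeholders; results land in the S3 location you specify):

  aws athena start-query-execution \
    --query-string "SELECT status, COUNT(*) FROM logs.elb_logs GROUP BY status" \
    --result-configuration OutputLocation=s3://my-athena-results-bucket/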
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>.

Amazon QuickSight:

  • serverless, machine-learning-powered business intelligence service to create interactive dashboards
  • fast, automatically scalable, embeddable, with per-session pricing
  • used in business intelligence, building visualisations, performing ad-hoc analysis
  • get business insights from your data
  • integrated with RDS, Aurora, Athena
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>


DocumentDB:


Aurora is an AWS implementation of PostgreSQL/MySQL;


in the same way, DocumentDB is the AWS implementation for MongoDB.


MongoDB is used to store, query, and index JSON data.


similar deployment concepts as Aurora.


fully managed, highly available with replication across 3 AZs.


DocumentDB storage automatically grows in increments of 10 GB.


automatically scales to workloads with millions of requests per second.

**********************************


Amazon Neptune:


Fully managed graph database.


A popular graph dataset would be a social network:


users have friends


posts have comments


comments have likes from users


users share and like posts.


highly available across 3 AZs, with up to 15 read replicas.


build and run applications working with highly connected datasets; optimised for these complex and hard queries.


can store up to billions of relations and query the graph with milliseconds latency.


great for fraud detection, recommendation engines, social networking.


********************************



Amazon Timestream:


Fully managed, fast, scalable, serverless time series database.


Automatically scales up and down to adjust capacity.


store and analyse trillions of events per day.


claimed to be ~1000x faster and far cheaper than relational databases for time series data.


built-in time series analytics functions (help you identify patterns in your data in near real time).

***********************************

Amazon Managed Blockchain:


blockchain makes it possible to build applications where multiple parties can execute transactions without the need for a trusted, central authority.


Amazon Managed Blockchain is a managed service to:


join public blockchain networks


or create your own scalable private network.


compatible with frameworks like Hyperledger Fabric and Ethereum.

******************************************

What is Docker?


Docker is a software development platform to deploy apps; apps are packaged in containers that can be run on any OS.


apps run the same, regardless of where they run:


any machine


no compatibility issues


predictable behaviour


less work


easier to maintain and deploy


works with any language, any OS, any technology.


scale containers up and down very quickly.


on one virtual machine we can have multiple Docker containers:


a container running Java


a container running MySQL


a container running Node.js


etc.


Docker is a sort of virtualization technology, but not exactly: resources are shared with the host > many containers on one server.

****************************************

Where are Docker images stored?


Docker images are stored in Docker repositories.


public Docker repositories live at hub.docker.com (Docker Hub), where we find base images.


a private Docker repository is provided by Amazon ECR (Elastic Container Registry).

******************************


ECS:


Elastic Container Service


launch Docker containers on AWS


you must provision and maintain the infrastructure (the EC2 instances)


AWS takes care of starting/stopping containers


has integrations with the Application Load Balancer

****************************************

Fargate:


launch Docker containers on AWS


you do not provision the infrastructure (no EC2 instances to manage): serverless offering


AWS just runs the containers for you based on the CPU/RAM you need

****************************************

ECR:


for storing container images we need a registry, called Elastic Container Registry


private Docker registry on AWS


this is where you store Docker images so they can be run by ECS or Fargate. (a push sketch follows)
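The usual build-and-push workflow looks roughly like this (account id, region, and repository name are placeholders; the ECR repository must already exist):

  # authenticate Docker to your private ECR registry
  aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

  # build, tag, and push the image
  docker build -t my-app .
  docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
  docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest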

****************************************


what is serverless ?


Serverless is a new paradigm in which developers don't have to manage servers anymore: they just deploy code.


they just deploy functions.


serverless was pioneered by AWS Lambda but now also includes anything that's managed: databases, messaging, storage, etc.


serverless does not mean there are no servers; it means you just don't manage/provision/see them.


serverless services provided by AWS:


Amazon S3


DynamoDB


Fargate


Lambda


**************************************


AWS Lambda:


virtual servers in the cloud are limited by CPU and RAM.


they are continuously running.


scaling means intervention to add or remove servers, which is time consuming.


AWS Lambda provides virtual functions: no servers to manage.


limited by time: short executions.


run on demand.


scaling is automated.

**************************************

Benefits of Lambda:


easy pricing:


pay per request and compute time


free tier of 1,000,000 AWS Lambda requests and 400,000 GB-seconds of compute time per month


integrated with the whole AWS suite of services


event-driven: functions get invoked by AWS when needed


integrated with many programming languages


easy monitoring through AWS CloudWatch


easy to get more resources per function (up to 10 GB of RAM)


Lambda scales up and down automatically to handle your workloads, and you don't pay anything when your code isn't running.


examples of Lambda functions:


serverless thumbnail creation


serverless cron job: we create a Lambda function for the cron job, and CloudWatch will trigger the Lambda function at every specified interval. (an invoke sketch follows)
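Invoking an already-deployed function from the CLI can be sketched as below (the function name and payload are made up; the binary-format flag is needed for raw JSON payloads in AWS CLI v2):

  aws lambda invoke \
    --function-name my-thumbnail-fn \
    --cli-binary-format raw-in-base64-out \
    --payload '{"bucket": "my-images", "key": "photo.jpg"}' \
    response.json

  cat response.json   # the function's return value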

*************************************



AWS Batch:


fully managed batch processing at any scale.

efficiently run 100,000s of computing jobs on AWS.

a batch job is a job with a start and an end.

Batch will dynamically launch EC2 instances or spot instances.

AWS Batch provisions the right amount of compute/memory.

batch jobs are defined as Docker images and run on ECS.

*************************************

Lambda vs Batch:


Lambda:

has a time limit

limited runtimes

limited temporary disk space

serverless


Batch:

no time limit

any runtime, as long as it's packaged as a Docker image

relies on EBS / instance store for disk space

relies on EC2 instances.


*********************************


What is CloudFormation?


CloudFormation is a declarative way of outlining your AWS infrastructure, for any resource (most of them are supported). we create a stack in CloudFormation.


For example, within a CloudFormation template, you can say:


I want a security group


I want 2 EC2 instances using this security group


I want an S3 bucket


I want a load balancer in front of these machines


Then CloudFormation creates those for you, in the right order, with the exact configuration that you specify.


benefits:


no resources are manually created, which is excellent for control

changes to the infrastructure are reviewed through code

we can destroy and recreate the infrastructure on the cloud on the fly

automated generation of diagrams for your templates

supports almost all AWS resources


you can use custom resources for resources that are not supported (a template sketch follows)
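A minimal template and stack creation might be sketched as follows (the stack name is a placeholder; the template declares a single S3 bucket, the simplest version of "I want an S3 bucket"):

  cat > template.yaml <<'EOF'
  Resources:
    DemoBucket:
      Type: AWS::S3::Bucket
  EOF

  aws cloudformation create-stack \
    --stack-name demo-stack \
    --template-body file://template.yaml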


**************************************

AWS Cloud Development Kit (CDK):


define your cloud infrastructure using a familiar programming language.

the code is compiled into a CloudFormation template.

you can deploy infrastructure and application runtime code together:

great for Lambda functions


great for Docker containers in ECS.

*************************************

Typical 3-tier architecture for deploying a web app:


user -----> load balancer -----> auto-scaling group containing multiple EC2 instances -----> the EC2 instances read/write data in Amazon RDS.


This architecture can be created manually in AWS, or it can be created using CloudFormation, but it can be managed in a better way.

*************************************

As a web developer, you don't want to manage infrastructure; you just want to deploy code.


you don't want to configure the database for reading and writing data, you don't want to configure the load balancer, and you don't want to think about scaling concerns.


most web apps have the same architecture (load balancer plus auto-scaling group).


all developers want is for their code to run.


to solve this problem, there is Elastic Beanstalk.


***************************************



Elastic Beanstalk:


Elastic Beanstalk is a developer-centric view of deploying an application on AWS.


it uses all the components we've seen before: EC2, ASG, ELB, RDS.


but it's all in one view that's easy to make sense of.


we still have full control over the configuration.


Beanstalk = PaaS.


responsibility of Elastic Beanstalk:


instance configuration and OS is handled by Beanstalk.


deployment strategy is configurable but performed by Elastic Beanstalk.


capacity provisioning.


load balancing and auto scaling.


application health monitoring and responsiveness.


just the application code is the responsibility of the developer.


***************************************


Three Architecture models:


Single instance deployment: good for dev


LB + ASG: good for production or pre-production web apps


ASG only: good for non-web apps in production

***************************************

AWS CodeCommit:


before pushing code to servers, it needs to be stored somewhere. developers usually store code in repositories, using the git technology. a famous public offering is GitHub; AWS's competing product is CodeCommit.


CodeCommit: source control service that hosts git-based repositories. makes it easy to collaborate with others on code.


code changes are automatically versioned.


benefits:


fully managed


scalable and highly available


private, secured, integrated with AWS

***************************************

AWS CodeBuild:


it's a code-building service in the cloud.


compiles source code, runs tests, and produces packages that are ready to be deployed.


CodeCommit ----> CodeBuild ----> ready-to-deploy artifact.


benefits:


fully managed and serverless

continuously scalable and highly available


secure


pay-as-you-go pricing: only pay for the build time.


***************************************




***************************************


***************************************


***************************************


