How To Become A Cloud Engineer | Cloud Engineer Roles and Responsibilities | Intellipaat


hello everyone, welcome to this session on how to become a cloud engineer. Today I will be discussing all the key points that you need to keep in mind when you are making a shift to the cloud domain, so let's get started. Now, before we move on, guys, I'd like you all to subscribe to the Intellipaat channel and also click on the notification button so you never miss out on any updates from us. Also, if you have any queries, please put them down in the comments section and we'd be happy to answer them for you. Right, so now let's move on and start out with the agenda of this session.
First, we're going to talk about why you should apply for a cloud engineer role, and then we'll discuss what a cloud engineer role exactly is. After that, I'm going to talk about all the skills that you need to gain in order to become a cloud engineer, followed by the future of the cloud domain in the next 10, 15, or even 20 years. Once we're done with that, we'll talk about several job roles in the cloud domain which basically coincide with the cloud engineer job role, and towards the end we'll discuss some top interview questions that will be asked of you as a cloud engineer. Now, the top two skills for a cloud engineer are cloud and DevOps, and hence we're going to teach you the top AWS and DevOps interview questions that will be asked of you when you sit for cloud engineer interviews. Right, so this is the agenda for this session; I hope it's clear to you. Now let's move on and start off with the first topic, that is, why should you become a cloud engineer? Now guys,
let's start off with the basic necessity: what salary does a cloud engineer actually earn? So what we have done is collect data from a lot of sites which do job postings: we have done it from ZipRecruiter, we have done it from Indeed, and from other job sites, and this is what we have seen. In the US, an average cloud engineer earns around 128 thousand dollars per annum; in India, an average cloud engineer earns around 7 lakh rupees per annum; and in the UK, it's around 60,000 pounds per annum. Okay, so guys, this is an average salary. Based on your experience, let's say you have 10 or 12 years of experience in the IT domain and you plan to shift to the cloud sphere: in that case you might even get a salary of up to 20-plus lakhs per annum in India. I'm just talking from the limited experience that I have; I have friends who have salaries of 25-plus lakhs per annum, but given that they also have a very strong background in the IT domain. I have also seen people who have shifted from non-IT domains and started in the cloud sphere, and even in that case, given that you can implement what you have learnt in your past experience in the cloud domain, they can easily grab a salary of around twelve to fifteen lakhs based on their experience. So these are the two things that I talked about. The other thing is that, like I said, it also depends on experience: if you're a fresher who is just starting out and you're thinking of starting off in the cloud domain, you can easily earn around 4.5 to 5 lakhs per annum, given you don't have any experience in the industry and are just starting off as a cloud engineer in this space. So this is the salary that you get when you become a cloud engineer, and based on your experience you can expect a varied salary exposure. Okay, moving forward guys, now let's talk
about the things which are important for you to get a higher salary. Now, how do you differentiate yourself from the other people who are appearing for the same profile and are there in the interview room? What differentiates you is your certification. So what are you certified in? Are you certified by AWS, are you certified by Microsoft, or are you certified by Google Cloud? These are the top three cloud providers in the market right now, and every company that works on cloud would have their product on one of these clouds; that is the reason any company expecting a cloud engineer in their firm would expect you to have a certification from one of these three. And once you get certified, that basically tells the company: okay, this person is certified in this domain, we now know that they know cloud up to a particular proficiency, and then they start asking questions from that particular level. Now, this might not be necessary for some
people who have a hybrid kind of profile, wherein they are working as a software developer full-time and also work on cloud for some part of their project; if they try to shift their domain, they might get a job even without getting certified. That is also possible. But usually certification helps you when, let's say, you are working in a domain where cloud is not used at all. Let's say you're working as a system administrator, and you're working for a company like IBM which has its own private cloud. What you don't have experience in, or what you cannot put on your resume, is that you have worked on AWS, on GCP, or on Azure for certain projects; that you cannot put. So for people like these, people who don't have working experience in AWS, Microsoft Azure, or Google Cloud, if you are planning to become a cloud engineer, you would need to get certified in these cloud domains. Once you get certified, that basically tells the company: even though this person has not worked on production for these clouds, he has been certified by the parent body of the cloud provider. In our case, let's talk about AWS: for AWS, the parent company is Amazon, so Amazon certifies this person as proficient in their cloud infrastructure, and this is what the certification
tells the company. Now, on top of this certification, you will obviously also have to do some projects independently, and you will have to add those to your resume. That is how you will tell the company that you are appearing for: I have done some projects in cloud; although I did not get the chance while I was working in my company's production environment, this is what I have done in my own time. I have done these projects, and I have also earned a certification from this company saying that I am a solutions architect or a developer, whichever certification you do. So a certification is very important, and with a certification you can expect a salary hike. The average salary which we have discussed is just a number, but you can aim for those salaries with the certification, given you don't have the experience in your current company. So you will have to get certified if you want to become a cloud engineer and you don't have the relevant experience right now. Okay, moving forward guys, now let me tell you
some of the average salaries based on the certifications. If you are an AWS Certified Solutions Architect in the US, you can expect around 130 thousand dollars per annum; if you are an AWS Certified Developer, you can also expect around one hundred and thirty thousand dollars; and if you are a Microsoft Azure Certified Solutions Architect, you can expect around 121 thousand dollars. All these salaries are average salaries, so what I can tell you is that you can earn from one hundred and twenty to one hundred and thirty thousand dollars in the US once you get certified by these companies, given you have the relevant experience. Okay, now you might have noticed over here that you have something called a Solutions Architect and something called a Developer, and both of these are certifications by AWS. So what are these? For the people who are from the IT domain: if you want to make a shift to the cloud, and you also want to bring along with you the experience that you have in the IT industry, what you can
do is, let's say you are a developer right now, and what you want to do is development on AWS. Let's say you have a Node.js application and you want that Node.js application to interact with some services on AWS. For that you'll have to know some SDKs, and you'll have to know how to interact with AWS APIs. All that knowledge is gained by a person who is an AWS Certified Developer.
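To give you an idea of what that SDK interaction looks like, here is a minimal sketch using Python and boto3, the AWS SDK for Python (the same idea applies with the Node.js SDK); the bucket and file names are hypothetical placeholders.

```python
import boto3

# Create an S3 client; credentials are picked up from the environment
# or from ~/.aws/credentials.
s3 = boto3.client("s3")

# Upload a local file into a bucket (names are hypothetical).
s3.upload_file("report.csv", "my-example-bucket", "reports/report.csv")

# List what we just wrote, to confirm the upload.
response = s3.list_objects_v2(Bucket="my-example-bucket", Prefix="reports/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```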
Now compare him with the Solutions Architect: the Solutions Architect's job is not to code. Most of the people that I've trained, or that I know, are Solutions Architects. The reason for that is, when they started off on their IT journey, they did not know much about cloud, and they started off as normal software engineers or system admins or something like that. For the people who don't know coding and still want to be in cloud, they can easily crack the Solutions Architect profile, because in the Solutions Architect profile what you have to know is how to plan: how to architect applications without knowing the actual implementation of the code. That is what the Solutions Architect profile is. But mind you, this profile is super tough as well as you progress in your career: you will find architects who were developers before, or who were system admins before, who have complete knowledge of the product inside and out; they can code too, and they also know how to plan. Those kinds of people are also Solutions Architects, and obviously their salaries are on the higher range. If I talk about a Solutions Architect who knows coding, who knows the in and out of the application, and who knows multiple clouds, his salary can go up to 40-plus lakhs per annum in India. So that is the kind of scope that you have when you get yourself acquainted with AWS; it's a journey that you'll have to start on. But you will also find Solutions Architects who are sitting at four or five lakhs per annum, and those people are basically freshers who've joined the industry and just know AWS. You will also find Solutions Architects who are at 14 or 15 lakhs based on the experience they had before they shifted to cloud. So based on your experience, you can go up the ladder. The Certified Solutions Architect credential, as you see, is just a certification; what matters is the experience you're bringing to the table, and that is what changes everything when you're applying for the job. Okay, moving forward guys, now let's talk about what a
cloud engineer actually is. We have talked about why you should become a cloud engineer; we have talked about the money aspect. Let me also give you one more aspect, guys: Forbes, or some independent body (we are going to discuss it in the next couple of slides), has predicted that whatever revenue cloud is earning right now in 2019, by 2023 that revenue of the whole cloud industry is going to double. That basically means whatever number of job postings I'm going to show you today, they are going to double or triple by the year 2023 or 2025. So that's the kind of growth that cloud is going to see in the future, and if there is any time to shift to the cloud domain, it is right now. So if you are thinking about moving to the cloud domain, or if you are intrigued by what cloud is and what your job in the cloud domain would be, right now is the time that you should migrate from your current job role to cloud. Now, what will you do when
you become a cloud engineer? What exactly will be your role? Let's discuss that now. First, for my friends who are not acquainted with what cloud is: cloud, in the simplest terms possible, is the use of remote servers on the internet. If you have remote servers on the internet and you use them from the comfort of your own laptop, you are basically on cloud; you're using servers which are not on your premises, servers which sit away from you, and that means you're using the cloud. So don't get confused about what cloud means: cloud basically means you don't know where the server exists; you just have an IP address, a username, and a password; you type those in, you can see the desktop of that machine, and you can do anything you want. That is cloud, in the simplest terms. But it gets a little more complex: I talked about a server existing on cloud that you can connect to, but you also have specialized servers in cloud which do a particular task. For example, you have back-end servers, you have front-end servers, you have database servers, and you can mix and match these servers and create a whole ecosystem of applications around them. So this is what a cloud engineer does:
he knows how to use the services provided by a cloud provider in the most optimal fashion possible and create a highly available and highly reliable application out of them. It doesn't matter if you've written the best application in the world: if your infrastructure is not supporting your application, you won't get the kind of performance you want out of it. And that is exactly why you need cloud engineers: they know how to deploy your application; they know which server to deploy it on, or which service to use on cloud to deploy a particular application. If I were to give you an example of how this works, off the top of my mind: there are two back-end languages; I'm not sure if every one of you has heard of them, but just follow the example and you'll see what I'm talking about.
So there is a programming language called PHP, and there's a runtime called Node.js. Node.js basically works on a single thread, while PHP works with multiple threads. So what happens when you have to choose between PHP and Node.js for your application? Let's say you work in a company whose development team has made an application, or is planning to make one, and they come to you asking which programming language they should use: should we use PHP or should we use Node.js? Now you, as an architect, tell them: if you use PHP, it's multi-threaded; if you use Node.js, it's single-threaded. This is something that you will find in any documentation, but what an architect needs to know is what the advantages of a single-threaded language are and what the advantages of a multi-threaded language are. When you use a single-threaded language, you cannot give it processing-intensive tasks: if there is one task that requires a lot of processing, it would not function well on Node.js, given it's single-threaded. On PHP, where multiple threads can run at the same time, the language can accept and process an application which involves a lot of processing. But if your application does not involve that much processing, if it's just about doing CRUD operations on databases, in those kinds of scenarios you can use Node.js, because it performs way better than PHP in those scenarios.
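The same single-thread caveat can be demonstrated in Python's asyncio, which, like Node.js, runs an event loop on one thread. A minimal sketch (the workload sizes are arbitrary):

```python
import asyncio
import time

def cpu_heavy(n):
    # A processing-intensive task: it occupies the one thread entirely.
    return sum(i * i for i in range(n))

async def handle_request(name):
    print(f"{name}: start")
    await asyncio.sleep(0.1)  # I/O-style work cooperates with the loop
    print(f"{name}: done")

async def main():
    # Running the CPU-heavy job inline stalls the whole event loop,
    # so no other request can be served meanwhile...
    start = time.perf_counter()
    cpu_heavy(10_000_000)
    print(f"loop was blocked for {time.perf_counter() - start:.2f}s")

    # ...whereas I/O-bound handlers interleave without blocking each other.
    await asyncio.gather(*(handle_request(f"req{i}") for i in range(3)))

asyncio.run(main())
```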
Now, this knowledge which I just told you is exactly what an architect needs to know, or a person who is handling a team, or who is going to deploy an application on a particular infrastructure or in a particular environment. And similar to this language-level knowledge, there is infrastructure knowledge that a cloud engineer should have. For example, if my application is processing-intensive, if it requires a lot of processing, how much RAM or how much CPU would a single server require? And I have to plan which service on the cloud to put it on. For example, in AWS there are mainly three types of compute services: you have EC2, you have Elastic Beanstalk, and you have Lambda, and you have to know where to put which application. For example, Lambda basically does back-end processing, so the back-end part of your application could go on AWS Lambda; and if it's going on AWS Lambda, you also have to make it highly available, although Lambda handles that for you. But when you talk about your front end, where a lot of users are going to hit your website, that part will require a lot of scaling up and down, because the traffic is not the same at all times. What I mean by that is, let's say you are running a company, and at around 4 p.m. you get the maximum traffic; at that time you need a server capacity of around nine or ten servers. At 6 p.m. the traffic drops, and at that time you don't need that many servers. So you have to know how to configure the auto-scaling properties in cloud providers. All this knowledge is what an architect gains when he gets certified, or when he grows into it by doing this work day in and day out as a cloud engineer. So a cloud engineer's role, to sum it up, is to know what part of an application should go on which service of the cloud, and what the best practices for deploying an application are. That is what a cloud engineer does.
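As a concrete illustration of that auto-scaling point, here is a minimal sketch, assuming Python and boto3 and an existing Auto Scaling group (the group name and capacities are hypothetical), that scales up before the 4 p.m. peak and back down at 6 p.m.:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale up to ten instances every day at 16:00 (recurrence is a cron
# expression, interpreted in UTC by default).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="frontend-asg",  # hypothetical group name
    ScheduledActionName="scale-up-for-peak",
    Recurrence="0 16 * * *",
    MinSize=9,
    MaxSize=12,
    DesiredCapacity=10,
)

# Scale back down to two instances every day at 18:00.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="frontend-asg",
    ScheduledActionName="scale-down-off-peak",
    Recurrence="0 18 * * *",
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
)
```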
Now, he has to work on planning, he has to work on architecting, then he also has to work on managing the infrastructure once it's deployed, and obviously he also has to look at how to monitor and support that particular application. All this planning and implementation is done by the architect, or the administrator, or the developer, but essentially they are all known as cloud engineers, and this is what your role would be when you shift into the cloud domain. So to sum it up: if you are applying the best practices and the principles you have learnt in cloud, then that profile sums up to a cloud engineer profile. Now moving forward guys, let's talk about the skills
that a cloud engineer should have. If I'm a company and I am hiring a cloud engineer, I would expect certain skills from a person: I would expect him to know some programming language, I would expect him to have some Linux skills, I would expect him to have some troubleshooting skills, and I would expect him to know multiple cloud domains. Now, this is not just what I think companies would expect; this is based on research that we have done, and in a few minutes, guys, I will show you some job descriptions from companies hiring cloud engineers, and I'll show you what kind of skills they ask for. So, how to become a cloud engineer: I told you that companies expect certain skills; now, how do you get yourself acquainted with those skills? Before that, let me show you what the recommended skills expected from a cloud engineer actually are. So guys, a
job description is what we will be looking at. In this particular section on job roles and responsibilities, I'll be showing you some job descriptions from multiple companies, and then we'll sum it up and see what the required skills are which the industry is demanding right now to become a cloud engineer. Okay, so let's start off with the first job description. This one is from Cisco, and they are hiring a cloud infrastructure engineer. What they expect this person to know is DevOps; he should know how to resolve customer issues, that is, he should be customer-centric; he should know how to deploy cloud applications on AWS; he should know about VPCs, which is a service in AWS; EC2, again a service; etc. So these are the skills that they expect from a cloud engineer. You might be wondering what DevOps is; I will explain that in a few moments, but as you can see, these are all the skills expected from a cloud engineer at Cisco. Now let's talk about Amazon. Amazon is
also hiring cloud engineers, and what they expect is that you should know troubleshooting; you should have experience with AWS, Google Cloud, Rackspace, software management, etc.; you should know a programming language among Java, Perl, Ruby, or Python; and you should have experience in managing full-stack applications. This is the expectation from a cloud engineer at Amazon. Then there is McAfee, a company which I'm sure all of you know. Here you are expected to know AWS, Azure, and Google Cloud; you are expected to know CloudFormation, which is a service in AWS; you are expected to know Puppet, Ansible, and Chef, which are DevOps tools; and you are expected to know Python. So all these skills are required. What I'm trying to tell you is that it will not work if you just know one cloud provider, and it will not work if you just know one programming language; you have to know a combination of multiple things, and only then can you take on this cloud engineer profile. The more of these skills you have, the more relevant you become to the job descriptions. I'm not focusing on any one company; what I have done is take skills from multiple cloud engineer job descriptions in general and accumulate them together. Let me show you how that looks. So the required
skills are these. To sum it up, if you want to become a cloud engineer, you should have knowledge of AWS, Azure, and GCP. Then you should have experience in any of these programming languages: Python, Java, Go, R, or Clojure; if you are planning to learn a programming language, I would recommend you learn either Python or Go, basically because they are huge when it comes to the demand they have right now. You should also have some experience with the Linux operating system; actually, not just a little, I'd say you should be at an intermediate level in Linux. Then you should also know tools like Puppet, Chef, Git, Docker, Kubernetes, Nagios, Terraform, etc. Don't get scared, guys, by all these big words: I'm not only telling you the problem, that is, what all the companies expect; I'll also give you a solution as to how you can prepare for this. And you should also have an understanding of APIs and web services. So these are all the skills required to become a cloud engineer, and I would say even if you cover around 60 to 70% of these skills, you can apply for a cloud engineer job. This is basically the job description for a generic company out there, and how relevant you are for a cloud engineer role comes down to this list: the more of this list you know, the more relevant you become, and you can apply for any cloud engineer job as you move along. Okay, now let's talk about
the skills that we have summarized. I've told you guys that you should know Linux, so first off, when you start on the journey of becoming a cloud engineer, you should start by learning Linux; if you already know Linux you can skip this, but if you don't, you should learn it first. Then you should move on to the cloud providers, that is, AWS, Azure, and GCP; you should start learning those. Once you are done with Linux and the cloud providers, you now know how things work on cloud. Now what you have to know is how companies implement, or basically help their development teams integrate their development tools with, the cloud, and that is exactly what DevOps is for. If I were to give you an example: let's say there's a developer in a company, and what he knows is basically a place where you can upload your code; a developer's best friend is a version control system, so let's assume he is using GitHub. So all he knows is how to code and how to upload it to GitHub. But from GitHub, how do you test that application? And if the test is complete, how do you make it available in production? That lifecycle is what DevOps caters to. Okay, now how does cloud fit into DevOps? When you have deployed an application on, let's say, your production server: most companies now are basically using cloud, which means they don't have any infrastructure of their own. The developer's machine will probably be there in the company, but the server on which the application is deployed is not in the company; it's on the cloud. So I think you've got the idea: whenever your applications are getting deployed, whether on a testing server or on the prod server, these servers are nothing but servers deployed on the cloud, which could be AWS, Azure, or GCP. So how to create the DevOps lifecycle on the cloud is what you'll have to learn, and that's why you also have the DevOps tools as a skill when you want to become a cloud engineer. Then you should also
know a programming language. The reason for that is, as I told you guys, a cloud engineer should also know how to code. Let's say an application stops working and the developers have no clue why; it's your job to understand the code and understand where exactly the problem is occurring, and that's why it is recommended that you know a programming language. Also, if you are planning to aim for the cloud developer role, what you will be doing is integrating your application with the cloud; and if you're integrating your application with the cloud, which means you have to interact with the services of the cloud, then you have to know how to use SDKs in a programming language, and that is exactly what is covered when you know a programming language. And finally, in today's world we are no longer working with monolithic applications. What are monolithic applications? Well, today there is no longer one codebase which runs my whole application; you have different parts of the application deployed separately, and these different parts interact with each other, and that is what APIs are for, in the simplest terms. Okay, so if you did not get that,
let me just tell you briefly: earlier, all applications had one single codebase; they were deployed, and the application used to work. But as we progressed on the software development journey, we realized that, let's say I'm using the Uber app and I have to add a feature in the payments section: I'd have to open the whole codebase, go to the payments section of the code, make changes, and then deploy the whole code again, and this unnecessarily broke something or the other in the whole codebase. So now what we follow is a microservices architecture, and in the microservices architecture what we do is break the application into multiple parts, and then these parts interact with each other using API endpoints. So as a cloud engineer, with the modern applications that we deploy, you should also know what APIs are.
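To make that concrete, here is a minimal sketch of one such microservice, assuming Flask (the service, routes, and in-memory store are all hypothetical); other parts of the application would call these HTTP endpoints instead of sharing one codebase:

```python
# A tiny "payments" microservice exposing an API endpoint.
# Requires: pip install flask
from flask import Flask, jsonify, request

app = Flask(__name__)

PAYMENTS = {}  # in-memory store standing in for a real database

@app.route("/payments", methods=["POST"])
def create_payment():
    data = request.get_json()
    payment_id = len(PAYMENTS) + 1
    PAYMENTS[payment_id] = {"amount": data["amount"], "status": "pending"}
    return jsonify({"id": payment_id, **PAYMENTS[payment_id]}), 201

@app.route("/payments/<int:payment_id>", methods=["GET"])
def get_payment(payment_id):
    payment = PAYMENTS.get(payment_id)
    if payment is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(payment)

if __name__ == "__main__":
    app.run(port=5000)
```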
So these are the required skills for you to become a cloud engineer, and you should follow this particular path when you are preparing for a cloud engineer role. Okay, so that is how you can prepare for a cloud engineer role if you were to do it on your own. If you want help from us, or from any learning company, I will tell you a way you can learn all of this as well. But before that, I'd like to
tell you this: the present is good, I mean, if you want to become a cloud engineer, right now is the best possible time. But if you become one right now, what is the future that lies in front of you? It should not be that in two or three years the demand goes down. All right, so let me tell you the future
of a cloud engineer. So guys, these are some statements that I have picked up from these companies. Forbes says the cloud engineer or cloud architect profile is in the Forbes top 15 list of the highest-paying tech jobs right now. Forbes also says around $146,000 is the median salary for a cloud professional, which is basically a jump, because when they did the survey in 2016 it was $22,000 less than the figure that you see right now. So as the years pass by, obviously there is inflation, but $22,000 is not the kind of inflation that you would see, guys; it's a huge jump in the median, in the average salary, of a cloud engineer. Then, according to Glassdoor, in the last month alone (we are talking right now in August 2019, so this is July 2019), there were 5,765 jobs posted in India just for cloud engineers, and there were 33,272 such jobs posted in the US. So this is the huge demand that you can see for the cloud engineer profile. Similarly,
there's a company called Canalys, and they say AWS, which is the biggest cloud provider in the market right now, owns around 31.5 percent of the cloud market share. Now you might think that 31.5 is not a huge number, but when you compare it with Azure, which is the second-largest cloud provider, Azure has around 14 or 15 percent of the cloud market share, which is about half of what AWS owns, and it is the second-largest cloud provider. So as you see, AWS has a clear lead, and the reason for telling you this is that when you're planning to learn the cloud providers, which are basically three in total, AWS, Azure, and GCP, you should always start with AWS, given you don't know AWS yet. The reason is, once you learn AWS, and let's say you plan on going on to the other skills like the DevOps skills and the programming-language skills, you can actually skip Azure and GCP for a while, because when you apply for jobs, out of 100 companies, around 31 would be using AWS, around 15 or 16 would be using Microsoft Azure, and, if I were to go by my memory, around six to nine would actually be on GCP. This is the current picture, so if you go by probability, you are more likely to get a job as a cloud engineer if you know AWS than if you know Microsoft Azure or GCP. Ideally you should know all three, but if you were to plan how to learn all the skills that we have discussed today, you should start with AWS; this point is specifically
for that. And finally, according to ReportLinker, this was the fact that I was talking about at the beginning of the session: right now the cloud industry has a revenue of around 258 billion dollars annually, and by 2023 this is going to become 623.3 billion dollars; that's more than double what the cloud industry is earning right now. So this tells us how much the cloud industry is going to grow in the future. All right, moving forward, now let's talk about the job
roles that you have in cloud. I told you you can become a cloud engineer, but actually 'cloud engineer' is a general profile which companies post on job portals; it is basically made up of a combination of job roles. So what are those job roles? The first one is the solutions architect, which I told you about, wherein you have to plan, you have to know the in and out of your application, and you should know which part of your application should be deployed on which service of the cloud; that job is the solutions architect's. Then you have the cloud developer: a cloud developer is a person who knows how to develop applications and also how to work with SDKs so that the application can interact with the cloud services; that's the person who is a cloud developer. Then you have the SysOps administrator: a SysOps administrator is a person who manages the infrastructure once it has been deployed by the architect. So once the infrastructure has been designed or applied by the architect, this person's job is to implement that architecture, or handle it, according to the norms that have been specified by the solutions architect. That is what the SysOps administrator does. Now, for all my friends who have
experience in development and feel, 'I want to stay in development right now; I don't want to become a full-fledged cloud engineer; I basically want to be in development, but I would want some of my job to include cloud', the profile that you're aiming for is cloud developer. For my friends who are already system administrators and basically want to scale themselves up by becoming system administrators for a cloud provider, they can aim for the SysOps administrator profile. And for the people who want to move up the ladder, who have around 14 to 15 years of experience, or who are basically from an IT background and want to shift into the cloud domain, you should start off with the Solutions Architect profile. Then you have some more profiles, such as the cloud network engineer, for people who are focused on just the networking aspect of cloud; if you want to become a networking guru, then in your case you should opt for the cloud network engineer profile. And then you have the cloud DevOps engineer, which is a profile that expects you to know cloud in and out and expects you to know DevOps in and out, and this is the profile that I was telling you about, the one you should aim for: you should know all the skills that are expected from a cloud engineer, and then you would become a generic cloud engineer who can apply to any company. So your aim is this: if you want to start off from a particular point, your path would be this. You become a Solutions Architect first and get yourself certified in that, then work your way up to cloud DevOps; similarly, you can become a cloud developer or a SysOps administrator and then get certified as a cloud DevOps engineer, and similarly for the cloud network engineer as well. Okay, so we've discussed all the jobs which are there in cloud.
If you want to go all in, or if you want to implement cloud in your current job, I just showed you some profiles which you can get certified in. Now, some of our viewers today might be wondering: in the current job that I am in right now, can I implement cloud? So let me show you some stats: the top 15 tech jobs which have cloud-related skills in the job description; that is what we are going to discuss right now. So guys, take the software engineer: if you go by the generic job descriptions that we see every day, more and more software engineer job descriptions are expecting software engineers to know cloud, which means that if you are a software engineer right now, and you are aiming for the future and still want to be a software engineer, then currently almost 8 to 9 percent of the job description expects cloud skills. That means, let's say a job asks for a hundred skills; around eight to ten of those skills would be in cloud. That is what the 7.92 number on this slide means. Then you have the senior software engineer profile, which again expects around 6.7% of skills to be in cloud, and the software architect profile expects 6.21%. So basically the aim is to show you guys that all the profiles you see over here, although they do not coincide with any cloud profile, still have some element of cloud skills in their job description, which means most of the companies related to IT today are doing something or the other on cloud, and that's why they expect all their employees to know cloud. So what I'm trying to tell you with this slide is: whatever profile you are in, if you want to be relevant to the industry, you should start upgrading yourself with cloud today. Okay, then these are some of the other
profiles that you can have a look at: for example, a cloud engineer profile, a data engineer profile, a Java developer profile, a system engineer, a data scientist, a system administrator, a .NET developer, and front-end and back-end developer profiles. All of these people have some element of cloud skills in their job descriptions, and that is what you should aim for: if you are currently sitting in one of these profiles and you don't have cloud skills, you should get started today. Okay, now, how to get started? This is a
question that most of you must be waiting for. I've shown you a lot of skills that you should know if you want to become a cloud engineer. Now, some people are lucky, because their current jobs probably expect them to know those skills, so they can invest the nine hours of time they spend in the office to get acquainted with them. But some of us are not working in the cloud domain, and it's a bit difficult for us to manage what we do in a job alongside learning all the skills that I've just shown you. So how do you get started? How do you get yourself ready for the cloud engineer profile? Let me discuss that. So Intellipaat, the company that we work for, is an e-learning company, and we basically studied a lot of job descriptions, and what we came up with was this Cloud and DevOps Architect Master's Course. In this, we have put in all the skills from all the job descriptions that we have gathered for cloud-related jobs. Also, since we are a training company, all our instructors are working professionals, so we got in touch with top cloud and DevOps professionals and asked them what relevant skills they think are expected from a cloud engineer, from a person who is appearing for an interview. So based on the feedback that we got from our instructors, and based on our research, we have come up with this course, and these are the skills that we teach to a person when we take him on board with us in the Cloud and DevOps Architect Master's Course.
Today in this session we are going to discuss the top AWS questions that can be asked of you in your next AWS interview. So without wasting any time, let's go ahead and start with the agenda to see what all we'll be covering in today's session. We'll start by discussing the domains from which we have collated these questions; these domains are directly mapped to the AWS exam blueprint, which was recently updated in June 2018, so there is a high possibility that your next AWS interview might contain questions from these domains. I want you to pay the utmost attention so that you can gain as much knowledge as you can from this session. All right, let's take a top-down approach and start with the simplest questions, some general questions on AWS that can be asked of you in an interview. So the first question asks: what is the
difference between an AMI and an instance? So guys, an AMI is nothing but a template of an operating system; it's just like a CD of an operating system which you can install on any machine. Similarly, an AMI is a template, an installation of an operating system, which you can install on any servers which fall into the Amazon infrastructure. You have many types of AMIs: you have Windows AMIs, you have Ubuntu AMIs, you have CentOS AMIs, and so on; there are a lot of AMIs present in the AWS Marketplace, and you can install them on any servers which are there in the AWS infrastructure. Coming to instances: instances are nothing but the hardware machines on which you install AMIs. Like I said, AMIs are templates which can be installed on machines, and these machines are called instances. Instances also have types, based on their hardware capacity; for example, a machine with one vCPU and 1 GB of RAM is called a t2.micro. Similarly, you have t2.large and t2.xlarge; then you have I/O-intensive machines, storage-intensive machines, and memory-intensive machines, and all of these have been classified into different classes depending on their hardware capability. So this was the difference between an AMI and an instance.
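To make the AMI/instance relationship concrete, here is a minimal sketch using Python and boto3 that launches one t2.micro instance from an AMI (the AMI ID below is a hypothetical placeholder; real IDs are region-specific):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The AMI is the OS template; the instance type (t2.micro: 1 vCPU,
# 1 GB RAM) describes the hardware it runs on.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```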
Our next question asks: what is the difference between scalability and elasticity? All right, so guys, scalability versus elasticity is a very confusing topic if you think about it. Scalability is nothing but increasing a machine's resources: for example, if your machine has 8 GB of RAM today, you increase it to 16 GB. The number of machines is not increasing; you're just increasing the specification of the machine, and this is called scalability. When you talk about elasticity, we are increasing the number of machines present in an architecture; we are not increasing the specification of any machine. For example, we decide that we require a 3 GB machine with around 8 GB or 10 GB of storage; any replicas which get made, or any auto scaling which happens, will only change the number of machines. It will in no way be related to the specification of the machine: the specification of the machine stays fixed, the number of machines goes up and down, and this is called elasticity. Scalability, on the other hand, is termed as the change of the specification of the machine: you are not increasing the number of machines, you're just increasing the specs of the machine, for example the RAM, the memory, or the hard disk. And this is the basic difference between scalability and elasticity.
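The two ideas map to two different API calls; here is a rough sketch with Python and boto3 (the instance ID and Auto Scaling group name are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

INSTANCE = "i-0123456789abcdef0"  # hypothetical instance ID

# Scalability: same machine count, bigger specs. The instance must be
# stopped before its type can be changed.
ec2.stop_instances(InstanceIds=[INSTANCE])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE])
ec2.modify_instance_attribute(
    InstanceId=INSTANCE,
    InstanceType={"Value": "t2.large"},
)
ec2.start_instances(InstanceIds=[INSTANCE])

# Elasticity: same specs, different machine count.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="frontend-asg",  # hypothetical group name
    DesiredCapacity=10,
)
```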
Moving forward, our next question is: which AWS offering enables customers to find, buy, and immediately start using software solutions in their AWS environment? You can think of it as, say, you want a deep learning AMI, or a Windows Server AMI with specific software installed on it; some of these are available for free, but some of them have to be purchased. The answer for this is AWS Marketplace: it's basically a place where you can buy all the AWS or non-AWS software that you require to run on the AWS infrastructure. So the answer is AWS Marketplace. Moving on, our next
question falls under the domain of resilient architectures, so all the questions that we'll be discussing henceforth in this domain will deal with the resiliency of an architecture. All right: a customer wants to capture all client connection information from his load balancer at an interval of five minutes; which of the following options should be chosen for his application? Let me read out the options for you. Option A says enable AWS CloudTrail for the load balancer; option B says CloudTrail is enabled globally; option C says install the Amazon CloudWatch Logs agent on the load balancer; and option D says enable CloudWatch metrics on the load balancer. Now, if you think about it, CloudTrail and CloudWatch are both monitoring tools, so it's a bit confusing; but if you have studied deeply, or if you understand how CloudTrail works and how CloudWatch works, it is actually not that difficult. The answer for this is A, that is, you should enable AWS CloudTrail for the load balancer. The reasoning: option B is not correct because CloudTrail is not enabled by default, or globally, for all services. Options C and D you will not even consider, the reason being that we're talking about logging client information: what clients are connecting to the load balancer, what IP addresses are connecting to the load balancer, etc. CloudWatch deals with the local resources of the instance that you are monitoring; for example, if you are monitoring an EC2 instance, CloudWatch can monitor the CPU usage or the memory usage of that particular instance, but it does not take into account the connections being made to your AWS infrastructure. On the other hand, CloudTrail deals with exactly these kinds of things, wherein client information, or any kind of data which can be fetched from a particular transaction, can be recorded in the CloudTrail logs; and hence, for this particular question, the answer is to enable AWS CloudTrail for the load balancer. Moving on, our next question is:
in what scenarios should we choose a Classic Load Balancer versus an Application Load Balancer? For this question, I think the best way to answer is to understand what exactly a Classic Load Balancer is and what exactly an Application Load Balancer is. A Classic Load Balancer is an old-fashioned load balancer which does nothing but round-robin-based distribution of traffic, which means it distributes traffic equally among the machines which are under it. It cannot recognize which machine requires which kind of workload or which kind of traffic; whatever data comes to a Classic Load Balancer will be distributed equally among the machines which have been registered to it. On the other hand, the Application Load Balancer is a new-age load balancer which identifies the workload which is coming to it. It can identify the workload based on, for example, the path: say you have a website which deals in image processing and video processing, so requests might go to intellipaat.com/images or /videos. If the path is /images, the Application Load Balancer will directly route the traffic to only the image servers, and if the path is /videos, it will automatically route the traffic to the video servers. So this is the Application Load Balancer: whenever you are dealing with multivariate traffic, that is, traffic which is meant for a specific group of servers, you would use an Application Load Balancer. On the other hand, if you have servers which all do the exact same thing and you just want to distribute the load among them equally, then in that case you would use a Classic Load Balancer.
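As a rough sketch of that path-based routing, here is how such rules could be added with Python and boto3 (the listener and target group ARNs are hypothetical placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

LISTENER = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/demo/abc/def"  # hypothetical

# Route /images/* requests to the image servers' target group.
elbv2.create_rule(
    ListenerArn=LISTENER,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/images/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/images/abc",  # hypothetical
    }],
)

# Route /videos/* requests to the video servers' target group.
elbv2.create_rule(
    ListenerArn=LISTENER,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/videos/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/videos/abc",  # hypothetical
    }],
)
```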
Our next question says: if you have a website which performs two tasks, that is, rendering images and rendering videos, and both of these pages are hosted on different sets of servers but under the same domain name, which AWS component will be apt for your use case among the following? I think this is an easy question, the reason being we just discussed it. The answer is the Application Load Balancer, because the kind of traffic which is coming is specific to its workload, and this can be differentiated easily by an Application Load Balancer. Okay, so we are done with the resilient architecture questions; now
let's move on to the performant architectures domain, where we will be discussing architectures which are performance-driven. So let's take a look at the first question. The first question says: you require the ability to analyze a customer's clickstream data on your website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site, to increase stickiness and ad click-through. Which option meets the requirement for capturing and analyzing this data? The options are Amazon SNS, AWS CloudTrail, Amazon Kinesis, and Amazon SES. Let's first rule out options. We have Amazon SNS, which deals with notifications; since we basically want to track user data, SNS would not be the apt choice, because sending multiple notifications in a short amount of time would not be apt. Similarly, SES would also not be the apt choice, because then we would be getting emails about user behavior, and that would amount to a lot of emails, so it's not an appropriate solution either. Then we have AWS CloudTrail and Amazon Kinesis; actually, both of these services can do this work, but the keyword over here is real time. You want the data in real time, and since the data has to be in real time, you will choose Kinesis: CloudTrail cannot pass on logs for real-time analysis, while Kinesis is specially built for this particular purpose, and hence for this particular question the answer is Amazon Kinesis.
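For a sense of what feeding Kinesis looks like, here is a minimal sketch with Python and boto3 that pushes one clickstream event into a (hypothetical) stream; a downstream consumer can then process it in real time:

```python
import boto3
import json

kinesis = boto3.client("kinesis")

# One clickstream event from the website.
event = {"user_id": "u42", "page": "/pricing", "ad_clicked": True}

kinesis.put_record(
    StreamName="clickstream",           # hypothetical stream name
    Data=json.dumps(event).encode(),
    PartitionKey=event["user_id"],      # keeps one user's events ordered
)
```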
Moving on, our next
question is: you have a standby RDS instance; will it be in the same availability zone as your primary RDS instance? The options are: it's only true for Amazon Aurora and Oracle RDS; yes; only if configured at launch; and no. For the right answer, I want you to think about it like this: a standby RDS instance is only needed when your primary RDS instance stops working. Now, what could be the reasons that your RDS instance stops working? It could be a machine failure; it could be a power failure at the place where your server has been launched; it could also be a natural calamity which struck the data center where your server exists. All of these could be reasons which lead to disruption in your RDS service. Now, if your standby RDS instance is in the same availability zone as your primary, these situations cannot be tackled. It is always logical to have your standby machines in some other place, so that even if there is a natural calamity or a power failure, your instance is always up and ready. Because of that, AWS does not give you the option of launching your standby RDS instance in the same availability zone; it always has to be in another availability zone, and that's why the answer is no: your standby RDS instance will not be in the same availability zone as your primary instance.
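In practice you never pick the standby's location yourself; here is a minimal sketch with Python and boto3 where a single flag asks RDS for a standby in a different availability zone (the identifier and credentials are placeholders):

```python
import boto3

rds = boto3.client("rds")

# MultiAZ=True makes RDS keep a synchronous standby in a different
# availability zone; you cannot place it in the primary's AZ.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",       # hypothetical identifier
    Engine="mysql",
    DBInstanceClass="db.t2.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="change-me-now",  # placeholder only
    MultiAZ=True,
)
```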
All right, so our next question is: you have a web application running on six Amazon EC2 instances, consuming about 45% of resources on each instance. You are using auto scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business and you want high availability at all times; you want the load to be distributed evenly between all instances, and you also want to use the same Amazon AMI for all instances. Which of the following architectural choices should you make? This is a very interesting question: basically you want to run six Amazon EC2 instances, they should be highly available in nature, they should use one AMI, and of course they are auto scaled. So which among the following would you choose? The options are: deploy six EC2 instances in one availability zone and use an ELB; deploy three EC2 instances in one region and three in another region and use an ELB; deploy three EC2 instances in one AZ, that is, availability zone, and three in another availability zone, and use an ELB; or deploy two EC2 instances in each of three regions and use an elastic load balancer. Now, the correct answer for this would be C, the reason being that AMIs are not available across regions: if you have created an AMI in one region, it will not be automatically available in another region; you will have to do some operations, and only then will it be available in the other region. This is reason number one, so the region options mentioned get cast out
because of this reason. Second, if you look at the first option, which is deploying six EC2 instances in one availability zone, that defeats the purpose of high availability, because, like I said, if there is any natural calamity or a power failure at the data center, then all your instances will be down. It's always advisable to have your servers distributed; but since we have the limitation of using one AMI, and also the limitation that it is not accessible across regions, we would choose distributing our instances among availability zones. Here we just had the option of two availability zones; it could also be three availability zones with two servers deployed in each, and that would also amount to high availability. And of course, because you want load-balanced traffic, if you apply an ELB on top of servers spread across availability zones, it will work like a charm; across regions it can become a problem, but within availability zones it definitely works, and it will work perfectly. So the answer for this question is: you would deploy the EC2 instances among multiple availability zones in the same region, behind an ELB. All right, so our next question is: why do we use ElastiCache, and in
what cases? The answer for this is related to the nature of the service. ElastiCache, as the name suggests, is basically a caching layer which can be accessed faster than your normal data store. For example, take a database instance from which you are gathering information: if you are always dealing with the same kind of query, for example you're always fetching the password for particular users, then with ElastiCache that data can be cached, and whenever a similar request comes in asking for that kind of data, your MySQL instance will not be disturbed; the data will be served directly from ElastiCache. That is the exact use of ElastiCache: you use it when you want to increase the performance of your systems, whenever you have frequent reads of similar data. If you have frequent reads of similar data, you will probably be querying the same kind of data every time, and that will increase the load on your database instance; to avoid that, you can introduce an ElastiCache layer between your database and your front-end application, and that will not only increase the performance but also decrease the load on your database instance.
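That read path is usually called the cache-aside pattern; here is a minimal sketch, assuming a Redis-backed ElastiCache endpoint and the redis Python package (the endpoint, key names, and database client are all hypothetical):

```python
import redis

# Connect to a (hypothetical) Redis-backed ElastiCache endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_user_password_hash(user_id, db):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"user:{user_id}:pwhash"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode()  # cache hit: the database is not touched

    # Cache miss: query the database (db is a stand-in for your client),
    # then cache the result for five minutes.
    value = db.fetch_password_hash(user_id)  # hypothetical helper
    cache.set(key, value, ex=300)
    return value
```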
So this was all about performant architectures, guys. Our next domain deals with secure applications and their architectures, so let's go ahead and start with the first question of this domain. It says: a customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the customer's requirement? So basically you just want to track access to the S3 buckets; let's see what the options are. You can enable CloudTrail to audit all Amazon S3 buckets; you can enable server access logging for all required Amazon S3 buckets; you can enable the Requester Pays option to track access via AWS billing; or you can enable AWS S3 event notifications for PUT and POST. I would say the answer is A. Why is the answer not B? Because server access logging is actually not required when you want to track access to the objects present in the S3 bucket. The Requester Pays option to track access via AWS billing, again, is not required, because there is a very simple feature of CloudTrail which is available for all the buckets across S3, so why not use that? And using notifications for S3 would not be apt, the reason being that there will be a lot of operations happening, so rather than sending notifications for each and every operation, it is better to log those operations, so that we can take whatever information we want out of the log and ignore the rest. So the answer for this is to track access using AWS CloudTrail.
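To log object-level access like this, CloudTrail data events have to be turned on for the bucket; here is a minimal sketch with Python and boto3, assuming an existing trail (the trail and bucket names are hypothetical):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Record every object-level read and write on one bucket, so access
# can be audited later from the trail's logs.
cloudtrail.put_event_selectors(
    TrailName="audit-trail",  # hypothetical trail name
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::my-example-bucket/"],
        }],
    }],
)
```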
Amazon using AWS cloud trail okay an excavation has imagine you if you have
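As a rough illustration of that answer, here is a hedged AWS CLI sketch — the trail and bucket names are placeholders, and the logging bucket is assumed to already carry the CloudTrail bucket policy:

```bash
# create a trail that delivers its logs to an existing audit bucket
aws cloudtrail create-trail \
    --name s3-access-audit \
    --s3-bucket-name my-audit-logs

# record object-level (data-plane) S3 events for all buckets in the account
aws cloudtrail put-event-selectors \
    --trail-name s3-access-audit \
    --event-selectors '[{"ReadWriteType": "All", "IncludeManagementEvents": true,
      "DataResources": [{"Type": "AWS::S3::Object", "Values": ["arn:aws:s3"]}]}]'

aws cloudtrail start-logging --name s3-access-audit
```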
OK, our next question says: imagine you have to give AWS access to a data scientist in your company. The data scientist basically requires access to S3 and Amazon EMR. How would you solve this problem from the given set of options? OK, so you basically want to give an employee access to particular services, and we want to know how to do that. The options are: give him credentials for root; create a user in IAM with one managed policy for EMR and S3 together; create a user in IAM with managed policies for EMR and S3 separately; or give him credentials for an admin account and enable MFA for additional security. OK, so a rule of thumb: never give root credentials to anyone in your company — not even yourself. You should never use root credentials; always create a user for yourself and access AWS through that user. That was point number one. Second, whenever you want to give people permissions for particular services, you should always use the policies that pre-exist in AWS. When I say that, I basically mean: never merge two policies. For example, if you are handling EMR and S3 together, that means you create a policy that gives the required access in one document — in one document you mention the access for EMR, and in the same document you mention the access for S3 as well. Well, this is not suggested. The reason is, you have policies created and tested by AWS, so there is no chance of any leak in terms of the security aspect. The second thing is, needs change, right? If tomorrow your user says he doesn't want access for EMR anymore — he probably wants access for EC2 instead — what will you do? If you had the policy in one document, you would have to edit that document, correct? But if you attach a separate document for each and every service, all you have to do is remove the document for EMR and add the document for the other service he requires, probably EC2. You just add the document for EC2, and your S3 document will not be touched. So this is much easier to manage than writing everything in one document and editing it later to adjust permissions — that is not very manageable. So the answer for this is: create a user in IAM with managed policies for EMR and S3 separately.
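As a quick sketch of that answer with the AWS CLI — the user name is illustrative, and the two ARNs are AWS-managed policies (pick the exact read/write variants your use case needs):

```bash
# one IAM user for the data scientist; never hand out root credentials
aws iam create-user --user-name data-scientist

# two separate managed policies, attached independently
aws iam attach-user-policy --user-name data-scientist \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-user-policy --user-name data-scientist \
    --policy-arn arn:aws:iam::aws:policy/AmazonElasticMapReduceFullAccess
```

If the EMR requirement later changes to EC2, you detach just the EMR policy and attach an EC2 one; the S3 policy is never touched, which is the manageability argument made above.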
Alright, let's move on to the next question: how does a system administrator add an additional layer of login security to the AWS Management Console? OK, so this is a simple question; the answer is: enable multi-factor authentication. MFA — multi-factor authentication — basically deals with rotating keys: the keys are always rotating, so every 30 seconds a new key is generated, and this key is required while you're logging in. Once you've entered your email and password, it will not sign you in straight away; it will give you a confirmation page with a code that you have to enter, which will be valid for those 30 seconds. Now, this can be done using apps — you have an app from Google (Google Authenticator), and you have apps from other third-party vendors as well. These apps are compliant with AWS, and you can use them to get the keys, which change every 30 seconds. Alright, so if you want to add a security layer over the traditional username and password that you enter, enabling multi-factor authentication is the best way to do it.
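For reference, here is a hedged sketch of turning MFA on for an IAM user from the CLI, assuming a virtual MFA device has already been created and scanned into an authenticator app; the ARN is a placeholder and the two codes are consecutive values from the app:

```bash
aws iam enable-mfa-device \
    --user-name data-scientist \
    --serial-number arn:aws:iam::123456789012:mfa/data-scientist \
    --authentication-code1 123456 \
    --authentication-code2 654321
```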
So our next domain deals with cost-optimized architectures; let's discuss these questions as well. Our first question is: why is AWS more economical than traditional data centers for applications with varying compute workloads? Alright, so let's read out the options. We have: Amazon Elastic Compute Cloud costs are billed on a monthly basis — OK; Amazon EC2 costs are billed on an hourly basis — which is true; Amazon EC2 instances can be launched on demand when needed — true; customers can permanently run enough instances to handle peak workloads. Alright, so I'll say, because this question is talking about the economical value of AWS, options B and C are correct. The reason is, you're charged by the hour, and at the same time you can have instances on demand — if you don't need them after two hours, you just pay for two hours, and then you don't have to worry about where that server went, right? This is very economical compared to buying servers whose need finishes, say, after one or two years when the hardware gets outdated — that becomes a bad investment on your part. And that is the reason AWS is so economical: it charges you by the hour and also gives you the opportunity of using servers on the basis of on-demand pricing. So options B and C would be the right answer for this particular question. Moving further, our question says: you're launching an instance under the free tier usage from an AMI having a snapshot size of 50 GB. How will you launch the instance under the free usage tier? So the answer to this question is pretty simple: it is not possible. You have a limit on how much snapshot size falls into the free tier; 50 GB is a size which will not fall under the Amazon free tier rules, and hence this is not possible.
Alright, our next question says: your company runs a multi-tier web application; the web application does video processing. There are two types of users who access the service: premium users and free edition users. The SLA for the premium users' video processing is fixed, while for the free users it is indefinite — that is, with a maximum time limit of 48 hours. How do you propose the architecture for this application, keeping cost efficiency in mind? Alright, so to rephrase this question: basically you have an application which has two kinds of traffic — one is free traffic and one is premium traffic. The premium traffic has an SLA, say that the tasks should be completed within an hour or so, while for the free traffic they do not guarantee when it will finish, and it has a maximum SLA of 48 hours. So if you were to optimize the architecture for this at the backend, how would you design it so that you get the maximum cost efficiency possible? Alright, so the way we can deal with it is this: there is a thing called Spot Instances in AWS, which basically deals with bidding. You bid for AWS servers at the lowest price possible, and as long as the server prices are in the range that you specify, you have that instance for yourself. So all the free users who are coming to this website can be allotted to Spot Instances, because there is no SLA — even if the prices go high and the systems are not available, it does not matter; the free users' processing jobs can wait. But for premium users, since there is an SLA and you have to meet a particular deadline, I would say you use On-Demand Instances. They are a little expensive, but because premium users are paying for their membership, that should cover that part. And Spot Instances would be the cheapest option for the people who are coming to your website for free, because their work has no urgency and hence can wait, if required, when the prices are too high.
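As a small illustration of the free-tier side of that design, here is a hedged sketch of bidding for Spot capacity with the AWS CLI; the price, AMI ID, and instance type are placeholders:

```bash
# bid for two one-time Spot Instances to chew through the free users' video queue
aws ec2 request-spot-instances \
    --spot-price "0.05" \
    --instance-count 2 \
    --type one-time \
    --launch-specification '{"ImageId": "ami-0123456789abcdef0", "InstanceType": "c5.large"}'
```

If the market price rises above the bid, the instances go away and the free jobs simply wait — acceptable under a 48-hour SLA — while the premium tier keeps running on On-Demand capacity.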
Alright, so our next domain talks about operationally excellent architectures; let's see what questions are covered in this particular domain. Alright, so: imagine that you have an AWS application which is monolithic in nature. Monolithic applications are basically ones which have the whole codebase on one single computer; if that is the kind of application you are dealing with, it's called a monolithic application. Now, this application requires 24/7 availability and can only be down for a maximum of 15 minutes. Had your application not been monolithic, I would say there would be no downtime, but since it is a monolithic application, the question has mentioned there is an allowed downtime of, say, 15 minutes. How do you ensure that the database hosted on your EBS volume is backed up? Now, since it's a monolithic application, even the database resides on the same server as the application, so the question is: how will you ensure that the database is backed up in case there is an outage? For this, I would say the answer is pretty easy: you can schedule EBS snapshots for your EC2 instance at particular intervals of time, and these snapshots will basically act as a backup for your database instances deployed on EC2. Hence the answer is: scheduled EBS snapshots.
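As a minimal sketch of that answer — the volume ID is a placeholder — a snapshot command like this can simply be dropped into cron at the interval you need:

```bash
# take a point-in-time snapshot of the EBS volume backing the database
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "nightly DB backup $(date +%F)"

# e.g. a crontab entry to run a wrapper script every night at 2 AM:
# 0 2 * * * /usr/local/bin/backup-ebs.sh
```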
Our next question asks: which component of the AWS global infrastructure does AWS CloudFront use to ensure low-latency delivery? Now, AWS CloudFront is basically a content delivery network, which means that if you are in the US and the application you're accessing has its servers in India, it will probably cache the application on a US server, so that you can access it faster than routing traffic packets over to India and receiving them back. Alright, so this is how CloudFront works: it caches the application at your nearest server so that you get the maximum — sorry, the minimum — latency possible, and it does this using AWS edge locations. Edge locations are basically servers located near your place, or near a particular Availability Zone, which cache the applications that are available in different regions or at far-off places.
Today in this session we are going to discuss the top DevOps interview questions that can be asked in your next DevOps interview. Alright, so let's go ahead and get started with the first slide, which talks about the agenda. Basically, we have divided all our interview questions under these domains: continuous development; then virtualization and containerization; continuous integration; configuration management and continuous monitoring; and then, in the end, continuous testing. We're going to follow this sequence while we discuss the questions. Right, so let's go ahead and start with the first domain, which is continuous development, and see what our first question is. Our first question asks us: can you explain the Git architecture? Now, this is a fairly important question, the reason being that only if you understand the underlying basics of how Git works will you be able to troubleshoot a problem when you face it while working in a company as a DevOps engineer. Alright, so let us try to explain what Git basically is and what its architecture looks like.
Now, most of you might know that Git is a distributed version control system. What is a distributed version control system? Let us explain it using a diagram. In a distributed version control system, your repository is distributed among the people who are contributing to it, and that is why it is called distributed. That means anyone who wants to make a change in the code present in this repository has to first copy the repository onto his local system, commit the changes to his local copy of the repository, and only then can he push the code changes, feature additions, and everything else to the remote repository. Nobody works directly on the remote repository, and this is the main principle of how Git works — that is the reason it is called a distributed version control system. If you want to talk about the lifecycle — the steps someone follows if they want to, say, upload or change some code present in a remote repository — the first thing they have to do is pull the repository from the remote system. Once they pull the repository, it becomes their local repository; they change whatever files they want to change, and once they are done with the changes they do a git commit, committing the files to the local repository. Once the files have been committed, they have to be pushed to the remote repository so that they become visible to anyone and everyone who pulls this project the next time. Alright, and this is how the whole Git architecture works.
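Expressed as commands, the lifecycle just described looks like this (the repository URL is illustrative):

```bash
git clone https://github.com/example/project.git   # copy the remote repo locally
cd project
# ... edit whatever files you want to change ...
git add .                                          # stage the changes
git commit -m "describe the change"                # commit to the LOCAL repository
git push origin master                             # publish to the remote repository
```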
Now I hope you guys understand the working of Git and what exactly its architecture is. Moving forward, let's talk about the next question, which says: in Git, how can you revert a commit that has already been made public? So basically, you have made some changes in the code, you committed those changes to your local repository, and you've also pushed the changes to the GitHub repository. Now, if you have a CI/CD pipeline in place — which means that the moment you commit to Git, it automatically takes the code and deploys it on a server — then the code you pushed has probably also been deployed. And that is when you come to your senses and realize that the code is wrong, and you quickly have to change it so that everything works again. Now, this is a very quick fix that probably every DevOps engineer employs whenever there is a problem on the production server. So what is that quick fix? The quick fix says: whatever last commit was working perfectly, just roll back to that, so that everything becomes normal until you have fixed and pushed new commits. That is basically the intention behind the revert procedure. Now, how can you implement the revert procedure? It can be implemented using the git revert command, and let me show you a quick demo of the git revert command so you know how to use it on a computer. Alright, so this is my terminal, guys; basically I will SSH into an AWS server. Now, I have a GitHub repository that I have
created for demo purposes. So, like we discussed, the first stage in the lifecycle of Git is to clone the repository. We'll just copy its address, come here, and type git clone and then the address. OK, now this project is basically a small website that I created. In order to serve that website, we have to place this code inside an Apache folder, so let us go inside the Apache folder which is present in this directory. Alright, now I'll do a quick git clone along with the repository address and hit Enter, and if I do an ls you can see that there is a folder called DevOpsIQ which has been created. I will go inside DevOpsIQ and do an ls, and you can see there's one more folder, also called DevOpsIQ; let's go inside that, and now if I do an ls, these are the two files which are present inside my codebase. OK, now to see what this website actually looks like right now, I can just go here and type in the server's IP address followed by /devopsiq/devopsiq. Alright, this is how the website looks for now. Now I want to make some changes so that the background becomes a little better. So I'll just go back here, open the code of the website in nano — I have an image in the images folder, so let me change it to 1.jpg. Alright, let us save it, and once you've saved it, the next thing to do is commit the changes to your local repository. Let us do that: first I'll add the changed files to the repository, then I'll commit the changes, and the message will say "changed background". Alright, so the changes have been committed, and now I push these changes to the remote repository — I enter my GitHub username and password. Now, before making these changes, let me quickly show you the code that you currently see, before I push anything to this repository: you can see the code says images/2.jpg, and I've changed it to 1.jpg. So let me hit Enter and see if our code gets changed over here. Now if I do a refresh, you can see the code has been changed just now — it says the code was changed forty-four seconds ago. Awesome. So now, because my code has been changed, if I go to this website and hit Enter, you can see the background is now changed; it is a different background. Now, I realize that this change I made is wrong, and I want to revert to a particular commit — the older commit that was actually working. Alright, so I'll just come back to my terminal and clear the screen. The first thing you do is a git log: you get a log of all the commits that you have made to this particular repository. This is the commit you just applied, and it is causing you a problem, so copy its commit ID, and now just go ahead and do a git revert — git revert and then the ID you just copied — and hit Enter. Once you do that, it will show you the information about this particular commit; just review everything, and then you can see that the commit has been reverted. Now, I have not pushed the changes yet, but if I come back here and hit Enter, you can see the older website comes back, because the code has been changed locally. And if I want to make these changes to the remote repository as well, all I have to do is git push origin master; it will ask me for the credentials, and the changes have been pushed. Now if I come here and do a refresh, you can again see that the code has changed back to 2.jpg, which was our earlier code. Alright, so guys, this is how you can do a revert on a commit and a push that you have made to your remote repository as well. If you encounter any problem while working as a DevOps engineer, you should remember this session, where I taught you how to revert a particular commit.
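For quick reference, the whole quick fix from the demo boils down to three commands:

```bash
git log --oneline          # find the hash of the bad commit
git revert <commit-hash>   # creates a NEW commit that undoes that one; history is kept
git push origin master     # publish the fix to the remote repository
```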
So with that, let's move on to our next question, which says: have you ever encountered failed deployments, and how have you handled them? Now see, any DevOps engineer in the world will have faced a situation in which things didn't go according to plan; that absolutely happens. And if somebody in a DevOps interview asks whether you have made mistakes, you shouldn't say no just to impress them. If you have genuinely never made a mistake, that's awesome, but I know that every DevOps engineer working in the industry will have faced a problem that amounts to a mistake made while deploying things. Now, the important takeaway should be that whatever mistake you make, you learn from it and never commit it again — and that is the intent behind this question as well: the interviewer wants to know, if you made mistakes, what did you learn from them? OK, so if an interviewer were to ask me this question — have I encountered failed deployments, and what have I learned from them? — I'll just give you the best practices that I think are viable for any DevOps engineer working in the industry. The first thing everyone should follow, and should make a thumb rule, is: automate code testing. Not only does it save time — your tester does not have to wait for your developer to push the code and then check it manually, because you have written a test script for the application, and all the major, common tests can be done automatically — but it also removes the room for human error. When you work with people, people make mistakes; but if you write code which tests each and every functionality, that code will never make a mistake, and that is why you should automate things as far as possible. In my example, what happened was that there was a commit to the repository which was a feature addition, and the tester did not check all the functionalities of the code — he missed some functionalities that could impact the other components of my product — and because of that, when it got pushed to production, disaster happened: everything stopped working. And that was only because the testing did not happen properly. So, for all the critical processes of your website or product, you can create code which tests them, and that would close most of the doors to mistakes. Alright, the next thing is: always use Docker for a consistent environment. This is basically the ideology behind DevOps — there used to be problems where a tester could not run a developer's code on his machine, while the developer insisted that everything worked fine on his own system. Docker solves that problem, so use Docker as much as you can for the same-environment problem you might face. Then, we should always use microservices. Now, when you are working in a company, it could be that the product is in the legacy phase and hence is a monolith of sorts, but you should never encourage this kind of architecture. The reason: say you made a bad commit or a bad push to the production server; it should not impact the other components of your product. If you changed something in, say, search, and it's a bad commit or a bad push, the only functionality impacted should be search, not the other functionalities. And that is the sole reason we should use microservices — we should divide an application into small, separate products which we deploy on servers, and these products should be independent of each other. In a monolithic architecture, all these components are coupled and have dependencies upon each other, but in a microservices architecture you remove that dependency, so that even if one component fails, it does not take down the whole application. The fourth point: always reduce risks to avoid failures. This means that if there is a code change or a feature addition which works some of the time and sometimes does not, and you're not able to figure out exactly why, it is better to wait and troubleshoot it than to push it just to meet your deadlines, because the latter can cause you a big problem in production. When you are in a company — probably a company like Airtel, or Samsung, or Ericsson — every second of the website's uptime brings money, so if your website is down for 30 seconds, that could amount to a huge loss, and that would be on you. So, in order not to face that kind of situation, always be a hundred percent sure before you make a change or a release onto the production server. Alright, so this is the end of the continuous development domain; let's go ahead and now talk about virtualization and containerization.
Alright, so let's start with the first question of this domain, which says: what is the difference between virtualization and containerization? Now, this is a very important question, guys, because most of us get confused between virtualization and containerization, so let's see what the differences between these two things are. Virtualization is nothing but installing a new operating system on top of virtualized hardware. What does that mean? Basically, there is software, like a hypervisor, which specializes in virtualizing hardware. So if you have a server which has around 64 GB of RAM and, say, a terabyte of hard disk space, with software like a hypervisor you can take those resources and divide them among multiple operating systems: you can deploy multiple operating systems on the same hardware by virtualizing it. If you carve out, say, 1 GB of RAM from this whole system and around 100 GB of storage, the operating system will think that it only has 1 GB of RAM and 100 GB of storage space available to it; it cannot go beyond that, because it does not know about the hardware that lies beyond the hypervisor software. Alright, so in virtualization you basically have a hypervisor which sits on top of your operating system and virtualizes the hardware beneath it. Then you have a guest operating system: once your hardware is virtualized, you install guest operating systems on top of that. The best example of this would be VirtualBox — you install VirtualBox and then you can install operating systems on it with whatever specs you decide. And once you've installed the guest operating systems, on top of them there are the binaries or libraries that you download or that came with the operating system, and on top of those you have the applications which run. So the key takeaway for virtualization should be that the whole operating system is installed: from the kernel level to the application level, everything is fresh, everything is new. Now let's talk about containerization. The thing in containerization is that, on top of the host operating system, you install a software called a container engine. The container engine is just like any other software — like you have a hypervisor, you have a container engine. Now, the container engine does not involve installing a whole operating system. For example, if you want to run a container for Ubuntu on, say, a Mac machine, you can do that: in that container you will basically have the bare-minimum libraries that amount to the Ubuntu operating system, minus the kernel. So in a container you do not have a kernel; the kernel of the host operating system is always used, and this is the main difference between virtualization and containerization: in virtualization you have a separate kernel present for the virtual operating system, but in containerization you do not. That is the reason containers are very small — they have the bare-minimum libraries required for the container to behave as a particular operating system, but the container itself does not contain an operating system kernel; it runs on the same kernel on which the host operating system resides. Alright, and this is the main difference between virtualization and containerization.
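A quick way to see the shared kernel for yourself, assuming Docker is installed and an ubuntu image is available:

```bash
uname -r                          # kernel release reported by the host
docker run --rm ubuntu uname -r   # the container reports the SAME kernel release
```

The userland is Ubuntu's, but the kernel underneath is the host's — which is exactly the distinction drawn above.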
Moving forward, the next question says: without using Docker to get into a container, can we see the processes that are running inside a container of the Docker container engine? Alright, so this relates to the same fact: if you can see a container's processes from the outside, that basically means the processes are running on the same kernel as the host operating system. The processes running in the Docker container engine are basically an addition to whatever is running on the host operating system, and you can see them using the ps aux command. For the host operating system, a container process is just like any other software or process it has to run, but the container thinks it is running inside its own operating system — which it actually is not. So, can you see the processes? The answer is yes, you can see the processes running inside a Docker container. And how can you see them? Let me demonstrate it to you. OK, so we have come back to our AWS server; let me clear the screen. Alright, if I do a docker ps right now, you can see that there are no containers running on this system as of now. Now what I'll do is run a container for Ubuntu: docker run -it, then -d, and then ubuntu. Alright, this ran a container for me, and if I do docker ps now, you can see there's a container running which is based on the ubuntu image. So let me go inside this container: docker exec -it, then the container ID — sorry, I forgot the container ID — then bash. If I do that, I'm inside the container, and there's no process running inside this container as of now. Now, let me duplicate this terminal: I'll quickly SSH into the same server again, so that I have two windows. OK, great. If I do a ps aux, these are all the processes running on the operating system right now — but let us make it a little simpler: let me see all the processes which have the word "watch" in them, to make it clearer for you. These are the processes which have the watch keyword inside them; OK, so there are basically four such processes running. Now what I want to do is launch a watch process inside this container. What is that watch process? It is basically going to watch a particular command at a set interval of time — and what is that command? Let's say the ls -l command. OK, so what is it doing? It is keeping a watch on the command ls -l every one second; you can see the time over here incrementing every second, and it's continuously watching all the files inside the container. OK, now this is the dollar prompt, which says we are outside the container right now. If I again run the same search for processes which have the word "watch" in them, I can actually see that there is a new process running over here — and this process is running inside the container, which I'm able to see from the host operating system level. So the host operating system is treating this particular process as if it were running on its own system; because the container and the host operating system share the same kernel, the host operating system takes this process as if it were running inside of it. But if we look closely, this watch command is running inside the container. Let me quickly stop it: you can see we are still inside the container and we have stopped the watch command, and if I go here and refresh, you can see that this watch command is gone again from where it was showing before. And this is exactly what we wanted: to see a process which was running inside a container from outside the container, that is, from the host operating system — and that is exactly what we just did. Alright, so for the question "without using Docker, can you see the processes that are running inside a container?", the answer is yes, you can.
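The demo, condensed into commands (the container ID is a placeholder, and the two halves run in two separate terminals on the same host):

```bash
docker run -itd ubuntu                 # start a detached Ubuntu container
docker exec -it <container-id> bash    # terminal 1: get a shell inside it
watch -n 1 ls -l                       # terminal 1: start a watch process in the container

ps aux | grep watch                    # terminal 2 (host): the container's watch shows up
```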
Alright, so the next question is: what is a Dockerfile used for? So a Dockerfile is nothing but a text document used to create an image from an older image, adding some files into it. It's basically like a script you run in Linux which does all the required things for you. For example, I might need an Apache image, and I want my website to be put inside the /var/www/html folder inside this particular Apache container. Now, to do that without a Dockerfile, I would first have to download the Apache image — I would probably type docker run -itd and then the Apache image name. Once I have done that, I would exec into the container, go to the directory /var/www/html, probably do a git clone of the website that I want, and then my website would be available in that container and I'd be able to use it. That is one way. The second way is: I can create a Dockerfile which will build this image for me without me having to do all the manual things I just described. So let's see how we can do that. Let me exit this container and remove the containers that are running on my system right now. OK, if I do a docker ps now, it's clean. Now I want to run this particular container: docker run -itd -p — I want to map port 83 to this container's port 80 — and run it as a daemon so it runs in the background. There it is. OK, so I have the container running, which is this one, and what I want to do is copy the website into this container. So let me do a docker exec into this container: -it, the container ID, and then bash. I want to go inside this particular folder, and if I do an ls over here, you can see there is an index.html and an index.php. It's exposed on port 83, which means that if I go to a browser and hit the server's IP address on port 83, I should be able to see this Apache page — and this is the container I just ran over here. OK, what I want to do is copy the code of my website into this particular directory. Let's see how we can do that. Let me exit this container, do a docker ps, and do a docker stop on this particular container. What will this do? Basically, if my Apache was running over here, it should stop once I have stopped this container. OK, so it's stopped — if I do a refresh over here, you can see "site can't be reached", which is exactly what we want. OK, now let me do a git clone of my GitHub repo — and it's cloned. Alright, awesome. Now I'll go inside this folder, and I want to copy this particular folder inside the container. For doing that, let's create a Dockerfile, and what I want to say is: from the image shhar/webapp, add the folder devopsiq — and where do I want to add it inside the container? In this particular directory, inside devopsiq. OK, fair enough, and that is it; that is all you have to do. I'll just come out of this editor and do a docker build of this Dockerfile with the name test. It says it successfully built an image, and it has been tagged as test. Great, now let me run this image: docker run -it -p, run it on port 84, run it as a daemon, and run the image. OK, great. If I go to port 84 now — let's see if the container is working first — yes, the container is working. Now if I go inside devopsiq, what do I see? Great: my website is now available inside the container, simply by writing a Dockerfile, and this is exactly what we wanted. Awesome, guys. So what is a Dockerfile used for? It is basically used for creating an image without having to do all the manual work of adding your files and everything. And once this image of yours is ready, you can push it to Docker Hub, and anybody in the world can download it and basically use your website on their local system.
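A hedged sketch of the Dockerfile from the demo — the base image name and folder follow what was dictated, but treat them as illustrative:

```bash
# write the two-line Dockerfile: start from the Apache base image and add the site
cat > Dockerfile <<'EOF'
FROM shhar/webapp
ADD devopsiq /var/www/html/devopsiq
EOF

docker build -t test .          # build the image and tag it "test"
docker run -itd -p 84:80 test   # run it, mapping host port 84 to the container's port 80
```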
Great, now the next question is: explain container orchestration. OK, so till now we have seen that we can deploy a container, use it, deploy an application on it, and access it in the web browser. But it is not that simple when we talk about a website like Amazon or Google, right? It has a lot of components to it. For example, on Amazon you have a comment section; then on the home page there are a lot of products with prices and ratings. Each and every component — the prices, the ratings, the name of the product, the image of the product, the comment section — is basically a microservice: a small part of an application which runs independently of all the other parts of the website. All of this is possible using containers: basically, each and every component runs inside a container. Now, the problem is that with a website like Amazon, you would be dealing with a minimum of ten or eleven containers for one particular copy, or instance, of that website. And when you're dealing with ten or eleven containers, they have to work in conjunction with each other; they should be in sync; they should be able to communicate with each other; and we should also be able to scale a particular container, or redeploy it in case it goes down. For example, say the comment-section container goes down for some reason: we have to keep a watch on it and redeploy it. All of these activities come under container orchestration. If you were to deploy these containers manually on Docker, you would have to keep a manual check on all of them — but imagine dealing with thousands or tens of thousands of containers; in those scenarios you need container orchestration. Now, container orchestration can be done using various software: you have a software called Kubernetes, and before that there was a software called Docker Swarm, which made our lives easier by doing all the manual work for us. That is, it will check the containers' health; it can scale them in case they become unhealthy; it can notify the administrators by email in case something happens; and it can run monitoring software for you which gives you a report on the health status of all the containers running inside it. This is only a very small part of what a container orchestration tool can do. So basically, to understand what container orchestration is: like I said, when you work with multiple containers you have to take note of a lot of things, and that is possible using container orchestration tools like Kubernetes and Docker Swarm.
OK, so the next question is: what is the difference between Docker Swarm and Kubernetes? They're both container orchestration tools — we just saw that — but if I had to choose between Kubernetes and Docker Swarm, which should I choose? Alright, so let's look at the differences between them. The first difference, which is probably the most important — probably, I'd say, the deciding factor for whether you should go ahead with a tool, given that you have a short deadline and have to deploy a project — is installation. Docker Swarm is very easy: it comes prepackaged with the Docker software, so if you have installed Docker, Docker Swarm is already installed on your system; you don't have to worry about anything. On the other hand, installing Kubernetes is a very tough job. There are a lot of dependencies for Kubernetes: you'll have to look at the system, the operating system on which it is running, and a host of other things. It has a lot of dependencies and hence is very tough to install — but the moment you install it, Kubernetes becomes very helpful, because of the features it offers. Which brings us to our second point: Docker Swarm is faster than Kubernetes, the reason being that it has fewer features, making it a very light piece of software and hence faster. So if you want to use Docker Swarm, you should read about what Docker Swarm does not offer and what Kubernetes offers, and if you feel you do not need all the features Kubernetes is offering, you can go ahead with Docker Swarm and deploy your application in a faster manner. But like I said, Kubernetes is complex and provides a lot of services and features, because of which its deployments are a little slower compared to Docker Swarm. The third point, which is the most important: Docker Swarm does not give you the functionality of auto-scaling, meaning that if your containers go down, or if they are performing at their peak capacity, there is no option in Docker Swarm to scale those containers. On the other hand, because of Kubernetes' monitoring services and its host of other features, you have the option of auto-scaling your containers — you can automatically scale the containers up and down as and when required — and this is an amazing thing that Kubernetes handles for us. Alright guys, so these were the questions around the domain of virtualization and containerization.
let’s shed a light on what continuous integration is so a quest first question
itself is what is continuous integration so continuous integration is basically a
development practice or I’ll say it’s a stage which basically connects all the
other stages of the DevOps lifecycle for example you you you push a code to get
like we took an example when you push the code to get you might have
provisions which might allow you that the the moment the code is pushed on to
the remote repository it automatically gets deployed on the servers as well
well if that is the case basically that would be possible using integration
tools that would integrate your git repository with your remote server and
that is exactly what Jenkins runs it’s a continuous integration tool which helps
you which helps us integrate different like devops lifecycle stages together so
that they worked like an organism right this is what continuous integration
means so because we discussed about what continuous integration is an excavation
says create a CI CD pipeline using Karen Jenkins to deploy a website on every
comment on the main branch so on every push that you make to the remote
repository the code should automatically get deployed on a remote server alright
so this is something that we’re gonna do just now all right but before going
ahead let’s see what is the architecture for this kind of a thing all right so
this is how the whole thing is going to work basically the developer is going to
commit the code to his github the github basically once it sees a change in the
branch that we mentioned it is going to trigger Jenkins which in turn will
integrate or will take the website from the github repository and push it on to
the build server on which we want the website to be deployed all right sounds
awesome great now let’s go ahead and do this
demo so for that we will have to SSH into our server so let us do that okay
so I’m in now let me clear the screen so first let’s check if I Jenkins is
running on this so so let me check the status for Jenkins so if I do a service
Jenkins status I can see that the Jenkins service is active awesome so
I’ll just go here and I’ll go to the Jenkins website which is basically
available on 8080 alright so I’ll enter my credentials and this is how the
dashboard for Jenkins look like now our questioners or our aim is to create a
job which basically will push a website that we are uploading to get up on a
particular server all right so let’s create a new job first so let’s call our
job as demo job ok and let’s name it as a freestyle project and click OK so this
will create a job in Jenkins for us all right so our job has now been created so
what we want to do is I want to take code from my github so I’ll have to
specify the github repository over here ok and similarly I will have to say that
I want to trigger the build the moment my anything is pushed on my remote
repository alright and this should be it great so I mentioned that anything that
is pushed on to my master should trigger a build on Jenkins okay and what should
this builder what set of commands do I want to run once build is triggered so
first I want to remove all the containers which are running inside my
system so I’m going to clean up right so for that I say sudo Rock RM hyphen F
then are taller this basically is going to clean all the containers which are
running currently in the system once this is done I want to build my website
which is going or build my container which is going to have my website all
right now how can we do that for that I’ll have to push the code to my github
which will have the docker file as well okay so we created a docker file inside
okay so here it is so we have the our docker file created in the DevOps IQ
folder which was there in my home directory now what I want to do is I
want to push so what is there inside this raqqa file we saw that we could
create a docker file using if we write something like this in our docker file
and this would basically create an image with our code which is there on github
alright so what we do we’ll just push this code to our remote repository and let’s add the message that we have
ordered our taka fire great and now let’s push it to our remote repo great so it hasn’t pushed to my remote
repo and now if I just go here and check if my changes have been done or not let
me just quickly refresh it so yes I have Dhaka file in my Kim get repository
right now which was committed 42 or seconds ago awesome great so now what
I’ll do is I’ll come to my shangkun’s and I will say that build sudo docker
build the docker file now where is that document the dacha file will basically
be downloaded in the Jenkins wake workspace so that is in where lib
Jenkins fork space and then the name of the job which is demo show up and that
is it so inside this I will have my dacha file and it will basically build
it and name it as say Jenkins Jenkins it’ll name it as Jenkins in the next
step what I’ll do I’ll do a sudo docker run – IT and then – P and say I want to
deploy it on 84 or say 87 port ok and what do I want to deploy I wanted to
blow a Jenkins okay so this should do these this should basically do all the
stuff so in the first command basically we are removing any container which is
running on the system in the second command what we’re gonna do is we’re
gonna build the docker file which is available in this workspace and this
workspace will basically have my github project and the link I have specified
over here so it will basically just copy or
it will pull the project and save it in the workspace of demo job so ended
sandwich of theirs it aqua file so we are building this dacha file and we’re
naming this created image as Jenkins and then we are running this image and
exposing it to port 87 okay so let’s save it awesome
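Put together, the three build steps configured in the Jenkins job look roughly like this (a reconstruction of what was dictated, not a verbatim copy):

```bash
# 1. clean up: force-remove every container; "|| true" keeps the build alive when none exist
sudo docker rm -f $(sudo docker ps -aq) || true

# 2. build the image from the Dockerfile Jenkins pulled into the job's workspace
sudo docker build -t jenkins /var/lib/jenkins/workspace/demo-job

# 3. run the freshly built image, exposing the site on port 87
sudo docker run -itd -p 87:80 jenkins
```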
Now what we have to do is configure a webhook. When I say a webhook, I mean you want your GitHub to interact with your Jenkins whenever there is a push to a particular repository. So in your repository, go to Settings and then go to Webhooks. This is a webhook that I created for my Jenkins earlier; let me create it again for you. All you have to do is click on "Add webhook" and enter the URL for your Jenkins over here — in my case it is this; I'll just enter it, followed by the keyword github-webhook — and that is it. Once you've specified that, just go down and click "Add webhook"; this sends a request to Jenkins, and if everything goes well, it will say the last delivery was successful. OK, so any changes I make to my GitHub should now trigger a build over here. Now let me delete this other job, because I think even that job gets triggered when changes are made to my GitHub — so let me delete that project. OK, great, so I just have this one job now. Awesome. Now let us see how it actually works. I come back to my terminal, go inside DevOpsIQ, and make some changes in the code. I'll go into nano index.html; the first thing I do is change the title of the website — I'll call it "Jenkins test website" — and I change the image from 2 to 1.jpg, and that is it. Let us see what happens when I push this. So I do a git push — sorry, first I have to add these changed files to my repository, then git commit, and let me label this commit "test push". OK, done. Now let's push this to the remote repository — git push origin master — and give the credentials. Awesome. Now, if you wait here, it should start a job: as you can see, there is a job queued for "demo job", and it got triggered automatically by my GitHub. OK, let me refresh this. OK — the moment it shows you red, that means your job has failed, so let's see what just happened and why our job failed. If you go here, you can see the console output, just like this. OK, so basically we forgot to add a sudo here, and that is causing the problem. We can fix this by going down and adding a sudo; save it. We'll have to change the code again — let's call it "Jenkins test2 website" — do a Ctrl+X, Y, then add the files to our local repository with git add, commit it as "test push 2", and push to master; I'll enter the credentials, and this should be it. OK, let's see: our second job got triggered automatically, and it gives us blue. Now, blue means your job was executed successfully, so let's check what happened. We were deploying it on port 85 — let's check if it has indeed been deployed; it was on port 85, and the folder was devopsiq. OK, let's check — actually, I'm not sure if it was port 85, so let's look at the port we specified: the port is 87. OK, so let's go to port 87. OK, so the browser is giving us an "unsafe port" error. For troubleshooting, let's check if the container is running — yes, the container is running on port 87, but the browser treats it as an unsafe port. What we can do is change it to, say, port 82, and now let's just try to build the job from here: we'll click "Build Now". The job has been completed, and the port was 82 — yes, Apache is working. Now let's go inside the devopsiq folder, and there you go: you have your website with the title you pushed to GitHub. Now, one more time, for testing purposes, let us push our code once more and see what happens. I will say this website is "test 3 website", and I'll change the image as well, back to 2.jpg. OK, save it, do a git add, commit and call it "test push 3", and then git push origin master; enter the credentials. Great, now let's see what happens. OK, our build has started, and it has completed. Great — if I refresh just now, it says "Jenkins test3 website", and the background also has changed. So congratulations, guys, we have successfully completed the demo: if you change anything in your GitHub, the website automatically gets deployed on your build server. And on top of this, just to make it more interesting, we can do a git log and revert the commit that we saw earlier. So let's do a git revert, paste the commit ID, agree to everything, then push to master and enter the credentials. Everything has been pushed, the job gets triggered, the job completes — and if I go here again, my website has reverted to the previous version. Awesome, guys. So we have completed the demo, which asked us to create a CI/CD pipeline using Git and Jenkins to deploy a website on every commit to the main branch — and you've done it successfully. Awesome.
Let's move on to our next domain, which talks about configuration management and continuous monitoring. Awesome — so what is configuration management, and what is continuous monitoring? Let's understand it through the question: what is the difference between Ansible, Chef, and Puppet? Now, before getting into the differences — these are all configuration management tools. What is configuration management? If you have, say, around 200 servers and you want to install a particular software on each of them, what will you do? One way is to go to each and every server and run a script, which installs the software on that one server only. The other way is to install configuration management software, using which you can deploy or install all these software packages — that is, control the configuration of all these servers — from one central place, and that is exactly what configuration management means. Now, in configuration management you have many tools, like Ansible, Chef, Puppet, etc., and these are the three top tools used in the industry. So the question is: what is the difference between Ansible, Chef, and Puppet? Alright, let's go ahead and see their differences. Let's first talk about Ansible. Ansible is very easy to learn because it is based on Python, so you don't have to sweat much on learning the commands — if you know Python, Ansible is going to be a cakewalk for you. It is preferred for environments which are designed to scale rapidly. Basically, with Ansible, you don't have to install any client software on the systems to which you want to deploy the configuration; Ansible just has to be installed on the master, and that is it — no other setup is required, and you can directly control the configuration of the client servers, given that you have access to them. So it offers simplified orchestration: like I just told you, you don't have to worry about installing software on the client machines; Ansible single-handedly takes care of all the complications that come up when you deploy configurations without installing agent software on the clients. A disadvantage of Ansible is that it has a very underdeveloped GUI — you essentially only get the CLI to work with — and it has fairly limited features when we compare it with Puppet and Chef.
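Here is a small sketch of that agentless model — one control node, a plain-text inventory of the 200 servers, and an ad-hoc module call over SSH (host names and the nginx example are illustrative):

```bash
# inventory file listing the managed hosts
printf 'web01.example.com\nweb02.example.com\n' > hosts

# install nginx on every listed host; nothing is installed on the clients beforehand
ansible all -i hosts --become -m apt -a "name=nginx state=present"
```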
let’s talk about chef what Wow is chef different from ansible so it is Ruby
based and hence it’s difficult loss now ruby is a language that not many people
are acquainted with and hence people might find it difficult to get versed
with the commands of Chef the initial setup is complicated when I compare it
to the ansible the setup was very easy because I just had to install ansible on
the host machine and on the client machine I didn’t have to install any
software so but with chef you have to do that and hence it becomes a little
complicated but once all the setup and everything is done chef is very stable
right it has it has been since it’s a community product and it has been well
contributed to it’s a very stable product and it offers you resiliency so
so of course if you when you are working on production servers prob
working on chef would be a better idea dan ansible because ansible
does not have that has that create community when you compare it with chef
and of course chef is the most flexible solutions for whis and middleware
management now middleware basically means the software management part chef
offers to be a great choice for configuration management reason being it
can it is very reliable and is very mature because it was probably among the
first configuration management tools to come out and because community has
contributed a lot to this project it is very mature in its development stages as
well now let us talk about puppet so puppet can be tough for people who are
beginners in the DevOps world right because the it uses its own language
called puppet DSL right the setup part is smooth when you compare it with chef
but it’s a little harder than ansible because when you’re using puppet you use
we use a master and an agent as well so you will have to install puppet agent on
the client machine and only then puppet will be able to interact with the client
software right now it has a strong support for automation so if you are
planning to do some configuration management that you want to automate
puppet is very compliant in that part you can easily do the automation part
using puppet and it is not suitable for scaling any deployment so if you have
say around 50 or 60 servers and you plan to add more in the future probably
puppet would not be the right choice for that kind of an architecture it is good
good to have when you have a stable infrastructure very probably not adding
servers now and then but if you are working on cloud and you do not know the
capacity that you would be running probably puppet would not be the good
would not be a good idea to manage your configuration on your
Okay, our next question is: what is the difference between asset management and configuration management? So asset management basically deals with hardware resources, which we'll have to plan so that our IT workforce can work with maximum efficiency: the planning of your hardware, how many resources a particular team might need, giving the right resources to the right people, is what asset management covers. Configuration management, on the other hand, deals not with the hardware but with the software component: what software is required by a particular employee or a particular person on a team, and what software is required by another person. Rather than taking the radical approach of installing every piece of software on every machine, which should not be done because some software is licensed, configuration management basically means installing the right software on the right system, the one on which a particular person or a particular workload is going to run.
So our next question is: what are NRPE plugins in Nagios? NRPE plugins are basically extensions to Nagios which help you monitor the local resources of the client machines, so you don't have to SSH into the client machines to see how much memory or how much CPU is being used. Nagios being a monitoring tool, you just have to install the NRPE extension on the client machine and it will give you real-time data on the resources being consumed on that particular client machine. And obviously, when you are working in a production environment you will be monitoring multiple machines, and with the NRPE plugins installed on each of those machines you can easily monitor all their resources from one central place; that is exactly what NRPE plugins do.
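To make this concrete, here is a hedged sketch of what an NRPE setup typically looks like; the paths, host address, and thresholds are common defaults and illustrative assumptions, not taken from this session:

# On the client machine: command definitions in nrpe.cfg
# (path varies by install; /usr/local/nagios/etc/nrpe.cfg is common)
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /

# On the Nagios master: query the client over NRPE instead of SSH-ing in
/usr/local/nagios/libexec/check_nrpe -H 192.0.2.10 -c check_load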
Our next question is: what is the difference between an active check and a passive check in Nagios? So, in Nagios, if the monitoring data that you're getting from your clients is being delivered by a Nagios agent, it is called an active check, the reason being that Nagios is actively involved in collecting all the data from your clients. But when you're dealing with systems where you're not allowed to install any other software, or where the software itself can generate monitoring logs, what happens is that rather than Nagios, the software component pushes the logs to the Nagios master, which can take those logs and create a graph or a metric for you in the dashboard. So basically, using those logs, which are being published by some other software, Nagios will create a report on the health of your client systems; that is why it is called a passive check, the reason being that Nagios is not involved on the client side at all, it is the software's own services which push the logs to the Nagios master, and hence it's called a passive check.
All right, but if you talk about the architecture, or the life cycle of how this actually works on the master itself: the logs which come in are actually published to a queue, and whether it's an active check or a passive check, the logs have to be published to that queue so that the Nagios master can pick them up and create the monitoring metric which is required. So in both a passive check and an active check the queue is going to be there; the only difference is in the agents, that is, in an active check the Nagios agents are involved, while in a passive check third-party software tools are involved, which publish the logs to the Nagios master.
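For illustration, this is roughly how a third-party process hands a passive result to Nagios; the host name, service name, and command-file path here are assumptions (the path shown is a common default):

# A passive result is written to the Nagios external command file in the
# format: [timestamp] PROCESS_SERVICE_CHECK_RESULT;host;service;code;output
now=$(date +%s)
printf "[%s] PROCESS_SERVICE_CHECK_RESULT;appserver01;app_health;0;OK - app responding\n" "$now" \
  > /usr/local/nagios/var/rw/nagios.cmd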
All right, so our next question says: create an Ansible playbook to deploy Apache on a client server. So basically we have to do configuration management: without going inside the client system, we'll have to install a particular piece of software on it. Okay, so let me quickly SSH into my AWS machine. I have a slave machine which I have already configured, which can interact with my master; that is, if I do an ansible ping call, you can see that there is a server1 that I have configured which has successfully responded to my master's request. Okay, now let me show you the server itself. This is the server which is configured with my master, this is my client machine, and on this machine I'll have to install Apache. If I go to the IP address of this machine right now, it says connection refused, the reason being that there is no Apache software installed on this particular server right now. All right, so let's install Apache.
Now, to do that, you will have to write a playbook. What is a playbook? A playbook looks something like this: it's basically a YAML file that you'll have to create, and I have created one for this demo. Where you want to install the Apache software is the part where you specify the hosts. Basically, my client machine is part of a group called "servers" that I've created, so the hosts are "servers". And where can you actually specify which group your machine is a part of? You specify that in /etc/ansible/hosts. So as you can see over here, "servers" is the group name, and inside "servers" I have specified a server1 client machine with its IP address; this is the IP address of my slave, and if I compare it with my slave, it matches. This has been configured over here, so I can refer to my server1 either as "servers" (the group) or as "server1" directly.
So let me clear the screen over here. I can say ansible -m ping server1, where it will reach out to my server1, or I can say "servers" as well, because it's part of the group "servers". Okay, so this is how it works.
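So that you can picture it, the inventory and the ad-hoc pings look roughly like this; the IP address is my slave's address as best as it can be read out above, and the exact file contents are a sketch:

# /etc/ansible/hosts -- the default inventory file
[servers]
server1 ansible_host=18.223.101.172

# Ad-hoc connectivity checks, run from the master:
ansible server1 -m ping    # reach out to the single host
ansible servers -m ping    # or target the whole group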
Now I want to install Apache, and for that I have written a playbook which looks like this. It's a YAML file: you start with the three dashes, and then you specify hosts. For hosts I've specified "servers", meaning every machine inside the "servers" group should get Apache installed on it. Then there is the task: "install apache" is basically a name, you can specify anything over here. And then I've specified apt: basically I want to use the apt module to install Apache at its latest version.
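A minimal playbook matching what I've just described would look something like this; the task name is free-form, and the become line is an assumption I've added because apt needs root privileges on the client:

---
- hosts: servers
  become: yes                 # assumption: apt requires root on the client
  tasks:
    - name: install apache
      apt:
        name: apache2
        state: latest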
Now what I will do is type ansible-playbook apache.yml and hit Enter, and it starts installing everything: it runs on the "servers" group, it gathers the facts, it sees that it is able to communicate with server1, and then it accomplishes the task of installing Apache. All right, it has completed successfully, so if I go to my Chrome browser now and refresh the address, you can see that Apache is installed on this server automatically. I didn't have to SSH into the server; it all happened automatically. And if I had five or six machines running on AWS and I wanted to install the software on them using Ansible, it would have been done the same way; it's only that in the "servers" group I would have specified more IP addresses which my Ansible master could talk to. Okay, so this was the task of deploying an Ansible playbook on a client server without doing an SSH into that client server, doing it all from a central location. All right, so this is done; now let's move on to our next domain,
which is continuous testing. Now, what is continuous testing? We talked about continuous development, which is done using GitHub; we talked about continuous integration, which is possible using Jenkins; we talked about continuous configuration management, which can be done using Ansible; and next in line we have continuous testing. So once the code has been developed, integrated with Jenkins, and deployed on a server, the next thing is automated testing, which we discussed in the best practices before, and it can be done using a tool called Selenium, specifically Selenium WebDriver. So the first question is: list out the technical challenges with Selenium. The Selenium tool is used widely for automated testing, but what are the problems that you get with Selenium?
If you're using Selenium, mobile testing cannot be done: if you have developed an application for mobile, you cannot test it using Selenium. The reporting capabilities of Selenium are also very limited. If your web application deals with pop-up windows, Selenium would not be able to recognize those pop-up windows and work on them. And again, Selenium is limited to web applications: if you have an application that runs on the desktop, say a piece of software that you have designed, you cannot test that software using Selenium; Selenium is only for applications which run inside a browser. And if you want to check whether there is some image in your web page, and that that image has some particular content, it is a little difficult to implement in Selenium. It is not impossible, you'll have to import some libraries and work around it, but like I said, natively Selenium does not support image testing.
So our next question is: what is the difference between the verify and assert commands? Let us see the differences. If you're using assert in Selenium and the command fails, the whole execution comes to a halt, whereas with verify it does not come to a halt: it keeps on executing the rest of the lines in the code. Now, how is it helpful to put execution on halt whenever an error occurs on a particular line? It is helpful when you're dealing with critical cases. For example, if there are five test cases, and say case three fails, then cases four and five cannot execute because they have a dependency on case three; in that situation you would use assert with case three. But in the same example, cases one and two do not create a dependency for any other test cases that have to run: cases three, four, and five do not depend on cases one and two. For those, we can use the verify command, which will not stop even if the test case fails, and this is done to see, in one shot, everything that is working and everything that is not in our testing program. So, in the cases where you are testing critical functionality and you do not want to waste time testing other things if one of your cases fails, you use assert. In short, the assert command is used to validate critical functionality, and verify is used to validate functionality of the normal-behavior kind, that is, functionality which does not create a dependency for other things to stop working because it stopped working.
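As a hedged Python sketch of this idea (the WebDriver bindings have no built-in verify command, so the soft-check helper below is my own stand-in that mimics how verify behaves):

from selenium import webdriver

driver = webdriver.Chrome()    # assumes chromedriver is installed and on PATH
driver.get("https://example.com")

failures = []

def verify(condition, message):
    """verify-style soft check: record the failure but keep executing."""
    if not condition:
        failures.append(message)

# verify: these do not stop the run, so we see every failure in one shot
verify("Example Domain" in driver.title, "title check failed")
verify("illustrative" in driver.page_source, "body text check failed")

# assert: a critical check; if it fails, execution halts right here and
# the dependent steps below never run
assert driver.title != "", "page did not load a title"

print("soft failures:", failures)
driver.quit()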
All right, so our next question is: what is the difference between the setSpeed and sleep methods? setSpeed is basically used for executing commands at a particular interval that we specify; for example, say I want to echo "hello world" at intervals of 5 seconds, I can specify that using setSpeed. The sleep method, on the other hand, suspends the execution of the whole program for a particular interval. For example, if you're doing a Selenium web test and the web page takes around 3 seconds to load, and you don't want testing to happen immediately on the next line, you can specify a sleep of around 3 seconds, where it waits 3 seconds for the website to load and only then starts executing the tests which follow that particular line. So this is the difference between setSpeed and sleep.
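A short Python sketch of the same idea; note that setSpeed belongs to the older Selenium RC/IDE API, so with the WebDriver bindings I'm showing time.sleep plus an implicit wait as the closest modern equivalents:

import time
from selenium import webdriver

driver = webdriver.Chrome()    # assumes chromedriver is installed and on PATH
driver.get("https://example.com")

# sleep: suspends the whole program once, unconditionally -- here we give a
# slow page about 3 seconds before the next test steps run
time.sleep(3)

# setSpeed in Selenium RC/IDE inserted a fixed delay after *every* command;
# the closest WebDriver analogue is an implicit wait, which polls for up to
# N seconds for elements to appear before failing
driver.implicitly_wait(5)

driver.quit()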
All right guys, I hope this video was useful to you, and if it was, please click on the like button and subscribe to our Intellipaat channel for any further updates. Also, guys, if you have any questions with regard to this video, please comment them down in the comment section and we'll be happy to answer all your queries. Happy learning!
