
DevOps Tools | DevOps Training | Intellipaat

Hey guys, welcome to this session on DevOps. Let us start from the root: what caused the DevOps movement? A conversation between two men, Patrick Debois and Andrew Clay Shafer, started it off in the year 2008, and when they conducted the first DevOpsDays event in Belgium in 2009, it spread like wildfire. Before moving on with this session, please subscribe to our channel so that you don't miss our upcoming videos, and if you want to become a certified DevOps professional, consider Intellipaat's course; you can check the details in the description.

Right now, let us take a quick glance at the agenda. To start with, we will learn what led to DevOps, what DevOps is, and the various phases in DevOps. Later we will look at the various tools available in DevOps, such as Git, Jenkins, Nagios, Kubernetes, and Docker. After learning all these tools briefly, we will create an entire production-level DevOps project to understand those concepts in detail. After that, for anyone who wants to crack a DevOps interview, we have put together a set of interview questions with detailed answers. So guys, without any further delay, let us move on with the session. The first thing we should learn in this session is what led to the DevOps movement.
To know that, it is essential to learn what sort of difficulties teams came across while getting a piece of software out into use. Development versus operations: development and operations are two different entities in an organization that never get along, though ideally they should work in close coordination. You might often have heard about the friction between these two teams due to the siloed mentality, that is, the reluctance to share information between two divisions of the same organization. Many times, what worked on the developers' side failed when it got onto the operations floor, and hence a never-ending conflict arose. To operations, developers always build code that is oblivious to real-world constraints; developers, in turn, are under the impression that operations are resistant to change and hence an obstacle to getting a perfect product out. The resulting lack of synchronization indefinitely delays the process of development and, ultimately, deployment.

Another reason for this delay is the traditional waterfall model most organizations operated on: testing is done only after the entire code for a product is written, and every component of the process is dependent on the others.
So how does this hamper overall performance? Let us understand it with an example. Say there are four phases in developing a product: phase one is built, then phase two, then phase three, and then phase four. When phase four has been completely built, we have a complete product in hand. Each phase contributes to the next: the output of phase one becomes the input of phase two, so each phase obviously depends on the previous phases. Now, after the complete product has been built, it is sent to the testing team. While testing, what if there is a bug in phase two itself? If phase two is faulty, then obviously phase three and phase four will be incorrect as well. In that case, the time put into coding phases three and four, taking outputs from phase two and implementing them, has gone in vain, and you have to rework from phase two again. Reworking phase two means reworking phases two, three, and four, which obviously increases the time you need for developing and deploying the product. This is one of the major reasons for delays; it decreases the productivity of a company, and it increases the time and cost put into a particular product.
So how should this problem be solved? What is the solution? To address all these issues, DevOps was introduced. DevOps is basically development and operations put together, to solve all the conflicts between these two different departments. So DevOps solves the problem, but what is it, actually? Okay guys, a quick info: if you want to become a certified DevOps professional, Intellipaat offers a complete course on DevOps which covers all the major concepts and all the tools a professional should know; for further details you can check the link in the description. Now let us continue with the session. First, when was DevOps introduced? DevOps was introduced in the year 2009. And now, what is DevOps?
DevOps is not a tool, a software, or a programming language. DevOps is basically a culture, a philosophy, a change in mindset that encourages everything required to improve the way you do your business: increase productivity, lower costs, and smoothen the process of product development. DevOps requires a change in how people work; in a DevOps environment, the development team and the operations team work together, not against each other. What do DevOps principles believe in? Basically, they want to eliminate all manual intervention, so they believe in automation and in continuous integration. Finally, and most importantly, DevOps follows the agile methodology, which means different phases of a product are developed and tested simultaneously. As I told you, consider there are four phases in developing a product. When phase one is developed, it is simultaneously given to the testing team to test for bugs; if there are any bugs, they are debugged at the same time, and only then does work move on to phase two. This obviously improves productivity and reduces the chances of delaying production. Now that you know what DevOps is and which methodology it operates on, let us talk about the various phases DevOps incorporates.
DevOps has four phases. The first one is continuous development. What is continuous development? It is basically breaking a product into multiple simpler pieces so a development team can build and fix it faster; they develop continuously, phase by phase, or split the same phase into multiple tasks and develop them simultaneously. Then comes continuous testing, which means testing the generated code as it is produced: whenever code is generated it is tested immediately, rather than waiting until the final product has been released. When you test continuously, results take minutes or hours instead of days and weeks; a bug is immediately reported to the development team, they correct it, and development keeps going. The third phase is continuous integration, which means the code being generated, and the different versions of the product, are constantly saved in a local repository or a Git repository for future reference. If you want to get hold of your product at a later time, it will be there in the repository; whenever a change is made in the project, it is updated in the local or Git repository. Finally comes the deployment phase, continuous deployment. Once the code has been developed and testing is done, it is sent to the operations floor, and they check the efficiency of the product in real time, because it is live and the feedback they get back is from the clients. So, as I said, DevOps is not a technology, a tool, or a language, but a culture which emphasizes tools, automation flows, collaboration, and synchronization, to help the organization that adopts it and benefit the complete software lifecycle.
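The continuous-integration idea above, every change being committed to a repository as it is made, can be sketched with plain Git commands; the directory, file names, and identity below are placeholders for the example:

```shell
# Throwaway local repository standing in for the project repo.
mkdir -p /tmp/ci-demo && cd /tmp/ci-demo
git init -q .
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Dev"

# Each change is committed as soon as it is made, not held until release.
echo "phase 1 code" > feature.txt
git add feature.txt
git commit -q -m "phase 1"

echo "phase 2 code" > feature.txt
git commit -q -am "phase 2"

# The repository now holds every version for future reference.
git rev-list --count HEAD   # prints 2
```

A CI server such as Jenkins can then watch this repository and build and test on every commit.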
Now let us discuss some of the tools which incorporate DevOps principles. Let us start off with Jenkins. Jenkins is an important tool in the DevOps culture: it supports continuous testing, integration, and deployment of the code. Basically, it automates all those processes, and since, as I told you, DevOps firmly encourages automation, Jenkins is a very important tool. The next tool is Ansible, which is used for application deployment and also software provisioning. Using this tool, you can deploy your application to any number of systems in one go, so you don't have to do it again and again: instead of provisioning different servers and uploading the application to each one separately, you can push your application to hundreds of servers at once.
application ok so next then we have got docker so docker I think it’s really
a fascinating tool as you can see the picture itself so the logo itself is
fascinating and what it does is it creates the perfect environment for your
applications to run so for example let’s say one application runs on Windows and
the other one requires Mac OS X so instead of having two different code
which will support both the operating systems you can simply have docker and
it would take care of all the requirements of your application so you
just upload it on docker and it will obviously take care of running it on
both windows and bow also Mac OS X and then puppet so this is a configuration
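A minimal Dockerfile sketch of that idea; the base image and application files are assumptions, not from the video. The resulting image runs the same way on any machine that has Docker installed:

```dockerfile
# Package the app with everything it needs, so the host OS no longer matters.
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```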
Then there is Puppet. Puppet is a configuration management tool, which obviously enables system administrators to work faster. It is also used for server provisioning: it allocates work to servers and hence takes care of entire server management. Puppet basically manages all the servers, provisions them, and allocates work to them. The next tool is Chef. Chef is used for automation of infrastructure: with Chef, your infrastructure is defined as code, ensuring that the configuration policy is flexible, versionable, testable, and human readable. Chef is basically like CloudFormation: in CloudFormation you write your infrastructure as code, using a JSON or YAML file; you upload it to CloudFormation, and it automatically takes care of provisioning servers and whatever else you have specified. Chef does the same.
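A minimal CloudFormation-style YAML template of the kind described; the instance type and AMI ID are placeholders:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Provision one web server from code
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro          # placeholder size
      ImageId: ami-0123456789abcdef0  # placeholder AMI
```

Uploading a file like this to CloudFormation provisions the server automatically; Chef recipes express the same infrastructure-as-code idea in their own syntax.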
Okay, the next tool we are going to see is Nagios. Nagios is a computer application that monitors systems, networks, and infrastructure. What it does is alert the users if anything goes wrong in the infrastructure; it can also rectify the problem and then inform the user again. So basically you don't need any human intervention here: it checks whether there is a problem in your infrastructure, rectifies it if there is, and alerts you both when the problem arises and after it has been corrected. The next tool is Git, which I think is one of the most popular of all the tools I have explained. Git is a distributed version control system. What does that mean? It basically means you don't need to depend on a central server: even if the server fails, you will still be able to code, and all the versions of your particular software will be stored. Consider that your software has four versions: 1.1, 1.2, 1.3, and 1.4. Right now you want to work on version 1.1 and also ship some fixes for your 1.4 version; with Git you can work simultaneously on all the versions of your software and get outputs from all of them. Basically, the idea of Git is to let you work on different versions of your software simultaneously.
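The versions example above, working on 1.1 and 1.4 at the same time, maps naturally onto Git branches; a minimal sketch (directory, identity, and branch names invented for the example):

```shell
mkdir -p /tmp/git-versions && cd /tmp/git-versions
git init -q .
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Dev"
echo "base code" > app.txt
git add app.txt
git commit -q -m "initial release"

# One branch per maintained version; all can be worked on in parallel.
git branch release-1.1
git branch release-1.4

# Fix something on 1.1 without touching 1.4 at all.
git checkout -q release-1.1
echo "hotfix" >> app.txt
git commit -q -am "hotfix on 1.1"

git branch --list 'release-*'   # lists both release branches
```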
The next tool is Maven. Maven is used for compiling source code, and it also runs tests. Until now in this session we have discussed the concept, the phases, and the different tools that DevOps uses, so by now you must have an idea of what DevOps is, what it incorporates, and the various tools. Now let us discuss the advantages DevOps provides, the reasons organizations are opting for it. Let us look at the three major advantages. The first advantage is improved collaboration: DevOps improves the collaboration between the two teams, which was the basic need behind it, and it encourages the change in mindset needed to get maximum profit out of your business. As a result, when people work with each other in collaboration, productivity increases, and eventually cost decreases: when both teams work together, it takes less time for a product to be built, which reduces the cost. Finally, you will maintain a good position in the market, since you can fix your bugs faster, develop your products faster, and deploy your products faster than anybody else. Because of this you will have happy and satisfied customers, and you will be all sorted.
Okay guys, now we have seen the advantages that DevOps gives us; let us see how cloud computing complements DevOps. We know DevOps is a way to improve the way you do business, and cloud computing provides the facilities and services you can use while creating an application; it takes care of provisioning and managing your entire application's infrastructure. Cloud computing allows developers more control over their own components, resulting in smaller wait times, so DevOps and cloud go together. This application-specific infrastructure makes it quite easy for developers to own more components. Using cloud tools and services, you can automate the process of building, managing, and provisioning through a simple piece of code: you just create a piece of code, upload it to the cloud service, and it is turned into provisioned servers or whatever services you need from that particular cloud provider. This obviously eliminates human error: the cloud services are automated, and they do everything for you, provisioning servers, monitoring them, and managing them, so there is no room for human error. Hence the cloud gives everything that is required for DevOps to flourish. DevOps and cloud go hand in hand, and if you learn both of them, you have good scope in the future.
Okay, now that we have seen how cloud goes hand in hand with DevOps, let us see some companies which are using DevOps, and whether they also use cloud. The first company is Amazon. Amazon has its own cloud provider, Amazon Web Services, which is the world's leading cloud provider, and Amazon itself uses DevOps. That means the world's leading cloud provider is itself using DevOps, which basically says that if using DevOps makes them so successful, any other organization should use it too. Let us see other companies: Walmart uses it, and so do Sony Pictures, Adobe, Facebook, and Netflix. A fun fact about Netflix: Netflix is completely hosted on Amazon Web Services as of now, so the entire application architecture of Netflix runs on Amazon Web Services.
After all that, we have seen all the concepts in DevOps and how it goes with cloud computing; now let us come to the most important part: what career opportunities does DevOps hold for you? DevOps will obviously give you career growth, but how? DevOps is needed in every big organization, whether a Fortune 500 company or any organization with a huge development or operations team: as the size of a company increases, the communication gap also increases, so to bring the teams together they need DevOps implemented in their organization. DevOps will be used in all the big companies, and as per last year's reports, the average salary in the USA was around $120,000 per year. As we have discussed, DevOps is not a technology, it is a culture, so nobody can create another technology and overtake DevOps. Consider cloud providers: there was AWS, then Azure came, then GCP came, so AWS has a lot of competition because Azure and GCP provide the same kinds of services. But DevOps is a concept, not a technology, so learning DevOps will obviously give you a boost in your career.
We have seen the career opportunities; now let us see what Intellipaat has to offer. Intellipaat has its own DevOps course; let us see the details. First, it is completely designed and taught by industry professionals, and there are a lot of hands-on projects that come with every module, along with case studies and interview questions for you to crack your interview. There are two types of training: self-paced and instructor-led. In the self-paced option, you are given sixteen hours of videos which you can go through at your own pace and time; in the instructor-led option, the trainings run either on weekends or on weekdays, as 32 hours of training sessions. Finally, you will be given an Intellipaat DevOps certification, and there is real value added when you have a certification in hand: going into the same interview with a certificate versus without one makes a lot of difference. Your career will obviously have a lot of growth when you learn DevOps, because all the huge Fortune 500 companies have adopted the DevOps culture.
Now you will have a problem statement: you have been hired as a DevOps engineer in a top software company that wants to implement the DevOps lifecycle, and you have been asked to implement this lifecycle as fast as possible. The company is a product-based company and their product is available online. What exactly are the requirements? You need to set up the Git workflow: as a developer, your code is going to be developed on the development branch, then you raise a pull request, the code is merged to master, and then whatever the master branch has is built and pushed to the production environment. Okay, so this is your problem statement. For that, what do you require? First of all, a Jenkins server where you can create your job. Now, what type of job are you going to create? Can anybody tell me, for this problem statement, what we require: a freestyle job, or a Maven project? If you go with Maven jobs, how many jobs do you need to create? If I set it up with Maven jobs, I have to create a total of three jobs to make a pipeline. But I am not going to use a Maven job; I am going to create a Jenkins pipeline job, where I can use whatever build tool I have. So first of all, let us see what exactly the code is. I am not aware of the code; let me check what it provides. Okay, it is just a simple static website with an index page, which means you don't require any build tool. For Apache, let's implement this first; then I will show you how Maven works with the pipeline, which is another use case I am going to show you.
So, I have written a Jenkins file. I am just copying this Jenkinsfile, and we will modify it according to our requirement. First of all, you go into New Item, and I am going to build a pipeline job. This is my script; let's modify it a little bit so that we can work with it. I don't want this Maven part right away, because this is not a Maven job; I will just clone this repository instead. And since there is no building as such for a static site, we don't need a build stage; we need the deploy stage directly. But the problem is that the deployment is not going to run on this master server. So can anybody tell me how to set up a slave node? I want to set up a slave node on this Jenkins server, where my deployment is going to happen, where exactly my file is going to be deployed. How to set up a node was there in your curriculum; okay, let me show you. This is my Jenkins master, one server, and this is my other server, where my Apache is going to run.
Let me note down its IP address and check whether my Apache is up or not. My Apache is not running, so first I have to set up Apache properly so that my deployment can happen successfully. What you need to do is log in to the server which is going to become the Jenkins slave and check whether your Apache process is running; on CentOS it should be httpd. How to install Apache: yum install httpd -y, then start the service. Okay, my Apache is installed; let's see. Yes, I am able to get the HTML content I put over there, so it is confirmed my Apache is ready. Now guys, you tell me: do I need to install Git on this server or not? Right now there is nothing in my Apache server, just the default testing page Apache provides, and for now I am not installing anything; let's see what will happen. Before that, we have to set up the SSH keys so that we can make this Apache server a Jenkins slave. How to do that? First of all, you have to see what keys are available here.
Then you need to create one user called jenkins and provide a password to that user. By default this jenkins user does not have any admin rights, but you need to provide admin rights to this user so that it can do the admin work; otherwise your job will fail. To do that, you go into the sudoers file and make an entry granting admin rights, which means whatever command this user runs will run with root privilege, and with NOPASSWD so that it won't ask for any password.
So my user has been created. Now you have to add the public key of the Jenkins master into the home directory of the jenkins user: you have to create a file called authorized_keys and enter the content of the master's public key there. Let's see whether I am able to SSH into the Apache server with it. Oh sorry, it gives permission denied. I will tell you why: because the permissions on this authorized_keys file are open for others, so you have to change those permissions. Let's try again; this time I am able to log in to the slave machine successfully.
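The slave preparation just shown, a jenkins user with passwordless sudo and a correctly permissioned authorized_keys file, looks roughly like this; the key string and paths are placeholders, and the root-only steps are left as comments:

```shell
# Root-only steps on the slave (shown as comments, not run here):
#   useradd jenkins && passwd jenkins
#   visudo  ->  add:  jenkins ALL=(ALL) NOPASSWD: ALL

# Simulate the key placement in a scratch directory standing in for /home/jenkins.
HOMEDIR=/tmp/jenkins-home
mkdir -p "$HOMEDIR/.ssh"
echo "ssh-rsa AAAA-placeholder-master-public-key" > "$HOMEDIR/.ssh/authorized_keys"

# sshd rejects keys whose files are readable by others ("permission denied"),
# so tighten the permissions:
chmod 700 "$HOMEDIR/.ssh"
chmod 600 "$HOMEDIR/.ssh/authorized_keys"

stat -c '%a' "$HOMEDIR/.ssh/authorized_keys"   # prints 600
```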
Okay, so this is done, but on the Jenkins server you still won't see any slave machine. Where can you check whether you have that slave machine or not? You have to go into Manage Jenkins, under Manage Nodes. Right now you can see there is no slave available there; no Apache slave yet. How to add that slave: click New Node, give it a name like apache-slave, and choose Permanent Agent. Let's copy the IP address of the machine. The remote root directory will be /home/jenkins, you can give it a label like apache-slave, the host is the IP address, and you have to provide a credential; right now nothing is there, so I am going to add the credential I have already configured. For host key verification I choose no verification, because otherwise it will ask for it unnecessarily.
Let's see whether my Jenkins slave will come up or not; I already know my attempt is going to fail. Launch agent: it fails. Can anybody tell me why it failed? Look at the log and tell me: what is the prerequisite to run the Jenkins agent process successfully? Yes, Java is not there; that is why my slave process failed. Let's check whether Java was there or not: there is no Java, so you have to install Java first. And here is the critical thing you have to remember, because we are using Jenkins 2: it requires Java 1.8; don't install 1.6 or 1.7. Okay, my Java has been installed.
Let us try to bring that slave up again. On launch it is still failing: SSH host keys are not being verified; this time the SSH keys are creating the issue. Okay, I see why it is creating the issue: the key exchange was not finished. Let's try from the machine where it was running successfully. Got it; I will tell you what the issue was. The problem is that here we are using a username and password, while we actually have to configure keys, and that is what we did wrong. So let's configure keys first: add a credential of type SSH username with private key, with username jenkins, and for the private key we have to paste in the content of the private key file. Let's try with this; I don't want host key verification. Launch attempt: public key authentication failed. Okay, got it. As I said, some restrictions apply: port 22 should be listening, it should be open from everywhere, and public key authentication should be enabled. It was not working because these options were disabled, and you will face these issues because these are cloud machines; every cloud machine comes up with these restrictions, so you have to remember all these settings. Still I am facing the issue, so fine, this time I will go with the username and password instead.
Okay, so it is taking my username and password properly, and you can see what it is trying to do: it downloads and runs a remoting .jar under the remote root directory. If you go to the Jenkins home directory on the slave, you will see that the jar is there and has been executed, and now my slave is up and running successfully; you can see it here. Now what do we need to do? We have to tell the job that it has to run on this machine. How do you target the job? You configure it in your Jenkinsfile, or Jenkins script: this is the node parameter, and you provide the slave's label there, so the job runs on that machine. It will then clone this website.git repository, with this content, onto the slave.
What do we have to do next? Let's run it and see how it looks. It is throwing an error; let's check the error first. It is closing here: extra spaces in the script. Fixed that; now it fails again, but it is running on the Apache slave. Now guys, you tell me why it failed. It gives you proper logs; read and understand the logs and you can understand why it failed. It is showing a cloning error while fetching the remote repo. Okay, that's fine: it is failing at the clone. So who is responsible for cloning the repository, which tool will do it? Git. That's why I asked you earlier whether we require Git on this machine or not. You have to make sure that for whatever things you write in the Jenkinsfile, the prerequisites for those things are present on the machine. After installing Git on the slave, it runs successfully this time and clones the repository.
Now let's write the further steps to deploy index.html and the images folder into the proper location. As I told you, since the job is going to run as the jenkins user, we have to use sudo, and everything needs to be copied under the html folder; this is the deployment step. Let's run it and see what happens this time. It failed, on images. Understand: images is a folder, and a folder can never be copied with a plain cp command, so you have to write the proper command for a folder. We have to use -r, a recursive copy. This time it ran successfully and copied everything.
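The folder-copy failure above is easy to reproduce; here is a small sketch with made-up paths (a real deployment would target /var/www/html):

```shell
# Mock site content and a mock web root.
mkdir -p /tmp/site/images /tmp/webroot
echo "<h1>hello world</h1>" > /tmp/site/index.html
echo "img-bytes" > /tmp/site/images/logo.png

# A plain cp copies the file but refuses the directory...
cp /tmp/site/index.html /tmp/webroot/
cp /tmp/site/images /tmp/webroot/ 2>/dev/null || echo "plain cp fails on a folder"

# ...while a recursive copy deploys it.
cp -r /tmp/site/images /tmp/webroot/
ls /tmp/webroot/images/   # prints logo.png
```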
Let's see whether I get my page this time: hello world. And if I type /images, the images folder is served as well. Now, this is a small Jenkinsfile. You can either maintain multiple Jenkinsfiles, one for each type of work, or, the other way, you write one generic Jenkinsfile which can handle any type of work. That generic Jenkinsfile is going to be long sometimes, so you have to make yourself comfortable writing Jenkins scripts, or Jenkinsfiles. I can do the same thing by copying this script into the Jenkinsfile in the repository and calling that Jenkinsfile from the job. So this is one use case where I am just cloning everything and deploying it; no compilation is happening.
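Pulled together, the clone-and-deploy pipeline built in this walkthrough looks roughly like the following Jenkinsfile; the agent label, repository URL, and web root are placeholders:

```groovy
// Declarative sketch of the static-site pipeline from this walkthrough.
pipeline {
    agent { label 'apache-slave' }   // run on the Apache slave node
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/example/website.git'   // placeholder repo
            }
        }
        stage('Deploy') {
            steps {
                // jenkins user needs passwordless sudo; -r handles the images folder
                sh 'sudo cp -r index.html images /var/www/html/'
            }
        }
    }
}
```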
Now let's see a case with compilation, using Maven. I have one job; let me check its name. By the way, if you want to do some R&D without making changes in your main script, just click on a job run that succeeded, go into Replay, and you can see the script, change it as per your requirement, and do whatever you want to do; you can easily experiment that way. Okay, so this job is running on my master machine; let me check whether Tomcat is installed on this machine. Yes, Tomcat is up and running here, so I am going to run everything on the master only. What is this master job trying to do? It is fetching some code, building the package using Maven, and deploying that package to this Tomcat server. Let's build. You can see it is using Maven to compile my code, and it deployed successfully into my destination folder. Let me check which port my Tomcat is running on: it is running on port 8081, so if I copy the same URL and provide the context root, you can see my application has come up. If you still have any doubt, I will go to the destination folder and remove everything from there. Nothing is there now, and if I refresh the page, nothing is found; it throws an error that the context root hello-world-war-1.0.0 is not available.
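The Maven job just described can be sketched as a scripted pipeline; the repository URL and Tomcat path are placeholders:

```groovy
// Scripted sketch: fetch the code, build with Maven, deploy the war to Tomcat.
node('master') {
    stage('Checkout') {
        git url: 'https://github.com/example/hello-world.git'   // placeholder repo
    }
    stage('Build') {
        sh 'mvn -B clean package'   // produces target/*.war
    }
    stage('Deploy') {
        sh 'sudo cp target/*.war /opt/tomcat/webapps/'   // placeholder Tomcat path
    }
}
```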
Let's try to rebuild the pipeline. Okay, let's refresh the page; if I show you the contents now, everything is getting copied again. Who is doing all this? My pipeline: it will clone the code, compile it, build it, and deploy it. This is the way your work is going to happen in any organization; you have to write this type of Jenkinsfile. There are two types of pipeline syntax presently in the market: one is declarative and the other is scripted. I hope you guys are aware of these types of Jenkinsfiles. So this is the way you can write your own pipelines; let me check if I have an example of a big pipeline, then I can show you something. You can see this one is a somewhat big pipeline.
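Side by side, the two styles discussed here look like this (minimal sketches, not from the video):

```groovy
// Declarative: starts with a pipeline block and uses agent.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'echo building' }
        }
    }
}
```

```groovy
// Scripted: starts with node, and plain Groovy control flow (if, loops) is allowed.
node {
    stage('Build') {
        sh 'echo building'
    }
}
```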
This one is scripted. I will show you how to understand whether a pipeline is scripted or declarative. If you see an if statement used on top of a stage, then it is scripted; that is just for quick understanding. The other marker is that declarative pipelines always start from a pipeline block; all of these here are scripted, while declarative ones start from pipeline. Let me show you: if you go to the Jenkins documentation, you can see all the declarative pipelines start from pipeline, and all the scripted ones start from node, like this. That is the difference between declarative and scripted, because there is no node step available in declarative; node is a step you can generate from the GUI snippet generator and write in scripted very easily, but if you try to use node in declarative it will throw an error, because in declarative there is agent instead. So this is the way you can compare them and figure it out.
whether you are working on a declarative or you are still a lot of people I have
found that they don’t know the difference between declarative and
scripted okay so this is one of the declarative partner scripting pipeline I
was returning so it depends on so these changes will call on the basis of
environment variable that means dynamically stages is going to be added
if that variable is there for example if my pipeline parent or check out variable
is yes then only this stage is going to be included in my pipeline
otherwise it won’t include same thing is happening here so this is a scripted
pipeline dynamically your status is going to be added on the basis of your
if conditions simple if conditions are has been applied in all the stages and
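That conditional-stage idea can be sketched in scripted syntax like this; the variable name RUN_CHECKOUT and the repository URL are illustrative, not the ones used in the actual pipeline:

```groovy
node {
    // This stage is added to the pipeline only when the environment variable asks for it
    if (env.RUN_CHECKOUT == 'yes') {
        stage('Checkout') {
            git url: 'https://github.com/example/app.git'   // placeholder repository
        }
    }
    // Unconditional stage: always part of the pipeline
    stage('Build') {
        sh 'mvn -B clean package'
    }
}
```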
Those stages will be called on the basis of those conditions. Okay, so this pipeline is fully tested and it is running in a production environment. So try to understand this pipeline — you can just go and read it and try to modify it yourself, because it doesn't require many dependencies; you would just need servers that follow this pattern. You can modify it, provide your own server names, and it can work in your environment. Okay, so this is regarding your project revision, or I can say project demonstration — how it looks. Definitely you are not going to write only this type of small Jenkinsfile; you have to work seriously on the Jenkinsfile, because people want whatever you write to be repeatable for any kind of requirement. That's why this pipeline has a lot of capability to handle any kind of requirement, and that's why I told you it works using conditions — it is a condition-based pipeline. I just have to set the environment variables and it will act according to the requirement. It can run only for the building purpose, or it can run end to end — meaning it will build the code and deploy the code to production as well. So everything is going to be maintained from here: if a stage fails, it will send the required mail notification to the required team, and if the run passes successfully, it will also send the required email to the required team. So this is the way this pipeline works. Okay, so this is about your project.
Today in this session we are going to discuss the top DevOps interview questions that can be asked in your next DevOps interview. All right, so let's go ahead and get started with the first slide. Basically, we have divided all our interview questions under these domains: continuous development, then virtualization and containerization, continuous integration, configuration management, continuous monitoring, and in the end continuous testing. We are going to follow this sequence while we discuss the questions. Right, so let's go ahead and start with the first domain, which is continuous development, and let's see what our first question is.
Our first question asks us: can you explain the Git architecture? Now, this is a fairly important question, the reason being that only if you understand the underlying basics of how Git works will you be able to troubleshoot a problem when you face it while working in a company as a DevOps engineer. All right, so let us try to explain what it basically is and what its architecture looks like. Most of you might know that Git is a distributed version control system. Now, what is a distributed version control system? Let us explain it using a diagram. In a distributed version control system, your repository is distributed among the people who are contributing to it, and that is why it is called distributed. That means anyone who wants to make a change to the code present in this repository has to first copy the repository onto his local system, commit the changes to his local copy of the repository, and only then can he push the code changes, the feature additions, and everything else to the remote repository. Nobody can work directly on the remote repository, and this is the main principle of how Git works — that is the reason it is called a distributed version control system. Right, so if we were to talk about
the life cycle — the steps to follow if somebody wants to, say, upload or change some code present in a remote repository — the first thing they have to do is pull the repository from the remote system. Once they pull the repository, it becomes their local repository; they change whatever files they want to change, and once they are done with the changes, they do a git commit — that is, they commit the files to the local repository. Once the files have been committed, they are pushed to the remote repository so that they become visible to anyone and everyone who pulls this project the next time. All right, and this is how the whole Git architecture works. I hope you now understand the working of Git and what exactly its architecture is.
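That life cycle can be sketched as a console session using only local repositories — a bare repository stands in for the remote (e.g. GitHub), and all paths and messages here are illustrative:

```shell
# Sketch of the Git life cycle with a local bare repo standing in for the remote.
set -e
rm -rf /tmp/git-demo && mkdir /tmp/git-demo && cd /tmp/git-demo
git init --bare remote.git                 # the "remote" repository
git clone remote.git local                 # step 1: copy the repository locally
cd local
echo 'hello' > index.html                  # step 2: change files in the local copy
git add index.html
git -c user.name=demo -c user.email=demo@example.com \
    commit -m 'add index page'             # step 3: commit to the local repository
git push origin HEAD                       # step 4: push so everyone sees it on the next pull
```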
Moving forward, the next question says: in Git, how can you revert a commit that has already been pushed and made public? Right, so basically you have made some changes in the code, you committed those changes to your local repository, and you have also pushed the changes to the GitHub repository. Now, if you have a CI/CD pipeline in place — which basically means that the moment you commit to Git, it automatically takes the code and deploys it to a server — then probably the code that you pushed has also been deployed to a server, and that is when you come to your senses and realize the code is wrong, and you quickly have to change it so that everything works again. Now, there is a very quick fix that probably every DevOps engineer employs whenever there is a problem on the production server. What is that quick fix? It basically says: whatever last commit was working perfectly, just roll back to that, so that everything becomes normal again until you have fixed the new commit. That is basically the intention behind the revert procedure. Now, how can you implement the revert procedure? It can be implemented using the git revert command, and let me show you a quick demo of the git revert command so you know how you can implement it on a computer.
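As a quick sketch before the demo, here is the revert flow in a throwaway local repository — the file names and commit messages are illustrative, not the ones from the demo:

```shell
# Sketch of git revert: undo the latest commit with a new, opposite commit.
set -e
rm -rf /tmp/revert-demo && mkdir /tmp/revert-demo && cd /tmp/revert-demo
git init -q
git config user.name demo && git config user.email demo@example.com
echo 'background: 2.jpg' > style.txt
git add style.txt && git commit -qm 'original background'
echo 'background: 1.jpg' > style.txt                   # the "bad" change
git add style.txt && git commit -qm 'changed background'
git revert --no-edit HEAD                              # roll back the bad commit
cat style.txt                                          # back to the original content
```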
All right, so this is my terminal, guys. Basically, I will SSH into an AWS server, and I am in. I have a GitHub repository that I have created for demo purposes. Like we discussed, the first stage in the life cycle of Git is to clone the repository, so we'll just copy this address — one second — yes, we'll copy this address, come here, and type git clone and then the address. Okay, now this project is basically a small website that I created. In order to see that website, we will have to put this code inside our Apache folder, so let us go inside the Apache folder, which is present in this directory. All right, now I'll do a quick git clone along with the repository address and hit enter; if I do an ls you can see there is a folder called devopsIQ which has been created inside this folder. I will cd into devopsIQ and do an ls, and you can see there's one more folder called devopsIQ. All right, let's go inside that, and if I do an ls, these are the two files which are present inside my codebase. Okay, now if you want to see what this website actually looks like right now, I can just go here and type in the IP address — it's 18.222.123.58/devopsIQ/devopsIQ — and this is how the website looks for now. Now I have to make some changes so that the background becomes a little better. So I'll just go back here, do a nano, and change the code of the website — I have an image in the images folder; let me change it to 1.jpg. All right, let us save it. Okay guys, a quick info: if you want to become a certified DevOps professional, Intellipaat offers a complete course on DevOps which covers all the major concepts and all the tools a professional should know; for further details you can check the link in the description. So now let us continue with the session.
Once you've saved it, the next thing you have to do is commit the changes to your local repository, so let us do that. First I'll have to add the changed files to the repository; now I'll commit the changes, and the message will say "changed background". All right, so the changes have been committed, and now I'll push these changes to the remote repository — so the username goes here, and the password is this. Now, before making these changes, let me quickly show you the code that you currently see, before I push anything to this repository: you can see the code is images/2.jpg, and I've changed it to 1.jpg. So let me hit enter and let's see if our code gets changed over here. Now if I do a refresh, you can see the code has been changed just now — it says forty-four seconds ago the code was changed. Awesome. So now, because my code has been changed, if I go to this website and hit enter, you can see the background is now different. Now, say I realize that this change I made is probably wrong, and I want to revert to the older commit that was actually working. All right, so I'll just come back to my terminal and clear the screen. The first thing you do is a git log — now you get a log of all the commits you have made to this particular repository. This is the commit that you have just applied, and it is causing you a problem, so just copy its commit ID, and then go ahead and do a git revert — so, git revert and then the ID which you just copied — and hit enter. Once you do that, it'll show you the information about this particular commit; just review everything, and you can see that the commit has been reverted. Now, I have not pushed the changes yet, but if I come back here and hit enter, you can see the older website comes back, because the code has now been changed. And if I want to make these changes on the remote repository as well, all I have to do is git push origin master; it asks me for the credentials, and the changes have been pushed. Right, now if I come here and look at the code — if I do a refresh — you can again see that the code has changed back to 2.jpg, which is our earlier code. All right, so guys, this is how you can do a revert on a commit and a push that you have made to your remote repository as well. So if you encounter any problem while working as a DevOps engineer, you should remember this session, where I showed you how to revert a particular commit. All right, so with that, let's move on to our next question, which says: have you ever encountered failed deployments,
and how have you handled them? Now, see, any DevOps engineer in the world will have faced a situation in which the things he had planned didn't go according to plan. That absolutely happens, and if somebody asks you in a DevOps interview whether you have committed mistakes, you should never say no just to impress them. If you have truly never made a mistake, that's awesome, but I know that every DevOps engineer working in the industry will have faced a problem while working and will have made a mistake while deploying things. Now, the important key takeaway from that kind of learning should be that whatever mistake you make, you learn from it and you never commit it again — and that is basically the intent behind this question as well: the interviewer wants to know, if you made mistakes, what did you learn from them? Okay, so if an interviewer were to ask me this question — obviously I have encountered failed deployments, and what have I learned from them? I'll just give you the best practices that I think are viable for any DevOps engineer working in the industry. So, the first thing
everyone should follow — make it a thumb rule — is to automate code testing. Not only does it save time, because your tester no longer has to wait for your developer to push the code before checking it — the tests can check it in real time, because you have written a script for the application, and all the major tests, which are pretty common, can be done using automated code testing — it also removes the part where a human error can occur. When you work with people, people make mistakes; but if you write code which tests each and every functionality, that code will never make a mistake, and that is why you should always automate things as far as possible.
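As a flavor of what "automate the testing" can mean at its simplest, here is a hypothetical shell smoke test a pipeline could run before deploying a static site — the file names and the expected string are made up for illustration:

```shell
# Hypothetical smoke test: fail the pipeline if the site's key files are
# missing or the homepage lost its expected content. Names are illustrative.
set -e
SITE=/tmp/site-demo
rm -rf "$SITE" && mkdir "$SITE"
printf '<html><body><img src="images/2.jpg"></body></html>\n' > "$SITE/index.html"

test -f "$SITE/index.html"            # the homepage must exist
grep -q 'images/' "$SITE/index.html"  # it must still reference the images folder
echo 'smoke test passed'
```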
Right, so in my example, what happened was that there was a commit to the repository which was a feature addition, and the tester did not check all the functionalities — he forgot to check some functionalities that could impact the other components of my product — and because of that, when it got pushed to production, disaster happened: everything stopped working, and that was only because the testing did not happen properly. So for all the critical processes of your website or your product, you can create code which tests the website, and that would basically close down most of those mistakes. All right, the next thing: you should always use Docker for a consistent environment, and this is
basically the ideology behind DevOps — these kinds of problems, where a tester could not run the developer's code on his computer while the developer insisted everything was working fine on his own system, are exactly what Docker solves. So use Docker as much as you can for the same-environment problem that you might face. Then, we should always use microservices. Now, when you are working in a company, it could be that the product is in the legacy phase and hence built as a monolith, but you should never encourage that kind of architecture. The reason being: say you did a bad commit or a bad push to the production server — it should not impact the other components of your product. If you have done something to search, and it's a bad commit or a bad push, the only functionality that should be impacted is your search functionality, not the others. That is the sole reason why we should use microservices — that is, we should divide an application into small products which we deploy on servers, and these products should be independent of each other. In a monolithic architecture, all these components are coupled — they have dependencies on each other — but in a microservices kind of architecture you remove that dependency, so that even if one component fails, it does not impact the whole application. The fourth point is that you should always
overcome risks to avoid failures. Now, this basically means that if there is a code change or a feature addition which works sometimes and sometimes does not, and you are not able to figure out exactly why that is happening, it is better to wait and troubleshoot it than to push it just to meet your deadlines, because the latter can cause you a big problem in production. When you are in a company — probably a company like Airtel, or a company like Samsung or Ericsson — each second of their website's uptime brings in money; so if your website is down for 30 seconds, that could amount to a huge loss, and that would be on you. So, for you not to face that kind of situation, always be 100% sure before you make a change or a release onto the production server. All right, so this is the end of the domain of continuous development; let's go ahead now and talk about virtualization and containerization. All right, so let's start with the first question of this domain, which says: what is the difference between virtualization and containerization? Now, this is a very important question, guys, because most of us get confused between virtualization and containerization, so let's see what the differences between these two things are.
So, virtualization is nothing but installing a new operating system on top of virtualized hardware. What does that mean? Basically, there is software, like a hypervisor, which specializes in virtualizing hardware. So if you have a server which has around 64 GB of RAM and 1000 TB of hard disk space, with software like a hypervisor you can take that capacity and divide it up among multiple operating systems — you can deploy multiple operating systems on the same hardware by virtualizing it. The operating system will feel that, say, if you virtualize 1 GB of RAM from the whole system and around 100 GB of storage, it only has 1 GB of RAM and 100 GB of storage available to it; it cannot go beyond that, the reason being that it does not know of the hardware beyond the hypervisor software. All right, so in virtualization you basically have a hypervisor which sits on top of your operating system and virtualizes the hardware beneath it. Then you have a guest operating system: once you have virtualized the hardware, you install guest operating systems on top of that. The best example for this would be VirtualBox — you install VirtualBox, and then you can install operating systems in VirtualBox with a given spec that you decide. Once you have installed the guest operating systems, on top of them there would be the binaries and libraries that you download, or that came with the operating system, and on top of that you have the applications which would be running. So the key takeaway from virtualization should be that the whole operating system is installed — from the kernel level to the application level, everything is fresh, everything is new. Now, let's talk about
containerization. The thing in containerization is that, on top of the host operating system, you install a software called the container engine. Now, the container engine is just like any other software — like you have a hypervisor, you have a container engine — but the container engine does not involve installing a whole operating system. For example, if you want to run a container for Ubuntu on, say, a Mac machine, you can do that: in that container you will basically have the bare minimum libraries that amount to the Ubuntu operating system, minus the kernel. So in a container you do not have a kernel — the kernel used is always that of the host operating system — and this is the main difference between virtualization and containerization: in virtualization you have a separate kernel present for the virtual operating system, but in containerization you do not. That is the reason containers are very small: they have the bare minimum libraries required for the container to behave as a particular operating system, but the container itself does not contain any operating system; it runs on the same kernel on which the host operating system resides. All right, and this is basically the main difference between virtualization and containerization.
Moving forward, the next question says: without using Docker — that is, without using Docker to get into a container — can we see the processes that are running inside a container of the Docker container engine? All right, so this relates to the same fact: if you can see, from the outside, the processes of a container running inside the Docker container engine, that basically means the processes are running on the same kernel as the host operating system. The processes running in the Docker container engine appear in addition to whatever is running on the host operating system, and you can see them using the ps aux command. For the host operating system, a container's process is just like any other software or process that it has to run, but the container thinks it is running inside its own operating system, which it actually is not. So, can you see the processes? The answer is yes — you can see the processes running inside a Docker container. And how can you see them? Let me demonstrate
it to you. Okay, so we have come back to our AWS machine — let me clear the screen. All right, if I do a docker ps right now, you can see there are no containers running on this system as of now. Now what I'll do is run a container for Ubuntu: I'll do a docker run -itd and then ubuntu. All right, this ran a container for me, and if I do a docker ps now, you can see there's a container running which is based on the ubuntu image. Now I'll go inside this container: I will do a docker exec -it — sorry, I forgot the container ID — so, the container ID, and then bash. If I do that, I'm inside the container, and there's no process running inside this container as of now. Now, let me duplicate this terminal — let me quickly do an SSH into the same server again, so that I'm on it there as well. Okay, great. So if I do a ps aux, these are all the processes running on the operating system right now. But let us make it a little simpler: let me see all the processes which have the word watch in them. So, to make it clearer for you, these are the processes which have the watch keyword in them — there are basically four such processes running. Now, what I want to do is launch a watch process inside this container. What is that watch process? It is basically going to watch a particular command at a set interval of time — and that command, let's say, is the ls -l command. So what is it doing? It is keeping a watch on the command ls -l every one second — you can see the time over here incrementing every second — and it's continuously keeping a watch on all the files inside the container. Okay, now — this is the dollar prompt, which tells us we are outside the container right now — if I again run the same command, that is, if I again search for processes which have the word watch in them, I can actually see that there is a new process running over here, and this process is running inside the container, which I am able to see from the host operating system level. The host operating system is treating this particular process as if it were running on its own system — because the container and the host operating system share the same kernel, the host takes this process as if it were running inside of it. But if we look closely, this watch command is running inside the container. Let me just quickly stop it — you can see we are still inside the container, and we have stopped the watch command — and if I go here and refresh, you can see that this watch process is now gone from the list that was shown here before. And this is exactly what we wanted: we wanted to see a process that was running inside a container from outside the container, that is, from the host operating system, and that is exactly what we just did.
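To recap the demo, the command sequence looks roughly like this — a console sketch, assuming Docker is installed; `<container-id>` is whatever docker ps reports:

```
$ docker run -itd ubuntu                  # host: start an Ubuntu container in the background
$ docker exec -it <container-id> bash     # host: open a shell inside the container
# watch -n 1 ls -l                        # inside the container: start a long-running process
$ ps aux | grep watch                     # host (second terminal): the container's process
                                          # is visible, because both share one kernel
```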
All right, so for the question — without using Docker, can you see the processes that are running inside a container? — the answer is yes, you can. All right, the next question is: what is the Dockerfile used for? So, a Dockerfile is nothing but a text document used to create an image, starting from an older image and adding some files to it. It's basically like a script that you run in Linux which does all the required things for you. For example, I might need an Apache image, and I want my website to be put inside the /var/www/html folder in that Apache container. Now, to do that without a Dockerfile, I would first have to download the Apache image — so I would probably type docker run -itd and then the Apache image — and once I have done that, I would exec into the container, go to the directory /var/www/html, probably do a git clone of the website that I want, and then my website would be available in that container and hence I would be able to use it. That is one way. The second way is that I can create a Dockerfile which will build this image for me, without me having to do all those manual things I just told you about. All right, so let's see how we can do that. Let me just exit this container, and let me remove the containers which are running on my system right now. Okay, if I do a
docker ps — now it's clean. Now, what I want to do is run this particular container: docker run -itd -p, where I map the host's port 83 to this container's port 80, and I run it as a daemon so that it runs in the background — and there it is. Okay, so I have the container running, which is this, and I want to copy the website into it. Let me do a docker exec into this container: -it, the container ID, and then bash. I want to go inside this particular folder; if I do an ls over here, you can see there is an index.html and an index.php. So it's exposed on port 83, which means that if I go to a browser and hit the server's IP address on port 83, I should be able to see this Apache page — and this is the container I just ran over here. Okay, what I want to do is copy the code of my website into this particular directory. Now let's see how we can do that. Let me just exit this container, do a docker ps, and do a docker stop on this particular container. What would that do? Basically, if my Apache was running over here, it should stop once I have stopped this container. Okay, so it's stopped — if I do a refresh over here, you can see "the site can't be reached", which is exactly what we want. Okay, now let me do a git clone of my GitHub repository — do a git clone — all right, awesome. Now I'll go inside this folder, and basically I want to copy this particular folder into the container. For doing that, let's create a Dockerfile, and what I want to say is: in my web-app image, I want to add the folder devopsIQ — and where do I want to add it inside the container? In this particular directory, inside devopsIQ. Okay, fair enough, and that is it — that is all you have to do. I'll just come out of this editor, and I'll now do a docker build of this Dockerfile with the name test. It says it successfully built an image, and it has been tagged as test. Great. Now let me run this image: docker run -itd -p — I run it on port 84, as a daemon, and run the image. Okay, great. Let's first see if the container is working on port 84 — yes, the container is working. Now if I go inside devopsIQ, what do I see? Great — my website is now available inside the container, by simply writing a Dockerfile to do that, and this is exactly what we wanted.
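The Dockerfile from this demo would look roughly like the sketch below; the base image name is a placeholder, since I am reconstructing it from the walkthrough:

```dockerfile
# Base image: the Apache-based web-app image used in the demo (placeholder name)
FROM example/webapp
# Add the website folder into the web root inside the container
ADD devopsIQ /var/www/html/devopsIQ
```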
Awesome, guys. So, what is the Dockerfile used for? It is basically used for creating an image without having to do all the manual work of adding your files and everything. All right, now, once this image of yours is ready, you can push it to Docker Hub, and anybody in the world can download it and use your website on their local system. Great. Now, the next question is: explain
container orchestration okay so for so till now we have seen that you know we
can deploy a container we can use it we can probably deploy an application on it
and we can use it on the web browser right but it is not that simple
when we talk about a website like Amazon or a website like Google right it has a
lot of components with it for example on Amazon you would see that you have a
comment section then on the home page you see that there are a lot of products
which have the prices the ratings now each and every component the prices the
ratings the name of the product the image of the product the comment section
each and everything is basically a micro source it is a small part of an
application which is running independently of all the other parts of
the website right and all of this is possible using containers so basically
what they would have done is they would have run each and every component inside
a container now the problem over here is now when you have a website like Amazon
you would be dealing like you will be dealing with minimum like 10 or 11
containers for one particular copy of that website or one particular instance
of that website right now when you’re dealing with ten eleven containers these
containers have to be working in conjunction to each other they should be
in sync with each other they should be able to communicate with each other
right and they should also be able we should also be able to scale a
particular container in in case it goes down for example the comment section
container it goes down for some reason now if it
goes down we have to keep a watch on it and we have to redeploy it if it goes
down and all of these activities which I just told you comes under container
orchestration right if you were to manually deploy these containers on
docker you will have to keep a manual check on all these containers but
imagine when you have thousands or ten thousands of containers that you are
dealing with in those kind of scenarios you need container orchestration now
Now, container orchestration can be done using various software: you have a tool called Kubernetes, and before that there was a tool called Docker Swarm. These made life easier by doing all the manual work for us: they check the containers' health, they can scale them in case they become unhealthy, they can notify the administrators by email in case something happens, and they can also run monitoring software that gives you a report on the health status of all the containers running inside the cluster. And this is only a very small part of what a container orchestration tool can do. So, like I said, when you work with multiple containers you have to take note of a lot of things, and that is possible using container orchestration tools like Kubernetes and Docker Swarm.
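To make the idea concrete, here is a minimal sketch of handing this watching-and-redeploying work to an orchestrator, assuming Kubernetes; the service name and image are hypothetical placeholders, not something from the demo.

```yaml
# Hypothetical Deployment for a "comments" microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: comments
spec:
  replicas: 3                 # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: comments
  template:
    metadata:
      labels:
        app: comments
    spec:
      containers:
        - name: comments
          image: example/comments:1.0    # placeholder image
          ports:
            - containerPort: 80
          livenessProbe:                 # unhealthy containers get restarted
            httpGet:
              path: /
              port: 80
```

Once applied with `kubectl apply -f`, redeploying a crashed comment-section container is no longer a manual job; the orchestrator reconciles the running state back to this declaration.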
Okay, so the next question is: what is the difference between Docker Swarm and Kubernetes? They're both container orchestration tools, as we just saw, but why would I choose one over the other? If I were to choose between Kubernetes and Docker Swarm, which one should I pick? All right, so let's look at the differences between each one of
them. So the first difference, which is probably the most important, or I'd say the deciding factor when you have a short deadline and have to deploy a project: installing Docker Swarm is very easy, because it comes prepackaged with the Docker software. If you've installed Docker, Docker Swarm is already on your system; you don't have to worry about anything. On the other hand, installing Kubernetes is a tough job: Kubernetes has a lot of dependencies, you'll have to look at the system, the operating system it's running on, and a host of other things, and hence it is hard to install. But the moment you have it installed, Kubernetes becomes very helpful because of the features it offers. Which brings us to our second point: Docker Swarm is faster than Kubernetes, the reason being that it has fewer features, which makes it a very light piece of software and hence faster. So if you want to use Docker Swarm, you should read up on what Docker Swarm does not offer and what Kubernetes does; if you feel you don't need all the features Kubernetes offers, you can go ahead with Docker Swarm and deploy your application faster. Kubernetes, like I said, is complex and has a lot of services and features, because of which its deployments are a little slower compared to Docker Swarm. The third, and most important,
point is that Docker Swarm does not offer auto-scaling, meaning if your containers go down, or if your containers are performing at their peak capacity, there is no option in Docker Swarm to scale those containers automatically. On the other hand, because of Kubernetes' monitoring services and its host of other features, you do have the option of auto-scaling your containers, which basically means the containers can automatically be scaled up and down as and when required, and this is an amazing thing that Kubernetes handles for us.
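As a sketch of what that looks like on the Kubernetes side (the Deployment name `comments` and the thresholds are made-up values for illustration), a HorizontalPodAutoscaler declares the scaling policy:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: comments-hpa
spec:
  scaleTargetRef:             # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: comments
  minReplicas: 2
  maxReplicas: 10
  metrics:                    # add replicas when average CPU crosses 80%
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

Docker Swarm has no built-in equivalent; you would have to script something similar yourself around `docker service scale`.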
All right guys, so those were the questions around the domains of virtualization and containerization. Moving ahead, our next domain is continuous integration, so let's shed some light on what continuous integration is. The first question itself is: what is continuous integration?
So, continuous integration is basically a development practice, or I'd say a stage, which connects all the other stages of the DevOps lifecycle. For example, when you push code to Git, like in the example we took earlier, you might have provisions so that the moment the code is pushed to the remote repository it automatically gets deployed on the servers as well. That would be possible using integration tools that integrate your Git repository with your remote server, and that is exactly what Jenkins does: it's a continuous integration tool which helps us integrate the different DevOps lifecycle stages together so that they work like one organism. This is what continuous integration means. Now that we've discussed what continuous integration is, the next question says: create a CI/CD pipeline using Git and Jenkins to deploy a website on every commit to the main branch. So on every push that you make to the remote repository, the code should automatically get deployed on a remote server. All right, this is something we're going to do just now. But before going ahead, let's see the architecture for
this kind of setup. All right, so this is how the whole thing is going to work: the developer commits the code to GitHub; once GitHub sees a change on the branch we specified, it triggers Jenkins, which in turn pulls the website from the GitHub repository and pushes it onto the build server on which we want the website deployed. Sounds awesome? Great, now let's go ahead and do this demo. For that we'll have to SSH into our server, so let us do that.
Okay, so I'm in. Now let me clear the screen. First let's check whether Jenkins is running on this machine, so let me check the status for Jenkins: if I do a `service jenkins status`, I can see that the Jenkins service is active. Awesome. So I'll just go here to the Jenkins web interface, which is available on port 8080, enter my credentials, and this is how the Jenkins dashboard looks. Now our aim is to create a job that will push a website we upload to GitHub onto a particular server. So let's create a new job first: let's call the job 'demo job', mark it as a freestyle project, and click OK. This will create a job in Jenkins for us. All right, our job has now been created. What we want to do is take code from my GitHub, so I'll have to specify the GitHub repository over here, and similarly I'll have to say that I want to trigger the build the moment anything is pushed to my remote repository. And this should be it. Great, so I've specified that anything pushed to my master branch should trigger a build on Jenkins. Okay, and what should
this build do? What set of commands do I want to run once the build is triggered? First, I want to remove all the containers running on my system, so I'm going to clean up. For that I'll say `sudo docker rm -f` with the IDs of all containers, which is going to clean out every container currently running on the system. Once this is done, I want to build the container that is going to have my website. Now how can we do that? For that I'll have to push the code to my GitHub,
which will have the Dockerfile as well. Okay, so we created a Dockerfile earlier; here it is: we have our Dockerfile created in the DevOps-IQ folder in my home directory. Now, what is inside this Dockerfile? We saw that if we write something like this in our Dockerfile, it would basically create an image with our code which is there on GitHub. All right, so we'll just push this code to our remote repository, add the commit message that we have added a Dockerfile, and push it to the remote repo. Great, so it has now been pushed to my remote repo. Now if I just go here and check whether my changes have gone through (let me quickly refresh): yes, I have the Dockerfile in my GitHub repository right now, committed 42 seconds ago. Awesome.
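The Dockerfile shown on screen is not captured in the transcript, but a minimal sketch of the idea (an Apache image that bakes in the site from a GitHub repository; the repository URL is a placeholder) could look like this:

```dockerfile
# Hypothetical reconstruction, not the exact file from the demo.
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y apache2 git
# Placeholder URL: substitute the project's actual repository.
RUN git clone https://github.com/example/devopsiq.git /var/www/html/devopsiq
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```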
Great, so now what I'll do is come back to my Jenkins job and add the build command `sudo docker build` for the Dockerfile. Now where is the Dockerfile? It will be downloaded into the Jenkins workspace, that is, /var/lib/jenkins/workspace/ followed by the name of the job, which is demo job, and that is it. Inside this folder I will have my Dockerfile, and Jenkins will build it and name the image, say, 'jenkins'. In the next step I'll do a `sudo docker run` with `-it` and then `-p`, and say I want to expose it on port 84, or say port 87. And what do I want to run? The image we named 'jenkins'. So this should basically do all the stuff: in the first command we remove any container running on the system; in the second command we build the Dockerfile available in the workspace (the workspace will basically have my project, pulled from the link I specified, saved under demo job); we name the created image 'jenkins'; and then we run this image, exposing it on port 87. Okay, so let's save it.
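Reconstructing the narration, the contents of the job's 'Execute shell' build step would be roughly the following three commands (the workspace path follows the standard Jenkins layout; the container's internal port 80 is an assumption):

```
# 1) Clean up: force-remove every container on the build server.
sudo docker rm -f $(sudo docker ps -aq)
# 2) Build the image from the Dockerfile Jenkins checked out into this
#    job's workspace, tagging it "jenkins".
sudo docker build /var/lib/jenkins/workspace/demo-job -t jenkins
# 3) Run the fresh image, publishing host port 87.
sudo docker run -itd -p 87:80 jenkins
```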
Awesome. Now what we have to do is configure a webhook. When I say a webhook, basically you want your GitHub to notify your Jenkins whenever there is a push to a particular repository. So in your repository, go to Settings and then to Webhooks. This is the webhook that I created for my Jenkins earlier, so let me create it again for you. All you have to do is click on 'Add webhook' and enter the URL of your Jenkins over here (in my case it is this one), followed by the keyword github-webhook, and that is it. Once you've specified that, just go down and click on 'Add webhook'. This should send a request to Jenkins, and if everything goes well it will say the last delivery was successful. Okay, so any changes that I make to my GitHub should now trigger a build over here. Now let me delete this other job, because I think it also gets triggered when any changes are made to my GitHub. Okay, great, so I just have this one job now. Awesome. Now let us see how it actually works.
So what I'm going to do is come back to the terminal, do an ls, go inside the DevOps-IQ folder, and make some changes in the code. I'll go into `nano index.html`. The first thing I do is change the title of the website; I'll call it 'Jenkins test website'. And I change the image from 2.jpg to 1.jpg, and that is it. Let us see what happens if I just push this website to my server. So first I have to add the changed files to my repository with `git add`, then `git commit`, and let me label this commit as 'test push'. Okay, done. Now let's push this to the remote repository with `git push origin master` and give the credentials. Awesome.
Now if you wait here, it should start a job. As you can see, there is a job queued for demo job, and it was automatically triggered by my GitHub. Okay, let me refresh this. The moment it shows you a red ball, that basically means your job has failed. So let's see what just happened, why our job failed. If you go here you can see the console output, just like this. Basically, we forgot to add a sudo here, and that is causing the problem. We can fix this by just going down, adding the sudo, and saving it. And again we'll have to change the code: let's call it 'Jenkins test2 website', do a Ctrl+X, Y, and now add our files to the local repository with `git add`, commit it as 'test push 2', and push this to master. I'll enter the credentials, and this should be it.
Okay, so let's see: our second job got triggered automatically, and it gives us a blue ball now; blue means that your job executed successfully. So let's check what happened. Which port were we deploying on? The folder was DevOps-IQ; I'm not sure of the port, so let's check the port that we specified. The port is 87, okay, so let's go to port 87. It's giving an 'unsafe port' error, so for troubleshooting let's check whether the container is running. Yes, the container is running on port 87, but the browser says it's an unsafe port. What we can do is change the port to, say, 82, and now let's just build the job from here: we'll click on 'Build Now'. The job has been completed, the port was 82, and yes, Apache is working. Now let's go inside the DevOps-IQ folder, and there you go: you have your website with the title that you pushed to GitHub. Now, one more time, for testing
purposes, let us push our code once more and see what happens. I will say that this website is the 'test 3 website', and I'll change the image as well, to 2.jpg. Okay, save it, do a `git add`, then a commit, and call it 'test push 3'. Now `git push origin master`, enter the credentials. Great, now let's check what will happen. Okay, so our build has started, and it has been completed. Great, so if I refresh just now, it says 'Jenkins test3 website' and the background has also been changed. So congratulations guys, we have successfully completed the demo.
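The Git loop that drives these builds (edit, add, commit, push) and the revert we will do in a moment can be rehearsed end to end in a throwaway local repository; everything below is illustrative (no remote is involved, whereas in the demo it is the final `git push origin master` that actually fires the webhook):

```shell
set -e
# Rehearse in a temporary repo so nothing outside it is touched.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# First version of the site.
echo "Jenkins test website" > index.html
git add index.html
git commit -q -m "test push"

# A later commit changes the title, like "test push 3" in the demo.
echo "Jenkins test3 website" > index.html
git commit -q -am "test push 3"

# Roll back: take the hash of the bad commit and revert it.
git revert --no-edit "$(git rev-parse HEAD)"

cat index.html   # the file is back at the previous version
```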
So basically, if you change anything in your GitHub, the website automatically gets deployed on your build server. And on top of this, just to make it more interesting, we can do a `git log` and revert the commit that we saw earlier: let's do a `git revert`, paste in the commit hash, agree to everything, and then push to master and enter the credentials. Everything has been pushed, the job gets triggered, the job is completed, and if I go here again, my website got reverted to the previous version. Awesome guys, so we have completed the demo: the question asked us to create a CI/CD pipeline using Git and Jenkins to deploy a website on every commit to the main branch, and we've done it successfully. Awesome, let's move on to our next domain, which
talks about configuration management and continuous monitoring. Awesome. So, what is configuration management and what is continuous monitoring? Let's understand. The first question: what is the difference between Ansible, Chef, and Puppet? Now, before understanding the difference, note that these are all configuration management tools. And what is configuration management? If you have, say, around 200 servers and want to install a particular piece of software on each of them, what will you do? One way is to go to each and every one of these servers and run a script, which installs the software on that one server only. The other way is to install a configuration management tool, with which you can install all this software, or control the configuration of all these servers, from one central place; that is exactly what configuration management means. Now, in configuration management you have many tools, like Ansible, Chef, Puppet, etc., and these three are the top tools used in the industry. So the question is: what is the difference between Ansible, Chef, and Puppet? All right, so let's go ahead and see the
differences. All right, so let's first talk about Ansible. Ansible is very easy to learn because it is based on Python, so you don't have to sweat much over learning Ansible's commands; if you know Python, Ansible is going to be a cakewalk for you. It is preferred for environments designed to scale rapidly. The thing with Ansible is that you don't have to install any client software on the systems to which you want to deploy configurations; Ansible just has to be installed on the master, and that is it, no other setup required. You can directly control the configuration of a client server, given that you have access to it. So it offers simplified orchestration: as I just said, you don't have to worry about installing software on the client machines, and Ansible can stand alone and take care of all the complications that come up when deploying configurations without an agent on the clients. A disadvantage of Ansible is its very underdeveloped GUI, meaning you essentially only get the CLI to work with, and it has limited features when we compare it with Puppet and Chef. Now let's talk about Chef: how is Chef different
from Ansible? It is Ruby-based, and hence more difficult to learn: Ruby is a language that not many people are acquainted with, so people might find it hard to get versed with Chef's commands. The initial setup is complicated compared with Ansible's: with Ansible I just had to install the tool on the host machine, with nothing on the client machines, but with Chef you do have to set up the client side, and hence it becomes a little complicated. But once all the setup is done, Chef is very stable; since it's a community product that has been contributed to heavily, it's a very stable product, and it offers you resiliency. So of course, when you are working on production servers, working with Chef would probably be a better idea than Ansible, because Ansible does not have as great a community as Chef. Chef is also the most flexible solution for OS and middleware management (middleware here basically means the software management part). Chef proves to be a great choice for configuration management because it is very reliable and very mature: it was probably among the first configuration management tools to come out, and because the community has contributed a lot to the project, it is very mature in its development as well. Now let's talk about Puppet.
Puppet can be tough for people who are beginners in the DevOps world, because it uses its own language, called Puppet DSL. The setup is smooth compared with Chef's, but a little harder than Ansible's, because with Puppet you use a master and an agent as well: you have to install the Puppet agent on the client machine, and only then can Puppet interact with the client. It has strong support for automation, so if you are planning configuration management that you want to automate, Puppet is very capable there; you can easily do the automation part using Puppet. However, it is not well suited to scaling a deployment: if you have, say, around 50 or 60 servers and plan to add more in the future, Puppet would probably not be the right choice for that kind of architecture. It is good to have when you run a stable infrastructure where you're probably not adding servers now and then; but if you are working on the cloud and do not know the capacity you'll be running at, Puppet would probably not be a good choice for managing the configuration of your clients.
Okay, our next question: what is the difference between asset management and configuration management? Asset management basically deals with resources and hardware, which we have to plan so that our IT workforce can work with maximum efficiency: planning your hardware, working out how many resources a particular team might need, and giving the right resources to the right people is what asset management covers. Configuration management, on the other hand, concerns not the hardware but the software component: which software is required by one employee on a team and which software is required by another, rather than taking the radical approach of installing every piece of software on every machine, which should not be done, because some software is licensed. So configuration management basically means installing the right software on the right system, the one on which a particular person or a particular workload is going to run. So, our next question is:
what are NRPE plugins in Nagios? Okay, so NRPE (Nagios Remote Plugin Executor) plugins are basically extensions to Nagios that help you monitor the local resources of client machines. You don't have to SSH into the clients to see how much memory or CPU is being used: Nagios being a monitoring tool, you just install the NRPE extension on the client machine and it gives you real-time data on the resources being consumed there. And obviously, when you're working in a production environment you'll be monitoring multiple machines; with the NRPE plugin installed on each of them, you can easily monitor all their resources from one central place. That is exactly what an NRPE plugin is. And the next question: what is the difference
between an active check and a passive check in Nagios? Okay, so in Nagios, if the monitoring data you're getting from your clients is being delivered by a Nagios agent, it is called an active check, the reason being that Nagios is actively involved in collecting all the data from your clients. But when you're dealing with systems on which you're not allowed to install any other software, or where the software itself can generate monitoring logs, then rather than Nagios collecting them, the software component pushes the logs to the Nagios master, which takes those logs and creates a graph, or the metric shown in the dashboard, for you. So, using logs published by some other software, Nagios creates the health-monitoring report for your client systems, and that is why it is called a passive check: Nagios is not involved on the client side at all; it is the software's own services that push the logs to the Nagios master. As for the architecture, or how this lifecycle actually works on the master itself: the logs are published to a queue, and whether it's an active check or a passive check, the logs have to be published to that queue so that the Nagios master can pick them up and create the required monitoring metric. So the queue is there for both a passive check and an active check; the only difference is in the agents: in an active check the Nagios agents are involved, while in a passive check third-party software tools are involved, which publish the logs to the Nagios master. All right.
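As a concrete sketch of the active-check path (the IP address and thresholds below are made up): the client's `nrpe.cfg` whitelists the local checks the agent may run, and the Nagios server calls them remotely with `check_nrpe`.

```
# On the client, in nrpe.cfg: commands the NRPE agent is allowed to execute.
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /

# On the Nagios server: pull the client's local resource status over NRPE.
/usr/lib/nagios/plugins/check_nrpe -H 192.168.1.10 -c check_load
```

In a passive check, by contrast, nothing like this runs from the server side; the client-side software submits its results to the master on its own schedule.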
So, our next question says: create a playbook to deploy Apache on a client server. Basically we have to do configuration management: without going into the client system, we have to install a particular piece of software on it.
Okay, so let me quickly SSH into my AWS machine. I have a slave machine that I've already configured, which can interact with my master: if I run an Ansible ping, you can see there is a server1 that I've configured, which has successfully responded to my master's request. Okay, now let me show you the server that is being managed. This is the server configured with my master; this is my client machine, and on this machine I will have to install Apache. If I go to the IP address of this machine right now, it says connection refused, the reason being that there is no Apache software installed on this particular server yet. All right, so let's install
Apache. Now, to do that you will have to write a playbook. What is a playbook? A playbook looks something like this: it's basically a YAML file that you'll have to create, and I have created one for myself. The part where you say where you want to install the Apache software is the hosts entry: my client machine is part of a group called 'servers' that I've created, so the hosts value is 'servers'. And where do you actually specify which group your machine belongs to? You specify that in /etc/ansible/hosts. As you can see over here, this is the group name, 'servers', and inside 'servers' I have specified a server1 client machine with its IP address. This is the IP address of my slave: if I compare it with my slave, it matches, and it has been configured over here. So I can refer to my server1 either through the 'servers' group or as server1 directly. If I clear the screen here, I can run `ansible -m ping server1` and it will reach out to my server1, or I can say 'servers' as well, because it's part of the 'servers' group.
Okay, so this is how it works. Now I want to install Apache, and for that I have to write a playbook, which looks like this. It's a YAML file: you start with the three dashes and then you specify hosts. For hosts I've specified that every machine inside the 'servers' group should have Apache installed on it. And what is the task? 'Install Apache': this is basically a name, you can specify anything over here. Then I've specified apt, because I want to use the apt package manager to install Apache at its latest version. Now I'll type `ansible-playbook apache.yml` and hit Enter, and it has started: it is running against the 'servers' group, gathering facts; it saw that it is able to communicate with server1, and now it is accomplishing the task of installing Apache.
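Reconstructing the on-screen playbook from the narration (the filename `apache.yml` and group name `servers` come from the demo, while the exact formatting and the `become: yes` privilege escalation are assumptions; the transcript only mentions hosts, a task name, and the apt module):

```yaml
# apache.yml: install the latest Apache on every host in the "servers" group.
---
- hosts: servers
  become: yes               # assumption: apt needs root on the client machines
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: latest
```

paired with an inventory entry in /etc/ansible/hosts along the lines of:

```
[servers]
server1 ansible_host=<client-ip>
```

and run with `ansible-playbook apache.yml`.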
All right, so it has been done successfully. If I go to my Chrome browser now and refresh the address, you can see that Apache is installed on this server automatically; I didn't have to SSH into the server at all, it all happened automatically. And if I had five or six machines running on AWS and wanted to install the software on them using Ansible, it would have worked the same way; it's only that in the 'servers' group I would have specified more IP addresses that my Ansible master could talk to. Okay, so this was the task of deploying an Ansible playbook onto a client server, without SSHing into that client server, all from one central location. So this is done.
Now let's move on to our next domain, which is continuous testing. What is continuous testing? We talked about continuous development, which is done using GitHub; we talked about continuous integration, which is possible using Jenkins; we talked about continuous configuration management, which can be done using Ansible; and next in line we have continuous testing. Once the code has been integrated with Jenkins and deployed on a server, the next thing is the automated testing we discussed under best practices earlier, and it can be done using a tool called Selenium, specifically Selenium WebDriver. So the first question is: list the technical challenges with Selenium. The Selenium tool is widely used for automated testing, but what problems do you run into with Selenium? If you're using Selenium, mobile
testing cannot be done: if you have developed an application for mobile, you cannot test it using Selenium. The reporting capabilities of Selenium are very limited. If your web application deals with native pop-up windows, Selenium will not be able to recognize those pop-ups and work with them. Again, Selenium is limited to web applications: if you have an application that runs on the desktop, say some software you have designed yourself, you cannot test it with Selenium; Selenium is only for applications that run inside a browser. And if you want to check whether there is some image on your web page, and that the image has some particular content, that is a little difficult to implement in Selenium. It's not impossible: you'll have to import some libraries and work around it, but natively Selenium does not support image testing. So the next question is: what is the
difference between the verify and assert commands? Let us see the differences. If you're using assert in Selenium and the assertion fails, the whole execution comes to a halt, whereas with verify it does not: execution keeps running through the rest of the lines in the code. Now, how is it helpful to halt execution whenever an error occurs on a particular line? It is helpful when you're dealing with critical cases. For example, if there are five test cases and case three fails, and cases four and five cannot execute because they depend on case three, then you would use assert for case three. In the same example, cases one and two do not create a dependency for any other test cases (cases three, four, and five do not depend on them), so for those we can use the verify command, which does not stop even if the test case fails; this lets us see, in one shot, everything that is working and everything that is not in our testing program. So, as I said, assert is used to validate critical functionality, and verify is used to validate normal-behavior functionality, that is, functionality whose failure does not create a dependency that stops other things from working.
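Selenium's WebDriver bindings only ship hard assertions; 'verify' is usually implemented as a soft assertion you write yourself. Here is a minimal sketch of the difference in plain Python (no real browser involved; the title string stands in for `driver.title`):

```python
failures = []

def verify(condition, message):
    """Soft check: record the failure but keep executing."""
    if not condition:
        failures.append(message)

def check_suite():
    title = "Jenkins test website"          # stand-in for driver.title
    verify(title.startswith("Jenkins"), "wrong title prefix")   # continues either way
    verify("test" in title, "missing 'test'")
    # assert is the hard check: a failure here halts the suite,
    # so the dependent checks below would never run.
    assert title.endswith("website"), "critical: wrong page"
    verify(len(title) > 0, "empty title")

check_suite()
print(failures)   # prints [] here, since every check passes
```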
All right, so the next question is: what is the difference between the setSpeed and sleep methods? setSpeed basically applies a delay before executing each command, at an interval that we specify: for example, if I want commands to execute at intervals of 5 seconds, I can specify that using setSpeed. The sleep method, on the other hand, suspends the execution of the whole program once, for a particular interval: for example, if you're doing a Selenium web test and the web page takes around 3 seconds to load, and you don't want testing to continue immediately, you can add a sleep of around 3 seconds, where it waits 3 seconds for the website to load and only then starts executing the tests that follow that line. So this is the difference between setSpeed and sleep.
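A small Python sketch of the two behaviors (`setSpeed` belongs to the old Selenium RC API, so it is imitated here with a hypothetical wrapper; `time.sleep` is the real standard-library call):

```python
import time

DELAY = 0.05   # stand-in for setSpeed(50): applied before *every* command

def run_command(action):
    """Imitation of setSpeed: each command waits DELAY before running."""
    time.sleep(DELAY)
    return action()

# setSpeed-style: the same delay precedes each of these commands.
results = [run_command(lambda: "clicked"), run_command(lambda: "typed")]

# sleep-style: a one-off pause at a single point, e.g. for a page load,
# after which execution continues at full speed.
time.sleep(0.2)
results.append("checked title")

print(results)   # prints ['clicked', 'typed', 'checked title']
```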
Okay guys, a quick announcement: if you want to become a certified DevOps professional, Intellipaat offers a complete course on DevOps which covers all the major concepts and also all the tools a professional should know; for further details you can check the link in the description. Okay guys, we've come to the end of this session. I hope this session on DevOps was informative for you, and if you have any doubts, feel free to comment below and we'll help you out. Thank you!

  • ๐Ÿ“Following topics are covered in this video:

    01:16 – what led to devops

    04:46 – what is devops

    06:24 – phases of devops

    08:09 – devops tools

    08:34 – jenkins

    08:53 – ansible

    09:32 – docker

    10:14 – puppet

    10:40 – chef

    11:16 – nagios

    12:14 – git

    13:08 – maven

    13:17 – advantages of devops

    17:12 – devops career

    19:45 – devops project

    43:04 – devops interview questions and answers
