
Terraform info and installation

 

Referred from: Learnitguide.net - Learn Linux, DevOps and Cloud

SELVA - Tech & ITOps (YouTube)

Infrastructure as Code (IaC):

IaC primarily manages the underlying infrastructure components that an application relies on. This includes:

  • Virtual machines (VMs) (virtualization abstracts the physical components into software, providing complete isolation from the host OS)
  • Networks
  • Storage
  • Databases
  • Load balancers (load balancing distributes network traffic across multiple servers so that no single server bears too much demand; spreading the work evenly improves application responsiveness and increases the availability of applications and websites. Modern applications depend heavily on load balancers)

IaC tools like Terraform enable you to:

Create: Provision and configure these infrastructure resources.

Use: Maintain and update the configuration of these resources.

Destroy: Decommission and remove these resources.
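
These lifecycle stages map roughly onto the Terraform CLI (a sketch; the actual resources come from your .tf configuration files):

  terraform apply      # create (and, on later runs, update) the resources described in the configuration
  terraform destroy    # decommission and remove the resources Terraform manages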

List of IaC tools:

1. Terraform
2. Azure Resource Manager
3. AWS CloudFormation
4. Google Cloud Deployment Manager
5. Ansible
6. Chef
7. Puppet
8. SaltStack
9. Vagrant


Terraform (open-source tool):

We create manifest (configuration) files to provision, create, and manage the infrastructure.

Terraform configurations are written in HCL (HashiCorp Configuration Language) or, optionally, JSON.





Benefits:

Consistency: a single configuration file can be reused across environments such as Dev, Test, Stage, and Production.
IaC: it is designed to interconnect all the cloud resources, and even on-prem systems, without human or syntax-related errors.
Automation: it becomes easy to automate the creation of infrastructure for any resource.
Version control: configuration files are plain text and fit naturally into version control systems, and Terraform records metadata about the resources it manages in its state file.
Collaboration: the configuration can easily be shared and worked on within a team.
Scalability: infrastructure can be scaled up or down depending on the project's requirements.


--> To create the infrastructure, we need to know the entire target system (without knowing the manual process, you cannot create the automated one).
--> Terraform files should have the extension .tf
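
A minimal sketch of a .tf file (this example assumes the hashicorp/local provider and simply writes a small text file, purely for illustration):

  resource "local_file" "example" {
    # Path of the file Terraform will create on the machine running it
    filename = "${path.module}/hello.txt"

    # Contents of the file
    content  = "Hello from Terraform!"
  }

Running terraform init, terraform plan, and terraform apply against this file creates hello.txt; terraform destroy removes it again.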


Terraform: Deploying Infrastructure

  • Write .tf files:
    • Define your desired infrastructure as code.
  • terraform init:
    • Initialize the working directory and download necessary provider plugins.
  • terraform plan:
    • Preview the changes Terraform will make to your infrastructure.
  • terraform apply:
    • Execute the planned changes and deploy your infrastructure to the target system.

    • Terraform Workflow:
      • Plan & Apply:
        • Developers define infrastructure in .tf files.
        • terraform plan previews the changes to be made (a "dry run").
        • terraform apply executes those changes on the target system.

      Terraform Setup:

      • Providers & Provisioners:
        • The Terraform core requires provider (and, where used, provisioner) configurations when a project is initialized.
        • terraform init automatically downloads the necessary provider plugins into the .terraform directory, enabling Terraform to interact with the target systems.
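
      As a sketch of a provider configuration (this example assumes the Azure provider, azurerm; any provider follows the same pattern and is downloaded by terraform init):

        terraform {
          required_providers {
            azurerm = {
              source  = "hashicorp/azurerm"
              version = "~> 3.0"
            }
          }
        }

        # Provider settings; the azurerm provider requires an (empty) features block
        provider "azurerm" {
          features {}
        }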

    Terraform installation:

    Option 1 (Windows binary): Download Terraform for Windows from the official HashiCorp site,
    unzip the downloaded file (the folder contains a single terraform.exe),
    add that folder to the PATH environment variables, and run terraform from cmd.

    or

    Option 2 (Docker): Open Docker, search Docker Hub for the Terraform image, and from cmd:
    To pull the image:  docker pull hashicorp/terraform
    To run Terraform:   docker run hashicorp/terraform
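
    To run Terraform against a local configuration from the container, a common pattern is to mount the working directory (a sketch; /workspace is an arbitrary path, and the hashicorp/terraform image uses the terraform binary as its entrypoint):

      docker run -i -t -v "%cd%:/workspace" -w /workspace hashicorp/terraform init
      docker run -i -t -v "%cd%:/workspace" -w /workspace hashicorp/terraform plan

    (On Linux/macOS, replace %cd% with $(pwd).)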

      Feature from Slide Explanation ✅ Code-free data transformations Data Flows in ADF allow you to build transformations using a drag-and-drop visual interface , with no need for writing Spark or SQL code. ✅ Executed on Data Factory-managed Databricks Spark clusters Internally, ADF uses Azure Integration Runtimes backed by Apache Spark clusters , managed by ADF, not Databricks itself . While it's similar in concept, this is not the same as your own Databricks workspace . ✅ Benefits from ADF scheduling and monitoring Data Flows are fully integrated into ADF pipelines, so you get all the orchestration, parameterization, logging, and alerting features of ADF natively. ⚠️ Important Clarification Although it says "executed on Data Factory managed Databricks Spark clusters," this does not mean you're using your own Azure Databricks workspace . Rather: ADF Data Flows run on ADF-managed Spark clusters. Azure Databricks notebooks (which you trigger via an "Exe...