
Terraform (Overview)

If we want to create anything in the cloud, we can simply navigate in the console and click to create it. That may be acceptable in the dev environment, but for the UAT and PROD environments we need to create resources using code.

Terraform is an automation tool for IaC (Infrastructure as Code).
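
As a minimal sketch (the project ID and region are placeholders, not values from this session), a Terraform configuration for GCP starts with a provider block:

provider "google" {
  project = "my-project-id"   # placeholder project ID
  region  = "us-central1"     # placeholder default region
}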


And why do we need a service account in GCP instead of an individual account? Because if someone leaves the team or is absent, the work is not tied to that person's identity and another person can carry it on.

A service account follows the Principle of Least Privilege, which means it is automatically restricted to only the roles it has been granted.
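
A minimal sketch of creating a service account and granting it a single role in Terraform (the account ID, project ID, and role are placeholders; grant only what is actually needed):

resource "google_service_account" "terraform_sa" {
  account_id   = "terraform-sa"              # placeholder account ID
  display_name = "Terraform service account"
}

# Grant only the role the account actually needs (least privilege)
resource "google_project_iam_member" "storage_admin" {
  project = "my-project-id"                  # placeholder project ID
  role    = "roles/storage.admin"            # example role, adjust per need
  member  = "serviceAccount:${google_service_account.terraform_sa.email}"
}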

The resources:

For example, if we want to create a GCS bucket, we just need to define one resource for it. Likewise, BigQuery, Composer, and other services can be created through Terraform using resources, and the Airflow DAGs then run on the Composer environment we provision.

resource "google_storage_bucket" "my_bucket" {

    name     = "my-unique-bucket-name"

    location = "US"

  }
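
Similarly, a minimal sketch of a BigQuery dataset resource (the dataset ID and location are placeholders):

resource "google_bigquery_dataset" "my_dataset" {
  dataset_id = "my_dataset"   # placeholder dataset ID
  location   = "US"
}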

Always refer to the official Terraform documentation to know the syntax for creating any resource.
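
Once the configuration is written, the usual Terraform workflow (run from the directory containing the .tf files) looks like this:

terraform init      # initialize the directory and download the provider plugins
terraform plan      # preview what will be created or changed
terraform apply     # create the resources in GCP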

To install Terraform, use the terminal: on Mac, use brew (a package manager); likewise, on Windows, use choco (Chocolatey).
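
For example, one common way (using HashiCorp's brew tap on Mac, and Chocolatey on Windows):

# Mac
brew tap hashicorp/tap
brew install hashicorp/tap/terraform

# Windows
choco install terraform

# verify the install
terraform -version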

We can also connect from our local machine to GCP using gcloud commands, instead of logging into the cloud console and running those commands in Google Cloud Shell.
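
Before running gcloud commands locally, we typically authenticate and set the project (the project ID is a placeholder):

gcloud auth login
gcloud config set project my-project-id
gcloud auth application-default login   # credentials that Terraform can also pick up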

examples:

gcloud compute instances create my-instance --zone=us-central1-a --machine-type=e2-medium --image-family=debian-11 --image-project=debian-cloud

  • my-instance: Name of the instance.
  • --zone: Specifies the zone.
  • --machine-type: Sets the machine type.
  • --image-family and --image-project: Specify the OS image.
  • For the remaining options, refer to the documentation; the equivalent Terraform resource is sketched below.
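
A minimal sketch of the equivalent Terraform resource for the same instance (the network settings are assumptions; adjust as needed):

resource "google_compute_instance" "my_instance" {
  name         = "my-instance"
  zone         = "us-central1-a"
  machine_type = "e2-medium"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"   # Debian 11 image
    }
  }

  network_interface {
    network = "default"   # assumes the default VPC network
  }
}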
