Check out these Top 100 DevOps Interview Questions and Answers for Experienced candidates that will help you clear the interview:

Q. What are the goals of Maven?

A. A Maven goal represents a specific task that contributes to building and managing a project. Common goals include:

integration-test: run integration tests.

verify: verify that all integration tests passed.

clean: the Maven clean goal (clean:clean) is bound to the clean phase of the clean lifecycle. It deletes the output of a build by removing the build directory, so when the mvn clean command executes, Maven deletes the build directory.
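As a quick illustration, goals are invoked either through a lifecycle phase or directly; a minimal sketch of typical invocations:

mvn clean            # runs clean:clean and deletes the target/ directory
mvn verify           # runs the default lifecycle up to and including integration tests and verification
mvn dependency:tree  # invokes a plugin goal directly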

Q. What does the pom.xml file contain?

A. POM is an acronym for Project Object Model. The pom.xml file contains project information and the configuration information Maven needs to build the project, such as dependencies, the build directory, source directory, test source directory, plugins, goals, etc. Maven reads the pom.xml file and then executes the goal.
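For illustration, a minimal sketch of a pom.xml (the groupId, artifactId, and dependency shown are placeholders):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>        <!-- hypothetical group -->
  <artifactId>demo-app</artifactId>     <!-- hypothetical artifact -->
  <version>1.0.0</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>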

Q. What is repository in maven?

A. A repository is a directory where all the project JARs, library JARs, plugins, and other project-specific artifacts are stored and can be used by Maven easily.

Q. What is the state file in Terraform?

A. The state file is used to keep track of the resources that Terraform manages. It stores information about the current state of the infrastructure and helps Terraform plan and apply changes accurately.


Q. What are meta-arguments in a Terraform module?

A. Meta-arguments are special arguments in Terraform that are used to control how resources are created, updated, or destroyed.

Q. What are the meta-arguments in Terraform?

A. depends_on: Specifies explicit dependencies between resources. For example, a web_security_group resource can be made to depend on a web_server resource, or a web_elb resource on a web_sg resource.

count: Creates a fixed number of instances of a resource. For example, count = 4 with a given AMI and instance type creates four EC2 instances, and a tags block can assign each instance a unique name based on its count index. Run terraform init to initialize the Terraform project and terraform apply to create the instances (see the sketch after this list).

for_each: Allows creating multiple instances of a resource based on a map or set of strings.

lifecycle: Defines lifecycle rules for managing resource updates, replacements, and deletions.

provider: Specifies the provider configuration for a resource. It allows selecting a specific provider or version for a resource.
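A minimal sketch illustrating count and depends_on (the AMI ID, instance type, and resource names are hypothetical):

resource "aws_instance" "web_server" {
  count         = 4                       # create four instances
  ami           = "ami-0123456789abcdef0" # hypothetical AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "web-server-${count.index}"    # unique name per instance
  }
}

resource "aws_security_group" "web_sg" {
  name       = "web-sg"
  depends_on = [aws_instance.web_server]  # explicit dependency
}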


Q. What is the difference between a variable and data source in Terraform?

A. Data sources provide dynamic information about entities that are not managed by the current Terraform configuration.

Variables provide static input values that are supplied to the configuration.

Q. Which module is called a child or parent module in Terraform?

A. A module that has been called by another module is often referred to as a child module; the module that calls it is the parent.

Q. What is the root module name in Terraform?

A. Terraform has two types of modules; the top-level module is always called the “root” module and the modules that are called from the root module are called “child” modules.

Q. What is Playbook in Ansible?

A. A playbook in Ansible is a YAML file that defines a set of tasks and configurations to be executed on remote servers. It is used for automation and configuration management.

Q. What is configuration management in DevOps?

A. Configuration management in DevOps is the evolution and automation of the systems administration role, bringing automation to infrastructure management and deployment.

Configuration management is necessary because installing software, applying patches, and performing hardware operations can be done on multiple machines at a time instead of manually on each one.

Q. Why ansible is better than chef and puppet?

A. Chef offers flexibility and a strong focus on infrastructure as code, Puppet excels in managing large-scale environments, while Ansible stands out for its simplicity and agentless approach.

The only requirement for Ansible is passwordless authentication. Other tools need agents on the managed nodes in a master-slave architecture, whereas Ansible is agentless. Ansible uses the WinRM protocol for Windows and the SSH protocol for Linux machines. Ansible is written in Python and backed by Red Hat. Ansible scripts are written in YAML, which is a simple and widely used language.

Q. How Ansible helped your organisation?

A. Earlier we were using manual steps or basic shell/PowerShell scripts to deploy applications, apply patches and regular updates, or enable/disable rules across all the machines. We then started using Ansible to install the same applications on all of those machines. Ansible is very effective and saves a lot of time.

Q. What is Ansible dynamic inventory?

A. Ansible dynamic inventory, in simple terms, is a way for Ansible to automatically discover and collect information about the hosts (computers or servers) in your infrastructure, rather than defining them manually in static inventory files. It provides flexibility in managing and configuring large and dynamic IT environments.

Here’s an analogy to help you understand:

Static Inventory: Imagine a paper phonebook where you list all the phone numbers and addresses of your friends. You have to manually update it whenever there’s a change, like a new friend or someone moving.

Dynamic Inventory: Now, think of a smartphone with a contacts app that automatically syncs with your address book. It fetches the contact information as you add new friends or as your friends change their contact details.

In the context of Ansible:

Static Inventory: You manually list the IP addresses or hostnames of all the servers you want to manage in an Ansible inventory file.

Dynamic Inventory: Instead of manually listing servers, Ansible can use dynamic inventory scripts or plugins to fetch server information from various sources such as cloud providers, virtualization platforms, databases, or custom scripts. This information is used to determine which servers to target for tasks and configurations.
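For illustration, a minimal sketch of a dynamic inventory configuration using the AWS EC2 inventory plugin; it assumes the amazon.aws collection and boto3 are installed, and the region and tag filter shown are hypothetical (the file name must end in aws_ec2.yml):

plugin: amazon.aws.aws_ec2
regions:
  - us-east-1                 # hypothetical region
filters:
  tag:Environment: dev        # only discover instances tagged Environment=dev
keyed_groups:
  - key: tags.Name            # group hosts by their Name tag
    prefix: tag

Running ansible-inventory -i aws_ec2.yml --list then shows the hosts that were discovered.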

Q. What is Ansible Tower, and have you used it?

A. Ansible Tower is the enterprise version of Ansible, and it provides a graphical user interface. There is also a support system if we need any type of help with Ansible.

We use the Ansible CLI, which is the open-source version of Ansible.

In simple language, Ansible Tower is a web-based interface and automation tool that makes it easier to use Ansible for managing and automating IT infrastructure and applications. It adds a user-friendly layer on top of Ansible, providing benefits such as:

Centralized Control, Role-Based Access Control, Job Scheduling, Logging and Auditing, API Access, Self-Service Portals

Q. How do you manage RBAC of users for Ansible Tower?

A. Ansible Tower supports role-based access control, so teams such as developers, testers, or admins can be given different types of access, including read-only access.

Put simply, role-based access control manages access to Ansible Tower. We can also integrate with external sources such as AWS IAM, LDAP, or Active Directory.

Ansible Tower provides a flexible and granular RBAC system to help you define roles and permissions for your users. Here’s how you can manage RBAC for users in Ansible Tower:

a. Log into Ansible Tower: Access the Ansible Tower web interface by navigating to its URL in a web browser and logging in.

b. Access the “Organizations” or “Teams” section: In Ansible Tower, RBAC is managed at the organization or team level. Organizations are often used to group resources and permissions logically.

c. Create or Edit Organizations or Teams

To edit an existing organization or team, select it from the list of organizations or teams.

d. Manage Users and Permissions

e. Assign Permissions

f. Test the RBAC Setup

g. Regularly Review and Update RBAC

h. Integrate with External Authentication Sources (Optional) like LDAP, Active Directory


Q. What is the ansible-galaxy command and what is it used for?

A. It is basically used for bootstrapping. Take the example of writing an Ansible playbook for one infrastructure configuration: instead of creating the layout on our own, we use the ansible-galaxy command to bootstrap the required structure of folders and files (for example with ansible-galaxy init).

Q. Please explain the structure of an Ansible playbook using roles.

A. Ansible has a standard structure: by default a role contains handlers, templates, metadata (meta), and tasks. We use the ansible-galaxy command to create this structure; the folders mentioned are created by the galaxy command.

When you structure an Ansible playbook using roles, you are organizing your automation code in a more modular and reusable way. Roles provide a structured approach to organizing tasks, variables, templates, and other components within your playbook.

Roles Directory: First, create a directory structure for your playbook that includes a directory for roles. This is where you will organize your roles. The directory structure might look something like this:
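A typical layout, as generated by ansible-galaxy init (the playbook and role names here are hypothetical):

site.yml                # top-level playbook that applies the roles
inventory               # inventory file
roles/
  webserver/            # created with: ansible-galaxy init webserver
    tasks/main.yml      # main list of tasks for the role
    handlers/main.yml   # handlers, e.g. restart services
    templates/          # Jinja2 templates
    files/              # static files to copy
    vars/main.yml       # role variables
    defaults/main.yml   # default (lowest-precedence) variables
    meta/main.yml       # role metadata and dependencies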

Q. What is a Lighthouse stage?

A. Lighthouse is an open-source tool from Google that is used to audit and improve the quality of web pages.

In the context of a Jenkins pipeline, you can use Lighthouse as a step to automatically audit your web application during the build or deployment process to ensure it meets certain quality standards.

Q. What is a ZAP scan stage?

A. ZAP (Zed Attack Proxy) is an open-source security testing tool for finding vulnerabilities in web applications.

In a Jenkins pipeline, a “ZAP scan” would typically involve running security scans on your web application using ZAP to check for common security vulnerabilities like cross-site scripting (XSS), SQL injection, and more.

Q. What is a bastion host in DevOps?

A. It’s a special server that stands between the public internet and your private network. When you want to access the computers inside your network from the outside, you first go through this doorman (bastion host). It checks who you are, ensures you have permission to enter, and then allows you to access the computers inside. This helps protect your network from unauthorized access and keeps it safe.

Q. How do you connect to other EC2 instances through your bastion host?

A. To attach other EC2 instances to your bastion host, you typically need to set up SSH tunnelling or port forwarding from the bastion host to the target instances.

Configure the Bastion Host:

Make sure your bastion host is properly configured, and you can SSH into it securely using your SSH key.

Ensure that your bastion host’s security group allows incoming SSH connections (port 22) from your IP address or a specific range of IP addresses.

Identify the Target Instance:

Determine the private IP address of the target EC2 instance that you want to connect to via the bastion host.

SSH Tunnelling or Port Forwarding:

Use the ssh command on your local machine to set up an SSH tunnel or port forwarding. Here’s the basic syntax:

ssh -i your-bastion-key.pem -L local-port:private-ip-of-target:target-port ec2-user@bastion-public-ip

Replace the placeholders with the actual values:

your-bastion-key.pem: The SSH key file for the bastion host.

local-port: A local port on your machine (e.g., 2222) that you will use to connect to the target instance.

private-ip-of-target: The private IP address of the target EC2 instance you want to connect to.

target-port: The port on the target instance you want to access (e.g., 22 for SSH).

bastion-public-ip: The public IP address or hostname of your bastion host.

Connect to the Target Instance:

After setting up the tunnel, you can use an SSH client (e.g., the ssh command or an SSH client like PuTTY) to connect to the target instance via the local port you specified in the tunnelling command. For example:

ssh -i your-target-key.pem -p local-port ec2-user@127.0.0.1

Replace your-target-key.pem with the key for the target instance and local-port with the local port you used in the tunnelling command.

This SSH tunnelling approach allows you to securely connect to the target EC2 instance through the bastion host. Make sure your security groups and network configurations allow the necessary traffic.
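For example, with hypothetical values (local port 2222, a target instance at private IP 10.0.1.25, and a bastion at public IP 203.0.113.10), the two commands might look like this:

ssh -i bastion-key.pem -L 2222:10.0.1.25:22 ec2-user@203.0.113.10   # open the tunnel via the bastion
ssh -i target-key.pem -p 2222 ec2-user@127.0.0.1                    # connect to the target through the tunnel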


Q. What are handlers in Ansible and why are they used?

A. Handlers are conditional tasks that run only when they are notified by another task, typically when that task reports a change. We commonly use handlers to restart or reload a service after a configuration change, as shown below.
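A minimal sketch of a playbook that uses a handler (the host group and file paths are hypothetical):

- name: Configure Apache
  hosts: webservers
  become: yes
  tasks:
    - name: copy httpd configuration
      copy:
        src: httpd.conf
        dest: /etc/httpd/conf/httpd.conf
      notify: restart httpd        # handler runs only if this task changes something
  handlers:
    - name: restart httpd
      service:
        name: httpd
        state: restarted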

Q. I would like to run the specific set of tasks only on windows VMs not Linux VMs. Is it possible?

A. Yes, we can do this. We can group the Windows and Linux hosts separately in the inventory and target only the Windows group, or use a conditional based on facts such as ansible_os_family so that a task runs only on the Windows VMs (for example, 5 Linux and 5 Windows VMs), as in the snippet below. Tags can also be used to limit which tasks run.
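A small sketch of a task restricted to Windows hosts using a fact-based condition:

- name: check connectivity to Windows VMs only
  win_ping:                              # Windows-only module
  when: ansible_os_family == "Windows"   # task is skipped on Linux hosts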

Q. Does Ansible support parallel execution of tasks?

A. Yes, Ansible supports this. Ansible executes one task on all VMs in parallel (up to the configured number of forks) and then moves on to the next task; this applies, for example, when we have a list of applications to install on multiple VMs.

Q. What is the protocol that Ansible uses to connect to windows VMs?

A. Ansible uses the Windows Remote Management (WinRM) protocol to connect to Windows virtual machines (VMs) for managing and automating tasks and SSH to connect to Linux.

Q. Can you place them in order of precedence?

Playbook group_vars, role vars and extra vars

A. Yes, that is the correct order of increasing precedence: playbook group_vars have the lowest precedence of the three, role vars override them, and extra vars (passed with -e) have the highest precedence.

Q. How do you handle secrets in Ansible?

A. We can use Ansible Vault (the ansible-vault command) to encrypt sensitive files and variables, and we can also integrate an external secrets manager such as HashiCorp Vault.
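A few typical ansible-vault commands, assuming a variables file named secrets.yml:

ansible-vault encrypt secrets.yml                               # encrypt the file in place
ansible-vault edit secrets.yml                                  # edit the encrypted file
ansible-playbook site.yml --ask-vault-pass                      # prompt for the vault password at run time
ansible-playbook site.yml --vault-password-file ~/.vault_pass   # or read it from a file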

Q. Can we use Ansible as IaC? If yes, can we compare with other infrastructure as code like Terraform?

A. Yes, Ansible supports IaC to some extent; for example, it can create an EC2 instance. However, it is primarily used as a configuration management tool, whereas Terraform is purpose-built for provisioning infrastructure.

Q. can you tell me how Ansible helped your organisation?

A. It reduces the time needed to install software or perform setup manually across multiple VMs or devices.

It reduces human error as well.

Ansible Galaxy is also available, which helps us in multiple ways, such as providing reusable community roles.

Q. How do you think Ansible can improve?

A. Verbosity could be improved, especially at the task level.

Ansible's Windows support could be increased, as support for Linux is much better than for Windows.

Ansible could also have better IDE support. For languages like Python, IDEs such as PyCharm or Visual Studio Code provide auto-correction and error suggestions; similar plugins for Ansible would make writing configurations easier.

Q. What is default Transport Protocol used by Ansible to connect to remote hosts?

A. The default transport used by Ansible is "smart", which selects the best SSH mechanism (the OpenSSH client or paramiko) depending on the controller OS and SSH version; for Windows managed hosts, WinRM is used instead.

Here paramiko is a Python SSH library that Ansible (and other tools) can use as the connection mechanism.

ansible-config list shows the available connection-related settings (ansible-config dump shows the values currently in effect).

Q. Explain Ansible workflow?

A. Controller node: This is the node on which the Ansible software runs; it is typically a Linux machine.

Managed hosts (Ansible clients): These can be Windows, macOS, Linux, VMware, or network devices.

Ansible configuration file: Ansible uses its configuration file, ansible.cfg, where definitions and parameters are written in INI format as key/value pairs; with the help of these parameters Ansible makes the connection and deploys the changes.

Transport protocol: This is used between the controller node and the managed hosts. The defaults are SSH, paramiko, and WinRM.

The become method helps switch from the remote user to the become user (privilege escalation), and Ansible creates a temporary directory structure on the managed host.

Ansible first creates this temporary structure and validates whether the generated code is viable. If it is not, the run is aborted; if the code is correct, it is executed and the result is reported back to the controller node.

Q. What are the types of inventories offered by Ansible?

A. If anyone asks how Ansible identifies the managed hosts, the answer is: with the help of inventories.

Ansible provides two types of inventories to supply the list of nodes and their parameters:

  1. Static: We provide the list of hosts manually; they are hard-coded in an inventory file (see the example after this list).
  2. Dynamic: We can write a script in shell, Python, Perl, etc. whose output is in JSON format; Ansible reads it and builds the inventory from it.
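A minimal sketch of a static inventory in INI format (host names and IPs are hypothetical):

[webservers]
web1.example.com
web2.example.com

[dbservers]
10.0.1.50 ansible_user=ec2-user

[windows]
winhost1.example.com ansible_connection=winrm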

Q. What is Terraform remote state backend?

A. Terraform remote state is a mechanism that allows Terraform to store its state information in a centralised location, such as an object storage bucket or a remote key-value store.
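A minimal sketch of a remote backend configuration using an S3 bucket (the bucket, key, and region are hypothetical; the DynamoDB table for state locking is optional):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"  # hypothetical bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"            # optional, enables state locking
    encrypt        = true
  }
}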

Q. Ansible work on ‘push’ or ‘pull’ CM strategy?

A. Configuration management tools are widely used to deploy applications to remote servers. Ansible works on the push method, whereas Puppet works on the pull mechanism: Ansible pushes the changes to the clients remotely.

Q. Can improper indentation cause failure in playbook execution?

A. Indentation between different objects plays a vital role in Ansible. Improper indentation not only makes the code look ugly but also causes failures, such as syntax and indentation errors, during playbook execution.

Q. Can we manage existing resources from Terraform?

A. Yes, with the help of the terraform import command, which records the resource in the Terraform state file. We have to provide the resource address (the name we want to use in the Terraform configuration) and the ID of the existing resource we want to bring under Terraform's control.
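For example, importing an existing EC2 instance into a resource block named aws_instance.web (the instance ID here is hypothetical, and the matching resource block must already exist in the configuration):

terraform import aws_instance.web i-0abcd1234efgh5678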

Q. Explain the workflow of how docker container created?

A. Docker Container Creation workflow: –

a. Dockerfile: Create a Dockerfile with the application's build instructions.

b. Build Image: Use docker build to build an image from the Dockerfile.

c. Run Container: Run a container from the image using docker run.

d. Container: The container runs your application, isolated and self-contained.
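A quick sketch of the commands involved (the image name and ports are hypothetical):

docker build -t myapp:1.0 .          # build an image from the Dockerfile in the current directory
docker run -d -p 8080:80 myapp:1.0   # run a container, mapping host port 8080 to container port 80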

Q. How do you manage multiple containers?

A. I manage multiple containers using Kubernetes ensuring high availability, scalability and load balancing.

Q. What is Istio used for in Kubernetes?

A. Istio is a service mesh for Kubernetes originally developed by three companies working together: IBM, Google, and Lyft. It helps manage deployments, makes systems more resilient, and improves security. It uses open-source components such as Envoy, a high-performance proxy that handles all service traffic coming in and going out.

Q. What is blue green deployments?

A. Blue-green deployment is a deployment strategy where you have two identical environments (blue and green) and you switch traffic between them when deploying new versions. This minimises downtime and allows a quick rollback if issues arise.

Blue-green deployments are typically used for larger, less frequent releases or updates.

Q. What is Canary Deployment?

A. Canary deployment typically involves making changes in the existing environment, initially affecting a subset of users or servers.

Canary deployments are typically used for more frequent, smaller changes or updates.

Q. How do you scale your applications?

A. I use AWS Auto Scaling or Kubernetes Horizontal Pod Autoscaling to dynamically adjust resources based on traffic patterns and resource utilisation.

Q. How to setup HPA (Horizontal Pod Autoscaling)?

A. We need to set up the Metrics Server in our Kubernetes cluster by installing it. The Metrics Server monitors the resource requests and limits that we have configured and feeds metrics such as CPU usage and memory usage to the HPA; when usage crosses the configured threshold, the HPA increases the number of pods, and when usage decreases it scales them back down.

Steps:

1. Install the Metrics Server.

2. Deploy a sample app.

3. Create a Service.

4. Deploy the HPA.

5. Increase the load.

6. Monitor HPA events.

To verify whether the Metrics Server is available, use the commands:

kubectl top pods and kubectl top nodes
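A minimal sketch, assuming a Deployment named sample-app already exists:

kubectl autoscale deployment sample-app --cpu-percent=50 --min=2 --max=10   # create the HPA
kubectl get hpa                                                             # watch current/target utilisation and replica count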

Q. How application will run if node is not available in Kubernetes?

A. If a node becomes unavailable in a Kubernetes cluster, the platform is designed to handle such scenarios and maintain the availability of applications.
Pod Rescheduling: When a node becomes unavailable or fails, Kubernetes automatically reschedules the affected pods to other healthy nodes in the cluster. This rescheduling is part of the platform’s effort to ensure high availability. The scheduler, a component of the Kubernetes master, is responsible for determining where to place pods based on resource constraints and other policies.

ReplicaSets and Replication Controllers: If you have defined replication controllers or replica sets in your Kubernetes deployment configurations, these controllers ensure that a specified number of replicas (pod instances) are maintained, even if some nodes fail. If a pod on a failed node is part of a replication controller or replica set, Kubernetes will automatically create a replacement pod on another healthy node.

Node Auto-Scaling: If you have set up auto-scaling for your node pool, the cluster can dynamically adjust its size by adding or removing nodes based on the demand for resources. This helps to ensure that there are enough resources available to run your applications even in the face of varying workloads or node failures.

Node Repair and Replacement: Depending on the underlying infrastructure and configuration, some Kubernetes clusters may be integrated with infrastructure tools that automatically detect and replace failed nodes. For example, cloud providers often have mechanisms for replacing failed virtual machines or instances.

Self-Healing: Kubernetes has built-in mechanisms for self-healing. The control plane continuously monitors the state of the cluster, and if it detects that a pod or node is not in the desired state, it takes corrective actions to bring the system back to the desired state.

In summary, Kubernetes is designed to handle node failures gracefully and automatically recover from such situations. Through rescheduling, replication controllers, auto-scaling, and other mechanisms, the platform ensures that applications continue to run with minimal disruption, even if nodes become unavailable.

Q. How do you rollback if something fails?

A. I roll back to the previous version with the help of Kubernetes (for example, a Deployment rollback). I also ensure regular monitoring and alerting so that failures are detected early.
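For a Kubernetes Deployment, a rollback sketch (the deployment name is hypothetical):

kubectl rollout history deployment/my-app    # inspect previous revisions
kubectl rollout undo deployment/my-app       # roll back to the previous revision
kubectl rollout status deployment/my-app     # watch the rollback complete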

Q. What kind of issue that SonarQube will identify?

A. Issues such as low code coverage, duplicated code, bugs, code smells, and security vulnerabilities.

Q. Feature of SonarQube?

A. Improve quality and grow developer skills, continuous quality management, reduced risk, and the ability to scale with ease.

Q. What are branches in Git?

A. In Git, branches are a part of your everyday development process. Git branches are effectively a pointer to a snapshot of your changes. When you want to add a new feature or fix a bug—no matter how big or how small—you spawn a new branch to encapsulate your changes.

Q. Purpose of tagging in git?

A. Tags help in identifying different commits that are important enough to be recognized.

    A tag is an object referencing a specific commit within the project history

Q. Difference between merge and rebase

A. git merge combines changes from one branch (the source branch) into another branch (the target branch), typically creating a merge commit and preserving the history of both branches, whereas git rebase moves (replays) the commits of one branch onto the tip of another branch, producing a linear history.

Q. Difference between git commit and git push

A. git commit saves repository changes locally but not to the remote repository. git push then takes your committed changes and sends them to the remote repository.

Q. What are GitHub webhooks?

A. Webhooks can be triggered whenever specific events occur on GitHub.

    GitHub allows you to set an API secret when creating your webhook. GitHub will use this secret to encrypt the webhook payload into a signature and put it in the X-Hub-Signature-256 header sent along with your webhook request.

Q. How do I create a webhook for GitHub?

A. i. On GitHub.com, navigate to the main page of the repository.

ii. Under your repository name, click Settings.

iii. In the left sidebar, click Webhooks.

iv. Click Add webhook.

v. Under “Payload URL”, type the URL where you’d like to receive payloads.

Q. What is master and slave in Jenkins?

A. The Jenkins master schedules the jobs, assigns slaves, and sends builds to slaves to execute. It also monitors the slave state (offline or online), collects the build result responses from the slaves, and displays the build results on the console output.

Having the master execute all the jobs itself is not a good approach (it is a single point of failure), so the slaves act to complete the tasks while the Jenkins master distributes the workload to them. For this we need to configure the nodes, including the idle time after which an agent pod is terminated.

Q. How many types of roles in Jenkins?

A. The number and type of roles can vary depending on the organization’s specific needs and the complexity of the Jenkins setup.

Administrator: The Jenkins administrator is responsible for configuring, maintaining, and securing the Jenkins server. This role often includes tasks such as plugin management, security configuration, server maintenance, and user management.

Developer: Developers use Jenkins to build, test, and deploy their code. They configure and manage their own build jobs and pipelines, ensuring that their applications are built and tested automatically.

Build Engineer: A build engineer is responsible for defining and maintaining build configurations, including creating and configuring Jenkins jobs, pipelines, and build scripts. They work closely with developers to ensure proper build processes.

Quality Assurance (QA) Engineer: QA engineers use Jenkins to run automated tests, conduct regression testing, and validate the quality of software builds. They may create and manage test jobs and pipelines.

Release Manager: The release manager is responsible for coordinating the deployment and release of software into different environments, such as development, testing, staging, and production. They may use Jenkins to trigger and manage deployment jobs.

DevOps Engineer: DevOps engineers are responsible for the overall automation and integration of the software development and deployment processes. They configure, optimize, and maintain Jenkins pipelines, and they may use Jenkins to orchestrate and manage the entire CI/CD process.

Security Administrator: The security administrator defines and enforces access control policies within Jenkins, ensuring that users have appropriate levels of access based on their roles. They manage security configurations and user access.

Q. How do you allow a person access with a specific role (role-based access)?

A. In Jenkins, you can control and manage role-based access control using the Role-Based Access Control (RBAC) plugin. This plugin allows you to define roles and assign specific permissions to users or groups based on their roles.

Install the RBAC Plugin:

If you haven’t already, install the Role-Based Access Control (RBAC) plugin. You can do this from the Jenkins plugin manager.

Configure Global Roles:

In the Jenkins dashboard, go to “Manage Jenkins” > “Configure Global Security.”

Under “Access Control,” select “Role-Based Strategy” or any other security realm that supports role-based access control.

Define Roles:

Under “Role-based Authorization Strategy,” click on the “Manage and Assign Roles” link.

Define roles based on the specific responsibilities in your organization. For example, you might have roles like “Developer,” “QA Engineer,” “Administrator,” etc.

Assign Permissions to Roles:

For each role, assign specific permissions or accesses by configuring what users with that role can and cannot do. Jenkins provides a list of permission settings that you can configure for each role.

Assign Users to Roles:

You can assign individual users or groups to specific roles.

Under “Role-based Authorization Strategy,” click the “Assign Roles” link, and then assign users or groups to roles by specifying their usernames or group names.

Test and Verify:

After configuring the roles and assigning users or groups, test the setup by logging in with various user accounts and ensuring that they have access to the Jenkins features and actions according to their roles.

Regularly Review and Update:

As your Jenkins environment evolves, you may need to adjust roles and permissions. Regularly review and update your RBAC configuration to ensure it aligns with your organization’s requirements.

Q. How to integrate artifactory with Jenkins?

A. Integrating Artifactory with Jenkins allows you to manage and store artifacts produced during your Jenkins builds. Artifactory is a popular artifact repository manager that works seamlessly with Jenkins to improve build efficiency and artifact management. Here are the steps to integrate Artifactory with Jenkins:

Prerequisites:

You should have a Jenkins server installed and running.

You should have Artifactory installed and configured.

Integration Steps:

Install Jenkins Artifactory Plugin:

Go to your Jenkins dashboard.

Click on “Manage Jenkins” in the left sidebar.

Select “Manage Plugins.”

In the “Available” tab, search for “Artifactory” and install the “Artifactory” plugin.

Configure Artifactory Server in Jenkins:

In the Jenkins dashboard, click on “Manage Jenkins” > “Configure System.”

Scroll down to the “Artifactory” section.

Click on “Add Artifactory Server Configuration” and provide the following details:

Name: A name to identify your Artifactory server.

URL: The URL of your Artifactory server.

Default Deployment Repository: The default repository in Artifactory where artifacts will be deployed.

Credentials: You can provide Artifactory API key or username/password credentials.

Click “Save” to save the configuration.

Configure Build Job to Deploy Artifacts:

Create or open a Jenkins build job.

In the job configuration, add a post-build step to deploy artifacts to Artifactory.

Select “Artifactory Generic Configuration.”

Choose the Artifactory server configuration you created in the previous step.

Define the deployment details, including the target repository and the artifacts to deploy.

Configure Build Job to Resolve Dependencies:

To resolve dependencies from Artifactory during the build process, add a build step to resolve artifacts.

Select “Artifactory Generic Resolution.”

Choose the Artifactory server configuration.

Define the resolution details, including the artifacts to resolve and the repository to resolve from.

Trigger the Jenkins Build:

Save the job configuration.

Trigger the build either manually or through a webhook, VCS trigger, or other methods.

View and Manage Artifacts in Artifactory:

After the build completes, the artifacts will be deployed to Artifactory.

You can access and manage the artifacts in the Artifactory web interface

Q. What is the difference between a traditional and a dynamic firewall?

A. Traditional firewalls operate at the network layer (Layer 3) and make decisions based on source and destination IP addresses, as well as port numbers.

Dynamic firewalls, also known as Next-Generation Firewalls (NGFW), operate at multiple layers, including the application layer (Layer 7)

Traditional firewalls require manual configuration and updates. When network changes occur, administrators must update the rule sets to accommodate the new requirements.

Dynamic firewalls can enforce policies based on user and group identities, providing granular control. For example, they can restrict access to social media sites for specific user groups.

Dynamic firewalls integrate intrusion detection and prevention systems (IDPS) and antivirus capabilities. They can detect and block known threats and suspicious behaviours.

Q. How to check if the firewall is running or not in Linux?

A. The two most common firewall management tools in Linux are iptables and firewalld.

Using iptables (Traditional Firewall):

To check if the iptables firewall is running, you can use the following command:

sudo systemctl is-active iptables

This command will return one of the following statuses:

active: The iptables firewall is running.

inactive: The iptables firewall is not running.

Additionally, you can check the status of the iptables service using:

sudo systemctl status iptables

This command will provide more detailed information about the service, including whether it’s running or not.

Using firewalld (Dynamic Firewall):

To check if the firewalld firewall is running, use the following command:

sudo systemctl is-active firewalld

It will return either active or inactive to indicate the firewall’s status.

For more detailed information about the firewalld service and its configuration, use:

sudo systemctl status firewalld

This command will provide the status and other details about the firewalld service.

Q. What is the directory path of HTTPD in Linux?

A. The directory path for the Apache HTTP Server (often referred to as httpd) in Linux can vary depending on the Linux distribution you are using.

Configuration Files and Directories:

Main configuration file: /etc/httpd/conf/httpd.conf (On some distributions, it may be /etc/apache2/httpd.conf or /etc/apache2/apache2.conf).

Configuration files for virtual hosts: /etc/httpd/conf.d/ or /etc/httpd/conf-available/ (This location can vary depending on the distribution).

Apache modules configuration: /etc/httpd/conf.modules.d/ (or a similar path).

Website Document Root:

The default document root where web content is stored: /var/www/html/. On some distributions, this directory might be /var/www/, /var/www/html/, or /var/www/apache2/default/.

Logs:

Apache log files are typically stored in /var/log/httpd/ or /var/log/apache2/. The main error log is often in a file named error_log.

Q. How to change TLS v2 to v1 on a Jenkins node? (TLS = Transport Layer Security, port number 443)

A. It’s a good practice to use the latest TLS versions that provide the highest level of security.

In Jenkins, you can configure the use of specific TLS versions for network communication in different ways, depending on your use case and the components involved. To change the TLS version from v2 to v1 for a Jenkins node, you may need to adjust the TLS configuration of various components, including Jenkins itself, the Java runtime, and any web server or reverse proxy in front of Jenkins. Here’s a general guideline:

Jenkins Configuration:

a. Go to the Jenkins web interface.

b. Click on “Manage Jenkins” and then “Manage Nodes and Clouds.”

c. Select the node you want to configure.

d. Click on “Configure” for that node.

e. If your node uses a Java-based agent, you might need to configure the Java Virtual Machine (JVM) options to set the desired TLS version. This is typically done by adding command-line options to the agent’s startup. You can set the system properties like this:

-Dhttps.protocols=TLSv1

f. Save your configuration.

Java Runtime Configuration:

The TLS version used by Java applications depends on the JVM configuration. You can specify the TLS version by setting system properties in Jenkins. In the previous step, you set the -Dhttps.protocols property to specify the TLS version.

Make sure you restart the Jenkins agent or node for the configuration to take effect.

Web Server or Reverse Proxy Configuration:

If you have a web server or reverse proxy (e.g., Nginx, Apache) in front of Jenkins, you may need to configure the TLS settings there as well. The method for configuring TLS versions can vary depending on the web server or reverse proxy you are using.

For example, in Apache you can set the SSLProtocol directive to specify the TLS version; in Nginx, you can set the ssl_protocols directive.

Check Your Jenkins Master Configuration:

Ensure that your Jenkins master is also configured to use the desired TLS version. You can do this by modifying the Jenkins master’s Java runtime options and adjusting the system properties, similar to the Jenkins node configuration.

Test and Verify:

After making the necessary changes, test your Jenkins node to ensure that it is now using TLS v1 instead of v2.

You can use tools like OpenSSL to verify the TLS version being used when connecting to your Jenkins node.

Q. What kinds of servers have you supported? (AWS DevOps)

A. EC2 Instances (Virtual Servers):

Understanding of Amazon Elastic Compute Cloud (EC2) instances, their instance types, and how to launch and manage them.

Web Servers:

Configuration and management of web servers like Apache, Nginx, and IIS on EC2 instances.

Application Servers:

Setting up and maintaining application servers such as Tomcat, WildFly, or WebSphere for hosting web applications.

Database Servers:

Management of database servers like Amazon RDS (Relational Database Service) for MySQL, PostgreSQL, SQL Server, etc.

Q. Can we have two masters in Jenkins?

A. You can have multiple Jenkins masters configured with the same connection details (ensure you select distinct remote paths) and connect them to the same machine. The only issue you may see is if all the Jenkins masters try to run builds on that node at the same time.

Q. What happens when master in Jenkins goes down?

A. Whenever there is a problem with the active master and it goes down, the other master will become active and requests will resume.
If the Jenkins master is lost or destroyed, there may be a crippling impact on your organization’s ability to build, test, or release. Let’s address this and create a disaster-recovery plan for Jenkins to ensure a high level of availability and quick turnaround time for any failures that may occur.

Q. What are Jenkins pipeline stages?

A. The common stages are "Build", "Test", and "Deploy". Apart from these three, a pipeline often includes additional stages such as Artifactory publish, install, Lighthouse, ZAP scan, SonarQube, and Nexus IQ.

Example (Jenkinsfile, Declarative Pipeline):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                //
            }
        }
        stage('Test') {
            steps {
                //
            }
        }
        stage('Deploy') {
            steps {
                //
            }
        }
    }
}

Q. What is the use of Ansible?

A.  It helps automate repetitive tasks, improve efficiency, and ensure consistency in infrastructure and application configurations.

Ex: We can write a playbook on our own system and install Git on multiple host machines.

Q. Playbook in ansible?

A. Ansible Playbooks offer a repeatable, re-usable, simple configuration management and multi-machine deployment system, one that is well suited to deploying complex applications. If you need to execute a task with Ansible more than once, write a playbook and put it under source control.

Ansible Vault can additionally be used to secure sensitive data in our playbooks.

Ansible playbook example (here become: yes runs the tasks as the root user, so there is no need to use sudo):

---
# Play 1 - Webserver related tasks
- name: Play Web - Create Apache directories and username on web servers
  hosts: webservers
  become: yes
  become_user: root
  tasks:
    - name: create username apacheadm
      user:
        name: apacheadm
        groups: users,admin
        shell: /bin/bash
        home: /home/weblogic

    - name: install httpd
      yum:
        name: httpd
        state: present

Q. What is docker and why we are using docker?

A. Docker is a containerisation tool used to create containers, which are lightweight, isolated environments (often loosely compared to virtual machines). Docker allows developers, sysadmins, etc. to easily deploy their applications in a sandbox (called a container) running on the host operating system, i.e. Linux.

Q. What is Dockerfile?

A. A Dockerfile is a script that uses the Docker platform to generate containers automatically.

It is text file which contains set of instructions which is used to build the images automatically.

Dockerfile is a text document containing all the commands the user requires to call on the command line to assemble an image. With the help of a Dockerfile, users can create an automated build that executes several command-line instructions in succession.

Q. What is Docker image?

A. This is template to create docker container. A Docker image is a template file used to execute code in a Docker container. An image is comparable to a snapshot in virtual machine (VM) environments.

Q. Docker Container?

A. A running instance of a Docker image. A container holds the entire package needed to run the application.

Note: First we add instructions to a Dockerfile, build it to produce a Docker image, and from the image we can create a Docker container.

Q. Command to stop a running container

A. docker stop [container-name or container-id]

Q. Command to remove container

A. Docker rm container-id

Q. Command to restart the container

A. docker restart

Q. Command to log in to Docker Hub (needed before pushing an image)

A. docker login

Q. Command to save a container's current state locally as a new image

A. docker commit [container-id] [new-image-name]

Q. Command To push data to repository or docker hub

A. docker push

Q. Command to copy a file from a Docker container to the local system

A. docker cp [container-id]:/path/in/container /path/on/host

Q. Command to check the logs

A. docker logs [container-name]

Q. Command to create persistent storage for a Docker container

A. docker volume create [volume-name] (the volume can then be mounted with docker run -v [volume-name]:/path/in/container)

Q. Command to logout from docker hub

A. docker logout

Q. what is docker compose?

A. Docker Compose is used to run multiple containers as a single service. For example, if you had an application that required NGINX and MySQL, you could create one file which would start both containers as a service without the need to start each one separately.
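A minimal sketch of a docker-compose.yml for that example (the image tags and password are placeholders):

version: "3.8"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"                    # expose NGINX on host port 8080
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder password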

Q. what is docker inspect do?

A. docker inspect is a command that returns detailed, low-level information on Docker objects. Those objects can be docker images, containers, networks, volumes, plugins, etc.

Q. What is the advantage of Docker?

A. Fast deployment, ease of creating new instances, and faster migrations.

Dockerfile example:

FROM docker.io/centos
MAINTAINER admin
RUN yum -y update && yum -y install httpd
RUN echo "Welcome to our homepage created using dockerfile" > /var/www/html/index.html
EXPOSE 80
CMD apachectl -D FOREGROUND

Q. What is the difference between RUN and CMD in Docker?

A. RUN is used to execute a command while we build the image; CMD is used to execute a command when we run the image.

RUN is an image build step: the state of the container after a RUN command is committed to the container image. A Dockerfile can have many RUN steps that layer on top of one another to build the image.

CMD is the command the container executes by default when you launch the built image. The purpose of CMD is to launch the software required in the container.

Q. How do you check logs in a container?

A. The docker logs command shows information logged by a running container.

Command: docker logs [container-id]

Q. What is the command to run a Docker container?

A. docker run -d -p [host-port]:[container-port] <image name>. Here -d runs the container in detached mode and -p publishes (maps) a port from the host to the container.

docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]

Q. What is Kubernetes?

A. Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management.

Q. Steps to create EKS Cluster?

A. Set Up AWS Account: If you don’t already have an AWS account, sign up for one. You’ll need this account to create and manage your EKS cluster.

Install AWS CLI: Install the AWS Command Line Interface (CLI) on your computer. This tool allows you to interact with AWS services, including EKS, from the command line.

Create an IAM Role: In AWS, create an IAM (Identity and Access Management) role with the necessary permissions to manage EKS. This role will be used by your EKS cluster to interact with other AWS services.

Install kubectl: Kubectl is a command-line tool for interacting with your Kubernetes clusters. You’ll need it to manage your EKS cluster.

Install eksctl: Eksctl is a command-line tool provided by AWS to simplify the process of creating and managing EKS clusters. You can install it to make cluster creation easier.

Configure AWS CLI: Use the AWS CLI to configure your AWS credentials by providing your access key, secret key, region, etc.

Create a VPC: Create a Virtual Private Cloud (VPC) in AWS. This is your private network environment where your EKS cluster will run.

Create Subnets: Inside your VPC, create two or more subnets in different Availability Zones. These subnets provide the network infrastructure for your EKS nodes.

Create a Security Group: Define a security group that specifies rules for inbound and outbound traffic for your EKS cluster. This is like a firewall for your cluster.

Create a Key Pair: If you plan to connect to your EKS nodes using SSH, create an EC2 key pair to securely access your worker nodes.

Create the EKS Cluster: Use eksctl or the AWS Management Console to create your EKS cluster. Provide information such as the cluster name, VPC, subnets, and the IAM role you created.

Wait for Cluster Creation: It may take a few minutes for AWS to create your EKS cluster. You can check the status using the AWS Management Console or CLI.

Configure kubectl: After the cluster is ready, configure kubectl to connect to your EKS cluster. This involves setting up the Kubernetes configuration to point to your EKS cluster.

Launch Worker Nodes: Create worker nodes (EC2 instances) for your EKS cluster. These nodes are where your containers will run. You can use AWS Auto Scaling groups to manage these nodes.

Deploy Your Applications: Now that your EKS cluster is up and running, you can start deploying your containerized applications to the cluster using Kubernetes manifests or Helm charts.
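For reference, a single eksctl command can perform many of these steps; a minimal sketch (the cluster name, region, and node settings are hypothetical):

eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodegroup-name demo-nodes \
  --node-type t3.medium \
  --nodes 2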

Q. What is difference between docker container and Kubernetes?

A. Docker is a container runtime; Kubernetes is a platform for running and managing containers from many container runtimes.

The difference between the two is that Docker is about packaging containerized applications on a single node and Kubernetes is meant to run them across a cluster.

Q. What is pod in Kubernetes?

A.  A pod is the smallest execution unit in Kubernetes. If a pod (or the node it executes on) fails, Kubernetes can automatically create a new replica of that pod to continue operations. In pod we can create multiple containers.

Q. How do you create a Pod?

A. Example manifest:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80

Q. What is service in Kubernetes?

A. A Kubernetes service is a logical abstraction for a deployed group of pods in a cluster.

In Kubernetes, there are several types of services that you can use to expose and manage your applications. Here are three commonly used service types:

ClusterIP: ClusterIP services provide internal, cluster-wide networking within the Kubernetes cluster. They make a service accessible only from within the cluster, typically used for communication between different parts of your application. This type of service is not exposed to the external world.

NodePort: NodePort services expose a service on a static port on each worker node in the cluster. This means the service can be accessed externally using the node’s IP address and the defined port number. It’s often used when you need to access a service from outside the cluster, such as for web applications.

LoadBalancer: LoadBalancer services are used to expose a service to the external world, typically through cloud providers’ load balancers. This type of service is especially useful for high-availability and scaling, as it can distribute incoming traffic across multiple pods or replicas of your application.
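A minimal sketch of a NodePort Service (the names, ports, and selector label are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web              # matches pods labelled app=web
  ports:
  - port: 80              # cluster-internal port of the Service
    targetPort: 8080      # port the pods listen on
    nodePort: 30080       # static port exposed on every node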

Q. Difference between RTO and RPO?

A. Recovery Time Objective (RTO): RTO is the maximum amount of time a business or system can afford to be down (unavailable) after a disaster or outage occurs. It represents the time it takes to recover and restore the system to a functional state. In simple words, RTO answers the question, “How quickly do we need to get back up and running?”

Recovery Point Objective (RPO): RPO is the maximum tolerable amount of data loss that a business can accept in the event of a disaster or system failure. It defines the point in time to which data must be recovered to resume normal operations. In simple words, RPO answers the question,

“How much data are we willing to lose?”

Q. What is difference between deployment and statefulset in Kubernetes

A. Kubernetes Deployment and Kubernetes StatefulSet are two powerful resources for managing containerized applications on Kubernetes. Deployments are useful for managing stateless applications, while StatefulSets are useful for managing stateful applications that require stable network identities and persistent storage.

Q. Difference between Ingress and egress?

A. Ingress: Think of “ingress” as the entry point or gateway into your Kubernetes cluster. It manages incoming traffic from external sources, such as users accessing a web application, and directs that traffic to the appropriate services or pods inside the cluster. It acts like a traffic cop, routing requests to the right destinations based on rules and configurations you define.

Egress: On the other hand, “egress” is all about the outgoing traffic from your cluster. It controls how your pods or containers communicate with external resources, like accessing data from a remote database, making API calls to external services, or even accessing the internet. Egress rules help manage and control this outbound traffic, often for security or policy reasons.

Q. What are the most used Terraform commands?

A. terraform init: to initialise the Terraform working directory and download providers and modules.

terraform plan: to preview the changes before deployment.

terraform apply: to apply the changes.

terraform destroy: to destroy the infrastructure created by Terraform.

terraform fmt: to fix the indentation and formatting of the configuration files.

terraform validate: to validate that the configuration is syntactically valid and internally consistent.

terraform show: to show the state file configuration.

terraform import: to bring resources created outside Terraform (manually or otherwise) under Terraform management.

Q. Difference between SSL and TLS?

A. SSL (Secure Sockets Layer):

SSL was the original technology developed to secure internet communication.

It has several versions, including SSL 2.0 and SSL 3.0, but these versions are now considered insecure due to vulnerabilities.

SSL is no longer recommended for use because of these security flaws. It has been largely replaced by TLS.

TLS (Transport Layer Security): –

TLS is an improved and more secure version of SSL.

It was created as an upgrade to SSL and addresses the security issues found in SSL versions.

TLS has multiple versions (e.g., TLS 1.0, TLS 1.1, TLS 1.2, TLS 1.3), with each version being more secure and efficient than the previous one.

TLS is the modern and recommended technology for securing data transmission on the internet.

Q. Dynamic blocks and Standard Configuration Blocks in terraform?

A. Dynamic blocks: – In simple terms, dynamic blocks allow you to generate and manage a variable number of similar configurations without repeating the same code multiple times.

This can be useful when you have a variable number of resources to create or when you want to reuse the same resource type with different attributes.

Standard configuration blocks: In Terraform, you typically define resource configurations with static blocks, writing each nested block out explicitly. A sketch of a dynamic block follows.
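A minimal sketch of a dynamic block that generates one ingress rule per port in a variable (the resource name, variable, and CIDR are hypothetical):

variable "ingress_ports" {
  type    = list(number)
  default = [80, 443]
}

resource "aws_security_group" "web" {
  name = "web-sg"

  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}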

Q. What is Terraform D?

A. It is a built-in plugin that we use to execute Terraform commands from within an external source-code editor.

Q. What is Terraform backend and its types?

A. A backend defines where terraform stores its state data files.

There are two types of Terraform backends: local and remote.

1. Local Backend: A local backend stores the state file on the machine where Terraform is running. This is the default backend used if you don't specify a backend in your Terraform configuration.

2. Remote Backend: A remote backend stores the state file in a shared remote location, such as an S3 bucket, Terraform Cloud, or Azure Blob Storage, so that a team can share the state and use state locking.

Q. What is blueprint deployment in aws devops?

A. Blueprints automatically generate source code and a continuous integration and delivery (CI/CD) pipeline to deploy common patterns to your AWS account without requiring extensive programming knowledge.

Blueprints are a declarative way to orchestrate the deployment of various resource templates and other artifacts

Q. What is DaemonSet in Kubernetes?

A. DaemonSet is a Kubernetes feature that lets you run a Kubernetes pod on all cluster nodes that meet certain criteria. Every time a new node is added to a cluster, the pod is added to it, and when a node is removed from the cluster, the pod is removed.

Q. What to do when I forget the Git password of my local?

A. For local HTTPS access, we can generate a personal access token from GitHub and use it in place of the password.

Q. Difference between wget and curl command?

A. curl and wget are both command-line tools used to retrieve data from the internet. They support different protocols: curl supports a variety of protocols, including HTTP, HTTPS, FTP, FTPS, SCP, SFTP, and more, while wget primarily supports HTTP, HTTPS, and FTP.

With curl, the -O option is used to save the download to a file named after the remote file, while wget saves the file by default without any option. In both cases the file is saved in the current directory.

Q. What is group in the Linux?

A. In Linux, groups are collections of users. Creating and managing groups is one of the simplest ways to deal with multiple users simultaneously, especially when dealing with permissions. The /etc/group file stores group information and is the default configuration file.

Q. What to do if code quality gate is failed in SonarQube?

A. There are several built-in quality gates, and we also have the option to create custom quality gates to check code quality. If any Jenkins builds are failing, we check in SonarQube whether they are failing due to low code coverage; if so, we contact the dev team about it.

The built-in quality gate is called "Sonar way".


Q. How do I change my Sonar quality gate?

A. Open your project in SonarQube.

Go to the Administration > Quality Gate menu for project.

Choose the quality gate you want to use for that project.

Q. If build failed in first stage, then how can be build progressed for next stage check?

A. We can use post-build actions via the pipeline's post section, for example with the always condition. The common conditions are failure, success, and always; there is also unstable, used for example when the build has Nexus IQ vulnerabilities.
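A minimal sketch of a declarative pipeline post section (the echo steps are placeholders):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'building...'
            }
        }
    }
    post {
        always   { echo 'runs regardless of the build result' }
        success  { echo 'runs only when the build succeeds' }
        failure  { echo 'runs only when the build fails' }
        unstable { echo 'runs when the build is marked unstable' }
    }
}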

Q. What are the plugins we use in Jenkins?

A. 1. For backup we use thinBackup plugin,

2. Pipeline Plugin for CI

3. SonarQube Plugin for code quality check

4. Docker plugin to create Docker containers and automatically run builds in them.

5. Kubernetes plugin for creating an individual Kubernetes pod for each agent from a Docker image. The plugin integrates Jenkins with Kubernetes, terminates the pods automatically once the build finishes, and lets you automatically scale the number of running Jenkins agents in the Kubernetes environment.

6. Git plugin for Git operations such as polling, branching, merging, fetching, tagging, listing, and pushing repositories. It also helps you schedule builds and automatically trigger a build after each commit.

7. Jenkins JFrog (Artifactory) plugin for publishing and storing build artifacts.

Q. Explain the AWS Shared Responsibility Model.

Answer: The AWS Shared Responsibility Model defines the division of security responsibilities between AWS and the customer. AWS is responsible for the security of the cloud (e.g., infrastructure, global network), while the customer is responsible for security in the cloud (e.g., configuring and securing their own applications and data).

DevOps:

Q. What is the difference between Continuous Integration (CI) and Continuous Deployment (CD)?

Answer: Continuous Integration (CI) is the practice of frequently integrating code changes into a shared repository and automatically running tests. Continuous Deployment (CD) is the automatic deployment of code changes to production or staging environments after passing CI tests.

Q. Explain the purpose of a CI/CD pipeline and its components.

Answer: A CI/CD pipeline automates the software delivery process. Components include version control, build automation, testing (unit, integration, and acceptance), artifact repositories, deployment automation, and monitoring.

Q. What is Docker, and how does it facilitate containerization in DevOps?

Answer: Docker is a platform for developing, shipping, and running applications in containers. Containers are lightweight, isolated environments that encapsulate an application and its dependencies. Docker simplifies application deployment and ensures consistency across different environments.

Q. How do you secure sensitive data in AWS, such as access keys and passwords?

Answer: Secure sensitive data by using AWS Identity and Access Management (IAM) for access control, AWS Key Management Service (KMS) for encryption, and parameter stores or secrets managers for storing credentials securely.

Q. How do you handle a critical production incident, and what steps would you take to resolve it?

Answer: In the event of a critical incident, I follow an incident response process that includes identifying the issue, notifying the relevant teams, containing the impact, troubleshooting and diagnosing the problem, implementing a fix, and conducting a post-incident review for lessons learned and prevention.

Q. What are microservices, and what are their advantages and challenges in software architecture?

Answer: Microservices are a software architectural pattern where applications are composed of small, loosely coupled services. Advantages include scalability and independent development. Challenges include increased complexity and inter-service communication.

Q. Describe your experience with automated testing and its importance in DevOps.

Answer: I have extensive experience with automated testing, including unit, integration, and acceptance testing. Automated testing ensures code quality, reduces manual effort, and supports the CI/CD pipeline by providing fast feedback on code changes.

Q. Can you explain the principles of immutable infrastructure and how it relates to DevOps practices?

Answer: Immutable infrastructure is the concept of not modifying running infrastructure but rather replacing it with a new, updated version. This approach ensures consistency, eliminates drift, and simplifies rollbacks, making it compatible with DevOps practices.
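A minimal Terraform sketch of this idea, assuming images are baked by a tool such as Packer (the resource and variable names are hypothetical):

Ex:

# Each release ships a new AMI; instances are replaced, not patched in place
resource "aws_launch_template" "web" {
  name_prefix   = "web-"
  image_id      = var.baked_ami_id   # new image per release
  instance_type = "t3.micro"

  lifecycle {
    create_before_destroy = true     # build the new version before destroying the old one
  }
}

variable "baked_ami_id" {
  type = string
}

Because every change produces a fresh image and fresh instances, rolling back is essentially pointing the variable back at the previous AMI.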

Q. Explain how you would scale a web application to handle a sudden traffic surge or load spike.

Answer: Scaling a web application involves vertical or horizontal scaling of resources, auto-scaling groups, and load balancing. I would ensure proper monitoring and alerting to trigger scaling actions and optimize the application for performance.
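As a hedged example, a Terraform sketch of an Auto Scaling group with a target-tracking scaling policy might look like this (names and thresholds are illustrative; the launch template is the one from the sketch above):

Ex:

# Auto Scaling group spread across private subnets
resource "aws_autoscaling_group" "web" {
  name                = "web-asg"
  min_size            = 2
  max_size            = 10
  desired_capacity    = 2
  vpc_zone_identifier = var.private_subnet_ids   # assumed to already exist

  launch_template {
    id      = aws_launch_template.web.id   # launch template from the previous sketch
    version = "$Latest"
  }
}

# Scale out/in automatically to keep average CPU around 60%
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.web.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60
  }
}

variable "private_subnet_ids" {
  type = list(string)
}

Monitoring and alerting (for example, CloudWatch alarms) would sit alongside this so that scaling events and saturation are visible as the spike happens.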

Q. What are the key considerations for securing containerized applications, and what tools or practices have you used for container security?

Answer: Key considerations include image scanning, least privilege principles, and runtime protection. I’ve used tools like Docker Bench Security, container vulnerability scanners, and Kubernetes RBAC for container security.

Q. Describe your experience with continuous monitoring and observability in a DevOps environment.

Answer: I have implemented continuous monitoring using tools like Prometheus, Grafana, and ELK Stack. Observability includes metrics, logs, and traces to gain insights into application performance and troubleshoot issues efficiently.

Q. How do you stay up-to-date with AWS and DevOps best practices and trends?

Answer: I regularly read AWS blogs, attend webinars, and participate in online communities. I also experiment with new tools and technologies in personal projects to gain hands-on experience.

Q. Can you provide an example of a challenging problem you encountered in a previous role and how you resolved it using AWS or DevOps practices?

Answer: In my previous role, we faced a scaling issue during a Black Friday sale. I implemented auto-scaling and load balancing in AWS, optimized database queries, and used caching to handle the increased load, resulting in a successful and uninterrupted sale.

Q. How would you handle a database schema migration in a production environment?

Answer: Handling a database schema migration in a production environment is a critical operation that should be carefully planned and executed to minimize downtime, ensure data integrity, and prevent potential issues. Here is a step-by-step guide on how to perform a schema migration in a production environment:

1. Backup the Database:

Before making any changes, take a complete backup of your production database. This backup serves as a safety net in case anything goes wrong during the migration.

2. Version Control:

Ensure that your database schema changes are under version control. Use a tool like Flyway, Liquibase, or manual SQL scripts managed in a version control system like Git.

3. Test the Migration:

Prior to applying the migration to the production database, thoroughly test it in a development or staging environment that closely mirrors the production setup. Test to ensure the migration works as expected, doesn’t cause data loss or corruption, and doesn’t negatively impact performance.

4. Schedule a Maintenance Window:

Coordinate with your operations team to schedule a maintenance window during a period of low traffic or when downtime is acceptable to your users. Communicate the maintenance window to your users in advance.

5. Deploy the Migration:

Deploy the database schema migration to the production environment. Depending on your database system and migration tool, you may need to run migration scripts manually or use an automated process.

6. Monitor and Verify:

Monitor the migration process in real-time to catch any issues as they occur. Automated monitoring and alerting can help detect problems early.

After the migration, perform thorough verification to ensure that the database is in the expected state. This includes running tests, queries, and validation scripts.

7. Rollback Plan:

Have a well-defined rollback plan in case the migration encounters critical issues. This plan should allow you to revert to the previous database schema and data state quickly.

8. Data Migration Strategies:

If your migration involves data transformations, consider strategies like blue-green deployments or canary releases to minimize the risk of data-related issues.

Use transactional SQL statements to ensure consistency and atomicity in data updates.

9. Graceful Downtime:

If downtime is unavoidable, display a user-friendly maintenance page to inform users of the scheduled maintenance and expected downtime duration.

10. Post-Migration Testing:

After the migration is complete, run extensive post-migration tests to ensure the application functions correctly with the new schema. Verify that all features work as expected.

11. Monitoring and Cleanup:

Set up ongoing monitoring to watch for any issues that may arise after the migration. Continue to monitor performance and stability.

Clean up any temporary objects or scripts used during the migration process.

12. Documentation and Communication:

Update documentation to reflect the changes made to the database schema.

Communicate the successful completion of the migration to relevant stakeholders and users.

Q. Difference between Agile and DevOps?

A. The key difference between Agile and DevOps is that Agile is a philosophy about how to develop and deliver software, while DevOps describes how to continuously deploy code through the use of modern tools and automated processes.

Q. How do you handle secrets and sensitive information in Terraform configurations?

A. Handling secrets and sensitive information in Terraform configurations is a crucial aspect of infrastructure management. Terraform provides several mechanisms for managing secrets and sensitive data (a short sketch combining two of them follows this list):

Environment Variables:

Store sensitive information like API keys, access tokens, or database passwords in environment variables outside of your Terraform configuration.

Terraform Input Variables:

Use input variables to pass sensitive data into your Terraform modules and configurations without storing them in plain text.

Encourage users to provide these variables from external sources (e.g., command-line flags, environment variables) when running Terraform.

HashiCorp Vault:

HashiCorp Vault is a dedicated tool for managing secrets and sensitive data. Terraform can integrate with Vault to securely retrieve secrets at runtime.

Use Terraform's Vault provider to access secrets stored in Vault and use them in your configurations.

External Data Sources:

If you need to retrieve sensitive data from external sources (e.g., AWS Secrets Manager, Azure Key Vault), use Terraform's data sources to fetch the data dynamically at runtime.

This approach ensures that sensitive data is not stored in Terraform configuration files.
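A short sketch combining two of these approaches (the variable and secret names are hypothetical):

Ex:

# 1. Sensitive input variable: supplied via TF_VAR_db_password or -var at run time,
#    never committed in plain text; Terraform redacts it from plan/apply output
variable "db_password" {
  type      = string
  sensitive = true
}

# 2. External data source: fetch a secret from AWS Secrets Manager at run time
#    instead of storing it in the configuration
data "aws_secretsmanager_secret_version" "api_key" {
  secret_id = "prod/app/api-key"
}

# The value is then available as
#   data.aws_secretsmanager_secret_version.api_key.secret_string
# Note that values read this way still end up in the state file, so the state
# itself must also be protected (for example, in an encrypted remote backend).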

THANKS
