Cloud technologies have come a long way since the initial release of the public cloud platforms AWS, Azure, and GCP. This computing model has greatly disrupted the IT industry and has transformed overall IT platform spend from a CAPEX to an OPEX investment. In these exciting times, cloud is the new normal, and this brings both opportunities for new business models and a new challenge: managing IT infrastructure in the cloud. Enterprises are increasingly investing in upskilling their workforce to be “cloud-ready”. We have also witnessed increased DevOps adoption in mainstream application management, and this has greatly influenced how administrators and IT managers want to manage their infrastructure.

We live in an API world now, and this empowers cloud administrators and users to approach infrastructure management through Infrastructure as Code (IaC), using provisioning tools like Terraform, ARM templates, and CloudFormation, and configuration management tools like Ansible, Chef, and Puppet. The primary aim is to have a single source of truth for the infrastructure. Network and security administrators are also increasingly asking for API-based next-gen solutions with the ability to track and manage changes as code, and security and compliance automation using Chef InSpec is becoming increasingly popular.
A. Cloud Infrastructure management options:
We will look at the possible ways of managing cloud infrastructure and the challenges they pose for IT management teams:
- Hyperscaler portal
- Infrastructure as Code approach: Terraform, Cloud Management portal & Hyperscaler APIs, ARM Templates with PowerShell or CLI
The de-facto option for managing your cloud environment is the Hyperscaler portal, for example the AWS, Azure, or GCP user portal. While this is perfectly fine for smaller organizations, and IAM roles let you follow the principle of least privilege, you lose track of the changes made to your infrastructure over time. Change tracking becomes essential for large, distributed organizations where multiple teams have access to modify resources in the cloud portal.
Most cloud services, such as Azure VMs, Load Balancers, and VNETs, offer a rich API library, and the Hyperscaler portal invokes these APIs to modify resources based on customer inputs. We are seeing more and more enterprises locking down access to their Hyperscaler portals and adopting the IaC approach.
Infrastructure as Code approaches:
- Terraform
- Cloud Management portal & Hyperscaler APIs
- ARM Templates with PowerShell or CLI

Terraform
Terraform by HashiCorp offers a robust framework for IT administrators and cloud infrastructure managers to develop, maintain, and manage their complex infrastructure on all major cloud and infrastructure platforms, such as Azure, AWS, GCP, Alibaba Cloud, and VMware. IT teams rely on Terraform for implementing Infrastructure as Code and are embracing Policy as Code, Monitoring as Code, and Security as Code, not just to maintain a single source of truth for their infrastructure but also to maintain its historical state, with the ability to revert to a previous ‘stable state’ should there be any misconfigurations or errors.
Another benefit of using Terraform over Hyperscaler-specific IaC toolkits is its simplicity and platform-agnostic design. As most organizations move towards a multi-cloud strategy, upskilling IT teams to manage multiple platforms becomes critical. Terraform offers a ‘train once, deploy anywhere’ experience by providing providers for all major Hyperscaler platforms. Terraform is one part of an overall infrastructure management strategy; the broader strategy involves multiple components, such as configuration management and secrets management systems, to name a few. The solution scales well and can be adopted by infrastructure of any size.
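As a sketch of this platform-agnostic benefit, a single Terraform configuration can declare providers for several Hyperscalers side by side (the provider versions and region below are illustrative):

```hcl
# Illustrative multi-provider configuration: one codebase, several Hyperscalers.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "azurerm" {
  features {} # the azurerm provider requires this (possibly empty) block
}

provider "aws" {
  region = "eu-north-1"
}
```

The same `plan`/`apply` workflow then covers resources from both providers, which is what allows a team trained once on Terraform to operate multiple platforms.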
Cloud Management portal & Hyperscaler APIs
Another popular method of managing Hybrid Cloud infrastructure is a sophisticated Cloud Management Platform that invokes Hyperscaler APIs to provision and modify the cloud infrastructure.
This method is generally popular with customers that are just starting their cloud journey, are used to managing their infrastructure via a vSphere or Hyper-V portal, follow very detailed inter-departmental approval processes involving infrastructure teams, have invested in sophisticated event-management solutions, or want a unified experience for managing both their on-premises and cloud infrastructure. Most customers adopt a product like NetApp Cloud Manager, IBM Cloud Management Platform, or Red Hat CloudForms, while a few prefer to develop a bespoke cloud management platform for their organization. The core components and features customers look for in a Cloud Management Platform are the same irrespective of their platform selection. Note, however, that most Cloud Management Platforms do not record the state of the infrastructure, so there is limited to no ability to roll the infrastructure back to a previous healthy state in case of a disaster.
ARM Templates with PowerShell or Azure CLI
Resources in Azure are accessible via the Azure Portal, the Azure CLI, Azure PowerShell, or REST APIs. Each of these access options uses the underlying Azure Resource Manager (ARM) framework to manage Azure resources. The ARM framework then interacts with the resource provider for each Azure service, which is integrated with the service delivery layer that manages the physical deployment and placement of resources on hardware in an Azure datacenter.
Azure provides a collection of declarative tools called ARM templates that offer a simple way to manage your Azure infrastructure. Azure resource requests are encapsulated in these template files, which can be deployed via the Azure CLI or PowerShell.
Let’s look at a sample ARM template for setting up your first Azure NetApp Files account. To begin with, make sure your Azure user has sufficient privileges to create Azure resources in the desired region, subscription, and resource group, and authenticate your session from your command prompt, bash shell, or PowerShell client, either via browser login or with a service principal.
Make sure you choose the correct subscription and set it as your default deployment target. You now have an authenticated session in which you can directly manage and modify resources in Azure according to your user privileges. A very simple ARM template for setting up an Azure NetApp Files account in your subscription would be (in an authenticated session):
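A minimal sketch of such a template follows; the parameter names, default account name, and SMB server name are illustrative placeholders, and the activeDirectories section (which performs the domain join) can be dropped for an NFS-only account:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "accountName": { "type": "string", "defaultValue": "anfaccount01" },
    "location": { "type": "string", "defaultValue": "[resourceGroup().location]" },
    "adDomain": { "type": "string", "defaultValue": "example.local" },
    "adDnsServer": { "type": "string" },
    "adUsername": { "type": "string" },
    "adPassword": { "type": "securestring" }
  },
  "resources": [
    {
      "type": "Microsoft.NetApp/netAppAccounts",
      "apiVersion": "2021-06-01",
      "name": "[parameters('accountName')]",
      "location": "[parameters('location')]",
      "properties": {
        "activeDirectories": [
          {
            "domain": "[parameters('adDomain')]",
            "dns": "[parameters('adDnsServer')]",
            "smbServerName": "ANFSMB",
            "username": "[parameters('adUsername')]",
            "password": "[parameters('adPassword')]"
          }
        ]
      }
    }
  ]
}
```

Saved as, say, netapp-account.json, this can be deployed with `az deployment group create --resource-group <your-rg> --template-file netapp-account.json` or the PowerShell equivalent, New-AzResourceGroupDeployment.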
We can modify the parameters based on our configuration, save the file in .json format, and deploy it using the Azure CLI or PowerShell. The above ARM template deploys a new Azure NetApp Files account in our resource group and joins the account to our domain controller. We can add more resources to the ARM template to create capacity pools, volumes, snapshot policies, snapshots, backup policies, and backups; see the official Microsoft documentation for details.
B. Why is Infrastructure as Code the new normal, and why do enterprises prefer Terraform?
Terraform offers a powerful platform for managing your cloud IT infrastructure and supports multiple providers, such as VMware, AWS, Azure, GCP, and Alibaba Cloud. This becomes very powerful when customers are looking to move from private infrastructure to a Hybrid Cloud. The same team can manage multiple infrastructure environments, and the whole process can be simplified and automated using a DevOps pipeline solution like Jenkins or Azure DevOps. Customers can break their Terraform code into modules and use them to modify their resources as required.
Once you execute a terraform apply command against your .tf files that describe Hyperscaler resources, Terraform generates a state file called terraform.tfstate, which records the state of the current infrastructure. Upon subsequent executions of terraform apply, Terraform refers to this state file and compares it with the changes requested in the .tf files. Only deviations from the state captured in the .tfstate file are deployed or modified.
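The cycle described above can be sketched as a short command sequence (assuming Terraform is installed and the working directory contains your .tf files):

```shell
terraform init    # download the provider plugins referenced in the .tf files
terraform plan    # diff the desired configuration against terraform.tfstate
terraform apply   # apply only the deviations, then update terraform.tfstate
```

Running `terraform plan` before `apply` is what surfaces exactly which deviations from the recorded state would be deployed or modified.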
Another benefit, when Terraform is used with a version control system, is the ability to revert the infrastructure to a previously known healthy state.
Single Source of Truth
As a best practice, we always use a version control system when committing changes to our .tf files. Periodic infrastructure changes are made by editing the .tf files, and these changes are tracked and approved after multiple reviews. In large organizations with distributed teams, it becomes increasingly important to have a single source of truth for the infrastructure, to ensure we are in control of the infrastructure state at all times and have the ability to revert to a previously known good state.
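A minimal sketch of this review-and-revert workflow using plain git (the repository name, file name, and commit messages are illustrative; in practice the repository would hold your full Terraform configuration):

```shell
# Keep Terraform inputs under version control so every infrastructure change
# is reviewable and can be reverted to a previously known good state.
mkdir -p anf-infra && git init -q anf-infra
printf 'size_in_tb = 4\n' > anf-infra/pool.auto.tfvars
git -C anf-infra add pool.auto.tfvars
git -C anf-infra -c user.name=demo -c user.email=demo@example.com commit -qm "Pool at 4 TiB"

printf 'size_in_tb = 8\n' > anf-infra/pool.auto.tfvars
git -C anf-infra add pool.auto.tfvars
git -C anf-infra -c user.name=demo -c user.email=demo@example.com commit -qm "Grow pool to 8 TiB"

# Roll back the unwanted change; a subsequent terraform apply would then
# restore the 4 TiB pool, since only deviations from state are applied.
git -C anf-infra -c user.name=demo -c user.email=demo@example.com revert -n HEAD
cat anf-infra/pool.auto.tfvars   # size_in_tb = 4
```

In a team setting the commits would go through pull requests and reviews, which is what makes the repository the single source of truth.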
Integration, Ecosystem and Community support
Hyperscaler infrastructure is never isolated. It is usually surrounded by an ecosystem of third-party solutions and systems: firewalls, security appliances, Docker/Kubernetes, configuration management systems, secrets management systems, and so on. A lot of customers pursuing a Hybrid Cloud or multi-cloud strategy are looking at using virtual appliances for many of these tools in the public cloud.
Terraform is backed by a rich list of plugins for third-party service integrations that allow users to manage their infrastructure, as well as its complementary components, using the tools of their choice. Terraform also fits into the DevOps ecosystem smoothly. Having started as an open-source project (which is still widely adopted), Terraform is very popular among DevOps engineers, system administrators, and IT managers. This further strengthens the case for Terraform as the IaC framework of choice for Hybrid or multi-cloud environments: IT managers train their teams once on Terraform, and the engineering team can manage multiple Hyperscaler environments.
C. What is Azure NetApp Files?
Azure NetApp Files (ANF) makes it easy for enterprise LOB and storage professionals to migrate and run complex, performance-intensive, and latency-sensitive applications with no code changes. ANF is widely used as the underlying shared file-storage service in these scenarios: migration (lift-and-shift) of POSIX-compliant Linux and Windows applications, SAP HANA, databases, HPC infrastructure and applications, and enterprise web applications.
- Support for multiple protocols enables “lift & shift” of both Linux and Windows applications to run seamlessly in Azure.
- Multiple performance tiers allow close alignment with workload performance requirements.
- Deep integration with Azure enables a seamless and secure Azure experience, with no learning or management overhead.
- Leading certifications, including SAP HANA, GDPR, and HIPAA, enable migration of the most demanding workloads to Azure.
Get Extreme File Performance
Migrate and run your most demanding Linux and Windows file workloads in Azure, powered by NetApp’s industry-leading technology. Get bare-metal performance, sub-millisecond latency, and integrated data management for your complex enterprise workloads, including SAP HANA, HPC, LOB applications, high-performance file shares, and VDI.
Simplify Storage Management
Set up in minutes and manage seamlessly, like any other Azure service, using the familiar Azure portal experience, CLI, PowerShell, or REST API. Support for multiple file-storage protocols in a single service, including NFSv3, NFSv4.1, and SMB 3.1.x, enables a wide range of application lift-and-shift scenarios with no need for code changes. ANF comes with three performance tiers, Standard, Premium, and Ultra, that can be provisioned with a simple click, allowing unmatched flexibility.
Migrate with Confidence
Get peace of mind with the industry-leading security and compliance portfolio of Azure. Critical capabilities in this area include FIPS 140-2 compliant data encryption at rest, RBAC, AD authentication, and export policies for network-based ACLs. ANF also complies with leading industry certifications like HIPAA and GDPR. This, along with a default 99.99 percent availability for ANF, means that you can migrate and securely run industry applications in Azure with confidence.
D. How can you use Terraform to manage your ANF environments?
Terraform supports deployment of the Azure NetApp Files stack using the IaC approach and provides support for the Azure NetApp Files resources: ANF accounts, capacity pools, volumes, and snapshots.
Consider a sample of Terraform code to deploy the Azure NetApp Files stack in an existing Azure account, using existing resources, where:
- ‘ANF’ is the name of the subnet delegated to Azure NetApp Files
- We want to deploy the ANF stack in an existing resource group: anfdeploymenttfrg
- The ANF subnet is part of the eu-north-hub VNET, which belongs to the eu-north-core resource group
- We use a remote Active Directory server in Azure that resides in the same region as our ANF resources; the credentials of a service account with domain administrator rights are jointoaduser and XXXXXXXXXXXXX
- The IP address of the DNS server is A.B.C.D
We intend to deploy an ANF account, ‘example-netapp’, with a capacity pool, ‘example-netapppool’, of 4 TiB size and the ‘Standard’ performance tier. We will also deploy an ANF volume, ‘example-netappvolume’, with 900 GB of allocated storage and the NFSv3 protocol, and create a snapshot of this volume, ‘example-snapshot-01’.
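A hedged sketch of that configuration, using the azurerm provider’s azurerm_netapp_* resources, could look as follows. The AD domain, SMB server name, and volume path below are illustrative placeholders, an azurerm provider block is assumed to be configured, and the AD password should come from a variable or a secrets manager rather than plain text:

```hcl
# Assumes an azurerm provider block is already configured and authenticated.
variable "ad_admin_password" {
  type      = string
  sensitive = true # keep the AD join password out of the code and the logs
}

# Look up the existing resources described above.
data "azurerm_resource_group" "anf" {
  name = "anfdeploymenttfrg"
}

data "azurerm_subnet" "anf" {
  name                 = "ANF"
  virtual_network_name = "eu-north-hub"
  resource_group_name  = "eu-north-core"
}

resource "azurerm_netapp_account" "example" {
  name                = "example-netapp"
  location            = data.azurerm_resource_group.anf.location
  resource_group_name = data.azurerm_resource_group.anf.name

  active_directory {
    domain          = "example.local"       # placeholder domain
    dns_servers     = ["A.B.C.D"]
    smb_server_name = "ANFSMB"              # placeholder NetBIOS name
    username        = "jointoaduser"
    password        = var.ad_admin_password
  }
}

resource "azurerm_netapp_pool" "example" {
  name                = "example-netapppool"
  account_name        = azurerm_netapp_account.example.name
  location            = data.azurerm_resource_group.anf.location
  resource_group_name = data.azurerm_resource_group.anf.name
  service_level       = "Standard"
  size_in_tb          = 4
}

resource "azurerm_netapp_volume" "example" {
  name                = "example-netappvolume"
  location            = data.azurerm_resource_group.anf.location
  resource_group_name = data.azurerm_resource_group.anf.name
  account_name        = azurerm_netapp_account.example.name
  pool_name           = azurerm_netapp_pool.example.name
  volume_path         = "example-netappvolume" # placeholder mount path
  service_level       = "Standard"
  protocols           = ["NFSv3"]
  subnet_id           = data.azurerm_subnet.anf.id
  storage_quota_in_gb = 900
}

resource "azurerm_netapp_snapshot" "example" {
  name                = "example-snapshot-01"
  account_name        = azurerm_netapp_account.example.name
  pool_name           = azurerm_netapp_pool.example.name
  volume_name         = azurerm_netapp_volume.example.name
  location            = data.azurerm_resource_group.anf.location
  resource_group_name = data.azurerm_resource_group.anf.name
}
```

The implicit references between resources (account → pool → volume → snapshot) let Terraform work out the correct creation order on its own.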
Interested in learning more about what you can do with Terraform on ANF? Check out the step-by-step process for setting up ANF in an existing Azure environment using Terraform in one of my upcoming articles.