I recently needed to take a full SQL database backup from a virtual machine running SQL Server hosted on Azure. This is done via an Azure Automation account executing a runbook on a hybrid worker, and it's a great way to take an offline copy of your production SQL and store it someplace safe.
To accomplish this we will use the PowerShell module 'sqlps', which should be installed with SQL Server, and run the command Backup-SqlDatabase.
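As a minimal sketch of the runbook's core, assuming the sqlps module is present on the hybrid worker; the instance name and backup path below are placeholders:

Import-Module sqlps -DisableNameChecking
$instance = 'localhost'       # hypothetical instance name
$backupDir = 'E:\SQLBackups'  # hypothetical backup path
ForEach ($db in (Get-ChildItem "SQLSERVER:\SQL\$instance\DEFAULT\Databases")) {
    # Name each backup file after the database and the current date
    $file = Join-Path $backupDir ("{0}_{1}.bak" -f $db.Name, (Get-Date -Format 'yyyyMMdd'))
    Backup-SqlDatabase -ServerInstance $instance -Database $db.Name -BackupFile $file
}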
Tagging becomes a huge part of your life when you're in the public cloud. Metadata is thrown around like hotcakes, and why not? At cloudstep.io we preach the ways of the DevOps gods, especially infrastructure as code, for repeatable and standardised deployments. This way everything is uniform and everything gets a TAG!
I ran into an issue recently where I would build an EC2 instance and capture the operating system into an AMI as part of a CloudFormation stack. This AMI would then be used as part of a launch configuration and subsequent auto scaling group. The original EC2 instance had every tag needed across all parts that make up the virtual machine including:
EBS root volume
EBS data volumes
Elastic Network Interfaces (ENI)
EC2 Instance itself
When deploying my auto scaling group, all the user-level tags I'd applied had been removed from the volumes and ENI. This caused a few issues:
EBS volumes couldn’t be tagged for billing.
EBS volumes couldn’t be snapped based on tag level policies in Lifecycle Manager.
Objects didn't have a 'Name' tag, which made it hard to tell in the console which virtual machine instance the object belonged to.
I derived two methods to add my tags back, which I'll share with you. The tags needed to be added upon launch of the instance, whenever the auto scaling group added a server. The methods I used were:
Run a script block at startup via the 'User data' field of the auto scaling group's launch configuration.
Trigger a Lambda function via a CloudWatch Events rule whenever CloudTrail logs an instance launch API event.
Tagging with the User Data property and PowerShell
User data is simply:
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
Try {
    # Use the metadata service to discover which instance the script is running on
    $InstanceId = (Invoke-WebRequest 'http://169.254.169.254/latest/meta-data/instance-id').Content
    $AvailabilityZone = (Invoke-WebRequest 'http://169.254.169.254/latest/meta-data/placement/availability-zone').Content
    $Region = $AvailabilityZone.Substring(0, $AvailabilityZone.Length - 1)
    # Look up the instance's ENI via its MAC address (assumes a single NIC)
    $mac = (Invoke-WebRequest 'http://169.254.169.254/latest/meta-data/network/interfaces/macs/').Content.Trim().TrimEnd('/')
    $URL = "http://169.254.169.254/latest/meta-data/network/interfaces/macs/$mac/interface-id"
    $eni = (Invoke-WebRequest $URL).Content
    # Get the volumes and tags attached to this instance
    $BlockDeviceMappings = (Get-EC2Instance -Region $Region -InstanceId $InstanceId).Instances.BlockDeviceMappings
    $Tags = (Get-EC2Instance -Region $Region -InstanceId $InstanceId).Instances.Tag
}
Catch {
    Write-Host "Could not access the AWS API, are your credentials loaded?" -ForegroundColor Yellow
}
# Set each volume's tags
$BlockDeviceMappings | ForEach-Object -Process {
    $volumeid = $_.Ebs.VolumeId # Retrieve current volume id for this BDM in the current instance
    $Tags | ForEach-Object -Process {
        If ($_.Key -notlike "aws:*") { # aws:* tags are reserved and cannot be applied
            New-EC2Tag -Region $Region -Resources $volumeid -Tags @{ Key = $_.Key; Value = $_.Value } # Add tag to volume
        }
    }
}
# Set the NIC's tags
$Tags | ForEach-Object -Process {
    If ($_.Key -notlike "aws:*") {
        New-EC2Tag -Region $Region -Resources $eni -Tags @{ Key = $_.Key; Value = $_.Value } # Add tag to ENI
    }
}
This script block is great and works a treat with newly created instances from Amazon Marketplace AMIs, e.g. a vanilla Windows Server 2019 template. The launch configuration would apply the script as part of the cfn-init function at startup. Unfortunately, I'd already used the cfn-init function as part of the original image customisation and capture; cfn-init would not re-run, so it never executed this script block. So it was back to the drawing board in my scenario.
Tagging with CloudWatch and a Lambda Function
The second solution was to create a Lambda function and trigger it using an Amazon CloudWatch Events rule. The instance ID is parsed out of the JSON CloudWatch event passed to the Lambda function.
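For reference, the rule's event pattern looks along these lines (a sketch of the standard CloudTrail-via-CloudWatch pattern), firing whenever a RunInstances API call is logged:

{
  "source": ["aws.ec2"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["ec2.amazonaws.com"],
    "eventName": ["RunInstances"]
  }
}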
Here is the Lambda function, which is written in Python 2.7 and leverages the boto3 and json modules.
from __future__ import print_function
import json
import boto3

def lambda_handler(event, context):
    print('Received event: ' + json.dumps(event, indent=2))
    ids = []
    try:
        ec2 = boto3.resource('ec2')
        # Pull the instance ids out of the CloudTrail RunInstances event
        items = event['detail']['responseElements']['instancesSet']['items']
        for item in items:
            ids.append(item['instanceId'])
        base = ec2.instances.filter(InstanceIds=ids)
        for instance in base:
            ec2tags = instance.tags
            # Drop the reserved aws:* tags, which cannot be reapplied
            tags = [n for n in ec2tags if not n["Key"].startswith("aws:")]
            print(' original tags:', ec2tags)
            print(' applying tags:', tags)
            # Copy the instance tags to each attached EBS volume
            for volume in instance.volumes.all():
                print(' volume:', volume)
                if volume.tags != ec2tags:
                    volume.create_tags(DryRun=False, Tags=tags)
            # Copy the instance tags to each attached ENI
            for eni in instance.network_interfaces:
                print(' eni:', eni)
                eni.create_tags(DryRun=False, Tags=tags)
        return True
    except Exception as e:
        print('Something went wrong: ' + str(e))
        return False
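Note that for this to work, the function's execution role needs at least describe and tagging rights on EC2, along the lines of ec2:DescribeInstances, ec2:DescribeVolumes, ec2:DescribeNetworkInterfaces and ec2:CreateTags.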
Amazon Web Services is a well-established cloud provider. In this blog, I am going to explore how we can interface with the orange cloud titan programmatically. First of all, let's explore why we may want to do this. You might be thinking: "But hey, the folks at AWS have built a slick web interface which offers all the capability I could ever need." Whilst this is true, repetitive tasks quickly become onerous, and manual repetition introduces the opportunity for human error. That sounds like something we should avoid, right? After all, many of the core tenets of the DevOps movement are built on these principles ("to increase the speed, efficiency and quality of software delivery", amongst others).
From a technology perspective, we achieve this by establishing automated services. This presents a significant speed advantage as automated processes are much faster than their manual counterparts. The quality of the entire release process improves because steps in the pipeline become standardised, thus creating predictable outcomes.
Here at cloudstep, this is one of our core beliefs when operating a cloud infrastructure platform. Simply put, the portal is a great place to look around and check reporting metrics. However, any services should be provisioned as code. Once again, to realise efficiency and improve overall quality.
“How do we go about this and what are some example use cases?”
AWS provide an open source CLI bundle which enables you to interface directly with their public APIs. Typically speaking, this is done using a terminal of your choice (Linux shells, Windows Command Line, PowerShell, PuTTY, remotely… you name it, it's there). Additionally, they also offer SDKs which provide a great starting point for developing applications on top of their services in many different languages (PowerShell, Java, .NET, JavaScript, Ruby, Python, PHP and Go).
So let's get into it… The first thing you'll want to do is walk through the process of aligning your operating environment with any mandatory prerequisites, then you can install the AWS CLI tools in a flavour of your choice. The process is well documented, so I won't cover it off here.
Once you have the tools installed, you will need to provide the CLI tools with a base level of configuration, which is stored in a profile of your choice. Running "aws configure" from a terminal is the fastest way to do this. Here you will provide IAM credentials to interface with your account, a default region and an output format. For the purpose of this example I've set my region to "ap-southeast-2" and my output format to "JSON".
aws configure example
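If you haven't run it before, the exchange looks roughly like this (the key values shown are AWS's documented dummy examples):

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXVtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: ap-southeast-2
Default output format [None]: json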
From here I could run "aws ec2 describe-instances" to validate that my profile had been defined correctly within the AWS CLI tools. The expected return is a list of the EC2 instances hosted within my AWS account, as shown below.
aws ec2 describe-instances example
This shouldn't take more than 5 minutes to get you up and running. However, don't stop here. The AWS CLI supports almost all of the capability found within the management portal. So if you're in an operations role and your company is investing in AWS in 2019, you should be spending some time learning how to interface with services such as DynamoDB, EC2, S3/Glacier, IAM, SNS and SWF using the AWS CLI.
Let's have a look at a more practical example, whereby automating a simple task can potentially save you hours of time each year. As a Mac user (you've probably already picked up on that) I often need to fire up a Windows PC for Visual Studio or Visio. AWS is a great use case for this. I simply fire up my machine when I need it and shut it down when I'm done. I pay them a couple of bucks a month for some storage costs and some compute hours and I'm a happy camper. Simple, right?
Let's unpack it further. I am not only a happy camper, I'm also a lazy camper. Firing up my VM to do my day job means:
Opening my browser and navigating to the AWS management console
Authenticating to the console
Navigating to the EC2 service
Scrolling through a long list of instances looking for my jumpbox
Starting my VM
Waiting for the network interface to refresh so I can get the public IP for RDP purposes.
This is all getting too hard, right? All of this has to happen before I can even do my job, and sometimes I have to do this a few times each day. Maybe it's time to practice what I preach? I could automate all of this using the AWS Tools for PowerShell, running a script which saves me hours each year (employers love that). Whilst this example won't necessarily increase the overall quality of my work, it does provide me with a predictable outcome every single time.
For a measly 20 lines of PowerShell I was able to define an executable script which authenticates to the AWS EC2 service and checks the power state of the VM in question. If the VM is already running, it returns the connectivity details for my RDP client. If the VM is not running, it fires up my instance, waits for the NIC to refresh and then returns the connectivity details for my RDP client. I then have a script based on the same logic to shut down my VM to save money when I'm not using the service. All of this takes less than 5 seconds to execute.
PowerShell Automation Example
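A minimal sketch of that logic, assuming the AWS Tools for PowerShell module and a configured credential profile; the instance ID and region below are placeholders:

# Hypothetical jumpbox instance id and region
$instanceId = 'i-0123456789abcdef0'
$region = 'ap-southeast-2'

$instance = (Get-EC2Instance -InstanceId $instanceId -Region $region).Instances[0]
if ($instance.State.Name -ne 'running') {
    Start-EC2Instance -InstanceId $instanceId -Region $region | Out-Null
    # Wait for the instance to start and for the NIC to pick up a public IP
    do {
        Start-Sleep -Seconds 5
        $instance = (Get-EC2Instance -InstanceId $instanceId -Region $region).Instances[0]
    } until ($instance.State.Name -eq 'running' -and $instance.PublicIpAddress)
}
Write-Host "RDP to $($instance.PublicIpAddress):3389"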
The AWS CLI tools provide an interface to interact with the cloud provider programmatically. In this simple example we looked at automating a manual process which has the potential to save hours of time each year whilst also ensuring a predictable outcome for each execution. Each of the serious public cloud players offers similar capability. If you are looking to increase your overall efficiency and improve the quality of your work whilst automating monotonous tasks, consider investing some effort into learning how to interface with your cloud provider of choice programmatically. You will be surprised how many repetitive tasks you can bowl over when you maximise the usage of the tools you have available to you.
If you have implemented a VM-Series firewall in Azure, AWS or on-premises but don't have a Panorama server for your configuration backups, here is a solution for getting the firewall configuration into Azure Blob Storage. This could be done similarly with Lambda and S3 using Python and the boto3 library.
Why Do This?
If there are multiple administrators of the firewall and configuration changes are happening frequently, you may want a daily or hourly backup of the configuration to restore in the event that a recent commit has caused unwanted disruption to your network.
Azure Automation is a great place to start. We will have to interact with the API interface of the firewall to ask for a copy of the XML configuration. Generally speaking we don't want to expose the API interface to the internet, nor is it easy to whitelist the Azure Automation public IPs, so in this case a Hybrid Worker (a VM inside your trusted network) can execute the code against the internal trusted interface that has the API listening.
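As a sketch, the runbook boils down to calling the PAN-OS XML API's configuration export and pushing the result into blob storage. The firewall address, variable names, storage account and container below are all placeholder assumptions:

# Placeholder internal address of the firewall's trusted interface
$fwHost = '10.1.1.1'
# Assumption: the PAN-OS API key and storage key are stored as Automation assets
$apiKey = Get-AutomationVariable -Name 'PanosApiKey'
$uri = "https://$fwHost/api/?type=export&category=configuration&key=$apiKey"
$file = Join-Path $env:TEMP ("fw-config-{0}.xml" -f (Get-Date -Format 'yyyyMMdd-HHmm'))
Invoke-WebRequest -Uri $uri -OutFile $file -UseBasicParsing

# Push the exported XML into blob storage
$ctx = New-AzStorageContext -StorageAccountName 'fwbackups' -StorageAccountKey (Get-AutomationVariable -Name 'StorageKey')
Set-AzStorageBlobContent -File $file -Container 'configs' -Context $ctx -Force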
I've talked before about my passion for automation. I loathe doing repetitive tasks and fear inconsistency whilst undertaking them. It's not that I'm lazy; I recognise that people are generally busy, and sometimes it's hard to maintain focus on repetitive tasks. It's easy to forget a step here and there amongst everything else that's going on in your day.
Software has evolved over the years to the point where all decent software includes a public Application Programming Interface (API) that provides consumers with access to functions and procedures to obtain and manipulate data, and generally perform useful tasks. If you are thinking "yeah, this is awesome, but I'm not a developer, I don't know how to invoke an API, this all sounds too difficult…" then let me introduce you to Microsoft Flow.
What is Flow?
Flow isn't a new concept; it's been around for a while. Zapier and IFTTT are both awesome, mature products in this space and do much the same sort of thing. What makes Flow stand out is that it's included as part of an Office 365 subscription. It's something that you likely already have access to, which is awesome, because you don't need to ask for permission to purchase another app or subscription. The barriers to entry that stifle innovation likely aren't there. You can get started and experiment right away.
To a degree, this is really only limited by your imagination and the quality of the software products you interact with. The good news is that as I'm writing this it's the year 2018, and most organisations I interact with use modern software and cloud services that will definitely work with Flow. Furthermore, it's not limited to just the Microsoft stack. You can use Flow with third party software.
I like to think of Flow as a means to glue otherwise disparate software together. The concept is pretty simple: you choose a starting point to be your trigger, and the trigger then results in actions somewhere else. Put simply, an action in one place lets you trigger a sequence of events somewhere else.
Here at cloudstep we use Flow with our WordPress blog: every time we publish a new post, Flow detects this and posts a link to it on our LinkedIn company page and sends a tweet on Twitter. Sounds like magic… it's not really, it's just using the APIs behind the scenes, no coding required. Painless awesomeness.
If you think about your daily activities, there are likely several workflows just like this. Just remember automation doesn’t have to be elaborate to make a real difference.
Flow provides a nice dashboard view with the state of your connections and traffic light status on the run history.
Another nice feature of Flow is 'Team Flows', which allows you to share your automated workflows with others inside your organisation, removing another pet hate of mine: single points of failure within a workflow.
So as the year draws to a close, if you are fortunate enough to have some idle time, have a play with Flow, get automating and put some time in the bank for next year!