I recently wanted to create an Amazon EventBridge rule that runs an SSM Automation document on a schedule.
A rule watches for certain events (cron in my case) and then routes them to AWS targets that you choose. You can create a rule that performs an AWS action automatically when another AWS action happens, or a rule that performs an AWS action regularly on a set schedule.
EventBridge needs permission to call SSM Start Automation Execution with the supplied Automation document and parameters. When you create the rule, the console offers to generate a new IAM role for this task.
In my case I received the following error:
Error Output
The Automation definition for an SSM Automation target must contain an AssumeRole that evaluates to an IAM role ARN.
If you receive this error, you can create the role manually using the following CloudFormation template.
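A minimal sketch of such a template is below (not the exact template from the original post): it creates a role trusted by both EventBridge and Systems Manager that can start Automation executions and pass a role to them. The role name is an assumption, and you should scope the Resource entries to your own Automation document and execution role rather than '*'.

AWSTemplateFormatVersion: '2010-09-09'
Description: Illustrative IAM role for an EventBridge rule targeting SSM Automation
Resources:
  EventBridgeAutomationRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: eventbridge-ssm-automation-role   # assumed name - change to suit
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - events.amazonaws.com
                - ssm.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: start-ssm-automation
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: ssm:StartAutomationExecution
                Resource: '*'
              - Effect: Allow
                Action: iam:PassRole
                Resource: '*'
Outputs:
  RoleArn:
    Value: !GetAtt EventBridgeAutomationRole.Arn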
While performing a lift-and-shift migration of a Windows SQL Server using the AWS Application Migration Service, I was challenged with wanting the newly migrated instance to have the Windows OS license ‘included’ but additionally the SQL Server Standard license billed to the account. The customer was moving away from their current hosting platform, where both licenses were covered under SPLA. Rather than going to a license reseller and purchasing SQL Server, it was preferred to have all of the Windows OS and SQL Server software licensing paid through their AWS account.
In the Application Migration Service, under Launch settings > Operating System Licensing, all we have is an OS licence option to toggle between license-included and BYOL.
Choose whether you want to Bring Your Own Licenses (BYOL) from the source server into the Test or Cutover instance. This defines whether the launched test or cutover instance will include the license for the operating system (License-included), or if the licensing will be based on that of the migrated server (BYOL: Bring Your Own License).
If we review a migrated instance where ‘license-included’ was selected during launch, using PowerShell on the instance itself we see only a single ‘BillingProduct = bp-6ba54002’ for Windows:
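(The original output was a screenshot; you can reproduce the check from inside the instance with the quick sketch below, which assumes the default metadata service is reachable via IMDSv1.)

# Read the instance identity document from the metadata service
$doc = Invoke-RestMethod -Uri 'http://169.254.169.254/latest/dynamic/instance-identity/document'
$doc.billingProducts   # a license-included, Windows-only instance shows just bp-6ba54002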
Leverage the AWS Database Migration Service (DMS) to migrate the on-premises Windows SQL Server to an AWS EC2 instance provisioned from a Marketplace AMI which includes SQL licensing.
Leverage SQL Server native tooling between the on-premises Windows SQL Server and an AWS EC2 instance provisioned from a Marketplace AMI which includes SQL licensing. Use either:
Native backup and restore
Log shipping
Database mirroring
Always On availability groups
Basic Always On availability groups
Distributed availability groups
Transactional replication
Detach and attach
Import/export
The only concern our customer had with all of the above approaches was that there was technical configuration on the source server that wasn’t well understood. The risk of reimplementing it on a new EC2 instance and missing configuration was perceived to be high impact.
Solution
The solution was to create a new EC2 instance from the AWS Marketplace AMI that we would like to be billed for. In my case I chose ‘Microsoft Windows Server 2019 with SQL Server 2017 Standard – ami-09ee4321c0e1218c3’.
The procedure is to detach all the volumes (including root) from the migrated EC2 instance that has all the lovely SQL data and attach them to the newly created instance, which has the updated BillingProducts of ‘bp-6ba54002’ for Windows and ‘bp-6ba54003’ for SQL Server Standard assigned to it.
If we review a Marketplace EC2 instance where SQL Server Standard was selected, again using PowerShell on the instance, we see both billing products:
This process will require a short outage, as both EC2 instances have to be stopped to detach and re-attach the volumes. This all happens pretty fast, so expect it to last only a minute or so.
NOTE: The primary ENI cannot be changed, so there will be an IP swap. Be aware of any DNS updates you may need to make afterwards so the SQL Server remains resolvable via hostname to other servers.
The high-level process of the script (a condensed sketch follows the list):
Get Original Instance EBS mappings
Stop the instances
Detach the volumes from both instances
Add the Original Instance’s EBS mappings to the New Instance
Tag the New Instance with the Original Instance’s tags
Tag the New Instance with the tag ‘Key=convertedFrom’ and ‘Value=<Original Instance ID>’
Update the Name tag on the Original Instance with ‘Key=Name’ and ‘Value=<OldValue>.old’
Update the Original Instance tags with its original BlockMapping for reference e.g. ‘Key=xvdc’ and ‘Value=vol-0c2174621f7fc2e4c’
Start the New Instance
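The full script isn’t reproduced here, but a condensed PowerShell sketch of the core steps might look like the following. The instance IDs are placeholders, and error handling, state polling and the reference-tagging steps are trimmed for brevity.

# Condensed sketch only - assumes the AWS.Tools.EC2 module is installed and credentials are loaded
$originalId = 'i-0123456789abcdef0'   # migrated instance holding the SQL data (placeholder)
$newId      = 'i-0fedcba9876543210'   # new Marketplace SQL Standard instance (placeholder)

# 1. Capture the Original Instance's EBS mappings and tags
$original = (Get-EC2Instance -InstanceId $originalId).Instances
$mappings = $original.BlockDeviceMappings
$tags     = $original.Tags | Where-Object { $_.Key -notlike 'aws:*' }

# 2. Stop both instances - in practice, poll Get-EC2Instance here until both report 'stopped'
Stop-EC2Instance -InstanceId $originalId, $newId | Out-Null

# 3. Detach the volumes from both instances
foreach ($bdm in $mappings) { Dismount-EC2Volume -VolumeId $bdm.Ebs.VolumeId | Out-Null }
$newMappings = (Get-EC2Instance -InstanceId $newId).Instances.BlockDeviceMappings
foreach ($bdm in $newMappings) { Dismount-EC2Volume -VolumeId $bdm.Ebs.VolumeId | Out-Null }

# 4. Attach the Original Instance's volumes to the New Instance under their original device names
foreach ($bdm in $mappings) {
    Add-EC2Volume -InstanceId $newId -VolumeId $bdm.Ebs.VolumeId -Device $bdm.DeviceName | Out-Null
}

# 5-6. Copy the tags across and record where the volumes came from
New-EC2Tag -Resource $newId -Tag $tags
New-EC2Tag -Resource $newId -Tag @{ Key = 'convertedFrom'; Value = $originalId }

# 9. Start the New Instance
Start-EC2Instance -InstanceId $newId | Out-Null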
After the script completes the Original Instance will have the following information:
The New Instance will have the following information:
Note: Knowledge of the AWS Cognito backend configuration and underlying concepts is assumed; it’s mostly the setup from an application integration perspective that is talked about here.
Recently we have been working on a Django project where a secure and flexible authentication system was required. As most of our existing infrastructure is on AWS, we chose Cognito as the backend.
Below are the steps we took to get this working and some insights learned on the way.
Django Warrant
The first attempt was using django_warrant, this is probably going to be the first thing that comes up when you google ‘how to django and cognito’.
Django_warrant works by injecting an authentication backend into Django which does some magic that allows your username/password to be submitted and checked against a configured user pool; on success it authenticates you and, if required, creates a stub Django user.
The basics of this were very easy to get working and integrate, but it had a few issues:
We still see username/password requests and have to send them on.
By default it can only be configured for one user pool.
Does not support federated identity provider workflows.
The GitHub project did not seem very active or up to date.
Ultimately we chose not to use this module, however inspiration was taken from its source code to do some of the user handling stuff we implemented later on.
Custom authorization_code workflow implementation
This involves using the Cognito hosted login form, which does both user pool and connected identity provider authentication (O365/Azure, Google, Facebook, Amazon).
The form can be customised with HTML, CSS and images and put behind a custom URL; other aspects of the process and events can be changed and reacted upon using triggers and Lambda.
Once you are authenticated in Cognito it redirects you back to the page of your choosing (usually your application’s login page or a custom endpoint) with a set of tokens. Using these tokens you then grab the authenticated user’s details and authenticate them within the context of your app.
The differences between the authorization code grant and the implicit grant are:
Implicit grant
Intended for client side authentication (javascript applications mostly)
Sends both the id_token (JWT) and access_token in the redirect response
Sends the tokens with an #anchor before them so they are not seen by the web server
Authorization code grant
Sends an authorization code in the redirect response rather than the tokens themselves
The code + client secret get turned into the id_token and access_token via the oauth2/token endpoint
We chose to use the authorization code grant workflow. It takes a bit more effort to set up but is generally more secure, and it alleviates any hacky JavaScript shenanigans that would be needed to get the implicit grant working with a Django server-based backend.
After these steps you can use boto3 or helper libraries to turn those tokens into the set of attributes (email, name, other custom attributes) kept by Cognito. Then you simply hook this up to your internal user/session logic by matching on your chosen attributes such as email or username.
I was unable to find any specific library support for some aspects of this, like the token handling in Python or the Django integration, so I have included some code which may be useful.
Code
This can be integrated into a view to get the user details from Cognito based on a token; the view sits at the redirect URL that Cognito returns to.
import warrant
import cslib.aws

def tokenauth(request):
    authorization_code = request.GET.get("code")
    token_grabber = cslib.aws.CognitoToken(
        <client_id>,
        <client_secret>,
        <domain>,
        <redir>,
        <region>?
    )
    id_token, access_token = token_grabber.get(authorization_code)
    if id_token and access_token:
        # This uses warrant (different than django_warrant),
        # a helper lib that wraps cognito.
        # Plain boto3 can do this also.
        cognito = warrant.Cognito(
            <user_pool_id>,
            <client_id>,
            id_token=id_token,
            access_token=access_token,
        )
        # Their lib is a bit broken, because we dont supply a username it wont
        # build a legit user object for us, so we reach into the cookie jar....
        # {'given_name': 'Joe', 'family_name': 'Smith', 'email': 'joe@jtwo.solutions'}
        data = cognito.get_user()._data
        return data
    else:
        return None
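Hooking the returned attributes into Django’s own user and session machinery isn’t shown above; a minimal sketch might look like this (the attribute names, the email-as-username matching and the ModelBackend choice are assumptions, not part of the original code):

from django.contrib.auth import get_user_model, login

def cognito_login(request):
    # Hypothetical view: exchange the ?code= for Cognito attributes, then log in locally
    attrs = tokenauth(request)  # the view above, returning e.g. given_name/family_name/email
    if not attrs:
        return None  # in a real view you would redirect back to the login page

    User = get_user_model()
    user, _created = User.objects.get_or_create(
        username=attrs["email"],  # match on email - adjust to your chosen attribute
        defaults={
            "email": attrs["email"],
            "first_name": attrs.get("given_name", ""),
            "last_name": attrs.get("family_name", ""),
        },
    )
    # Attach the user to the Django session; the backend is explicit because we bypassed authenticate()
    login(request, user, backend="django.contrib.auth.backends.ModelBackend")
    return user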
Below is a class that handles the oauth2/token workflow; this is mysteriously missing from the boto3 library, which seems to handle everything else quite well…
from http.client import HTTPSConnection
from base64 import b64encode
import urllib.parse
import json

class CognitoToken(object):
    """
    Why you no do this boto3...
    """
    def __init__(self, client_id, client_secret, domain, redir, region="ap-southeast-2"):
        self.client_id = client_id
        self.client_secret = client_secret
        self.redir = redir
        self.token_endpoint = "{0}.auth.{1}.amazoncognito.com".format(domain, region)
        self.token_path = "/oauth2/token"

    def get(self, authorization_code):
        headers = {
            "Authorization": "Basic {0}".format(self._encode_auth()),
            "Content-type": "application/x-www-form-urlencoded",
        }
        query = urllib.parse.urlencode({
            "grant_type": "authorization_code",
            "client_id": self.client_id,
            "code": authorization_code,
            "redirect_uri": self.redir,
        })
        con = HTTPSConnection(self.token_endpoint)
        con.request("POST", self.token_path, body=query, headers=headers)
        response = con.getresponse()
        if response.status == 200:
            respdata = str(response.read().decode('utf-8'))
            data = json.loads(respdata)
            return (data["id_token"], data["access_token"])
        return None, None

    def _encode_auth(self):
        # Auth is a base64 encoded client_id:secret
        string = "{0}:{1}".format(self.client_id, self.client_secret)
        return b64encode(bytes(string, "utf-8")).decode("ascii")
Tagging becomes a huge part of your life when in the public cloud. Metadata is thrown around like hotcakes, and why not. At cloudstep.io we preach the ways of the DevOps gods and especially infrastructure as code for repeatable and standardised deployments. This way everything is uniform and everything gets a TAG!
I ran into an issue recently where I would build an EC2 instance and capture the operating system into an AMI as part of a CloudFormation stack. This AMI would then be used as part of a launch configuration and subsequent auto scaling group. The original EC2 instance had every tag needed across all parts that make up the virtual machine including:
EBS root volume
EBS data volumes
Elastic Network Interfaces (ENI)
EC2 Instance itself
When deploying my auto scaling group all the user level tags I’d applied had been removed from the volumes and ENI. This caused a few issues:
EBS volumes couldn’t be tagged for billing.
EBS volumes couldn’t be snapped based on tag level policies in Lifecycle Manager.
Objects didn’t have a ‘Name’ tag, which made it hard in the console to understand which virtual machine instance the object belonged to.
There are two methods I derived to add my tags back that I’ll share with you. The tags needed to be added upon launch of the instance when the auto scaling group added a server. The methods I used were:
The auto scaling group has a Launch Configuration where the ‘User data’ field runs a script block at startup.
Trigger a Lambda function whenever CloudWatch matches a CloudTrail-logged API call for an instance launch event.
Tagging with the User Data property and PowerShell
User data is simply:
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
Try {
    # Use the metadata service to discover which instance the script is running on
    $InstanceId = (Invoke-WebRequest 'http://169.254.169.254/latest/meta-data/instance-id').Content
    $AvailabilityZone = (Invoke-WebRequest 'http://169.254.169.254/latest/meta-data/placement/availability-zone').Content
    $Region = $AvailabilityZone.Substring(0, $AvailabilityZone.Length - 1)
    $mac = (Invoke-WebRequest 'http://169.254.169.254/latest/meta-data/network/interfaces/macs/').Content
    $URL = "http://169.254.169.254/latest/meta-data/network/interfaces/macs/" + $mac + "/interface-id"
    $eni = (Invoke-WebRequest $URL).Content
    # Get the list of volumes and tags attached to this instance
    $BlockDeviceMappings = (Get-EC2Instance -Region $Region -Instance $InstanceId).Instances.BlockDeviceMappings
    $Tags = (Get-EC2Instance -Region $Region -Instance $InstanceId).Instances.tag
}
Catch { Write-Host "Could not access the AWS API, are your credentials loaded?" -ForegroundColor Yellow }

$BlockDeviceMappings | ForEach-Object -Process {
    $volumeid = $_.ebs.volumeid # Retrieve current volume id for this BDM in the current instance
    # Set the current volume's tags
    $Tags | ForEach-Object -Process {
        If ($_.Key -notlike "aws:*") {
            New-EC2Tag -Resources $volumeid -Tags @{ Key = $_.Key ; Value = $_.Value } # Add tag to volume
        }
    }
}

# Set the current NIC's tags
$Tags | ForEach-Object -Process {
    If ($_.Key -notlike "aws:*") {
        New-EC2Tag -Resources $eni -Tags @{ Key = $_.Key ; Value = $_.Value } # Add tag to ENI
    }
}
This script block is great and works a treat with newly created instances from Amazon Marketplace AMIs, e.g. a vanilla Windows Server 2019 template. The launch configuration would apply the script as part of the cfn-init function at startup. Unfortunately I’d already used the cfn-init function as part of the original image customisation and capture, so cfn-init would not re-run and didn’t execute this script block. So back to the drawing board in my scenario.
Tagging with CloudWatch and Lambda Function
The second solution was to create a Lambda function and trigger it using an Amazon CloudWatch Events rule. The Instance ID is parsed from the CloudWatch event in JSON to the Lambda function.
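The rule definition itself isn’t shown here; an event pattern along these lines (matching EC2 RunInstances calls recorded by CloudTrail) is the kind of trigger that would invoke the function:

{
  "source": ["aws.ec2"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["ec2.amazonaws.com"],
    "eventName": ["RunInstances"]
  }
}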
Here is the Lambda function, which is written in Python 2.7 and leverages the boto3 and json modules.
from __future__ import print_function
import json
import boto3

def lambda_handler(event, context):
    print('Received event: ' + json.dumps(event, indent=2))
    ids = []
    try:
        ec2 = boto3.resource('ec2')
        items = event['detail']['responseElements']['instancesSet']['items']
        for item in items:
            ids.append(item['instanceId'])
        base = ec2.instances.filter(InstanceIds=ids)
        for instance in base:
            ec2tags = instance.tags
            tags = [n for n in ec2tags if not n["Key"].startswith("aws:")]
            print(' original tags:', ec2tags)
            print(' applying tags:', tags)
            for volume in instance.volumes.all():
                print(' volume:', volume)
                if volume.tags != ec2tags:
                    volume.create_tags(DryRun=False, Tags=tags)
            for eni in instance.network_interfaces:
                print(' eni:', eni)
                eni.create_tags(DryRun=False, Tags=tags)
        return True
    except Exception as e:
        print('Something went wrong: ' + str(e))
        return False
I was working with a customer recently who had trouble deploying the AWS Discovery Connector to their VMware environment. AWS offer this appliance as an OVA file. For those who aren’t aware, OVA (Open Virtualisation Archive) is an open standard used to describe virtual infrastructure to be deployed on a hypervisor of your choice. Typically speaking, these files are hashed with an algorithm to ensure that the contents of the files are not changed or modified in transit (prior to being deployed within your own environment.)
At the time of writing, AWS currently offer this appliance hashed in two flavours… MD5 or SHA256. All sounds quite reasonable right?
Download the OVA with a hash of your choice
Deploy to VMware.
Profit???
Wrong! I was surprised to receive an email from my customer stating that their deployment had failed (see below.)
There’s a small clue here…
The Solution
My immediate response was to fire up google and do some reading. Surely someone had blogged about this before? After all…. I am no VMware expert. I finally arrived at the VMware knowledge base, where I began sifting through supported ciphers for ESX/ESXi and vCenter. The findings were quite interesting, you can find them summarised below:
If your VMware cluster consists of hosts which run ESX/ESXi 4.1 or less (hopefully no one) – MD5 is supported
If your VMware cluster consists of hosts which run ESX/ESXi 5.x or 6.0 – SHA1 is supported
If your VMware cluster consists of hosts which run ESX/ESXi 6.5 or greater – SHA256 is supported
In the particular environment I was working in, the customer had multiple environments with a mix of 5.5 and 6.0 physical hosts. As I was short on time, I had no real way of telling if the MD5 hashed image would deploy on a newer environment. I also don’t have a VMware development environment to test this approach on (by design.)
After a few more minutes of googling, I was rewarded with another VMware knowledge base article. VMware provide a small utility called “OVFTool”. This application’s sole purpose in life is to convert OVA files (you guessed it), ensuring that they are hashed with a supported cipher of your choice. In my scenario, the file was re-written using the supported SHA1 cipher. All of this was triggered from a Windows command line by executing:
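The exact command isn’t shown above; with OVFTool on the path it is a one-liner of roughly this shape (the --shaAlgorithm option selects the cipher, and the file names here are placeholders):

ovftool --shaAlgorithm=SHA1 aws-discovery-connector.ova aws-discovery-connector-sha1.ova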
You can read more about VMware supported ciphers – here
Finally, I should call out that this solution is not specific to deploying the AWS Discovery Connector. Consider this approach if you are experiencing similar symptoms deploying another OVA based appliance in your VMware environment.
I ran into an interesting issue when building a new ECS Cluster using CloudFormation. The CloudFormation stack would fail on Type: AWS::ECS::Service with error:
Unable to assume the service linked role. Please verify that the ECS service linked role exists. (Service: AmazonECS; Status Code: 400; Error Code: InvalidParameterException; Request ID: beadf3d5-3406-11e9-828d-b16cd52796ef)
Okay google, what’s this service linked role thingy?
A service-linked role is a unique type of IAM role that is linked directly to Amazon ECS. Service-linked roles are predefined by Amazon ECS and include all the permissions that the service requires to call other AWS services on your behalf.
The first few times I ran my stack, I assumed that this referred to an IAM role that I needed to assign to the AWS::ECS::Service to perform tasks, much like an IamInstanceProfile on a Type: AWS::EC2::Instance. When reviewing the available properties for Type: AWS::ECS::Service, there was a Role definition:
Cluster
DeploymentConfiguration
DesiredCount
HealthCheckGracePeriodSeconds
LaunchType
LoadBalancers
NetworkConfiguration
PlacementConstraints
PlacementStrategies
PlatformVersion
Role
SchedulingStrategy
ServiceName
ServiceRegistries
TaskDefinition
Role - The name or ARN of an AWS Identity and Access Management (IAM) role that allows your Amazon ECS container agent to make calls to your load balancer.
I had some well-defined Type: AWS::IAM::Role objects in my YAML for the ECS execution and task roles, but none of them were helping me with the service-linked role issue, no matter how far I took the IAM policies.
Solution
To cut a long story and much googling short, the issue was nothing to do with my IAM policies, but rather that the very first ECS cluster you create in the console using the getting-started wizard creates the service-linked role in the background. If you’re unlike me and read the full article about service-linked roles, you would have read:
when you create a new cluster (for example, with the Amazon ECS first run, the cluster creation wizard, or the AWS CLI or SDKs), or create or update a service in the AWS Management Console, Amazon ECS creates the service-linked role for you, if it does not already exist.
No mention in the above statement about CloudFormation. As per usual I jumped straight into a CloudFormation template without a test drive of the service and this time my attempt at being clever had given me a few moments of madness.
The easiest fix is to open up the AWS CLI and run the following against your account once, then jump back into CloudFormation for YAML fun:
aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com
Here at cloudstep, we love to help our customers achieve their goals. We believe that the cloud is a tool in the toolbox, and we can use that multi-faceted tool to help our customers realise success. Planning for success starts with goals, and goals come in many different shapes and sizes.
For any given solution, a customer’s goal may be focused on achieving financial or competitive advantage. Alternatively, they may be looking to realise operational efficiency by improving a day-to-day process using automation and orchestration. No matter your goal, you need a solid plan to ensure success. More often than not, that starts with validating that you have a sound understanding of the current-state environment, which will enable you to move forward towards achieving your goals.
Today I want to talk about a capability provided as part of the Migration Hub offering in AWS, the Application Discovery Service. This is a tool that we regularly use and encounter when meeting with customers. The core idea behind this capability (aptly named) is to help you discover critical details about your environment. This includes performance metrics and resource utilisation data which can be used for cost modelling, in our case… cloudstep.io. The tooling can also gather detailed network metrics to help you better understand the integrations and interfaces between applications in your environment. All of this data is at your disposal once you have decided upon which deployment model you would like to utilise.
AWS offer both an agentless discovery service and an agent-based discovery service. Ordinarily, we use the agentless discovery service. This is a great approach for organisations that operate entirely virtualised VMware infrastructure. Using this approach allows you to quickly inventory each of the VMs that reside within your vCenter without the requirement of installing an agent on each guest VM. Choosing this path means that the agentless discovery service will query the VMware vCenter for performance metrics (irrespective of which OS the guest is running). It can’t actually reach inside the virtual machine, so it is dependent on having a compatible version of VMware Tools running inside each VM.
If you have a mixture of physical and virtual servers in your fleet, or you run another hypervisor (such as Hyper-V), you may need to consider the agent-based deployment model. This approach is generally considered more labour intensive to get up and running due to the requirement to get hands-on with each server. There are also some constraints around which operating systems it can fetch data from, so be mindful of this. You may even find that the best approach is to run a mix of the two deployment models. The outcome of both approaches is a series of performance data metrics shipped outbound over HTTPS to an S3 bucket. This bucket can then be queried by the AWS Migration Hub service. Alternatively, you can export the data and analyse it using tooling of your choice.
For the remainder of the article, I will focus on our experience with the Agentless discovery approach. As I mentioned earlier, this is our preferred approach because it takes about an hour to get up and running and it generally produces more than enough quality data. In our experience, this provides an excellent baseline for commencing our cloudstep.io cost modelling engagement.
The AWS Agentless discovery connector operates as a VMware appliance within your vCenter environment. AWS provide a pre-canned OVA file which is around 2GB in size. You simply deploy this, the same way you would with any other open virtualisation archive. If you run multiple vCenters for different physical locations, you will need to deploy multiple instances of the appliance to service each stack.
If you experience issues deploying the OVA image within VMware, review my other blog – here
Deploying these appliances in enterprise environments often presents unique challenges. In our experience, this is where customers tend to have issues. Sometimes they deploy the appliances to management networks which don’t provide DHCP so they need to manually bind IP addresses, or there may be firewall rules which prevent connections from an access layer switch to perform the configuration process. The appliance does offer a terminal console (sudo setup.rb) where you can configure foundation services such as IP configs and DNS servers.
Another consideration you should make is “How will my appliance get outbound access to the internet?” After all, its sole purpose is to ship data outbound using HTTPS to an AWS S3 bucket via the Migration Hub. From a firewalling perspective, this is usually quite nice as outbound TCP443 generally doesn’t warrant a discussion with your security team. However, should your security team raise concern about corporate data being shipped off to the internet, AWS provide a detailed article on exactly what information is collected – here.
A final consideration you should make is proxy servers. If you utilise upstream proxy servers to police internet access, consider any rules you may need to define here. Typically speaking, the appliance will run headless in a “SYSTEM” context so you may need to allow it unauthenticated outbound internet access. Take a moment to think through any pitfalls you may encounter and also consider how you intend on interfacing with the appliance.
Once you have deployed your shiny new VM, you can fire up a web browser and configure it using the native web interface (browse to the appliance’s IP address). There are two things you will need:
Read-only credentials to the vCenter you will inventory.
AWS IAM Credentials to authenticate to the Migration Hub service.
Once you have completed the wizard, you will be greeted with a summary screen that presents instance-specific configuration such as the appliance’s AWS connector ID.
The final step in the process is to start the data collection. You can action this by making an API call using the AWS CLI.
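The call is a single command against the Application Discovery Service, passing the connector ID shown on the appliance summary screen (the ID below is a placeholder):

aws discovery start-data-collection-by-agent-ids --agent-ids "<connector ID>"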
Alternatively, you can also navigate to the Migration Hub console and manually approve the data collection process. If you have more than one appliance, you will have multiple connector ID’s registered here. You can validate that these line up by browsing to the appliance web interface where it will list its respective connector ID. The service polls the vCenter environment every 60 minutes, therefore it is reasonable to expect that you should be able to query your data within the AWS migration hub within an hour or two assuming everything is functioning as expected. Alternatively you can export the collected data to a CSV to commence your migration analysis.
In this blog I have explored the Application Discovery Service, a capability provided by AWS’ Migration Hub. We have talked through common pitfalls that customers often experience when working with the agentless discovery service, in an effort to simplify the deployment process. The data collected provides powerful insights into your environment, which is crucial to success when planning a cloud migration. Should you need further assistance, do not hesitate to reach out to the team at cloudstep.io. We’d love to hear from you and to help you on the road to success.
I’ve blogged before about my passion for automation and the use of ARM templating in the Azure world to eradicate the burden of dull and mundane tasks from the daily routine of the system administrators I consult for.
I loathe repetitive tasks; it’s in this space where subtle differences and inconsistency love to live. Recently I was asked to help out with a simple task: provisioning a couple of EC2 Windows servers in AWS. So in the spirit of infrastructure as code, I thought there was no better time to try out AWS CloudFormation to describe my EC2 instances. I’ve actually used CloudFormation before, but always describing my stack in JSON. CloudFormation also supports YAML, so challenge accepted and away I went…
So what is YAML anyway… Yet Another Mark-up Language? Interestingly, the official YAML website (https://yaml.org) describes it as “YAML Ain’t Markup Language”: rather, a “human friendly data serialisation standard for all programming languages”.

What attracted me to YAML is its simplicity: there are no curly braces {}, just indenting. It’s also super easy to read. So if JSON looks a bit too cody for your liking, YAML may be a more palatable alternative.

With this as a starting point I was quickly able to build an EC2 instance and customise my YAML to do some extra things.

If you’ve got this far and YAML is starting to look like it might be the ticket for you, it’s worth familiarising yourself with the CloudFormation built-in functions. You can use these to do things like assign values to properties that are not available until runtime.
With a learning curve of a couple of hours, including a bit of googling and messing around, I was able to achieve my goal. I built an EC2 instance, applied tagging, and installed some Windows features post-build via a PowerShell script (downloaded from S3 and launched with AWS::CloudFormation::Init cfn-init.exe), all without having to log on to the server or touch the console. Here is a copy of my YAML…
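(The full template is longer than what follows; the sketch below is a cut-down illustration of the same shape, with the AMI ID parameter, instance type, tag values and S3 script path all being placeholder assumptions.)

AWSTemplateFormatVersion: '2010-09-09'
Description: Cut-down illustration of a Windows EC2 instance bootstrapped with cfn-init
Parameters:
  AmiId:
    Type: AWS::EC2::Image::Id
    Description: Windows Server 2019 AMI to launch (placeholder)
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
Resources:
  WindowsInstance:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          files:
            # Hypothetical bootstrap script pulled from S3 (public URL assumed for brevity)
            'c:\cfn\scripts\install-features.ps1':
              source: https://my-bucket.s3.amazonaws.com/install-features.ps1
          commands:
            1-install-features:
              command: powershell.exe -ExecutionPolicy Bypass -File c:\cfn\scripts\install-features.ps1
              waitAfterCompletion: '0'
    Properties:
      ImageId: !Ref AmiId
      InstanceType: t3.medium
      KeyName: !Ref KeyName
      Tags:
        - Key: Name
          Value: demo-windows-01
        - Key: CostCentre
          Value: example
      UserData:
        Fn::Base64: !Sub |
          <script>
          cfn-init.exe -v --stack ${AWS::StackName} --resource WindowsInstance --region ${AWS::Region}
          </script>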
Earlier this week Amazon Web Services made a statement, indicating that the battle of tier-one public cloud providers is still heating up. Yesterday Matthew Graham (AWS Head of Security Assurance for Australia and New Zealand) announced that The Australian Cyber Security Centre (ACSC) had awarded PROTECTED certification to AWS for 42 of their cloud services.
It appears to be a tactical move, executed hot on the heels of Microsoft announcing their PROTECTED-accredited Azure Central regions in the back half of last year, and it clearly demonstrates that AWS aren’t prepared to reduce the boil to a gentle simmer any time soon.
Graham announced “You will find AWS on the ACSC’s Certified Cloud Services List (CCSL) at PROTECTED for AWS services, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), AWS Lambda, AWS Key Management Service (AWS KMS), and Amazon GuardDuty.”
He continued to state “We worked with the ACSC to develop a solution that meets Australian government security requirements while also offering a breadth of services so you can run highly sensitive workloads on AWS at scale. These certified AWS services are available within our existing AWS Asia-Pacific (Sydney) Region and cover service categories such as compute, storage, network, database, security, analytics, application integration, management and governance. “
Finally, delivering a seemingly well orchestrated jab “Importantly, all certified services are available at current public prices, which ensures that you are able to use them without paying a premium for security.”
It is no secret that the blue team currently charges a premium for entry into their PROTECTED level facility (upon completion of a lengthy eligibility assessment process) due to a finite amount of capacity available.
Both vendors state that consumers must configure services in line with the guidance in the respective ACSC certification report and consumer guidelines. This highlights that additional security controls must be implemented to ensure workloads are secured head to toe whilst storing protected level data. Ergo, certification is not implicit by nature of consuming accredited services.
AWS have released the IRAP assessment reports under NDA within their Artefact repository. For more information, review the official press release here.
Amazon Web Services is a well-established cloud provider. In this blog, I am going to explore how we can interface with the orange cloud titan programmatically. First of all, let’s explore why we may want to do this. You might be thinking, “But hey, the folks at AWS have built a slick web interface which offers all the capability I could ever need.” Whilst this is true, repetitive tasks quickly become onerous. Additionally, manual repetition introduces the opportunity for human error. That sounds like something we should avoid, right? After all, many of the core tenets of the DevOps movement are built on these principles (“to increase the speed, efficiency and quality of software delivery”, amongst others).
From a technology perspective, we achieve this by establishing automated services. This presents a significant speed advantage as automated processes are much faster than their manual counterparts. The quality of the entire release process improves because steps in the pipeline become standardised, thus creating predictable outcomes.
Here at cloudstep, this is one of our core beliefs when operating a cloud infrastructure platform. Simply put, the portal is a great place to look around and check reporting metrics. However, any services should be provisioned as code. Once again, to realise efficiency and improve overall quality.
“How do we go about this and what are some example use cases?”
AWS provide an open-source CLI bundle which enables you to interface directly with their public APIs. Typically speaking, this is done using a terminal of your choice (Linux shells, Windows command line, PowerShell, PuTTY, remotely… you name it, it’s there). Additionally, they also offer SDKs which provide a great starting point for developing applications on top of their services in many different languages (PowerShell, Java, .NET, JavaScript, Ruby, Python, PHP and Go).
So let’s get into it… The first thing you’ll want to do is walk through the process of aligning your operating environment with any mandatory prerequisites, then you can install the AWS CLI tools in a flavour of your choice. The process is well documented, so I won’t cover it off here.
Once you have the tools installed, you will need to provide the CLI tools with a base level of configuration which is stored in a profile of your choice. Running “AWS Configure” from a terminal of your choice is the fastest way to do this. Here you will provide IAM credentials to interface with your tenant, a default region and an output format. For the purpose of this example I’ve set my region to “ap-southeast-2” and my output format to “JSON.”
aws configure example
From here I could run “aws ec2 describe-instances” to validate that my profile had been defined correctly within the AWS CLI tools. The expected return would be a list of EC2 instances hosted within my AWS subscription as shown below.
aws ec2 describe-instances example
This shouldn’t take more than 5 minutes to get you up and running. However, don’t stop here. The AWS CLI supports almost all of the capability which can be found within the management portal. Therefore, if you’re in an operations role and your company is investing in AWS in 2019, you should be spending some time learning how to interface with services such as DynamoDB, EC2, S3/Glacier, IAM, SNS and SWF using the AWS CLI.
Let’s have a look at a more practical example whereby automating a simple task can potentially save you hours of time each year. As a Mac user (you’ve probably already picked up on that), I often need to fire up a Windows PC for Visual Studio or Visio. AWS is a great use case for this. I simply fire up my machine when I need it and shut it down when I’m done. I pay them a couple of bucks a month for some storage costs and some compute hours and I’m a happy camper. Simple, right?
Let’s unpack it further. I am not only a happy camper, I’m also a lazy camper. Firing up my VM to do my day job means:
Opening my browser and navigating to the AWS management console
Authenticating to the console
Navigating to the EC2 service
Scrolling through a long list of instances looking for my jumpbox
Starting my VM
Waiting for the network interface to refresh so I can get the public IP for RDP purposes.
This is all getting too hard, right? All of this has to happen before I can even do my job, and sometimes I have to do this a few times each day. Maybe it’s time to practice what I preach? I could automate all of this using the AWS Tools for PowerShell, which would allow me to automate this process by running a script which saves me hours each year (employers love that). Whilst this example won’t necessarily increase the overall quality of my work, it does provide me with a predictable outcome every single time.
For a measly 20 lines of PowerShell I was able to define an executable script which authenticates to the AWS EC2 service and checks the power state of the VM in question. If the VM is already running, it returns the connectivity details for my RDP client. If the VM is not running, it fires up my instance, waits for the NIC to refresh and then returns the connectivity details for my RDP client. I then have a script based on the same logic to shut down my VM to save money when I’m not using the service. All of this takes less than 5 seconds to execute.
PowerShell Automation Example
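The original 20-liner lives in the screenshot above; the block below is not that exact script, but a sketch of the same logic with the instance ID as a placeholder:

# Sketch: start the jump box if needed and return its RDP connection details
Import-Module AWS.Tools.EC2   # or the legacy AWSPowerShell module
$instanceId = 'i-0123456789abcdef0'   # placeholder for the jump box instance ID

$instance = (Get-EC2Instance -InstanceId $instanceId).Instances
if ($instance.State.Name -ne 'running') {
    Start-EC2Instance -InstanceId $instanceId | Out-Null
    # Wait for the instance to report as running and for a public IP to be assigned
    do {
        Start-Sleep -Seconds 5
        $instance = (Get-EC2Instance -InstanceId $instanceId).Instances
    } until ($instance.State.Name -eq 'running' -and $instance.PublicIpAddress)
}
Write-Host ("RDP to {0}:3389" -f $instance.PublicIpAddress)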
The AWS CLI tools provide an interface to interact with the cloud provider programmatically. In this simple example we looked at automating a manual process which has the potential to save hours of time each year whilst also ensuring a predictable outcome for each execution. Each of the serious public cloud players offers similar capability. If you are looking to increase your overall efficiency and improve the quality of your work whilst automating monotonous tasks, consider investing some effort into learning how to interface with your cloud provider of choice programmatically. You will be surprised how many repetitive tasks you can bowl over when you maximise the usage of the tools available to you.
Mobile apps have become an essential part of our daily lives, with over 6 billion smartphone users worldwide. As a result, there has been a significant increase in demand for mobile app development. There are two main approaches to mobile app development: In this blog, I’m going to talk about the leading cross-platform framework, Flutter, … Read more
Everyone’s had this recently. Organisations they partner with are becoming (justifiably) more stringent about their security. It creates some thorny problems though: Born in the Cloud When we’re talking about a Born in the Cloud Business (BITC) we’re talking about this sort of company: Larger organisations like working with businesses like these. They’re small, agile … Read more
Today, we’re going to talk about a hotly debated topic in the tech industry – whether to pick a single cloud provider or go for a multi-cloud strategy. As someone who’s been in the industry for a while, I’ve seen companies go back and forth on this topic, and I think it’s time to weigh … Read more
Today I want to talk to you about an important topic that can make or break a company’s success in the digital age: migrating infrastructure to the public cloud. As the world becomes increasingly digital, businesses must adapt to survive. And one of the most significant changes a company can make is moving their infrastructure … Read more
Public cloud migration has for a while been the buzzword on everyone’s lips, often described as a no-brainer for organisations whose core business is not managing IT systems. Sure, there are plenty of good reasons to take your organisation’s applications to the cloud: lower costs, better scalability, and increased … Read more