Flutter for mobile app development

Mobile apps have become an essential part of our daily lives, with over 6 billion smartphone users worldwide. As a result, there has been a significant increase in demand for mobile app development.

There are two main approaches to mobile app development:

  • Cross-platform development
  • Traditional/Native development

In this blog, I’m going to talk about the leading cross-platform framework, Flutter, and why we should use it. I will also be recommending some packages that I found quite useful during the development process.

What is Flutter?

Flutter is an open-source mobile app development framework created and supported by Google that allows developers to build high-performance, natively compiled user interfaces (UI) for applications across different platforms. When Flutter launched in 2018, it mainly supported mobile app development for iOS and Android platforms. After the release of Flutter 3 in 2022, it now supports not only mobile platforms but also web and desktop app development on Windows, macOS, and Linux.

Why Flutter?

Cross-platform development

Flutter allows developers to create apps for multiple platforms using a single codebase and programming language (Dart), saving time and effort while delivering a consistent user experience across different devices.

Sometimes, it does require extra effort to make things work on different platforms, for example, when implementing routing for the app.

Rich set of pre-built widgets

Flutter provides a wide range of customisable pre-built widgets and tools that make it easy for developers to create beautiful and responsive user interfaces without having to build everything from scratch. You can find a complete list of all the widgets and learn how to use them at https://docs.flutter.dev/ui/widgets.
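
For example, a simple screen can be assembled entirely from stock Material widgets. Here's a minimal sketch; the class names and strings are illustrative:

import 'package:flutter/material.dart';

void main() => runApp(const MyApp());

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    // MaterialApp, Scaffold, AppBar, Center and Text are all pre-built widgets.
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(title: const Text('Hello Flutter')),
        body: const Center(child: Text('Built entirely from pre-built widgets')),
      ),
    );
  }
}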

Performance

Flutter uses Dart, which is a fast, object-oriented programming language that allows for ahead-of-time (AOT) compilation. This means that Dart code can be compiled into machine code, resulting in faster performance.

Some nice tools and great integration with VS Code via extensions

Here’s a list of highly recommended tools that will make your development process faster and your life easier.

  • Flutter inspector: It helps you visualise your widget tree and diagnose layout issues.
  • Hot reload: It only updates the widget that has been changed instead of rerunning the whole main() function and rebuilding the app. This allows you to debug and view changes quickly without losing state.
  • Flutter (extension): It nicely integrates with the debugger and helps you easily refactor code, such as wrapping a widget around a block of code or removing a widget.
  • Flutter Widget Snippets (extension): It provides a set of snippets that help you complete boilerplate code that you use repeatedly.

Recommended packages

All Flutter developers should know Pub.dev (https://pub.dev), the central repository for Flutter packages. It contains a vast collection of useful packages that can greatly simplify your development process. Here are some of my personal favourites:

Flutter Riverpod

Flutter Riverpod is a state management library for Flutter, based on Provider, which simplifies the management of the app’s state. It provides an alternative to Flutter’s built-in state management system and offers a more modern approach to managing state in Flutter apps. It stores data or state globally and allows all widgets to use it without passing the state around, making your code cleaner. It’s quite similar to Redux if you have ever used React before.
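
As a minimal sketch of the idea (Riverpod 2.x style; the provider and widget names are illustrative), a globally stored counter looks like this:

import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';

// A global provider holding a piece of app state.
final counterProvider = StateProvider<int>((ref) => 0);

void main() {
  // ProviderScope stores the state of all providers.
  runApp(const ProviderScope(child: CounterApp()));
}

class CounterApp extends ConsumerWidget {
  const CounterApp({super.key});

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    final count = ref.watch(counterProvider); // rebuilds when the state changes
    return MaterialApp(
      home: Scaffold(
        body: Center(child: Text('Count: $count')),
        floatingActionButton: FloatingActionButton(
          onPressed: () => ref.read(counterProvider.notifier).state++,
          child: const Icon(Icons.add),
        ),
      ),
    );
  }
}

Any widget in the tree can watch counterProvider without the state being passed down through constructors.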

Auto Route

Flutter Auto Route is a package that simplifies the process of defining and generating navigation routes in Flutter apps. It provides a type-safe and boilerplate-free approach to generating routes, which can save time and effort during implementation and removes the need to hand-roll your routing.
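
As a rough sketch of the approach (auto_route's annotation API changes between major versions, so treat this as indicative of the v2/v3-era style; HomePage and DetailsPage are your own page widgets):

import 'package:auto_route/auto_route.dart';

// HomePage and DetailsPage are placeholder page widgets you define yourself.
// Running `flutter pub run build_runner build` generates the router class.
@MaterialAutoRouter(
  replaceInRouteName: 'Page,Route',
  routes: <AutoRoute>[
    AutoRoute(page: HomePage, initial: true),
    AutoRoute(page: DetailsPage),
  ],
)
class $AppRouter {}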

http/dio

HTTP and Dio are two popular packages in the Flutter community that simplify handling HTTP requests and responses in Flutter apps. They provide intuitive APIs for making network requests, as well as support for features like interceptors, file uploading and downloading, and more.
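
For instance, a simple GET request looks like this with each package (a sketch; the URL is a placeholder):

import 'package:dio/dio.dart';
import 'package:http/http.dart' as http;

Future<void> fetchExamples() async {
  // Using the http package: a small, simple API.
  final httpResponse = await http.get(Uri.parse('https://example.com/api/items'));
  print(httpResponse.statusCode);

  // Using dio: a richer feature set (interceptors, uploads/downloads, etc.).
  final dio = Dio();
  final dioResponse = await dio.get('https://example.com/api/items');
  print(dioResponse.statusCode);
}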

Conclusion

Flutter is a powerful and versatile mobile app development framework that offers many benefits, including fast development times, high-performance applications, and a large community with extensive resources. It provides a wide range of customisable pre-built widgets and tools that simplify the development process, and many great packages that make development easier and save time. Overall, Flutter is a great framework for developers to create beautiful and responsive apps across multiple platforms and can also help businesses save time and money when delivering projects.

Securing Born in the Cloud Businesses

Everyone’s run into this recently: the organisations they partner with are becoming (justifiably) more stringent about their security. It creates some thorny problems though:

  • How do we get the security without bludgeoning our business to death?
  • How do you improve data protection without making your staff rage quit?
  • How do we align the initiatives we take with broader security standards?

Born in the Cloud

When we’re talking about a Born in the Cloud Business (BITC) we’re talking about this sort of company:

  • Not much in the way of legacy systems.
  • Mostly SaaS based tools.
  • A boat load of BYOD.
  • Loosey Goosey office security 🙂

Larger organisations like working with businesses like these. They’re small, agile and generally full of rock-star grade experts in their field. But large organisations are also terrified of working with these sorts of companies. The locked-down, SOE-based workday they’re used to, which provides them with a measure of confidence, isn’t present in these BITC businesses. The large org wants all the warm fuzzy security but wants to keep the innovation and glint in their partner’s eye.

Security Standards

In Europe this is a lot more mature than it is in Australia. There are two different standards that get bandied about:

Essential 8

Here, there are a set of guidelines that the Australian Signals Directorate have adopted and provide as advice, called the Essential 8 Maturity Model. It covers several areas, and each one has four maturity levels an organisation can reach (0-3). It was originally envisaged as a straightforward, practical approach to data security but has been “beefed up” to be a lot more complex over time.

ISO 27001

Another standard is ISO 27001. This is a heavyweight standard to attain and can take 6-18 months depending on your complexity, maturity and size.

It covers a range of different technology and policy “controls” that should be applied. You can self-assert your compliance and then have that audited externally.

Essential 8 Level 3 (the highest) is roughly a subset of the work you’d need to complete to get to ISO 27001. Essential 8 is used in Australian Federal and State Governments, while ISO 27001 is a global standard.

What do I need to do?

We at jtwo have been on the journey of achieving both and we have some general advice on how to get going.

We aren’t security consultants and our professional indemnity doesn’t allow us to be so take this advice with a grain of salt. That should keep our insurers happy 🙂

So, with that out of the way: it’s a big beast, but here are some pointers on how to get started. We use Office 365 with E5 licensing, so a lot of the tools we need to build this stuff out are already there and already paid for.

Take it Seriously

You can’t fake this stuff. You have to embrace the idea of security in your bones or you won’t get anywhere. You have to think about the tools, processes and behaviours you use and think about them through a security lens. Once you’ve embraced the idea of security it all starts to look a bit more achievable.

Build Registers

In each of these security standards there are sets of lists and registers you need to keep. They include asset registers (physical and information based), and there are lots of them. This is particularly the case with ISO 27001.

We use Office365 so we built each of these registers as SharePoint Lists. They are easy to use and they can be used in reporting too.
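
As an illustration, a register can be stood up in a few lines with the PnP.PowerShell module (a hedged sketch; the site URL, list name and fields are placeholders for whatever your chosen standard asks for):

# Connect to the SharePoint site that hosts the registers (placeholder URL)
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/security" -Interactive

# Create an asset register as a plain SharePoint list
New-PnPList -Title "Asset Register" -Template GenericList

# Add the columns the standard asks for (illustrative field names)
Add-PnPField -List "Asset Register" -DisplayName "Owner" -InternalName "Owner" -Type Text -AddToDefaultView
Add-PnPField -List "Asset Register" -DisplayName "Classification" -InternalName "Classification" -Type Choice -Choices "Public","Internal","Confidential" -AddToDefaultView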

Embrace a SOE

Everyone hates them, they suck. They make it hard for you to be flexible and innovative. Developers hate them especially. But you should consider them part of your new world order. We use E5 licensing for Microsoft 365, and as part of this we get Intune and Defender. Rolling these out together can help you tick lots of boxes and actually be secure to boot.

MFA Everywhere, All at Once

You probably already do this, in fact if you don’t then do it as soon as you’ve read this. We use O365 and all the identities are in Azure AD. We’ve turned on MFA using Microsoft Authenticator and it does a lot of the heavy lifting.

Policies, Policies, Policies

You’ll need to write and maintain lots of policies. These are generally short (thankfully) but they need to be reviewed periodically and you need to record attestations that people have read, understood and agreed to the policies.

We build our policies as Word Documents and we built a PowerApp that lets people read and agree to the policies. The records for this go in our SharePoint lists for record keeping.

Enforcement

You need to enforce the use of policies, practices and tools. Consider making security compliance part of your staff meetings. Reward people for good behaviour and following policies. Gently (at first) nudge people towards good behaviour if they’re lagging behind.

Office365 and Purview are your friends

While many of the compliance activities you’ll need to do are policy and people based, there’s a lot of technology stuff too. As a BITC business you have a lot of this at your fingertips. We use Microsoft 365, and Purview is part of the E5 licensing we have. It’s got a bunch of great technology you can use to improve your security. It arranges it all as a set of scores, so you get the dopamine rush when you move the score up too. If you use M365 and have E5 you should definitely explore this. It will help greatly.

Data Classification

This is a big one. Data classification is generally difficult, but the Purview classification tools can use ML to do the classification work for you. Here’s what our Teams, email and other communication profile looks like…

We should probably tone down the fruity language.

This is also what our data looks like from the perspective of sensitive information.

You can see that we use what might be considered sensitive information in the content of our comms. This will vary from org to org, but you don’t have to do anything to get this; it works out of the box.

Standards Mapping

Another interesting capability is the standards mapping. You can choose a standard like E8L3 or ISO 27001 and apply that template to the controls you have in O365. This will give you a (probably massive) checklist of changes you need to make to meet those standards.

Microsoft also have their own standards for security, which are applied to your controls. Here’s an example of how it provides a gauge on your security compliance:

Moving this score up will move you along with various standards at the same time.

Single Cloud or Multi-Cloud: The Ultimate Debate

Today, we’re going to talk about a hotly debated topic in the tech industry – whether to pick a single cloud provider or go for a multi-cloud strategy. As someone who’s been in the industry for a while, I’ve seen companies go back and forth on this topic, and I think it’s time to weigh in with some of my observations.

Let’s start with the basics. A single cloud provider means that your company uses one cloud provider to host its applications, services, and data. On the other hand, a multi-cloud strategy means that you use multiple cloud providers for the same purpose. Sounds simple, right? Well, not exactly.

While the idea of using multiple cloud providers might seem like a good way to hedge your bets, the reality is that it can quickly become a headache for your organisation. One of the biggest challenges is the overhead that comes with establishing a presence in more than one cloud provider. Each provider has its own set of tools, services, and pricing models, which means that you need to invest time, money, and resources in learning and maintaining all of them. Not to mention the added complexity of managing data across multiple clouds, which can result in increased latency, security risks, and compliance issues.

For smaller organisations, a keep-it-simple approach might be best. According to a recent survey by LogicMonitor, 87% of SMBs are using a single cloud provider, and only 13% are using a multi-cloud strategy. This is because smaller companies typically have limited resources and cannot afford to spread themselves too thin. By using a single cloud provider, they can focus on their core business and avoid the added complexity of managing multiple cloud environments.

But what about larger organisations with more resources? Surely, they can handle a multi-cloud strategy, right? Well, not so fast. A recent report by Flexera found that 93% of enterprises have a multi-cloud strategy, but only 16% of them have the expertise to manage it. This means that most organisations are struggling to keep up with the demands of a multi-cloud environment, which can lead to increased costs, downtime, and security risks.

So, what’s the solution? While it’s tempting to go for a multi-cloud strategy to take advantage of the best features of each provider, the reality is that it’s not always worth the overhead. Instead, companies should focus on finding the right cloud provider that meets their specific needs and invest in developing the skills to manage it effectively.

At cloudstep.io we created a simple three-step ‘Business Case in a Box’ process that leverages our unique tooling to help organisations big or small answer these questions. It starts with a rapid assessment that provides lightweight, express validation of your cloud intentions through exploration and validation of different migration scenarios. The output of this assessment identifies any organisational knowledge gaps, followed by focused analysis to prepare the organisation for a successful migration.

The decision to pick a single cloud provider or a multi-cloud strategy should not be taken lightly. While multi-cloud might seem like a good idea in theory, the overhead and skills requirements can quickly become overwhelming for most organisations. Like many things in life, it’s not a simple case of one size fits all. Investing time upfront in validating your requirements, assessing candidate cloud providers and planning your migration could spare you a lot of sleepless nights. Thanks for reading!

Cloud Migration: Why Your Business Needs a Robust Business Case

Today I want to talk to you about an important topic that can make or break a company’s success in the digital age: migrating infrastructure to the public cloud. As the world becomes increasingly digital, businesses must adapt to survive. And one of the most significant changes a company can make is moving their infrastructure to the public cloud.

Now, I know what you’re thinking. “But, why should I move my data to the cloud? Isn’t it just another buzzword that’ll fade away in a few years?” I’m here to tell you that not only is the cloud here to stay, but it can also be a game-changer for your business. In this article, I’ll explain why establishing a business case for cloud migration is crucial, how to choose the right cloud provider, and why cost shouldn’t be the only factor to consider.

First things first, let’s talk about why you should even bother migrating to the cloud. The answer is simple: scalability and flexibility. The cloud offers a level of agility that on-premises solutions simply can’t match. With the cloud, you can scale your resources up or down as needed, pay only for what you use, and access your data from anywhere in the world. This level of flexibility can be a game-changer for businesses of all sizes, allowing them to respond quickly to changing market conditions, improve operational efficiency, and reduce costs.

Now that we’ve established why the cloud is important, let’s talk about choosing the right cloud provider. There are plenty of cloud providers out there, from big names like AWS, Azure, and Google Cloud to smaller private cloud players. But how do you decide which one is right for you? There are many factors to consider when choosing a provider, including cost, security, reliability, supportability, and ease of use. However, I want to stress that cost should not be the only factor you consider. While it’s important to stay within your budget, choosing the cheapest provider could end up costing you more in the long run if the provider doesn’t meet your needs. Instead, focus on finding a provider that can offer the right mix of features, support, and security that your business requires.

Of course, simply choosing a cloud provider isn’t enough. You need to validate your choice to ensure that it truly aligns with your business case. So, what exactly does a business case entail? Essentially, it’s a comprehensive analysis of your current infrastructure, your business needs, and your goals for the future. It involves exploring different migration scenarios and identifying and comparing costs to establish the validity and viability of one choice versus another. This will help you identify the areas of your IT landscape that could benefit the most from a cloud migration and determine which cloud provider can best meet your needs.

A robust business case is essential to secure the buy-in of key stakeholders in the organisation. This includes executives, investors, and board members. The business case should outline the ongoing financial and operational benefits of migrating to the public cloud, in addition to the softer benefits such as improved scalability and increased agility. By presenting a solid business case, you can effectively communicate the value proposition of the migration and gain the support of those who hold the purse strings.

At cloudstep.io we created a simple three-step ‘Business Case in a Box’ process that leverages our unique tooling to explore different migration scenarios and build a business case. It starts with a rapid assessment to provide lightweight, express validation of your cloud intention. Our tooling allows you to develop a board-ready business case, comprising the capital and operational costs that are important and specific to your organisation. Once you’ve identified the optimum business case, the output of this assessment identifies any organisational knowledge gaps, followed by focused analysis to prepare the organisation for a successful migration.

Establishing a business case for your cloud migration and choosing the right provider are crucial to the success of your cloud journey. Don’t rush into any decisions without first conducting thorough analysis. Remember, cost is just one piece of the puzzle. Keep your business goals and needs in mind, and you’ll be well on your way to a successful cloud migration.

The Cloud Migration Pitfalls You Need to Know: Why Understanding Your Applications is Critical

Public cloud migration has been the buzzword on everyone’s lips for a while now, often described as a no-brainer for organisations whose core business is not managing IT systems. Sure, there are plenty of good reasons to take your organisation’s applications to the cloud: lower costs, better scalability, and increased flexibility. But here’s the thing – it’s not all sunshine and rainbows, and there are definitely some pitfalls you need to be aware of.

One of the most critical factors to consider when migrating applications to the cloud is having a solid understanding of those applications, their relationships with one another, and the infrastructure that underpins them. This is essential if you want to avoid a disruptive migration that could have significant impacts on your organisation’s operational performance.

According to a recent study conducted by Harvard Business Review, a poor understanding of applications and infrastructure is one of the leading causes of disruption during a cloud migration. The study found that only 38% of IT leaders had a clear understanding of their organisation’s applications, while only 26% understood the relationships between applications and infrastructure. These statistics are worrying, especially when you consider that a failed cloud migration can have real and lasting consequences.

For instance, a poorly planned migration can result in application downtime, data loss, and security breaches, all of which can lead to significant financial losses and damage to your organisation’s reputation. These consequences can be particularly devastating for small and medium-sized enterprises (SMEs), which may not have the resources to recover quickly from such disruptions.

So, what can you do to avoid these pitfalls? Well, first and foremost, you need to ensure that you have a thorough understanding of your organisation’s applications and infrastructure. Sounds easy, right? It means conducting a comprehensive inventory of your applications, documenting their dependencies and relationships, and mapping out your infrastructure architecture. Where do you start? How do you know where to focus your attention? How do you make this a cost-effective exercise?

At cloudstep.io we created a simple three-step ‘Business Case in a Box’ process that leverages our unique tooling to answer these questions. It starts with a rapid assessment to provide lightweight, express validation of your cloud intention. The output of this assessment identifies any organisational knowledge gaps, followed by focused analysis to prepare the organisation for a successful migration.

As with anything outside the scope of your core business, it’s wise to consider working with a trusted cloud service advisor who can provide your organisation with expert guidance and support throughout the migration process. This will help ensure that your migration is seamless and that your applications and data are migrated securely and efficiently.

To wrap up, migrating your organisation’s applications to the public cloud can be a fantastic way to save costs and increase flexibility. However, it’s essential to recognise that this process comes with its own set of challenges and pitfalls. To avoid disruption of your business, it’s critical to have a solid understanding of your applications and infrastructure, as well as to work with a trusted cloud service advisor. With careful planning and execution, you can ensure a successful migration and reap the benefits of the cloud without putting your organisation’s operational performance at risk.

AWS EventBridge Triggering SSM Automation IAM Role Error

I recently wanted to create an Amazon EventBridge rule that will schedule an SSM Automation document.

A rule watches for certain events (cron in my case) and then routes them to AWS targets that you choose. You can create a rule that performs an AWS action automatically when another AWS action happens, or a rule that performs an AWS action regularly on a set schedule.
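
For instance, a scheduled rule can be created with the AWS CLI like so (a sketch; the rule name and cron expression are placeholders):

aws events put-rule \
  --name my-nightly-schedule \
  --schedule-expression "cron(0 2 * * ? *)"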

EventBridge needs permission to call SSM Start Automation Execution with the supplied Automation document and parameters. The rule wizard offers to generate a new IAM role for this task.

In my case I received an error like the one below:

Error Output

The Automation definition for an SSM Automation target must contain an AssumeRole that evaluates to an IAM role ARN.

If you receive this error, you can create the role manually using the following CloudFormation template.

AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation template IAM Roles for Event Bridge | SSM Automation

Resources:
  AutomationServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - events.amazonaws.com
          Action: sts:AssumeRole
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole
      Path: "/"
      RoleName: EventBridgeAutomationServiceRole
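
Once the role exists, it can be supplied as the RoleArn when attaching the Automation document as a target. A hedged sketch with the AWS CLI (the rule name, region, account ID and document name are placeholders):

aws events put-targets \
  --rule my-nightly-schedule \
  --targets '[{
    "Id": "1",
    "Arn": "arn:aws:ssm:ap-southeast-2:123456789012:automation-definition/MyAutomationDocument:$DEFAULT",
    "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeAutomationServiceRole"
  }]'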

Migrate to AWS EC2 with SQL licensing included

While performing a lift-and-shift migration of Windows SQL Server using the AWS Application Migration Service, I was challenged with wanting the newly migrated instance to have a Windows OS license ‘included’ but additionally the SQL Server Standard license billed to the account. The customer was moving away from their current hosting platform where both licenses were covered under SPLA. Rather than going to a license reseller and purchasing SQL Server, it was preferred to have all the Windows OS and SQL Server software licensing paid through their AWS account.

In the Application Migration Service, under Launch settings > Operating System Licensing, we can see that all we have are OS licence options to toggle between license-included and BYOL.

Choose whether you want to Bring Your Own Licenses (BYOL) from the source server into the Test or Cutover instance. This defines whether the launched test or cutover instance will include the license for the operating system (License-included), or if the licensing will be based on that of the migrated server (BYOL: Bring Your Own License).

If we review a migrated instance where ‘license-included’ was selected during launch, using PowerShell on the instance itself we see only a single ‘BillingProduct = bp-6ba54002’ for Windows:

((Invoke-WebRequest http://169.254.169.254/latest/dynamic/instance-identity/document).Content | ConvertFrom-Json).billingProducts

bp-6ba54002 

AWS Preferred Approach

There are lots of options for migrating SQL Server to AWS, so we weren’t without choices.

  1. Leverage the AWS Database Migration Service (DMS) to migrate an on-premises Windows SQL Server to the Relational Database Service (RDS).
  2. Leverage the AWS Database Migration Service (DMS) to migrate an on-premises Windows SQL Server to an AWS EC2 instance provisioned from a Marketplace AMI which includes SQL licensing.
  3. Leverage SQL Server native tooling between an on-premises Windows SQL Server and an AWS EC2 instance provisioned from a Marketplace AMI which includes SQL licensing, using one of:
    1. Native backup and restore
    2. Log shipping
    3. Database mirroring
    4. Always On availability groups
    5. Basic Always On availability groups
    6. Distributed availability groups
    7. Transactional replication
    8. Detach and attach
    9. Import/export

The only concern our customer had with all the above approaches was that there was technical configuration on the source server that wasn’t well understood. The risk of reimplementing on a new EC2 instance and missing configuration was perceived to be high impact.

Solution

The solution was to create a new EC2 instance from an AWS Marketplace AMI that we would like to be billed for. In my case I chose ‘Microsoft Windows Server 2019 with SQL Server 2017 Standard’ (ami-09ee4321c0e1218c3).

The procedure is to detach all the volumes (including root) from the migrated EC2 instance that has all the lovely SQL data and attach them to the newly created instance, which has the updated BillingProducts of ‘bp-6ba54002’ for Windows and ‘bp-6ba54003’ for SQL Server Standard assigned to it.

If we review a Marketplace EC2 instance where SQL Server Standard was selected, using PowerShell on the instance:

((Invoke-WebRequest http://169.254.169.254/latest/dynamic/instance-identity/document).Content | ConvertFrom-Json).billingProducts

bp-6ba54002
bp-6ba54003 

How will it work?

This process will require a short outage, as both EC2 instances have to be stopped to detach and re-attach the volumes. This all happens pretty fast, so only expect it to last a minute or so.

NOTE: The primary ENI cannot be changed, so there will be an IP swap; be aware of any DNS updates you may need to make afterwards so that other servers can resolve the SQL Server by hostname.

The high-level process of the script:

  1. Get Original Instance EBS mappings
  2. Stop the instances
  3. Detach the volumes from both instances
  4. Add the Original Instance’s EBS mappings to the New Instance
  5. Tag the New Instance with the Original Instance’s tags
  6. Tag the New Instance with the tag ‘Key=convertedFrom’ and ‘Value=<Original Instance ID>’
  7. Update the Name tag on the Original Instance with ‘Key=Name’ and ‘Value=<OldValue+.old>
  8. Update the Original Instance tags with its original BlockMapping for reference e.g. ‘Key=xvdc’ and ‘Value=vol-0c2174621f7fc2e4c’
  9. Start the New Instance

After the script completes, the Original Instance is renamed with a ‘.old’ suffix and tagged with its original volume mappings for reference, while the New Instance carries the Original Instance’s tags and has its volumes attached. The full script:

$orginalInstanceID = "i-0ca332b0b062dbe76"
$newInstanceID = "i-0ce3eeadfa27e2f64"
$AccessKey = ""
$Secret = ""
$Region = "ap-southeast-2"

If (!(get-module -ListAvailable | ? {$_.Name -like "*AWS.Tools.EC2*"}))
{                
    Write-Output "WARNING: EC2 AWS Modules Not Installed Yet..." 
    Exit
}
$getModuleResults = Get-Module "AWS.Tools.EC2"
If (!$getModuleResults) 
{
    Write-Output "INFO: Loading AWS Module..."
    Import-Module AWS.Tools.Common -ErrorAction SilentlyContinue -Force
    Import-Module AWS.Tools.EC2 -ErrorAction SilentlyContinue -Force
}
else{
    Write-Output "INFO: AWS Module Already Loaded"
}

Set-AWSCredential -AccessKey $AccessKey -SecretKey $Secret
Set-DefaultAWSRegion -Region $Region
Write-Output "INFO: Getting details $($orginalInstanceID)"
$originalInstance = (Get-EC2Instance -InstanceId $orginalInstanceID).Instances
$orginalBlockMappings = $originalInstance.BlockDeviceMappings
$originalVolumes = @()
Write-Output "INFO: Getting EBS volumes from $($orginalInstanceID)"
ForEach($device in $orginalBlockMappings){
    $Object = New-Object System.Object
    #Get EBS volumes for the machine
    $Object | Add-Member -type NoteProperty -name "DeviceName" -Value $device.DeviceName
    $Object | Add-Member -type NoteProperty -name "VolumeId" -Value $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "Status" -Value $device.ebs.Status
    $volume = Get-EC2Volume -VolumeId $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "AvailabilityZone" -Value $volume.AvailabilityZone
    $Object | Add-Member -Type NoteProperty -name "Iops" -Value $volume.Iops
    $Object | Add-Member -Type NoteProperty -name "CreateTime" -Value $volume.CreateTime
    $Object | Add-Member -Type NoteProperty -name "Size" -Value $volume.Size
    $Object | Add-Member -Type NoteProperty -name "VolumeType" -Value $volume.VolumeType
    $originalVolumes += $Object
}
Write-Output $originalVolumes | Format-Table
$tempInstance = (Get-EC2Instance -InstanceId $newInstanceID).Instances
$tempBlockMappings = $tempInstance.BlockDeviceMappings
$tempVolumes = @()
Write-Output "INFO: Getting details $($newInstanceID)"
ForEach($device in $tempBlockMappings){
    $Object = New-Object System.Object
    #Get EBS volumes for the machine
    $Object | Add-Member -type NoteProperty -name "DeviceName" -Value $device.DeviceName
    $Object | Add-Member -type NoteProperty -name "VolumeId" -Value $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "Status" -Value $device.ebs.Status
    $volume = Get-EC2Volume -VolumeId $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "AvailabilityZone" -Value $volume.AvailabilityZone
    $Object | Add-Member -Type NoteProperty -name "Iops" -Value $volume.Iops
    $Object | Add-Member -Type NoteProperty -name "CreateTime" -Value $volume.CreateTime
    $Object | Add-Member -Type NoteProperty -name "Size" -Value $volume.Size
    $Object | Add-Member -Type NoteProperty -name "VolumeType" -Value $volume.VolumeType
    $tempVolumes += $Object
}
Write-Output $tempVolumes | Format-Table
#Lets do the work
Write-Output "INFO: Stop the instance $($orginalInstanceID)...."
try{
    Stop-EC2Instance -InstanceId $orginalInstanceID -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $orginalInstanceID).Instances[0].State.Name -ne 'stopped'){
    Write-Verbose "INFO: Waiting for instance to stop..."
    Start-Sleep -s 10
}
Write-Output "INFO: Stop the instance $($newInstanceID)...."
try{
    Stop-EC2Instance -InstanceId $newInstanceID -Force -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $newInstanceID).Instances[0].State.Name -ne 'stopped'){
    Write-Verbose "INFO: Waiting for instance to stop..."
    Start-Sleep -s 10
}

Write-Output "INFO: detaching the EBS volumes from $($orginalInstanceID)...."
ForEach($volume in $originalVolumes){
    try{
        Dismount-EC2Volume -VolumeId $volume.VolumeId -InstanceId $orginalInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: detaching the EBS volumes from $($newInstanceID)...."
ForEach($volume in $tempVolumes){
    try{
        Dismount-EC2Volume -VolumeId $volume.VolumeId -InstanceId $newInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: Migrating $($orginalInstanceID) to $($newInstanceID) with $($originalVolumes.Count) connected volumes"
Write-Output "INFO: attaching the EBS volumes to $($newInstanceID)...."
ForEach($volume in $originalVolumes){
    try{
        Add-EC2Volume -VolumeId $volume.VolumeId -InstanceId $newInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: Tagging the $($newInstanceID) with original instance tags"
$orginalInstanceTags = $originalInstance.tags
ForEach($T in $orginalInstanceTags){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = $T.Key
        $value = $T.Value
        $tag.Value = $value
        New-EC2Tag -Resource $newInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Try{
    $tag = New-Object Amazon.EC2.Model.Tag
    $tag.Key = "convertedFrom"
    $value = $orginalInstanceID
    $tag.Value = $value
    New-EC2Tag -Resource $newInstanceID -Tag $tag -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
}

Write-Output "INFO: Marking the $($orginalInstanceID) as old"
$orginalInstanceName = ($originalInstance.tags | ? {$_.Key -like "Name"}).Value
If($orginalInstanceName){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = "Name"
        $value = $orginalInstanceName+".old"
        $tag.Value = $value
        New-EC2Tag -Resource $orginalInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Write-Output "INFO: Tagging the $($orginalInstanceID) with original volumes for failback"
ForEach($device in $orginalBlockMappings){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = $device.DeviceName
        $value = $device.ebs.VolumeId
        $tag.Value = $value
        New-EC2Tag -Resource $orginalInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Write-Output "INFO: Starting the instance $($newInstanceID) with newly attached drives...."
try{
    Start-EC2Instance -InstanceId $newInstanceID -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $newInstanceID).Instances[0].State.Name -ne 'Running'){
    Write-Verbose "INFO: Waiting for instance to start..."
    Start-Sleep -s 10
}
$filterENI = New-Object Amazon.EC2.Model.Filter -Property @{Name = "attachment.instance-id"; Values = $newInstanceID}
$newInterface = Get-EC2NetworkInterface -Filter $filterENI
Write-Output "INFO: Conversion complete to $($newInstanceID)"
Write-Output "SUCCESS: Try logging into $($newInterface.PrivateIpAddress)"

Thanks Rene and Evan for passing on the idea.

Google Cloud’s Second Region in Australia

Google Cloud Platform (GCP) has extended its reach in Australia and New Zealand (ANZ) with a second region in Melbourne.

Why does this matter?

Having two regions inside Australia allows customers to extend their architecture for highly available or disaster-recoverable solutions. Google now joins Azure (which has three) in having multiple regions inside Australia; no doubt we will keep a close eye on AWS, who were the first public cloud provider to enter Sydney many moons ago.

What’s different about Google’s Regions?

The distinguishing network feature that sets GCP apart from its rivals is how it allows customers to design their Virtual Private Cloud (VPC). GCP allows subnets in a single VPC to span as many regions as you’d like. You can create a single globally distributed VPC with subnets in the Americas, Asia and Europe, or build a logical DMZ zone that has subnets in each region for your globally distributed web services. This is unlike the other cloud providers, whose software-defined networks are region specific and require peering to be set up, which incurs bandwidth usage and connection charges. Thoughts go through my head as to how you would build a globally distributed VPC with local on-ramp for each region’s on-premises networks. Nonetheless, allowing your traffic from Asia to Europe to traverse Google’s backhaul could make life easier. The devil is in the detail, as what is also unique to Google is the cross-region VM-VM egress charge, which is kind of saying the same thing as peering, just putting the price on a different object. All things to carefully think about when planning your cloud deployments. Maybe in some circumstances, based on the pricing model, Google outweighs the other heavyweight hitters.
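
As a hedged illustration of that global VPC model, a single custom-mode VPC with subnets in both Australian regions might be created like this with the gcloud CLI (the network name and CIDR ranges are placeholders):

# One VPC, custom subnet mode
gcloud compute networks create global-vpc --subnet-mode=custom

# Subnets in different regions, attached to the same VPC
gcloud compute networks subnets create dmz-sydney \
    --network=global-vpc --region=australia-southeast1 --range=10.1.0.0/24

gcloud compute networks subnets create dmz-melbourne \
    --network=global-vpc --region=australia-southeast2 --range=10.2.0.0/24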

What will be interesting to see is whether, with Sydney and Melbourne both onboard soon, the VM-VM egress pricing will be updated to cover egress between Google Cloud regions within Australia. Currently, if I look at what is written on the tin, I’d assume it falls under ‘Egress between Google Cloud regions within Oceania (per GB)’ at $0.08, where Oceania includes Australia, New Zealand, and surrounding Pacific Ocean islands such as Papua New Guinea and Fiji. This region excludes Hawaii.

Google have a nice Google Cloud inter-region latency and throughput pivot table that gets its metrics from a PerfKit test. The current lowest-latency hop is to asia-southeast1 (Singapore) at ~92 ms; we will definitely knock the socks off that in ANZ with Melbourne to Sydney.

Google’s VPC network example (here)


SQL Database Backup on IaaS using Azure Automation

I had a need to take a full SQL database backup from a virtual machine with SQL Server hosted on Azure, done via an Azure Automation account executing a runbook on a hybrid worker. This is a great way to take an offline copy of your production SQL and store it someplace safe.

To accomplish this we will use the PowerShell module ‘sqlps’, which should be installed with SQL Server, and run the command Backup-SqlDatabase.

Backup-SqlDatabase (SqlServer) | Microsoft Docs

Store SQL Storage Account Credentials

Before we can run the Backup-SqlDatabase command, we must have a credential for the Storage Account stored in SQL Server, created using New-SqlCredential.

New-SqlCredential (SqlServer) | Microsoft Docs

Import-Module sqlps
# set parameters
$sqlPath = "sqlserver:\sql\$($env:COMPUTERNAME)"
$storageAccount = "<storageAccountName>"  
$storageKey = "<storageAccountKey>"  
$secureString = ConvertTo-SecureString $storageKey -AsPlainText -Force  
$credentialName = "azureCredential-"+$storageAccount

Write-Host "Generate credential: " $credentialName
  
#cd to sql server and get instances  
cd $sqlPath
$instances = Get-ChildItem

#loop through instances and create a SQL credential, output any errors
foreach ($instance in $instances)  {
    try {
        $path = "$($sqlPath)\$($instance.DisplayName)\credentials"
        New-SqlCredential -Name $credentialName -Identity $storageAccount -Secret $secureString -Path $path -ea Stop | Out-Null
        Write-Host "...generated credential $($path)\$($credentialName)."  }
    catch { Write-Host $_.Exception.Message } }

Backup SQL Databases with an Azure Runbook

The runbook below loops through the instances on the server (the DEFAULT instance in my case) and excludes both tempdb and model from the backup.

Import-Module sqlps
$sqlPath = "sqlserver:\sql\$($env:COMPUTERNAME)"
$storageAccount = "<storageAccount>"  
$blobContainer = "<containerName>"  
$backupUrlContainer = "https://$storageAccount.blob.core.windows.net/$blobContainer/"  
$credentialName = "azureCredential-"+$storageAccount
$prefix = Get-Date -Format yyyyMMdd

Write-Host "Generate credential: " $credentialName

Write-Host "Backup database: " $backupUrlContainer
  
cd $sqlPath
$instances = Get-ChildItem

#loop through instances and backup all databases (excluding tempdb and model)
foreach ($instance in $instances)  {
    $path = "$($sqlPath)\$($instance.DisplayName)\databases"
    $databases = Get-ChildItem -Force -Path $path | Where-object {$_.name -ne "tempdb" -and $_.name -ne "model"}

    foreach ($database in $databases) {
        try {
            $databasePath = "$($path)\$($database.Name)"
            Write-Host "...starting backup: " $databasePath
            $fileName = $prefix+"_"+$($database.Name)+".bak"
            $destinationBakFileName = $fileName
            $backupFileURL = $backupUrlContainer+$destinationBakFileName
            Write-Host "...backup URL: " $backupFileURL
            Backup-SqlDatabase -Database $database.Name -Path $path -BackupFile $backupFileURL -SqlCredential $credentialName -CompressionOption On
            Write-Host "...backup complete."  }
        catch { Write-Host $_.Exception.Message } } }

NOTE: You will notice a performance hit on the SQL Server, so schedule this runbook in a maintenance window.
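
As a hedged sketch, linking the runbook to an existing schedule and pinning it to the hybrid worker group might look like this with the Az.Automation module (all names are placeholders):

# Link the runbook to a schedule and run it on the hybrid worker group
Register-AzAutomationScheduledRunbook `
    -ResourceGroupName "rg-automation" `
    -AutomationAccountName "aa-prod" `
    -RunbookName "Backup-SqlDatabases" `
    -ScheduleName "WeeklyMaintenanceWindow" `
    -RunOn "sqlHybridWorkerGroup"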

Deploy Craft CMS with Azure App Service for Linux Containers

Here are some key points for deploying a Craft CMS installation on an Azure Web App using container images. In this blog we will step through some of the modifications needed to make the container image run in Azure, and the deployment steps to run it all in an Azure DevOps pipeline.

CraftCMS have reference material for their Docker deployments, found here:
GitHub – craftcms/docker: Craft CMS Docker images

Components

The components required are:

  • Azure Web App for Linux Containers
  • Azure Database for MySQL
  • Azure Storage Account
  • Azure Front Door with WAF
  • Azure Container Registry

Custom Docker Image

To make this work in an Azure Web App we have to do the following additional steps:

  • Install OpenSSH & Enable SSH daemon on 2222 at startup
  • Set the password for root to “Docker!”
  • Install the Azure Database for MySQL root certificates for SSL connections from the Container

We do this in the Dockerfile. We are customising the NGINX implementation of CraftCMS so that the front end can service the HTTP/HTTPS requests from the App Service.

# composer dependencies
FROM composer:1 as vendor
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install --ignore-platform-reqs --no-interaction --prefer-dist

FROM craftcms/nginx:7.4
# Install OpenSSH and set the password for root to "Docker!". In this example, "apk add" is the install instruction for an Alpine Linux-based image.
USER root
RUN apk add openssh sudo \
     && echo "root:Docker!" | chpasswd 
# Copy the sshd_config file to the /etc/ directory
COPY sshd_config /etc/ssh/
COPY start.sh /etc/start.sh
COPY BaltimoreCyberTrustRoot.crt.pem /etc/BaltimoreCyberTrustRoot.crt.pem 
RUN ssh-keygen -A
RUN addgroup sudo
RUN adduser www-data sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers

# the user is `www-data`, so we copy the files using the user and group
USER www-data
COPY --chown=www-data:www-data --from=vendor /app/vendor/ /app/vendor/
COPY --chown=www-data:www-data . .

EXPOSE 8080 2222
ENTRYPOINT ["sh", "/etc/start.sh"]

The corresponding start.sh:

#!/bin/bash
sudo /usr/sbin/sshd &
/usr/bin/supervisord -c /etc/supervisor/conf.d/supervisor.conf

Build the Web App

The Azure Web App resource is deployed using an ARM template. Here is a snippet of the template; the key is to have your environment variables defined:

{
            "comments": "This is the docker web app running craftcms/custom Docker image",
            "type": "Microsoft.Web/sites",
            "name": "[parameters('siteName')]",
            "apiVersion": "2020-06-01",
            "location": "[parameters('location')]",
            "tags": "[parameters('tags')]",
            "dependsOn": [
                "[variables('hostingPlanName')]",
                "[variables('databaseName')]"
            ],
            "properties": {
                "siteConfig": {
                    "appSettings": [
                        {
                            "name": "DOCKER_REGISTRY_SERVER_URL",
                            "value": "[reference(variables('registryResourceId'), '2019-05-01').loginServer]"
                        },
                        {
                            "name": "DOCKER_REGISTRY_SERVER_USERNAME",
                            "value": "[listCredentials(variables('registryResourceId'), '2019-05-01').username]"
                        },
                        {
                            "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
                            "value": "[listCredentials(variables('registryResourceId'), '2019-05-01').passwords[0].value]"
                        },
                        {
                            "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
                            "value": "false"
                        },
                        {
                            "name": "DB_DRIVER",
                            "value": "mysql"
                        },
                        {
                            "name": "DB_SERVER",
                            "value": "[reference(resourceId('Microsoft.DBforMySQL/servers',variables('serverName'))).fullyQualifiedDomainName]"
                        },
                        {
                            "name": "DB_PORT",
                            "value": "3306"
                        },
                        {
                            "name": "DB_DATABASE",
                            "value": "[variables('databaseName')]"
                        },
                        {
                            "name": "DB_USER",
                            "value": "[variables('databaseUserName')]"
                        },
                        {
                            "name": "DB_PASSWORD",
                            "value": "[parameters('administratorLoginPassword')]"
                        },
                        {
                            "name": "DB_SCHEMA",
                            "value": "public"
                        },
                        {
                            "name": "DB_TABLE_PREFIX",
                            "value": ""
                        },
                        {
                            "name": "SECURITY_KEY",
                            "value": "[parameters('cmsSecurityKey')]"
                        },
                        {
                            "name": "WEB_IMAGE",
                            "value": "[parameters('containerImage')]"
                        },
                        {
                            "name": "WEB_IMAGE_PORTS",
                            "value": "80:8080"
                        }

                    ],
                    "linuxFxVersion": "[variables('linuxFxVersion')]",
                    "scmIpSecurityRestrictions": [
                        
                    ],
                    "scmIpSecurityRestrictionsUseMain": false,
                    "minTlsVersion": "1.2",
                    "scmMinTlsVersion": "1.0"
                },
                "name": "[parameters('siteName')]",
                "serverFarmId": "[variables('hostingPlanName')]",
                "httpsOnly": true      
            },
            "resources": [
                {
                    "apiVersion": "2020-06-01",
                    "name": "connectionstrings",
                    "type": "config",
                    "dependsOn": [
                        "[resourceId('Microsoft.Web/sites/', parameters('siteName'))]"
                    ],
                    "tags": "[parameters('tags')]",
                    "properties": {
                        "dbstring": {
                            "value": "[concat('Database=', variables('databaseName'), ';Data Source=', reference(resourceId('Microsoft.DBforMySQL/servers',variables('serverName'))).fullyQualifiedDomainName, ';User Id=', parameters('administratorLogin'),'@', variables('serverName'),';Password=', parameters('administratorLoginPassword'))]",
                            "type": "MySQL"
                        }
                    }
                }
            ]
        },

All other resources should be ARM defaults with no customisation required. Either put them all in a single ARM template or separate them out on their own. Your choice to be creative.

Build Pipeline

The infrastructure build pipeline looks something like this:

# Infrastructure pipeline
trigger: none

pool:
  vmImage: 'windows-2019'
variables:
  TEMPLATEURI: 'https://storageAccountName.blob.core.windows.net/templates/portal/'
  CMSSINGLE: 'singleCraftCMSTemplate.json'
  CMSSINGLEPARAM: 'singleCraftCMSTemplate.parameters.json'
  CMSFILEREG: 'ContainerRegistry.json'
  CMSFRONTDOOR: 'frontDoor.json'
  CMSFILEREGPARAM: 'ContainerRegistry.parameters.json'
  CMSFRONTDOORPARAM: 'frontDoor.parameters.json'
  LOCATION: 'Australia East'
  SUBSCRIPTIONID: ''
  AZURECLISPID: ''
  TENANTID: ''
  RGNAME: ''
  TOKEN: ''
  ACS : 'registryName.azurecr.io'
resources:
  repositories:
    - repository: coderepo
      type: git
      name: Project/craftcms
stages:
- stage: BuildContainerRegistry
  displayName: BuildRegistry
  jobs:
  - job: BuildContainerRegistry
    displayName: Azure Git Repository
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: AzureFileCopy@4
      inputs:
        SourcePath: '$(Build.Repository.LocalPath)\CMS\template\*'
        azureSubscription: ''
        Destination: 'AzureBlob'
        storage: ''
        ContainerName: 'templates'
        BlobPrefix: portal
        AdditionalArgumentsForBlobCopy: --recursive=true
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM template for Azure Container Registry"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: 'azureDeployCLI-SP'
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSFILEREG)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSFILEREGPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'
    - task: AzurePowerShell@5
      displayName: 'Import the public docker images to the Azure Container Repository'
      inputs:
        azureSubscription: 'azureDeployCLI-SP'
        ScriptType: 'FilePath'
        ScriptPath: '$(Build.ArtifactStagingDirectory)\CMS\template\dockerImages.ps1'
        errorActionPreference: 'silentlyContinue'
        azurePowerShellVersion: 'LatestVersion'

- stage: BuildGeneralImg
  dependsOn: BuildContainerRegistry
  displayName: BuildImages
  jobs:
  - job: BuildCraftCMSImage
    displayName: General Docker Image
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
    - checkout: self
    - checkout: coderepo
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: Docker@2
      displayName: Build and push
      inputs:
        containerRegistry: ''
        repository: craftcms
        command: buildAndPush
        dockerfile: 'craftcms/Dockerfile'
        tags: |
          craftcms
          latest

- stage: Deploy 
  dependsOn: BuildGeneralImg
  displayName: DeployWebService
  jobs:
  - job:
    displayName: ARM Templates
    pool:
      vmImage: 'windows-latest'
    steps:
    - checkout: self
    - checkout: coderepo
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM single template for remaining assets"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: ''
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSSINGLE)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSSINGLEPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'

- stage: Secure 
  dependsOn: Deploy
  displayName: DeployFrontDoor
  jobs:
  - job:
    displayName: ARM Templates
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM single template for Front Door"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: ''
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSFRONTDOOR)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSFRONTDOORPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'
    - task: AzurePowerShell@5
      displayName: 'Apply Front Door service tags to Web App ACLs'
      inputs:
        azureSubscription: 'azureDeployCLI-SP'
        ScriptType: 'FilePath'
        ScriptPath: '$(Build.ArtifactStagingDirectory)\CMS\template\enableFrontDoorOnWebApp.ps1'
        errorActionPreference: 'silentlyContinue'
        azurePowerShellVersion: 'LatestVersion'    

Enable Front Door with WAF

The pipeline stage DeployFrontDoor runs enableFrontDoorOnWebApp.ps1:

$azFrontDoorName = ""
$webAppName = ""
$resourceGroup = ""

Write-Host "INFO: Restrict access to a specific Azure Front Door instance"
try{
    $afd = Get-AzFrontDoor -Name $azFrontDoorName -ResourceGroupName $resourceGroup
}
catch{
    Write-Host "ERROR: $($_.Exception.Message)"
}

Write-Host "INFO: Setting the IP ranges defined in the AzureFrontDoor.Backend service tag to the Web App"
try{
    Add-AzWebAppAccessRestrictionRule -ResourceGroupName $resourceGroup -WebAppName $webAppName -Name "Front Door Restrictions" -Priority 100 -Action Allow -ServiceTag AzureFrontDoor.Backend -HttpHeader @{'x-azure-fdid' = $afd.FrontDoorId}}
catch{
    Write-Host "ERROR: $($_.Exception.Message)"
}

You should now have a CraftCMS web app that is only available through the FrontDoor URL.

Continuous Deployment

There are many ways to deploy updates to your website; an Azure Web App has a beautiful thing called deployment slots that can be used.

# Trigger on commit
# Build and push an image to Azure Container Registry
# Update Web App Slot

trigger:
  branches:
    include:
      - main
  paths:
    exclude:
      - pipelines
      - README.md
  batch: true

resources:
- repo: self

pool:
  vmImage: 'windows-2019'
variables:
  TEMPLATEURI: 'https://storageAccountName.blob.core.windows.net/templates/portal/'
  LOCATION: 'Australia East'
  SUBSCRIPTIONID: ''
  RGNAME: ''
  TOKEN: ''
  SASTOKEN: ''
  TAG: '$(Build.BuildId)'
  CONTAINERREGISTRY: 'registryName.azurecr.io'
  IMAGEREPOSITORY: 'craftcms'
  APPNAME: ''

stages:
- stage: BuildImg
  displayName: BuildLatestImage
  jobs:
  - job: BuildCraftCMSImage
    displayName: General Docker Image
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
    - checkout: self
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: Docker@2
      displayName: Build and push
      inputs:
        containerRegistry: ''
        repository: $(IMAGEREPOSITORY)
        command: buildAndPush
        dockerfile: 'Dockerfile'
        tags: |
          $(IMAGEREPOSITORY)
          $(TAG)


- stage: UpdateApp 
  dependsOn: BuildImg
  displayName: UpdateTestSlot
  jobs:
  - job:
    displayName: 'Update Web App Slot'
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: AzureWebAppContainer@1
      displayName: 'Update Web App Container Image Reference' 
      inputs:
        azureSubscription: ''
        appName: $(APPNAME)
        containers: $(CONTAINERREGISTRY)/$(IMAGEREPOSITORY):$(TAG)
        deployToSlotOrASE: true
        resourceGroupName: $(RGNAME)
        slotName: test
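
Once the test slot has been validated, the new image can be promoted by swapping the slot into production. A hedged sketch with the Az PowerShell module (resource names are placeholders):

# Swap the 'test' slot into production once it checks out
Switch-AzWebAppSlot -ResourceGroupName "rg-craftcms" `
    -Name "craftcms-webapp" `
    -SourceSlotName "test" `
    -DestinationSlotName "production"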