Deploying software to an environment can be one of the most feared actions a software company performs. If every deployment requires manually coercing the system into operating as intended, deploying becomes a nightmare. Fully automated deployments alleviate all of this.
Fully automated deployments force deployments to occur the same way every time. Part of what makes them successful is utilizing IaC (Infrastructure as Code) and CaC (Configuration as Code). With your infrastructure and configurations stored in version control, your deployed environments are the same every time, which is what makes automating your deployments possible.
Not only is your infrastructure pre-defined; so are your deployment steps. These build and release steps should be the same for each of the multiple deployments in a single release. That's right, we aren't going to release our software just once a cycle. We are going to do it as often as possible until it doesn't hurt. It may sound crazy, but if it hurts badly enough, it will get fixed. Here I'll walk you through some concepts for how to fix it.
When we define our infrastructure as code, we can test with the same environment setup as production. Not only the same environment, but a clean one as well. This means no stale data from a previous test or release can skew the results in our test environments, and tests can be run without fear of jeopardizing our production environment. We can come up with whatever scenario we can think of and throw it at our test environment without repercussions.
Automated testing is a key part of a fully automated deployment. When our testing is automated, we instantly know if there is an issue with any part of our software. We also have more confidence in our software because we know a test cannot be accidentally skipped.
The first thing we need to do is set up our infrastructure and configuration as code. We recommend using Terraform or ARM templates for this. ARM templates are JSON files, while Terraform uses its own HCL syntax; both define your different nodes and how they are interconnected. These files are checked into source control and published as a unique build artifact. If you do not have version control, setting up an Azure DevOps instance is the way to go. It is free for up to five team members and includes build and release management for your source code. In your folder hierarchy, keep your infrastructure files in their own folder so everything stays cleanly organized. It also simplifies the build process: you copy the contents of a single folder instead of having to filter out other files.
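To make this concrete, here is a minimal Terraform sketch of the kind of file you would keep in that infrastructure folder. It assumes the Azure provider; the resource names, region, and SKU are all placeholders, not prescriptions.

```hcl
# Illustrative only: a resource group and an App Service plan defined as code.
# Names, location, and sku_name are example values.
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "app" {
  name     = "rg-myapp-dev"
  location = "eastus"
}

resource "azurerm_service_plan" "app" {
  name                = "plan-myapp-dev"
  resource_group_name = azurerm_resource_group.app.name
  location            = azurerm_resource_group.app.location
  os_type             = "Linux"
  sku_name            = "B1"
}
```

Because this file lives in source control next to your application code, every environment you spin up from it is identical by construction.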
Once we have our infrastructure defined and checked into source control, we need to create a build which produces our infrastructure configuration alongside our assemblies. There should be only one build pipeline per product, and one product per git repository. If you have a microservice architecture, then there should be one build pipeline per microservice, and each microservice should live in its own git repository. When using Azure DevOps, you can publish multiple folders as part of a build definition, allowing you to limit what you download later in your release pipeline.
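If you prefer YAML pipelines over the classic editor, publishing the assemblies and the infrastructure folder as two separate artifacts might look like this sketch (paths and artifact names are examples):

```yaml
# Illustrative azure-pipelines.yml fragment: publish compiled output and the
# infrastructure folder as two separate build artifacts.
steps:
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)/bin'
      ArtifactName: 'assemblies'
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: 'infrastructure'
      ArtifactName: 'infrastructure'
```

Separate artifacts let a release stage download only what it needs, for example just the infrastructure files when provisioning.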
As part of the build process, we want to make sure that we don’t break existing logic. We can do this by running unit tests which don’t require our artifacts to be published. If any of the unit tests fail, then we can fail the build to prevent any faulty builds from entering the field. This can also be done as code is checked in with gated check-ins. Once our build process is complete and running successfully, with our assemblies and infrastructure files published as separate artifacts, we can move on to our release pipeline.
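As a sketch, the unit-test gate in a YAML pipeline can be a single test task placed before any publish step; a non-zero exit code from the test runner fails the build automatically. The project glob here is an assumption about your naming convention.

```yaml
# Illustrative test step: run unit tests before anything is published.
# If any test fails, the task fails and the build stops here.
steps:
  - task: DotNetCoreCLI@2
    displayName: 'Run unit tests'
    inputs:
      command: 'test'
      projects: '**/*Tests.csproj'
```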
Defining Our Release
Our release pipeline picks up where the build pipeline left off. When we create our Azure DevOps release pipeline, we select which build definition we will be pulling from. We can select multiple build pipelines if needed, but that isn't necessary here. We will define multiple environments: dev, test, staging, and production.
The dev environment is meant for running automated tests; it isn't for human interaction. Here unit tests are run, along with any other functionality-driven automated tests that are appropriate. It is deployed to automatically any time a build succeeds, regardless of the branch. This definition automatically pulls down all the artifacts produced by the build process. We can click on the agent section and filter out any tasks we don't want to use; in this situation we will use them all, so the defaults are fine. Our first step in the dev release is to create the environment in which our tests will run. We use our IaC scripts here to provision our environment; if you are using Terraform, there is a free marketplace task you can use for this. We then run our integration tests and smoke tests. Here we can also test our IaC and CaC configurations to make sure our environment spins up correctly. Check for any deployment failures caused by services not existing or being unable to communicate. These issues need to be resolved before creating your pull request to merge into a common branch.
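A dev stage along these lines can be sketched in YAML as provision, test, tear down. This is illustrative: the commands, folder name, build-number variable, and test filter are assumptions about your setup, and it shells out to the Terraform CLI rather than the marketplace task.

```yaml
# Sketch of a throwaway dev-test job: provision with Terraform, run the
# integration tests, then always tear the environment back down.
jobs:
  - job: dev_tests
    steps:
      - script: |
          terraform init
          terraform apply -auto-approve -var "env_name=dev-$(Build.BuildNumber)"
        workingDirectory: infrastructure
        displayName: 'Provision dev environment'
      - script: dotnet test --filter Category=Integration
        displayName: 'Run integration and smoke tests'
      - script: terraform destroy -auto-approve -var "env_name=dev-$(Build.BuildNumber)"
        workingDirectory: infrastructure
        condition: always()
        displayName: 'Tear down dev environment'
```

The `condition: always()` on the teardown step matters: the environment is destroyed even when the tests fail, so failed builds don't leak infrastructure.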
Once these tests pass, create a pull request to merge your changes into your common branch. This can be your trunk or whatever branching strategy your organization uses, as long as developers are working on their own branches. The pull request gives your team the chance to review and test each other's work before it becomes part of your main branch.
The test environment is meant for running manual and exploratory tests. Any test that for whatever reason cannot be automated is done here. In this environment you should be testing your deployment branch. This could be your trunk, master, or whatever branch you deploy your code from. This environment should be fresh and spun up for each deployment; you don't want residual data or metrics from previous tests skewing your new test results. Your migration tests come in the next environment. Right now, just focus on manually testing what you currently have. Make sure it meets any unique or fringe test cases and complies with any regulations you cannot automatically test for.
The staging environment is where we test our migration path and performance. Our staging environment needs to be the same scale as our production environment. This may seem a bit expensive, but it will give you accurate metrics on your application's performance. Our release pipeline for this environment should contain tasks to upgrade the staging environment in place. If we are using IaC, this should spin up new resources if needed and ensure they are running before tearing down the unused ones. We are testing the very steps we will use to deploy to production. Your production tasks should be a direct clone of your staging tasks with a different endpoint target. You never want production to be the first place you run a task or a test.
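One way to guarantee that production is a direct clone of staging is to define the deployment steps once and reuse them, varying only the target. Here is a hedged YAML sketch; the template file name and its parameter are hypothetical, and the stage bodies are skeletal.

```yaml
# Sketch: one shared deployment template, two targets. Only the endpoint
# parameter differs between staging and production.
stages:
  - stage: staging
    jobs:
      - template: deploy-jobs.yml   # hypothetical shared template
        parameters:
          targetEnvironment: 'staging'
  - stage: production
    dependsOn: staging
    jobs:
      - template: deploy-jobs.yml
        parameters:
          targetEnvironment: 'production'
```

With this shape it is structurally impossible for production to run a step that staging never ran.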
Here we run tests against a clone of the production database. There are a few things we want to test for. Ensure that during the deployment to your staging environment there was no lapse in service; the idea with fully automated deployments is to enable rapid releases, even multiple times a day. Additionally, ensure that we can still perform normal operations with our new code base: things like reading from and writing to the database, and verifying we aren't passing invalid data or expecting data that isn't there. This is also the time to test fault tolerance and disaster recovery. What happens if there is a database read timeout, or the application goes down? Here you can know how the application will respond instead of speculating. Here you can also measure your performance. Azure DevOps offers cloud-based load testing, allowing you to measure not only app performance under different load sets but also your auto-scaling, if you have it in place. Also test your logging procedures. Make sure your logs capture log levels in a manner that is useful. You don't want your production environment to start acting bizarre only to find out your logging doesn't work.
Your production environment should be a clone of your staging environment; the only difference should be your endpoint. If you are using deployment slots for Azure Web Apps, make sure you tested swapping in your staging environment first, so you know exactly how your application will respond. Some things to test in production are making sure your application can reach any external resources it uses; examples include databases, storage locations, and REST APIs. Do a smoke test: make sure you can actually hit your website, and run simple, light commands that exercise your application and each of its endpoints.
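A post-deployment smoke test can be as small as a couple of health checks that fail the stage if an endpoint doesn't answer. The URLs and the existence of an `/api/health` endpoint are assumptions for illustration.

```yaml
# Illustrative smoke test: curl --fail exits non-zero on HTTP errors, which
# fails this step and therefore the stage.
steps:
  - script: |
      set -e
      curl --fail --silent --max-time 10 https://myapp.example.com/
      curl --fail --silent --max-time 10 https://myapp.example.com/api/health
    displayName: 'Smoke test deployed endpoints'
```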
Now that our deployment is set up, we need to make it fully automated. This means that when a developer checks in their changes, they are automatically integrated into your environments for testing and, hopefully, production.
Firstly, we need automated builds; we can't automatically deploy something that doesn't exist. So, in our build settings, set continuous integration to enabled. To avoid duplicating build definitions for each branch, set the branch specification to a star (*). This is done by clicking the drop-down, typing the star into the filter, and pressing enter. That's it. Now every time code is checked in, a new build is queued. This will run unit tests and static code analysis for each change committed.
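If your build is defined in YAML rather than the classic editor, the same wildcard branch filter is just a few lines:

```yaml
# YAML equivalent of the classic-editor setting above: queue a build for
# every push on every branch.
trigger:
  branches:
    include:
      - '*'
```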
Now that we are getting our builds automatically, let's release them automatically. I know most release managers like to know what is going out, and when. They may even want to decide when. We can manage that and still achieve our goal. The goal with fully automated deployments is not to eliminate all human interaction, but rather to prevent humans from moving things around to deploy software. A human should only write the code and approve the release. Anything more than that, and you start to get deviation in your process. Our dev environment is simple: it is spun up automatically, a few tests are run, and it is spun down again.
Each dev environment is unique to the source build. When triggered, the build number is used to identify the environment. This lets every build use the same environment template while still keeping developers from stepping on each other. It does require your IaC templates to take variables that define the names of your infrastructure. This also helps you understand, at a glance, which builds are running in your test, staging, and production environments. If any tests fail in the dev environment, the source shouldn't be merged into your common branch; development should continue, with different, unique version numbers used for subsequent builds.
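Making the templates variable-driven is a small change. Here is a Terraform sketch of the idea; the variable name, prefix, and region are illustrative, and the pipeline would pass the value with something like `-var "env_name=dev-$(Build.BuildNumber)"`.

```hcl
# Sketch: parameterize resource names so each build gets its own environment.
variable "env_name" {
  type        = string
  description = "Unique environment name, typically derived from the build number"
}

resource "azurerm_resource_group" "app" {
  name     = "rg-myapp-${var.env_name}"
  location = "eastus"
}
```

Because the name is injected at apply time, concurrent builds provision disjoint resource groups and can be destroyed independently.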
To enable automatic releases for our dev environment, click on the lightning bolt on the top left of the artifact when editing your release definition pipeline. A dialog slides out from the right, and the top option asks if we would like to enable the continuous deployment trigger. Set this to enabled, and every time a new build of this artifact succeeds, a new release will be triggered (the release pipeline starts).
Next, we want to ensure that our dev environment is deployed to whenever a build is successful. Click on the lightning bolt on the left of our dev environment. A similar dialog slides out from the right, with the triggers section at the top. Make sure the selected trigger is "After release".
Follow these same steps for your test environment, with one slight difference: instead of triggering after release, you want to trigger after stage. Set the selected stage to dev. This means the test environment only begins deployment after dev succeeds; if dev fails, test never starts. You can allow this stage to trigger when dev partially succeeds, but I wouldn't recommend it. To ensure high-quality code and products, we only want to deploy when we are green across the board. Let's do the same thing for our staging and production environments, ensuring that staging yields to test and production yields to staging.
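The same chain of stage triggers has a YAML counterpart using `dependsOn`; a stage only starts when the stage it depends on succeeds, so a failure anywhere stops the line. Jobs are omitted here for brevity.

```yaml
# Skeleton of the trigger ordering described above; each stage waits for the
# previous one to succeed before it starts.
stages:
  - stage: dev
  - stage: test
    dependsOn: dev
  - stage: staging
    dependsOn: test
  - stage: production
    dependsOn: staging
```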
Gate Our Releases
To give our QA teams and release managers control over their environments, let's add some gates. Click on that same lightning bolt and enable pre-deployment approvals. From here we add a user group who can approve the release. All the users in that group will be notified via e-mail that a release is pending their approval, and they can either approve or reject it. You can also set a timeout so that any release waiting longer than desired is automatically rejected. You can set up the same thing for after the release by clicking on the person icon on the environment card. This allows you to set up post-deployment approvals. From here, select the same group to approve or reject. These users can now reject a build if a bug was found in your manual or exploratory testing. On your staging environment, this can be used to indicate that the migration path isn't correct or isn't performing well enough for release. These gates help control when a release happens, if that control is necessary. Remember, the less human involvement, the better.
So to review:
- Define the desired infrastructure our app will run in
- Define our build process to create all the needed artifacts for each deployment environment
- Define our deployment pipeline for each environment
- Set up automatic triggers to kick off our build and release pipelines
Just like software development, your DevOps process is iterative. Get your baseline in place and use it. Enforce it and embrace it. Keep in mind that each organization is a living organism and is unique. While this is an excellent baseline, it may need some tweaking here and there. Be sure to read the rest of our blogs on DevOps practices and let us know how we can help you.