DevOps and cloud technologies are an increasingly popular combo in the technology world, and by extension when it comes to building labs as well. DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle while providing continuous delivery of high-quality software. Under the DevOps methodology, core infrastructure is typically defined in a JSON or YAML “template” and can be manipulated via code – this is referred to as Infrastructure as Code (IaC). DevOps is closely tied to Agile software development practices and is characterized by a constant cycle of evolution, one in which nothing is ever complete.
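To make the idea concrete, here is a minimal IaC template in the CloudFormation YAML style (the resource and bucket name are purely illustrative): instead of clicking through a portal, the desired infrastructure is declared in a file that can be versioned, reviewed, and redeployed.

```yaml
# Illustrative IaC template: one storage bucket declared as code.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example template that provisions a single S3 bucket for a lab.
Resources:
  LabBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-lab-artifacts-bucket   # illustrative name
```

Deploying, updating, or deleting the stack built from this template is then just another code change, which is exactly the loop DevOps practices are built around.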
The popularity of DevOps methodologies, coupled with the mass migration of almost every type of technology to the cloud, means that both are quickly becoming de facto standards. Labs that provide skills development and validation must therefore account for the prevalence of DevOps and cloud technologies.
When we talk about DevOps and the cloud, what’s often being referred to is how to learn about the cloud and how to manipulate it, whether by logging in directly or by using cloud resource templates. For the purposes of this blog, we’re talking about a lab scenario in which users interact with a cloud platform directly, via a web portal, or indirectly, via a command line or programmatic methods.
When we talk about the cloud in the context of building a lab, there are many ways to interact. Of course, we’re referring to major cloud providers such as Microsoft Azure® and Amazon Web Services® (AWS), but also to plenty of other software-as-a-service (SaaS) providers. Those who are building labs may also have their own cloud platform. We also need to consider the components we’ll need, such as whether virtual machines are required, which will vary depending on the scenario.
Different Types of Cloud Labs
These are a few of the factors that will determine how we build a lab. For labs built on the Lab on Demand platform, there are three common scenarios for labs that focus on DevOps and cloud technologies: first, accessing a large cloud platform for which we have native support, such as Azure and AWS; second, accessing a public cloud platform for which we do not currently have native support; and third, integrating with a cloud platform that you own.
Native integration: Simplicity might be the key word for this scenario. Thanks to our native integration, as a lab developer you add authentication details to our platform, and the platform then manages everything. Our native integration keeps the process clean and the security behind it simple, while also giving you lots of capabilities as an author. Without writing any code to make it work, you can specify users, constraints around their access, and cloud resources to deploy for every lab launch; the platform then cleans everything up when the lab is complete. Returning to the key word I mentioned earlier, this scenario keeps the lab authoring experience very simple.
Public cloud platforms without native integration: These can be harnessed in a couple of ways. Resources can be pre-created in the cloud platform, such as user credentials, which are handed to individual users as they launch labs. Another option is lifecycle actions: custom code the lab builder defines and runs against the cloud platform’s APIs, for example to create user accounts and other resources. Commonly, these capabilities are used in tandem.
A cloud platform you own: This scenario presents a couple of options. First, you can pre-deploy resources or write your own code, much as with public cloud platforms that lack a native integration. Second, you can perform a custom install: you install your own cloud platform into a virtual machine, and each learner gets her own instance of your platform. This scenario fully disconnects from a live production platform and puts learners into a personalized instance. The advantage is that everything can be preconfigured exactly as you want it, regardless of how the public cloud is functioning. It may not be as up to date, but it’s isolated, which means you don’t have to deal with how the internet happens to be working – or not – that day.
There are two concepts that apply to each of the three scenarios detailed above: lifecycle actions and hybrid scenarios.
Lifecycle actions let you author custom code that runs against the cloud platform. They apply to each of the three scenarios, but in different ways: they may be a requirement with cloud platforms that do not currently have a native integration, whereas with larger platforms they are optional and can extend the capabilities beyond what can be done natively.
For example, in native integration scenarios that require a storage account with files already provisioned in it, lifecycle actions are required to add the files to the storage account. For smaller cloud platforms, lifecycle actions may be required to interact with the API to automate user provisioning or environment configuration. Similarly, if you’re using your own cloud platform, you may need lifecycle actions to interact with its API.
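As a rough sketch of the user-provisioning case, a lifecycle action might build a unique, throwaway user record for each lab launch and then POST it to the platform’s identity API. The payload shape, domain, and function names below are illustrative assumptions, not any specific provider’s schema:

```python
import secrets
import string


def build_lab_user(lab_instance_id: str,
                   domain: str = "example.onmicrosoft.com") -> dict:
    """Build a unique, disposable user record for one lab launch.

    The lab instance ID keeps names collision-free across concurrent
    learners, and the password is freshly generated so credentials are
    never shared. (The domain and field names are illustrative only.)
    """
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(16))
    return {
        "userPrincipalName": f"labuser-{lab_instance_id}@{domain}",
        "displayName": f"Lab User {lab_instance_id}",
        "password": password,
    }


# A deploy-time lifecycle action would send this payload to the cloud
# platform's identity API and surface the credentials to the learner;
# a matching teardown action would delete the account when the lab ends.
user = build_lab_user("42871")
print(user["userPrincipalName"])  # → labuser-42871@example.onmicrosoft.com
```

Splitting payload construction from the API call like this also makes the action easy to test without touching the live platform.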
Finally, hybrid environments apply to each scenario. In these scenarios a virtual machine or Docker container is added into the equation, allowing for a replication of what’s on-premises that can integrate with the cloud. Examples include migrating a SQL Server instance from on-premises into the cloud, or coding an application locally and then deploying it as a cloud web app.
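The local-to-cloud web app example can be sketched with nothing but Python’s built-in WSGI support: the learner develops and verifies the app inside the lab’s virtual machine, then deploys the same module unchanged to a cloud web-app host. The environment-variable convention shown here is an assumption about the target host, not a specific provider’s contract:

```python
import os
from wsgiref.simple_server import make_server


def app(environ, start_response):
    # A minimal WSGI application: same code on the lab VM and in the cloud.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the lab!"]


def call(application, path="/"):
    # Drive the app in-process (no network) so the learner can verify it
    # locally before pushing it to the cloud.
    captured = {}

    def start_response(status, headers):
        captured["status"] = status

    env = {"PATH_INFO": path, "REQUEST_METHOD": "GET"}
    body = b"".join(application(env, start_response))
    return captured["status"], body.decode()

print(call(app))  # → ('200 OK', 'Hello from the lab!')

if __name__ == "__main__" and os.environ.get("RUN_SERVER"):
    # Cloud web-app hosts commonly inject the listening port through an
    # environment variable; default to 8000 for local use in the lab VM.
    make_server("", int(os.environ.get("PORT", 8000)), app).serve_forever()
```

The point of the hybrid setup is exactly this symmetry: the on-premises replica inside the VM and the cloud deployment run the same artifact, so the lab can teach the migration path end to end.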