Migrating legacy IT systems to the cloud can seem like an insurmountable challenge, especially as legacy systems are often full of unusual custom code and outdated requirements inherited from old technology.
However, they can still be moved, letting you reap the benefits of hosting your IT in the cloud, including reduced costs and fewer resources needed from your IT team to support the system.
At Digital Craftsmen, when we’re migrating legacy enterprise applications, our projects share three common themes: simplification, automation and security.
Let’s look at each of these in more detail.
When it comes to simplifying your cloud migration, the initial audit is key. What have you got? How hard is it working? What third-party licence restrictions are there?
Is there any special hardware that needs to stay in place? If you have a hardware security module, for example, you might need to co-locate your cloud environment with that equipment.
What are the Disaster Recovery requirements? As we know, in the public cloud you have to build your own disaster recovery.
And if you’ve got very specific disaster recovery requirements, you might want to use private cloud capabilities to meet them.
When you’ve answered these questions, you’ll find that a lot of vendors have an existing reference design. If you’re looking at a virtual private cloud or private cloud solution, you can use many of those same reference designs.
That said, you should take advantage of Platform-as-a-Service (PaaS) features when they’re available. For instance, consider using a hosted database PaaS product, such as those offered by Amazon Web Services. Built-in backups are often part of the service, and there are many more benefits to using a PaaS provider.
If you want to have more control over the specification of the instances that you run and aren’t comfortable with PaaS, you can choose a Virtual Private Cloud service over the public cloud.
With a Virtual Private Cloud, you typically get a resource pool that you can carve up into whatever shape you need it to be, so you have more control over how your IT system is set up in the cloud.
Automating cloud management is really about building confidence in the deployed system: making sure you can genuinely trust the setup you’ve got.
Configuration management is really important. You can’t just look at the main parts of your system: build different ways to understand what’s going on across the whole system and automate them as much as you can, so you know what’s happening at all times.
Centralise this configuration as much as you can: getting a single pane of glass gives you a better view of your infrastructure overall. The better your visibility, the better your chances of detecting a problem before it becomes a major issue.
Intrusion detection gives you early warning if something’s going wrong, so make sure to deploy active intrusion detection and make use of it.
For example, in September 2016, ClixSense had the user names, passwords, home addresses and email addresses of over 6 million users exposed after hackers hit the ‘click for cash’ website. To make matters worse, the passwords were stored in plaintext.
The hackers entered through an old UAT server that ClixSense had forgotten about. Make sure that you’re monitoring any old servers, with fully automated monitoring to alert you to any problems that come up.
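As a toy illustration, the kind of automated check described above can be as simple as aggregating status reports and flagging anything that isn’t healthy. This is a minimal Python sketch rather than a real monitoring tool; the host names and statuses are hypothetical.

```python
# Minimal sketch of automated monitoring: every host, including forgotten
# ones, reports a status, and anything outside the healthy set is flagged.
# Host names and statuses here are hypothetical examples.

def triage(statuses, healthy=frozenset({"ok"})):
    """Return the hosts whose reported status needs attention."""
    return {host: s for host, s in statuses.items() if s not in healthy}

reports = {
    "web-01": "ok",
    "db-01": "ok",
    "uat-legacy": "unreachable",  # old UAT servers still get checked
}

alerts = triage(reports)
for host, status in alerts.items():
    print(f"ALERT: {host} is {status}")  # hook this into email/Slack/etc.
```

In practice you would feed the checks from real health probes and wire the alerts into your paging system; the point is that the check runs automatically, not that a human remembers to look.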
Patching is very dull, but it’s really, really important when you’re running an internet-facing service. Again, use something like ConnectWise Automate, or one of the many similar products out there, to help you automate that process.
Automation helps you build confidence in your cloud setup, so that you know people aren’t successfully attacking those systems. It’s also worth reporting this upstream to the people in the business who care about uptime and security.
Probably the biggest thing about public and virtual private clouds is that, once your systems move outside your own offices, there’s going to be some kind of globally accessible control panel, which means security is critical.
For example, Code Spaces was a SaaS provider offering software developers source code management tools like Git and Apache Subversion, all hosted on Amazon Web Services (AWS). In 2014, they lost their entire business when someone hacked into their Amazon account and deleted their work.
A lot of security is common sense. Use multi-factor authentication (MFA) if you can. Always use strong passwords. Rigorously enforce access control so that people only get the minimum access required to do their jobs.
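To make the least-privilege idea concrete, here is a small Python sketch of a deny-by-default permission check; the roles and actions are made up for illustration, not taken from any real system.

```python
# Deny-by-default access control sketch: each role gets only the actions
# it needs, and anything not explicitly granted is refused.
# The roles and actions below are hypothetical examples.

PERMISSIONS = {
    "developer": {"deploy:staging", "logs:read"},
    "ops":       {"deploy:staging", "deploy:production", "logs:read"},
    "auditor":   {"logs:read"},
}

def allowed(role, action):
    """Only explicitly granted actions pass; unknown roles get nothing."""
    return action in PERMISSIONS.get(role, set())

print(allowed("developer", "deploy:production"))  # False
print(allowed("ops", "deploy:production"))        # True
```

The design choice that matters is the default: an unknown role or an unlisted action is always refused, so forgetting to grant something fails safe rather than open.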
If you’re particularly concerned by the security risks of moving your IT systems into the cloud, your application doesn’t have to be internet-facing at all. You can run your infrastructure in a virtual private cloud, connect it back to your office over a VPN, and tie the VPN down to known IP addresses. This gives you some of the protection you’re looking for.
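Tying access down to known IP addresses can be sketched with Python’s standard `ipaddress` module; the office ranges below are documentation-only examples (RFC 5737), not real addresses.

```python
import ipaddress

# Allow connections only from known office ranges (hypothetical examples
# drawn from the RFC 5737 documentation blocks).
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def from_known_office(addr):
    """True if addr falls inside one of the allowed office ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED_NETWORKS)

print(from_known_office("203.0.113.42"))  # True
print(from_known_office("8.8.8.8"))       # False
```

A real deployment would enforce this at the VPN or firewall layer rather than in application code, but the allowlist logic is the same.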
Data sovereignty is still complicated, but it’s getting a lot better than it was a couple of years ago.
Lots of people say “no, no – we can’t go to the cloud because of data sovereignty.” But the situation has improved a great deal. For example, Amazon now offers data sovereignty controls across its entire estate, and Digital Craftsmen are ISO 27001 accredited across all of our processes.
Lastly, you might consider a hybrid approach. For instance, you could use a virtual private cloud to host your databases, so you know exactly where your data is, and then use the public cloud alongside that environment to scale up and run web services where compliance is less critical.
Digital Craftsmen are managed cloud services specialists, so if you’re looking to migrate a legacy IT system or application to the cloud, we can support you. Contact Digital Craftsmen now to see how we can help with your cloud migration.