Read on to see how proven processes can get you migrated and working in a cloud environment quickly and cost-effectively.
OK, so you’re convinced – cloud is the way forward for your business or your customers.
You’re sold on the commercial and productivity advantages, the DR plans are in place and the platform tested. Now all that’s left to meet the hallowed ‘Go-Live’ date is to move all that data into the new cloud environment. Easy? You probably don’t think so. Data migration into the cloud is definitely a challenge, but there are a variety of methods you can choose from to manage it in a way that works for you and/or your customer.
The most important consideration with data migration is the scheduling.
Whether you are managing this process for your own business, or are an integrator providing the new cloud service to your customer, you need to be able to break the migration down into scheduled, manageable ‘chunks’. It seems obvious, but too often the data migration part of a cloud on-boarding project reads like this:
“Step 8: Migrate Data.”
When dealing with any appreciably large data set, the migration is not likely to take place as a ‘one shot’ deal. This would simply be too difficult to manage. A better approach is to logically break down the data set and set appropriate times and durations for each chunk. For example, let’s not treat data files (such as documents, spreadsheets and so forth) the same as emails. They are two distinct types of data, though both very important. Emails, for example, are a type of ‘static-set data’. What this means is that while of course a user may send and receive many emails a day, an email, once sent or received, does not change. This makes email a type of data that is easy to ‘delta synchronise’ (don’t we love migration terminology!). In other words, email migration is low risk: if you were to migrate a user’s email and accidentally miss some, it’s not difficult to fix. You can do another export/import to ‘catch up’ anything missing.
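To make ‘delta synchronise’ concrete, here’s a minimal sketch of the idea in Python. The list_message_ids() and copy_message() helpers are hypothetical stand-ins for whatever your mail servers’ APIs provide; the point is simply that static-set data lets you diff by identifier and copy only what’s missing.

```python
def delta_sync_mailbox(source, destination):
    """Copy only the messages the destination doesn't have yet.

    Because sent/received emails never change, comparing message IDs
    is enough -- re-running this is always safe, and each pass moves
    only the 'delta' accumulated since the last one.
    """
    # list_message_ids() and copy_message() are hypothetical helpers
    # standing in for your mail servers' real APIs.
    existing = set(destination.list_message_ids())
    missing = [mid for mid in source.list_message_ids()
               if mid not in existing]
    for mid in missing:
        destination.copy_message(source, mid)
    return len(missing)  # messages caught up on this pass
```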
Varied document or spreadsheet data
The same cannot be said for more varied document or spreadsheet data. This type of data is ‘dynamic-set data’. Unlike an email, which doesn’t change once it’s sent or received, documents and spreadsheets are potentially in a constant state of change. Just because a copy of a document exists does not mean it is the most current one. What this means for you is that there is a much higher risk when migrating data files – if you do not schedule the migration, cutover and user access to data properly, you may end up with a ‘split-brain’ document, where an on-premises copy and a cloud copy of the same file are updated by different people. Suddenly those files are out of sync, and there’s no easy way to fix that problem! Overwriting either file with the other copy would result in a loss of data. You need to ensure that you do not allow access to both copies of a file at once when you begin migrating. A ‘hard cutover’ time is necessary for data files like this.
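To show how you might spot split-brain damage after the fact, here’s a rough Python sketch. It assumes you kept a snapshot of per-file hashes at the moment the primary copy was taken – that snapshot record is an assumption for illustration, not part of any particular product. Any file that has drifted from the snapshot on both sides is a conflict that no simple overwrite can fix.

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Content hash, so we can tell whether a file really changed."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_split_brain(snapshot: dict, on_prem: Path, cloud: Path) -> list:
    """Return files modified on BOTH sides since the migration snapshot.

    `snapshot` is an assumed record mapping each relative path to the
    hash it had when the primary copy was taken. A file whose
    on-premises and cloud copies have both drifted from the snapshot
    (and from each other) has split-brained: overwriting either copy
    loses someone's work.
    """
    conflicts = []
    for rel_path, original in snapshot.items():
        a, b = on_prem / rel_path, cloud / rel_path
        if not (a.is_file() and b.is_file()):
            continue  # deletions/renames need their own handling
        h_a, h_b = file_hash(a), file_hash(b)
        if h_a != original and h_b != original and h_a != h_b:
            conflicts.append(rel_path)
    return conflicts
```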
Of course, the idea is not to scare you – these problems are completely surmountable, but it all comes back to how we break the migration into chunks and use the methods and scheduling that best suit each type of data. Hypernode has some great process documents and tools that help you perform this breakdown.
Choosing the appropriate tools
Once we understand the key challenges in performing a migration, we then just need to choose appropriate tools and methods to make sure we can meet a timeline for cutover. There are many methods used by Hypernode and other vendors to achieve this. For example, an email migration may best be performed using a third-party tool designed for the purpose, such as MigrationWiz. This hosted solution can be used to ‘funnel’ email from one server to another, and a key benefit is its ability to move between different versions or vendors of email server technology. There is a cost involved, of course, but this may be one way to ensure your email migration is straightforward.
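For a feel of the ‘funnel’ mechanic, here’s a minimal sketch using Python’s standard imaplib. To be clear, this is not how MigrationWiz works internally – it’s just an illustration of copying messages between two IMAP servers, with placeholder hostnames and credentials.

```python
import imaplib

# Placeholder hosts and credentials -- substitute your real servers.
SOURCE = ("mail.old-server.example", "user@example.com", "secret")
DEST   = ("mail.new-cloud.example", "user@example.com", "secret")

def funnel_mailbox(mailbox: str = "INBOX") -> int:
    """Copy every message in `mailbox` from the source IMAP server
    to the destination, preserving the raw message bytes."""
    src = imaplib.IMAP4_SSL(SOURCE[0])
    src.login(SOURCE[1], SOURCE[2])
    src.select(mailbox, readonly=True)  # never modify the source

    dst = imaplib.IMAP4_SSL(DEST[0])
    dst.login(DEST[1], DEST[2])

    _, data = src.search(None, "ALL")
    msg_nums = data[0].split()
    for num in msg_nums:
        _, fetched = src.fetch(num, "(RFC822)")
        raw_message = fetched[0][1]
        # APPEND writes the original message onto the new server.
        dst.append(mailbox, None, None, raw_message)

    src.logout()
    dst.logout()
    return len(msg_nums)
```

Because the source is opened read-only, a run like this can be repeated safely while you verify the destination – though unlike the ID-based delta sync above, this naive version re-copies everything each pass.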
Data migration is a slightly different process
Often the best way to start is to take a full copy of all the data and physically move it to the cloud provider – Hypernode has a process for accepting external HDDs or other physical media at any of our locations for transfer into a private cloud environment, for example. Once a primary copy of all that data is present in the cloud environment, you can use a variety of applications and tools to ‘diff’ that data set across the WAN – essentially replicating only changed or new files. Once the hard cutover date is reached, access to the existing on-premises data needs to be removed for all users to prevent the previously mentioned ‘split-brain’ files, and you’re ready to go!
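As a rough illustration of that ‘diff’ pass (not a specific Hypernode tool – in practice you might reach for rsync or a replication product), here’s a sketch that flags files that are new or changed relative to the seeded copy. It assumes a hash manifest was recorded when the HDD copy was made, using the same hashing idea as the split-brain check above.

```python
import hashlib
from pathlib import Path

def build_manifest(root: Path) -> dict:
    """Map each file's relative path to a content hash.

    Reads whole files into memory -- fine for a sketch, but a real
    tool would hash in chunks.
    """
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def delta_files(on_prem_root: Path, seeded_manifest: dict) -> list:
    """Files that are new or changed since the physical seed copy.

    `seeded_manifest` is assumed to be a build_manifest() snapshot
    taken when the HDD copy was made. Only these files need to
    cross the WAN before the hard cutover.
    """
    current = build_manifest(on_prem_root)
    return [path for path, digest in current.items()
            if seeded_manifest.get(path) != digest]

# Usage sketch: seed first, then replicate just the delta at cutover.
# manifest = build_manifest(Path("/data"))      # taken at HDD-seed time
# to_upload = delta_files(Path("/data"), manifest)
```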
Data migration processes don’t have to be complex or difficult. If you follow the simple rules of breaking down your data set into the right chunks, choosing the right process for each one and understanding the difference between ‘soft’ and ‘hard’ cutovers, you’ll be well equipped for a seamless migration. If you need some help making this happen, talk to the Hypernode team.