By Charles Wright and Larry Lopez
We often work with companies whose cloud adoption strategy is based on clear business thinking. They’ve made good decisions about their cloud configuration and vendors, but they struggle with the actual migration because they neglected or misunderstood the importance of discovery. In our experience, while effective discovery means leaving nothing to chance, it’s more achievable than you may think.
Discovery planning, the phase in which an understanding of the overall IT infrastructure is built, normally begins with a survey of enterprise assets: data from existing systems, data imported from other sources, historical benchmarks, and performance details. We often find that much of this information, especially legacy data in older formats, is spread across various repositories, including archaic spreadsheets and databases.
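As a rough illustration of what that consolidation step can look like, the sketch below merges per-source CSV exports (spreadsheet dumps, CMDB extracts, monitoring reports) into a single record per host. The directory layout, file names, and "hostname" column are assumptions made for the example, not a prescribed toolchain.

    # Minimal sketch: consolidate asset inventory exports (CSV files from
    # spreadsheets, CMDB dumps, monitoring tools) into one record per host.
    # File locations and column names here are illustrative assumptions.
    import csv
    import glob

    inventory = {}  # hostname -> merged attribute dict

    for path in glob.glob("exports/*.csv"):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                host = row.get("hostname", "").strip().lower()
                if not host:
                    continue  # skip rows with no usable key
                record = inventory.setdefault(host, {})
                # Later sources fill gaps but never overwrite existing values,
                # so the freshest export should be processed first.
                for key, value in row.items():
                    if value and key not in record:
                        record[key] = value

    print(f"{len(inventory)} unique hosts across all sources")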
Applications have requirements that must be duplicated at the target destination in order to run just as they did at their original source. These incumbent requirements span a wide range, and the discovery phase itself must provide accurate, timely, relevant data that addresses both functional and non-functional considerations.
Accuracy and Data Relevancy
It’s not uncommon to encounter customers who believe they don’t need to engage in discovery because they already have a configuration management database (CMDB) that contains all of the necessary server and application dependencies. While it’s easy to be lured into the hope that this is true, more often than not we’ve found that the data contained in these systems is so poorly planned and captured that it’s of little value.
Whether the data is relevant to the migration project itself is of critical importance. Even if the information passes the first test of being accurate, it must also provide a snapshot of the functional requirements for the migration project to succeed. These include host firewall rules, software and hardware inventory, backend database affinities, utilization data for capacity planning, upstream and downstream servers, load balancer configurations, and SSL certificate requirements.
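To make those functional requirements concrete, here is a minimal sketch of the attributes a per-server discovery record might carry; the field names and types are our own illustration rather than a standard schema.

    # Illustrative sketch of the functional attributes a discovery record
    # might capture for each server; field names are assumptions, not a
    # standard schema.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ServerDiscoveryRecord:
        hostname: str
        hardware: Dict[str, str] = field(default_factory=dict)        # CPU, memory, storage
        software: List[str] = field(default_factory=list)             # installed packages/apps
        firewall_rules: List[str] = field(default_factory=list)       # host firewall policy
        database_affinities: List[str] = field(default_factory=list)  # backend DBs this host depends on
        upstream: List[str] = field(default_factory=list)             # servers that call this host
        downstream: List[str] = field(default_factory=list)           # servers this host calls
        load_balancer_pools: List[str] = field(default_factory=list)  # LB/VIP membership
        ssl_certificates: List[str] = field(default_factory=list)     # certificates and expiry dates
        utilization: Dict[str, float] = field(default_factory=dict)   # peak CPU/memory/IO for sizing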
Gathering Data
Discovery automation will gather forensics from applications and infrastructure within the source environment; however, it is limited because data used for a migration needs to be recent for an accurate assessment. Without recent and accurate data, it can only be regarded as a partial source of infrastructure topology. Another challenge is ‘information overload,’ which often results from feature-rich automation technology that captures more data than is necessary, or that doesn’t fully capture what’s needed to organize the information into an actionable state.
This happens often when the collection process is managed by teams employing collection technology selected solely on the bells and whistles it offers, rather than the operational data it mines. When cumbersome discovery software takes weeks, or even months, of integration time, followed by massive data evaluation and parsing efforts, we scratch our heads wondering why things have been made so complicated. Ultimately, the relevant information comes down to a view into what’s running and how: utilization, application and hardware inventory, affinity connection information, and interdependency mapping.
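The interdependency mapping piece can be illustrated with a small sketch that turns observed connections into upstream and downstream views of each host. The tuple format (source, destination, port) and the sample hosts are assumptions for the example; real collectors emit far richer records.

    # Minimal sketch: build an interdependency map from observed network
    # connections. The input format (source host, destination host, port)
    # and the sample data are illustrative assumptions.
    from collections import defaultdict

    observed_connections = [
        ("web01", "app01", 8080),
        ("app01", "db01", 5432),
        ("web02", "app01", 8080),
    ]

    depends_on = defaultdict(set)  # host -> hosts it calls (downstream)
    called_by = defaultdict(set)   # host -> hosts that call it (upstream)

    for src, dst, port in observed_connections:
        depends_on[src].add((dst, port))
        called_by[dst].add((src, port))

    # Hosts that share a dependency chain are candidates for the same
    # move group, since splitting them across migration windows can add
    # latency or break affinity.
    for host, targets in depends_on.items():
        print(host, "->", sorted(targets))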
BI
One of the most important and overlooked parts of a migration project is consideration of how the compute components serve the applications used by the business itself. Combining business intelligence (BI) with hard data is the only way to create a plan that reflects the non-functional requirements that no discovery technology yields on its own. Since migration is first and foremost an exercise in risk mitigation, the business drivers for the migration itself must be defined and understood by both the IT and management teams.
Typically, we find that most migrations come on the heels of business milestones such as an upgrade in data center facilities, mergers and acquisitions that reduce redundancy, business continuity planning, legacy system upgrades, cloud adoption, or multi-cloud configurations.
In each case, the priorities, risks, costs, and complexity require a meticulous understanding of the business imperatives behind the decision to migrate in the first place. This process of gathering BI includes an assessment of blackout periods, user acceptance plans, uptime requirements, and service-level agreements that provide insight into the destination environment.
Putting It Together
When planning for discovery, we’ve also found that a good rule of thumb is to set a window of 30 to 45 days for setup and data gathering. This timeframe allows for the capture of enough data sets to draw coherent conclusions, including batch processing, which is typical in most host environments, and provides enough time for meaningful discussions to collect BI. Discovery that takes longer than this usually produces redundant information and doesn’t result in better analysis.
We’re strong proponents of planning in which applications are prioritized and grouped into ‘waves.’ While a move group combines all the servers that must move together in a single migration window, wave planning includes any combination of move groups and workloads: non-production move groups, external-facing move groups, and production move groups. This type of planning also addresses the capacity elements necessary to support the incoming compute environments, such as physical resources and the configuration of the target environment, and times the moves to accelerate the migration process.
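A simple way to picture wave planning is to treat servers that share dependencies as one connected component of the dependency graph (a move group) and then order those groups into waves, for example non-production before production. The sketch below does exactly that; the edges and environment tags are invented for illustration.

    # Minimal sketch of wave planning: servers that share dependencies form
    # a move group (a connected component of the dependency graph), and the
    # move groups are ordered into waves, non-production before production.
    # The graph and environment tags are illustrative assumptions.
    from collections import defaultdict

    edges = [("web01", "app01"), ("app01", "db01"), ("report01", "dwh01")]
    environment = {"web01": "prod", "app01": "prod", "db01": "prod",
                   "report01": "nonprod", "dwh01": "nonprod"}

    adjacency = defaultdict(set)
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)

    def move_groups(adjacency):
        """Return connected components: servers that must move together."""
        seen, groups = set(), []
        for node in adjacency:
            if node in seen:
                continue
            stack, group = [node], set()
            while stack:
                n = stack.pop()
                if n not in group:
                    group.add(n)
                    stack.extend(adjacency[n] - group)
            seen |= group
            groups.append(group)
        return groups

    # Order waves so non-production move groups migrate before production ones.
    waves = sorted(move_groups(adjacency),
                   key=lambda g: any(environment.get(s) == "prod" for s in g))
    for i, group in enumerate(waves, 1):
        print(f"Wave {i}: {sorted(group)}")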
A good cloud migration strategy starts with spending the time to conduct a detailed evaluation of your compute infrastructure. By doing this, the coming transformation phase and overall migration efforts become primed and tuned to run at full throttle, following a blueprint for well-orchestrated, seamless mobility. The decisions companies make in the discovery phase can save an enormous amount of time and effort, not to mention money, and are crucial to ensuring a smooth transformation to the cloud.
Charles Wright is CEO/CTO, founder of ATADATA. With two decades of technical leadership roles, including 12 years at IBM’s Migration Factory, he’s an expert in large-scale transformation projects. As both an IBM Certified Architect and Open Source Group Master Architect with 25 years of experience, Larry Lopez oversees solution architecture for ATADATA.
August 2017, Software Magazine