Back Up Data Correctly to Avoid a Disaster, Even When Disaster Strikes

As the tech landscape evolves, data storage practices need to be carefully considered and amended to meet changing requirements

The convergence of growing volumes of data stored on company networks and stricter compliance regulations dictating how long that information must be retained has helped cloud storage explode in popularity. The cloud offers an easy-to-use, scalable and cost-effective solution for data storage. However, organizations must seriously consider how they manage that storage from a back-up and disaster recovery perspective. While there is no doubt that cloud computing can speed disaster recovery - from reducing the time it takes to restore data to the fact that information is stored off-site, alleviating the risk posed by natural disasters - incorrectly managed storage can prove more of a hindrance than a help. Whether an employee accidentally deletes a file or a more sinister hack on the company network takes place, it is inevitable that most organizations will need to recover data at some point. Planning for disaster is essential, and having an effective back-up and disaster recovery process in place can save headaches down the line.

As the tech landscape evolves and organizations increasingly have to adapt to new trends, such as virtualization and unstructured Big Data, data storage practices need to be carefully considered and amended to meet changing requirements. With a multitude of options available, IT teams can struggle to identify the best solution for their organizational needs. Companies often fail to consider future scenarios when making decisions and instead focus on their needs at the current time. This has the potential to cause problems down the line, particularly when it comes to back-up and disaster recovery strategies.

From hardware failure to network hacks, the potential for data loss is huge. A recent survey by independent research firm TechValidate* revealed that significant hardware failures occur far more frequently than many might believe: 52 percent of respondents had seen a failure within the last year, and of that number, 37 percent had suffered the loss within the last six months. The same study also revealed that 81 percent of organizations do not have a tried and tested back-up and disaster recovery strategy in place. What is alarming about these statistics is that disaster recovery will be an inevitable requirement at some point for almost every business, yet most have not prepared for the eventuality.

If more than 80 percent of U.S. companies have not tested their disaster recovery strategies, chances are they have no idea how long it would take to restore their business-critical data if disaster were to strike. Where data is stored makes all the difference. While storing all data in one place may once have been the norm, this need not be the case with a cloud solution. In fact, storing everything in one environment can contradict a number of the cloud's value propositions, leading to adverse financial and disaster recovery effects. Cloud storage is a relatively cheap commodity, but storing everything - from emails about company social events to key customer information - all in one place can rapidly become expensive, even in the cloud. From a practical point of view, much of the information a company stores will never be looked at again, and while compliance initiatives dictate that data has to be retained for a certain period of time, the location is up to the organization. There is therefore no reason to store everyday-essential information in the same location as the ‘never-again' information.

Further, if an outage occurs, any company will need to get its business-critical information back as close to immediately as possible. But if every piece of company information recorded over the last 10 years is being recovered at once, the process will be hindered and will take far longer than necessary, or feasible, for business operations. This will not only cause serious headaches for anyone who needs access to the data; it could also cost millions in lost revenue. Imagine a retail outlet unable to process payments because its server has gone down and can't be brought back up quickly enough, owing to all the less essential information being restored alongside the critical data. The revenue lost could be extremely detrimental.
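To make this concrete, here is a minimal sketch in Python of priority-ordered restoration, where business-critical data comes back first; the job names, tiers and sizes are entirely hypothetical:

```python
# A sketch of priority-ordered restoration. Job names, tiers and
# sizes are hypothetical examples, not real datasets.
from dataclasses import dataclass

@dataclass
class RestoreJob:
    name: str
    tier: int      # 1 = business-critical, 3 = rarely accessed
    size_gb: int

def restore_order(jobs):
    # Restore lower-numbered (more critical) tiers first; within a
    # tier, restore smaller datasets first so critical systems come
    # back online as quickly as possible.
    return sorted(jobs, key=lambda j: (j.tier, j.size_gb))

jobs = [
    RestoreJob("payment-db", tier=1, size_gb=40),
    RestoreJob("email-archive", tier=3, size_gb=900),
    RestoreJob("crm-records", tier=1, size_gb=120),
]

for job in restore_order(jobs):
    print(f"Restoring {job.name} (tier {job.tier}, {job.size_gb} GB)")
```

Sorting by tier before size means the payment database and CRM records are restored before the decade-old email archive ever enters the queue.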

Storing by Importance
A new approach should be considered in order to ascertain where data should be stored. A key element of any disaster recovery plan is the idea of "tiering" the data to be recovered based on its overall business importance. This allows resources to be proportioned correctly against budget requirements and business impact.

The first step should be deciding which applications and data are business critical and which are not. The data can then be grouped by importance and a ‘storage hierarchy' put in place. Data that does not need to be accessed frequently can be placed in lower-cost storage that may take days to recover, while business-critical information should be placed in more expensive storage from which it can be recovered quickly. In the event that a system restore is necessary, irrelevant information will not slow the process down and everything can be returned at a speed appropriate to its importance.
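As an illustration only - the tier names, storage locations and recovery times below are assumptions rather than any particular product's terminology - a storage hierarchy can be expressed as a simple mapping from data classes to storage characteristics:

```python
# A sketch of a 'storage hierarchy': each tier maps a class of data
# to storage with different cost and recovery-time characteristics.
# Tier names, locations and times are illustrative assumptions.
STORAGE_TIERS = {
    "critical": {"location": "replicated-ssd", "recovery": "minutes", "cost": "high"},
    "standard": {"location": "standard-cloud", "recovery": "hours",   "cost": "medium"},
    "archive":  {"location": "cold-archive",   "recovery": "days",    "cost": "low"},
}

def assign_tier(dataset, business_critical, rarely_accessed):
    """Place a dataset in a tier based on the initial business triage."""
    if dataset in business_critical:
        return "critical"
    if dataset in rarely_accessed:
        return "archive"
    return "standard"

tier = assign_tier("customer-orders",
                   business_critical={"customer-orders", "payment-db"},
                   rarely_accessed={"2004-email-archive"})
print(tier, "->", STORAGE_TIERS[tier]["location"])   # critical -> replicated-ssd
```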

Most companies have vast amounts of data, and manually deciding what is stored where would be a laborious process for an individual, or even a team, once the initial segregation has taken place. Therefore, once the hierarchy has been put in place, it can be combined with an automated system that intelligently tracks and tags all data based on predefined rules and automatically diverts it to the correct location. Not only does this allow IT teams to focus on more value-adding tasks, it also ensures all data is backed up, without concern that anything may have been missed.
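One minimal sketch of such rule-based tagging, assuming simple path-pattern rules (the patterns and tier names are illustrative, not drawn from any specific product), might look like this:

```python
# A sketch of rule-based tagging: predefined rules inspect a file's
# path and assign a storage tier automatically. Patterns and tier
# names are illustrative assumptions.
import fnmatch

RULES = [
    # (path pattern, tier) - evaluated in order, first match wins
    ("*/finance/*.db", "critical"),
    ("*/customers/*",  "critical"),
    ("*/email/*",      "archive"),
]

def classify(path, default="standard"):
    for pattern, tier in RULES:
        if fnmatch.fnmatch(path, pattern):
            return tier
    return default

print(classify("/data/finance/ledger.db"))        # critical
print(classify("/data/shared/party-photos.jpg"))  # standard
```

In practice the rules would cover far more metadata than file paths - owners, age, content type - but the first-match-wins shape above is the basic pattern of automated data placement.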

With these systems in place, businesses can test and tweak their strategies and be sure that, in the event of an outage, their applications, data and systems are only the touch of a button away. Planning, implementing and testing data recovery techniques help ensure that the actual disaster is the only disaster.

*Survey conducted by independent research firm TechValidate, December 2012.

More Stories By Bob Davis

With more than 25 years of software marketing and executive management experience, Bob Davis oversees Kaseya’s global marketing efforts. He applies significant experience from marketing network and system management solutions to directing Kaseya’s strategy, product marketing, branding, public relations, design and social networking functions. One of the original founders of the company, Davis returned to Kaseya in 2010.
