Fast Backup and Recovery with Sepaton


Backup and Recovery Insights


Extending Encryption

 
Florin Dejeu

The Data Center Journal recently featured an article I wrote entitled "Extending Encryption."

In this article, I write that data breaches are an everyday occurrence, and the consequences for organizations that suffer a breach can be horrendous, ranging from loss of consumer confidence to fines and punitive damages.

Some organizations, however, have discovered through experience that there is one tool that can help: encryption. For instance, last fall, the Second Appellate District Court of California ruled in favor of the defendant, the University of California, in a breach dating back to 2011 involving the theft of a laptop with 16,000 patient records. A plaintiff had sued under the Confidentiality of Medical Information Act (CMIA), a state statute. The case was decided in part on the fact that the patient records on the laptop were encrypted, thus nullifying any notion of actual loss of confidentiality or privacy.

To be sure, encryption of data “in flight” is increasingly the norm, and many organizations now also recognize the importance of encrypting data “at rest” on a storage device. But despite the growing need to protect sensitive data to meet regulatory and compliance mandates as well as service-level agreements, many are hesitant to embrace the technology broadly. The resistance to encrypting data at rest comes down to a few important factors: complexity, risk and cost. From a complexity standpoint, the problem is having to deal with many applications, most of which have proprietary key managers, along with the day-to-day management of multiple encryption-key-management (EKM) systems. Risk and cost go hand in hand: missing a backup window is a serious risk for any enterprise trying to meet and improve on service levels, and on most systems, enabling encryption of data at rest increases the time it takes for data to reach safety on the backup appliance, raising that risk further. From a cost perspective, consider the time lost managing all of these disparate systems every day versus a centralized key-management approach and a single data-protection platform to manage.

Still, few can ignore the worsening threat that is bedeviling security professionals. Clearly, data inside the firewall can no longer be considered safe. Furthermore, the rise of cloud data storage and cloud computing in general means organizations must now vouch for data in environments they don’t even directly control. Those trends are now driving demand for and interest in protecting data at rest via encryption.

Fortunately, a double revolution is brewing. First, the complexity associated with managing encryption keys is no longer as great a barrier, thanks to developments such as the Key Management Interoperability Protocol (KMIP), an industry-standard communications protocol linking key-management systems and encryption systems. The protocol is governed by the Organization for the Advancement of Structured Information Standards (Oasis). Enterprises using multiple encryption solutions, each with its own (often proprietary) key manager, have faced ever more complexity and cost; implementing key management via an industry standard shifts those economics. Thus, KMIP is rapidly becoming the recognized method to manage encryption keys, no matter where they are required—and both vendors and their customers are committing to the technology.

KMIP isn’t entirely new. In fact, it was initially submitted to Oasis for standardization back in 2009. Enthusiastic vendors quickly began to announce updates to their products that would incorporate KMIP; actual demos and compliant-product deliveries followed.

With KMIP, a server stores and controls keys, certificates and user-defined objects. Client devices access these objects using the protocol through a server-implemented security model. Objects have core base-object properties such as key length and value, as well as extended attributes that can include user-defined features. Objects each have a unique, fixed identifier along with a name, which can be altered if desired.
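To make the object model and lifecycle concrete, here is a minimal sketch of a KMIP exchange using the open-source PyKMIP client library. It is an illustration only, not a description of any particular vendor's implementation; the server address, certificate paths, and key name are placeholders, and the exact API should be confirmed against the PyKMIP documentation.

    # Minimal sketch: creating, retrieving, and destroying an AES key on a
    # KMIP-compliant key-management server via the open-source PyKMIP client.
    # Hostname, port, certificate paths, and key name are placeholders.
    from kmip.pie import client
    from kmip.core import enums

    kmip_client = client.ProxyKmipClient(
        hostname="kms.example.com",   # KMIP key-management server (placeholder)
        port=5696,                    # standard KMIP port
        cert="/etc/pki/client.pem",   # client TLS certificate (placeholder)
        key="/etc/pki/client.key",
        ca="/etc/pki/ca.pem",
    )

    with kmip_client:
        # Create a 256-bit AES key; the server stores the object and returns
        # its unique, fixed identifier.
        key_id = kmip_client.create(
            enums.CryptographicAlgorithm.AES,
            256,
            name="backup-appliance-data-at-rest-key",
        )

        # Retrieve the managed object later by its identifier, for example
        # when a backup appliance needs the key to encrypt or decrypt data at rest.
        key = kmip_client.get(key_id)

        # Destroy the key when it is retired from service.
        kmip_client.destroy(key_id)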

Momentum is building. According to Oasis, the 2013 RSA Conference included KMIP clients from Cryptsoft, IBM, Quintessence Labs, and Thales e-Security communicating with key-management servers from Cryptsoft, HP, IBM, Quintessence Labs, Thales e-Security, Townsend Security and Vormetric. The clients and servers were able to demonstrate the full key-management life cycle, including creating, registering, locating, retrieving, deleting and transferring symmetric and asymmetric keys and certificates between vendor systems. Support for older and newer versions of KMIP was also demonstrated at the event.

Although KMIP has provided broad enablement for encryption through simplified key management, others have been focusing on delivering innovations related to the inherent performance challenge—particularly relative to backup activities, which can become unacceptably slow when coupled with encryption and decryption. These performance challenges can inhibit the ability to implement encryption as well as perform other vital tasks. For instance, traditional backup systems often lack the scalability and robustness to meet the challenges presented by explosive data growth, increasing risk and complexity, tight budgets, and the additional requirements of encryption—especially simultaneously. Data growth in particular—generally averaging about 20 percent per year—magnifies all the other challenges.

One part of the challenge is simply having enough storage. But facing the need to back up data from powerful applications such as enterprise resource planning (ERP) systems and the large databases behind SAP and Oracle Business Suite, large enterprises and their IT staff are recognizing that just adding more non-scalable disk solutions or tape libraries to the mix is no longer a cost-effective or efficient strategy. They need more workable approaches: enterprise-optimized data-protection solutions that help them address these challenges and stay ahead of the data-growth curve.

Enterprises need a powerful, flexible data-protection platform that delivers the scalability and performance to accommodate both data growth and changing storage architectures, as well as enhanced data-protection features such as encryption. Such a platform must also be reliable and easy to deploy and use, providing higher levels of service while reducing operating and capital costs.

Successful solutions to the performance dilemma, as outlined above, combined with encryption and the capabilities of KMIP, should deliver performance that neither suffers when data is being “ingested” nor increases the “time to safety” (the amount of time when data is on its way to disk but not yet encrypted).

Enterprise-scale organizations must move ahead now, gaining the benefits of more-capable and efficient storage infrastructure along with the crucial protection that encryption of data at rest provides. Doing so can help contain costs and avoid the alarming risks associated with data breaches and compliance violations. Fortunately, the technology is available today.

 

 

 

 

Large Enterprise Data Protection - Flexibility and Agility Prevail

 
Mike Thompson, President, CEO, Sepaton

One of our customers, a very large natural gas company, is a Symantec NetBackup user with huge data volumes to protect. Like many Symantec customers, they see the advantages of Symantec’s OST products and have started migrating from their current NBU deployment.

Change is constant in large enterprises, and they appreciate the value of systems and technologies that can accommodate change with minimal cost or upheaval to ongoing operations. As this company moves its backup from a NetBackup environment with Fibre Channel-attached SAN storage to a NetBackup/OST environment with 10 Gigabit Ethernet, the need for a scalable, disk-based data protection platform with multiprotocol flexibility is acute. Instead of adding multiple bounded appliances and dividing their backup volumes between them, they are using a flexible, scalable appliance that uses storage pooling technology to separate their NBU and OST environments. The platform allows them to add compute and storage resources to each environment as needed and to migrate from one protocol to the other in a seamless, risk-free way.

Flexibility and agility go a long, long way in large enterprises.

 

Warning: Big Data will Break Your Backup

 
Mike Thompson, President, CEO, Sepaton

The volume of data that every enterprise is dealing with continues to grow at an alarming rate, and backup has been at the forefront of this wave. In fact, the amount of data being protected is growing so fast, and the solutions are so broken, that it’s like pouring gasoline on a fire. But this is just the beginning – as technologies like cloud and big data continue to take hold, the problem is getting significantly worse – and most traditional storage approaches will fail under the weight of this growth. The “big data” wave will ripple through enterprise environments, starting in primary storage and building to a tsunami of epic proportions by the time it reaches the data protection environment. It will create fire drill after fire drill.

Backup is the first place customers will feel the pain – any non-scalable data protection technology will leave customers with a hard choice: keep buying difficult-to-manage data protection silos or switch to truly scalable data protection architectures that have a chance of solving the problem. “Big Data Backup” is about to hit us all and break the infrastructure we’ve been building for the last decade.

But the trend won’t stop there. Current methodologies for archiving and staging data for traditional analytics platforms will also fail as data sets grow to sizes that turn simple data transfers into major pain points. The traditional scale-out NAS approach just won’t keep up. Newer, massively scalable object repositories will become the only viable solutions going forward. Any second-tier solutions not built around these contemporary technologies will eventually fall out of competitive position.

The pain of building scalable primary cloud and analytic repositories is already being felt. Most enterprise customers simply can’t approach the problem like Google, Yahoo and other engineering-dominant companies. Massively scalable and manageable analytic platforms are needed – and they must interoperate efficiently with similarly designed scalable data protection platforms going forward.

Interoperability is not just a nice-to-have; it is essential. Really big data needs to move as little as possible, and when it does move, efficiency becomes critical. Most traditional analytics storage technologies are buckling under the current load. With no slowdown in sight, we’re sure to see these traditional approaches fail completely in the near future.

So what is a customer to do? First, be aware of the size and scope of the problem. It’s overwhelming and it’s going to get much, much worse. Second, be cautious about committing to tactical fixes and traditional siloed approaches. Even if they let you “get by” for a while, they will eventually fail on all fronts – causing massive added cost, weak manageability, and data movement that will crush you.

Third, start to evaluate emerging “Big Data Backup,” “Big Archive” and “Big Data” solutions and build plans to integrate them into your storage ecosystem soon. Finally, remember that emerging solutions need not only to scale and perform cost-effectively, but also to be manageable in all ways. IT staff are being asked to manage more and more data per person. In fact, the ratio of IT staff to volume of data managed is getting ridiculous, so issues like ease of deployment, upgrade and normal operation are critical. These solutions must also automate data management and integrity tasks, periodically scrubbing data. The bigger data stores get (and they will get huge in coming years), the more data integrity on these systems becomes an issue to worry about.

In a coming blog, I will discuss the demands that big data is making on backup environments in more detail.

Warning: Big Data will Break Your Backup Part 2

 
Mike Thompson, President, CEO, Sepaton

Big Data Demands Big Data Backup

As I mentioned in my last blog, Warning: Big Data will Break Your Backup, we have all gotten used to hearing about huge data growth and its impact on data centers. As daunting as this data growth has been, it doesn’t compare to the tsunami being generated in Big Data environments. Big data is measured in petabytes and growing by orders of magnitude annually. Simply adding more traditional data protection technologies – tape and multiple siloed systems – is not sufficient. They are just too slow and too labor-intensive to be feasible and cost-effective in these environments.

Big Data Environments Need a New Approach

Just as big data primary storage is efficient, fast, and scalable, big data secondary storage and data protection need to be efficient, fast, and scalable too. Let’s start with efficient.

With big data protection come big costs. The days of backing up and storing everything (just in case) are over. Cost reduction, labor savings, and system efficiency are paramount to allow organizations to protect massively growing amounts of data while meeting backup windows and business objectives. Automatic, tiered data protection that includes CDP, snapshots, D2D, tape, remote replication and cloud is a must-have. Data protection technologies designed for big data environments need to be capable of automatically identifying and moving low-priority data to the lowest-cost recovery tier – without administrator involvement.
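As a rough illustration of what automatic tiering can mean, the sketch below shows a toy policy that assigns each dataset to a recovery tier based on its recovery SLA and how recently it was accessed. The tier names, thresholds, and dataset attributes are hypothetical, not any vendor's actual policy engine.

    from dataclasses import dataclass

    # Hypothetical recovery tiers, ordered from fastest/most expensive to
    # slowest/cheapest. Names and thresholds are illustrative only.
    TIERS = ["snapshot", "disk", "tape", "cloud"]

    @dataclass
    class Dataset:
        name: str
        recovery_sla_hours: float    # required recovery time objective
        days_since_last_access: int

    def assign_tier(ds: Dataset) -> str:
        """Pick the cheapest tier that can still satisfy the dataset's SLA."""
        if ds.recovery_sla_hours <= 1:
            return "snapshot"        # near-instant recovery required
        if ds.recovery_sla_hours <= 12 or ds.days_since_last_access < 30:
            return "disk"            # fast restore from deduplicated disk
        if ds.recovery_sla_hours <= 72:
            return "tape"            # low-priority, infrequently accessed data
        return "cloud"               # lowest-cost, longest-retention tier

    # Example: a cold archive dataset lands on the cheapest tier automatically.
    print(assign_tier(Dataset("hr-archive-2009", recovery_sla_hours=168,
                              days_since_last_access=400)))   # -> "cloud"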

Tiered Storage Becomes Essential

A tiered recovery model enables data managers to balance costs with recovery time, depending on the recovery SLA requirements of the specific dataset. While low-priority data may have slower restore times, all restores use the same mechanism, regardless of tier.

Second, even with smart, policy-driven tiering, big data environments still need fast, scalable ingest performance to meet their data protection needs. Stacking up dozens of single-node, inline deduplication systems will lead to crushing data center sprawl and complexity. To meet data protection needs and windows, big data environments need massive single-system capacity coupled with scalable, multi-node deduplication that doesn’t slow data ingest or bog down restore times.

Compatibility with Existing Infrastructure and Reporting

Third, given the scale of big data environments, high-risk, rip-and-replace solutions are out of the question. New data protection solutions need to coexist and integrate seamlessly into existing environments without disruption or added complexity. Optimally, IT staff can manage the resulting environment - both new and existing IT resources - in one simple view that consolidates management and reporting for the entire infrastructure.

Database Deduplication Efficiency

Big Data environments often have a large volume of duplicate data in segments that are too small for many of today’s deduplication technologies to detect. New, big data-optimized deduplication technologies are required to minimize costs and footprint regardless of where the data is retained.

The bottom line: traditional data protection cannot handle demanding big data environments. A new approach is needed that delivers:

• Fast, deterministic ingest and egress performance

• Intelligent, automated tiering

• Enterprise-wide management control and reporting

• Scalable deduplication designed specifically for big data database environments

In a coming blog, I will cover the Big Data topic in more depth with a discussion of what is needed in a big backup appliance.

 

10 Questions For Your Data Protection Vendor On Big Backup

 
Mike Thompson, President, CEO, Sepaton

Large enterprises must look closely at the strengths and weaknesses of the available deduplication technologies on the market before choosing a solution that best meets the needs of their own Big Backup environments.

The volume of data generated by most companies is now growing at such an explosive rate that many data centers have simply run out of the space, power, cooling and storage capacity required to handle it. In large enterprise organizations with ‘Big Backup’ environments, the sheer volume and variety of data to be protected requires a level of performance and scalability that few data protection technologies can deliver.

Keeping up with exponential data growth within budget, space, power and cooling constraints is a constant data center challenge. And it is being compounded by increasingly stringent regulatory requirements and business initiatives demanding higher service levels, longer online retention times and higher levels of data protection. There are several deduplication technologies available to meet the needs of small-to-medium-sized organizations. However, for large enterprise organizations with Big Backup environments, most deduplication technologies fall short, resulting in costly, inefficient capacity reduction and time-consuming administration. Understanding the distinctions between these technologies is essential for choosing those most appropriate for Big Backup.

SEPATON recommends that all large enterprises ask their backup technology vendor the following ten questions. This will help them gain a clear understanding of the strengths and drawbacks of each option so that they can choose one that best meets their Big Backup requirements:

1. What impact will deduplication have on backup performance, both now and over time?

2. Will deduplication degrade restore performance?

3. How will capacity and performance scale as the environment grows?

4. How efficient is the deduplication technology for large databases (e.g., Oracle, SAP, SQL Server)?

5. How efficient is the deduplication technology in progressive incremental backup environments such as Tivoli Storage Manager (TSM) and in NetBackup OST?

6. What are realistic expectations for capacity reduction given the high data change rate common in Big Backup environments?

7. Can administrators monitor backup, deduplication, replication and restore processes enterprise-wide?

8. Can deduplication help reduce replication bandwidth requirements for large enterprise data volumes without slowing backup performance?

9. Can IT teams ‘tune’ deduplication by data type to meet their specific needs?

10. How much experience does the vendor have with large enterprise backup applications such as Symantec NetBackup/OST and TSM?

For more details, see the white paper titled 'Choosing an Enterprise-Class Deduplication Technology.'

 

2014 Predictions - Sepaton Data Protection Trends to Watch For

 

Florin Dejeu

Virtual Strategy Magazine invited me to reflect on the past year in a recent article that looks ahead to the data protection challenges, trends, and emerging technologies that will shape our coming year. As IT professionals, we divide our time between looking back – analyzing our systems to identify ways to improve them, to save money, to do more with fewer resources – and looking ahead to the latest technologies to keep our data centers efficient, flexible, and responsive in today’s dynamic marketplace. Here are trends to watch out for in 2014:

Trend #1: Data Growth Hockey Stick

Data growth is nothing new in enterprise data centers. What will be new for 2014 is the hockey-stick growth that will be driven by increased use of big data analytics, a growing number of applications, and increased use of large database-driven applications.

What this means:

Rapid data growth causes three major (and myriad minor) issues. First, backing up large and fast-growing data volumes within backup windows becomes an almost daily struggle.

Second, your costs skyrocket. Capital spending increases as you quickly (and often unexpectedly) outgrow your backup system and start adding systems for capacity and performance. Your IT labor costs go up too, as your IT administrators spend hours moving data off disk-based systems onto tape archives, adding and load-balancing additional systems, and tuning every possible part of your data center to squeeze out every last drop of performance.

Third, you start compromising protection levels. You can’t move massive volumes of data offsite for disaster protection efficiently, you can’t back up data as frequently as you would like, and you can’t encrypt data at rest.

What you should do:

Use systems that are specifically designed to handle enterprise data protection needs. These systems let you add performance and capacity as you need it to protect massive data volumes without data center sprawl. They automatically load balance and tune themselves for optimal efficiency. They are built to deliver the performance you need to meet backup windows with ease.

Trend #2: Increased Reliance on Large Database-Driven Applications in a Sea of Unstructured Data

Enterprises are moving more and more business operations onto large Oracle, SQL, and DB2 database-driven systems. As a result, enterprises are backing up a wider mixture of data types and more database data than ever before, and their tolerance for downtime for this data is nearly zero.

What this means:

Database data is notoriously difficult to deduplicate efficiently for two reasons. First, databases store data in very small segments (<8KB) that inline deduplication systems cannot process without slowing backup performance. Second, to meet backup windows, they are typically backed up using multiplexing and multistreaming, which are not supported by inline, hash-based deduplication systems. As a result, data centers are being inundated with massive volumes of under-deduplicated database data.
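To see why multiplexing in particular defeats fixed-block, hash-based deduplication, consider the toy illustration below: it hashes the same database stream in 8KB blocks twice, once backed up plainly and once interleaved with another job, and measures how many block hashes repeat. The block size and interleaving pattern are simplifications for illustration only.

    import hashlib
    import os

    BLOCK = 8 * 1024   # fixed 8 KB dedup block (illustrative)
    SLICE = 4 * 1024   # multiplexing interleaves jobs in 4 KB slices (illustrative)

    def duplicate_ratio(data: bytes) -> float:
        """Fraction of fixed-size blocks whose hash repeats (i.e., is deduplicable)."""
        hashes = [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
                  for i in range(0, len(data), BLOCK)]
        return 1 - len(set(hashes)) / len(hashes)

    db_stream = os.urandom(BLOCK * 64)   # stand-in for one database backup stream
    other_job = os.urandom(BLOCK * 64)   # an unrelated job multiplexed alongside it

    # Backing up the same database twice, unmultiplexed: every block of the
    # second copy matches a block of the first, so fixed-block dedup removes half.
    print(duplicate_ratio(db_stream + db_stream))          # ~0.50

    # Second copy multiplexed with the other job: each 8 KB block now mixes
    # 4 KB of database data with 4 KB of unrelated data, so no block hash
    # matches the first copy and fixed-block dedup finds almost nothing.
    multiplexed_copy = b"".join(
        db_stream[i:i + SLICE] + other_job[i:i + SLICE]
        for i in range(0, len(db_stream), SLICE)
    )
    print(duplicate_ratio(db_stream + multiplexed_copy))   # ~0.00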

What you should do:

Use a data protection system that is built to handle enterprise-class databases and backups with a mixture of data types. These systems use smart, hybrid deduplication technology that determines the best technology to use based on the actual data type. They can deduplicate any data type, including database data – even in multiplexed, multistreamed backups – at the byte level without slowing ingest performance. They analyze the composition of incoming data streams and apply the deduplication technique (inline and/or post-process) that will deliver the most efficient combination of performance to meet backup windows, capacity optimization, and replication bandwidth use.

Trend #3: Increasingly Complex Backup Environments

The days of simply backing up to tape or virtual tape are long over. Today’s data protection environment uses a wide range of backup protocols and data protection technologies – backing up to the cloud, tiering of data, snapshots, source and target deduplication, and archiving, to name a few.

What this means:

Data center complexity quickly translates to increased cost and more risk of data loss and downtime. As data centers continue to divide backup data volumes into disparate backup and protection schemes, they not only lose efficiencies of scale, they also increase the likelihood of human error. As we look ahead to the coming months and years, data centers will continue to grow and increase in complexity as they introduce more diverse technologies and backup schemes to handle data growth and to meet regulatory requirements and service-level agreements.

What you should do:

Consider implementing a data protection system that is designed to provide the efficiency and flexibility that you require today and into the future. These systems can read the content of incoming backup volumes, make intelligent decisions about the best technology for efficiency, and move massive data volumes with remarkable efficiency. They are designed with the future of enterprise data centers in mind, providing a way for IT managers to leverage numerous data protection technologies – cloud, tiering, hybrid deduplication, snapshots, etc. – without adding risk and complexity.

These three trends will shape the future of enterprise data protection in 2014 and beyond.

 

Wanted: An Intelligent Approach to Data Protection

 
Mike Thompson, President, CEO, Sepaton

Data growth continues to skyrocket, and data is becoming increasingly important to business. As a result, the data center infrastructure needed to support that growth is becoming increasingly important as well. Yet with constrained IT budgets, backing up ever-increasing data volumes is more difficult, and managing the growing diversity of applications, systems, protocols, and types of data sources is more complex.

One of our customers, NTT Europe, a managed service provider, was planning to expand throughout Europe. They needed a backup environment that could grow with them without disrupting service to their end users. They chose Sepaton because of our grid scalability and because of our unique deduplication capability that uses recently patented technology to deduplicate very large data volumes efficiently without slowing backup or restore performance.

According to NTT Europe, other deduplication methods were much slower and less efficient in reducing capacity requirements than the Sepaton DeltaStor™ technology. Using our patented deduplication technology, NTT can now restore customer data immediately, without waiting for lengthy ‘rehydration’ processes. Check out their story here.

While deduplication ratios vary based on data types, it’s fair to say our customers have seen dramatic space savings using Sepaton deduplication methodologies and are saving costs by backing up entirely to one easy-to-manage data protection appliance. Many of our customers have seen industry-leading 10:1 data reduction in their progressive incremental TSM backup environments.

For example, with Hitachi Data Systems (HDS) high-density drives in the Sepaton system combined with DeltaStor deduplication, one of the country's largest regional healthcare systems has enough capacity to support its data growth for approximately three more years. They anticipate that the added density will save them thousands of dollars annually, and they now have the capacity to pull all data off their existing physical LTO4 drives and back up entirely to one easy-to-manage Sepaton data protection appliance.

As the volume of data continues to grow, new and more cost-effective ways of protecting data are needed, and innovative, intelligent deduplication is an important part of the answer.

Implementing High-Performance Backup, Recovery: 10 Best Practices

 
Mike Thompson, President, CEO, Sepaton

I recently provided content to eWeek for a slideshow on 10 best practices for implementing backup and recovery.

Data protection capabilities for high-performance, virtualized environments come with some of the thorniest issues in IT. Problems exist in how well backup and recovery practices work in both physical and virtual realms—because most enterprises now have both—as well as in cost and complexity. In a recent study published in eWEEK, which surveyed businesses in the United Kingdom, the United States, Germany and France, 85 percent of small and midsize businesses (SMBs) cited cost as an obstacle, 83 percent are concerned about their data protection capabilities, and 80 percent pointed to complexity issues. These challenges are based partly on continued attempts to apply traditional, physical-world strategies to data protection in virtual environments, instead of adopting a new approach. Enterprises are taking new approaches to the challenges they face in implementing data protection. For example, 55 percent of SMBs say they plan to change their tools for backing up virtual environments in the next two years. Here are 10 key recommendations for implementing a high-performance, cost-effective, grid-scalable backup and recovery system with high deduplication rates.

 

The Hidden Costs of System Sprawl

 
Florin Dejeu

Data Center Knowledge published a recent article of mine on the hidden costs of system sprawl.

While data center managers have grown accustomed to rapid data growth, few could have anticipated the unprecedented data growth and increased complexity that has overwhelmed many data center backup environments in the past few years. According to industry analysts, data in large enterprises is growing at 40-60 percent compounded annually.
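To put compounded growth in perspective, the short calculation below projects a hypothetical 100 TB backup footprint forward at the low and high ends of that range; the starting size is a made-up round number, not data from any customer.

    # Hypothetical projection: 100 TB of protected data growing at a
    # 40-60 percent compound annual rate (the range cited above).
    start_tb = 100
    for rate in (0.40, 0.60):
        for years in (1, 3, 5):
            print(f"{rate:.0%} growth, year {years}: "
                  f"{start_tb * (1 + rate) ** years:,.0f} TB")
    # At 40%: ~140 TB after 1 year, ~274 TB after 3, ~538 TB after 5.
    # At 60%: ~160 TB after 1 year, ~410 TB after 3, ~1,049 TB after 5.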

Data growth is fueled by the proliferation of new business applications, the introduction of Big Data analytics, the increased use of mobile devices and tablets in the work place, and the increased use of large databases to run core company functions (ERP, payroll, HR, production management). Companies are not only creating massive volumes of data, they are also under pressure to meet increasingly stringent and complex requirements for protecting and managing that data. For example, they have to back it up in shorter times, retain it for longer periods of time, encrypt it without slowing backup performance, replicate it efficiently, and restore it quickly.

Until recently, many enterprise data managers responded to data growth by simply adding disk-based backup targets. The most common type of disk-based backup target provided inline data de-duplication and a reasonable level of performance and capacity to accommodate the increased data volume. However, these systems are simply not designed for today’s massive data volumes or fast data growth because they lack two critical capabilities: they do not scale and they do not de-duplicate enterprise backup data efficiently. As a result, for many large enterprise data centers, the “add another system” approach has reached its breaking point.

The Hidden Costs of Sprawl – Total Cost of Ownership

For many organizations, the breaking point for non-scalable systems is the point at which they can no longer meet their backup windows. While adding a single system may not seem overly cumbersome, for large enterprise data centers that require several of these systems, it can add unplanned cost, complexity, risk, and administrative time. The hidden costs and total cost of ownership (TCO) impact are significant:

  • Overbuying systems. Companies are forced to add an entire system when they have plenty of capacity but only need more performance or, conversely, have enough performance but need more capacity.

  • Wasting money on capacity. When data is separated onto multiple non-scalable systems, those systems cannot de-duplicate globally, reducing the efficiency of capacity optimization.

  • Wasted IT admin time. To add a new non-scalable backup system, IT admins have to divide the existing backup(s) onto multiple new systems and load balance for optimal utilization, a process that becomes more time-consuming and complex with every new system added.

  • Added maintenance cost. Each new system adds to the cost of system maintenance every time a software or hardware update or upgrade is needed or standard maintenance is required.

  • Slow backups. Non-scalable systems typically use hash-based, inline de-duplication that slows backup performance over time. They are highly inefficient in the database backup environments common in enterprise data centers for two reasons. First, databases often store data in sub-8KB segments that are too small for inline, hash-based deduplication to process efficiently without becoming a bottleneck to backup. Second, they do not support fast multiplexed, multi-streamed database backups – requiring IT staff to choose between fast backups and capacity optimization.

  • Rising data center costs. In simple terms, more systems with less-efficient de-duplication mean more rack space, power, cooling, and data center floor space.

Less is More for Low TCO

In today’s fast-growing enterprise backup environments, consolidating backups onto a single, enterprise-class disk-based backup appliance is proving to be both more cost-efficient and less prone to human error and data loss than the “siloed” approach described above.

Backup and recovery appliances are designed specifically to handle the massive data volumes and complex backup requirements of today’s data centers. These purpose-built backup appliances (PBBAs) are designed to back up, de-duplicate, replicate, encrypt, and restore large data volumes quickly and cost-efficiently. To ensure you choose an enterprise-class backup and recovery appliance, use the following best practices:

Opt for Guaranteed High Performance

Understand the performance impact that processing-intensive functions, such as de-duplication, replication, and encryption, have on the system. Enterprise-class systems are designed to perform these functions in a way that does not slow performance; some even offload these CPU-intensive functions. Ensure that any published performance rates are for guaranteed, continuous performance, and not simply the highest rates achievable under widely varying ingest conditions.

Grid Scalability is Essential

As described above, adding, managing, and using multiple backup systems is not practical or cost-efficient in today’s fast-growing, complex data centers. Enterprise-class backup and recovery systems offer grid scalability, that is, the ability to add performance and/or capacity independently as you need it. This pay-as-you-grow model eliminates over-buying, reduces IT management time, and enables you to store tens of petabytes of data in a single, consolidated backup appliance.

Storing data in a single, optimized system has the additional benefits of enabling highly efficient, global de-duplication, and eliminating the need for load balancing and ongoing system-tuning.

Ensure Deduplication is Designed for Enterprise Data Centers

One of the most effective ways to reduce the cost of backup and recovery is to implement enterprise-class de-duplication. Unlike de-duplication optimized for small-to-medium businesses, enterprise de-duplication is designed to enable faster backup performance and better overall capacity optimization. It is also capable of tuning de-duplication to specific data types for optimal use of CPU, disk, and replication resources. For example, such systems can de-duplicate database data at the byte level for optimal capacity savings, or recognize data that will not de-duplicate efficiently (e.g., image data) and back it up without de-duplication. This “tunability” can save enterprises thousands of dollars in capacity and processing costs.
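As a rough sketch of what tuning de-duplication by data type could look like, the snippet below maps each backup stream's declared content type to a handling strategy. The type names and strategy labels are hypothetical and purely illustrative, not Sepaton's actual policy engine.

    # Hypothetical policy: pick a de-duplication strategy per data type.
    # Labels are illustrative, not a vendor's actual implementation.
    DEDUP_POLICY = {
        "database":   "byte-level post-process",  # small segments, multiplexed streams
        "file":       "inline hash-based",        # typical unstructured file data
        "image":      "no deduplication",         # already compressed, dedupes poorly
        "multimedia": "no deduplication",
    }

    def choose_dedup(data_type: str) -> str:
        """Return the de-duplication strategy for a backup stream's data type."""
        # Default to inline hash-based dedup for unknown types.
        return DEDUP_POLICY.get(data_type, "inline hash-based")

    for t in ("database", "image", "vm"):
        print(t, "->", choose_dedup(t))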

Reporting and Dashboards Enable Savings

Detailed reporting and dashboards are key to enabling IT administrators to manage more data per person. They automate all disk subsystem management processes and put detailed status information at the administrator’s fingertips. They also provide predictive warning of potential issues, enabling administrators to take action before those issues become urgent.

Lowest Total Cost of Ownership

For today’s large enterprise backup and recovery environments, the days of adding more and more backup systems are over. The speed of data growth, the massive volume of data, and the complexity of backup and recovery policies necessitate the use of enterprise-class, purpose-built backup appliances. These appliances enable organizations to maintain backup windows by moving massive data volumes to the safety of the backup environment at predictable, fast ingest rates. They also streamline and simplify complexity by consolidating tens of petabytes of stored data onto a single, cost-efficient, easy-to-manage system.

Enterprise Data Protection 2013

 

Mike Thompson, President, CEO, Sepaton

Virtual Strategy Magazine recently featured an article I wrote entitled "Enterprise Data Protection 2013."

I write that for the past ten years or more, large enterprises have struggled to keep up with exponential data growth. This growth has driven enterprises to move from physical tape to disk-based virtual tape libraries, and now to more powerful disk-based data protection solutions than ever before. Despite these improvements, data growth continues to push backup and recovery systems to their limits.

In 2013, I predict that the exponential data growth that we have experienced in the last ten years will explode to a level that takes many IT departments by surprise. Driven by Big Data analytics, cloud, and increased reliance on very large databases (Oracle, DB2, SAP), data growth is about to increase at rates that most data centers cannot accommodate.

Data protection in this fast-growing environment will quickly push backup systems with inline, hash-based deduplication to the breaking point. Without the ability to scale, enterprise data centers will find themselves first adding more and more single-node backup systems to get the performance they need to meet backup windows. As these systems multiply like rabbits in the data center, administration labor costs will begin to rise. More systems mean more system maintenance, updating, and tuning – and more chance for human error.

Evolution to Data Protection Consolidation on Grid Scalable Systems

Enterprises will increasingly move to consolidate these systems onto one or a small number of scalable backup and recovery appliances that not only scale performance and capacity but also provide a holistic view of the backup environment companywide.

To meet data protection needs and shrinking backup windows, enterprises will move away from backup and recovery solutions that use pure hash-based inline deduplication and toward more capable, content-aware, multi-method deduplication technologies that can scale both performance and capacity as the backup environment grows. These technologies are also designed to reduce capacity requirements in multiplexed, multistreamed database and massive backup environments without slowing backup performance. Companies will be looking to adopt systems that provide a high degree of automation of disk subsystem management (e.g., load balancing) and offer powerful management and reporting interfaces that enable IT admins to track and manage data through the backup, deduplication, replication, and secure erasure processes efficiently.

Companies will adopt new data protection solutions that coexist and integrate seamlessly into existing environments without added complexity. These solutions will enable companies to manage all of their data protection through a single management and reporting view for the entire infrastructure – including both existing and new technology. Consolidation also enables more efficient deduplication and overall capacity optimization because data is not divided among numerous “silos of storage.”

Although today’s enterprises are accustomed to rapid data growth, few are prepared for the enormous growth of data that is going to result from big data analytics and very large databases. They will move away from single-node systems and toward more sophisticated, grid-scalable technology designed to back up, deduplicate, replicate, and restore tens of petabytes of data on a single system.
