Category: storage

  • Creativity the Key to Secure Data Backup

    Guus Leeuw jr, president & CEO of ITPassion Ltd, urges creativity in the way data is stored.

    Any piece of electronic information needs to be stored somewhere and somehow, in a way that guarantees access to that information over the years.

    You want that information backed up, so that if a disaster strikes you can restore it and access it again. Some information also needs to be kept for a long period of time, typically three or seven years.

    Let’s focus on backup and restore for a moment. Often, a system or its data is backed up for disaster recovery purposes.

    Tapes are then eventually sent off-site for safe storage, and must later be re-introduced to a restore environment. What happens to a tape while it sits in secure storage is often unknown to the enterprise.

    A tape that is sent for off-site storage contains some form of catalogue to identify the tape and its contents.
    In extreme cases this catalogue must hold enough information to retrieve the stored data even if the backup environment has to be rebuilt from scratch after a disaster.

    Backup solutions conforming to the NDMP standard could use a prescribed recipe to store the data on the tape, in the form of well-defined storage records. Anybody with a conforming reader application could then retrieve the data from the tape and inspect it.

    This is a potential security risk, especially in light of recent incidents of lost data and the concern they caused among the general public. It would be good if backups were properly encrypted, so that even a skilled hacker cannot crack the contents of the tape. That matters all the more because many Government Agencies deal with private data.
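
    To make the idea concrete, here is a minimal sketch, in Python, of encrypting a backup archive before it is handed to the off-site courier. It uses the general-purpose cryptography package rather than any particular backup product; the file names and key handling are assumptions for the example only, and in a real deployment the key would live in a key vault, never on the tape itself.

    ```python
    # Minimal sketch: encrypt a backup archive before it leaves the building.
    # Assumes the "cryptography" package is installed; file names are illustrative.
    from cryptography.fernet import Fernet

    def encrypt_archive(plain_path: str, cipher_path: str, key: bytes) -> None:
        """Encrypt a backup archive so an off-site tape is unreadable without the key."""
        fernet = Fernet(key)
        with open(plain_path, "rb") as src:
            # Whole-file read for simplicity; a real tool would stream in chunks.
            ciphertext = fernet.encrypt(src.read())   # AES-based, authenticated
        with open(cipher_path, "wb") as dst:
            dst.write(ciphertext)

    if __name__ == "__main__":
        key = Fernet.generate_key()   # in practice, keep this in a key vault, not on the tape
        encrypt_archive("backup_archive.tar", "backup_archive.tar.enc", key)
    ```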

    Equally important is the fraud we hear about so often in the news lately: thrown-away computers shipped to some far-away location, where the hard disks are inspected for private data such as credit card numbers and other “useful” information. It would be good if every PC had a small program that securely wipes all data off the disk before it is switched off for the last time.
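
    In the same spirit, a minimal sketch of such a wipe-before-disposal program might look like the following. It simply overwrites a file with random data several times before deleting it; it is an illustration of the idea, not a certified erasure tool, and the standards named below prescribe their own specific overwrite patterns. It also assumes a conventional hard disk: SSDs and wear-levelling complicate secure erasure.

    ```python
    # Minimal sketch of a multi-pass overwrite; use a certified tool for real disposals.
    import os

    def wipe_file(path: str, passes: int = 3) -> None:
        """Overwrite a file's contents with random data several times, then delete it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))   # random pass; formal standards prescribe set patterns
                f.flush()
                os.fsync(f.fileno())        # force the bytes to disk before the next pass
        os.remove(path)

    wipe_file("old_customer_records.csv")   # illustrative file name
    ```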

    Governments have done what it takes to support this kind of security: Air Force System Security Instructions 5020, CESG, German VSITR, to name just a few standards. Tools are not hard to find; however, they are generally not free, and in my opinion Governments could do more to publicise the availability of this type of product.

    Talking of storage, let’s focus on the part of the storage infrastructure that is mostly “forgotten” but very critical: the fibre-optic network between the server equipment and the actual storage equipment.

    With the current trend to reduce carbon footprint and hence save the planet, there is another aspect of virtualisation that is actually more critical to business than the reduction of carbon footprint alone: cost savings. Did you know that you can slash your annual IT cost by at least 40 per cent by opting for virtualised server environments alone? You need less hardware, which is the biggest cost, and overall you spend less on power and cooling.

    As these virtualised environments support more and more guest environments, simply because the underlying physical layer gets more powerful, faster and better access to the back-end storage systems is required.

    Speeds of up to 8Gbps are not unheard of in the industry for your storage network, and even storage devices are starting to support 8Gbps connection speeds. Do you need it? Not always. But if you are supporting several I/O-intensive guest servers, you might be surprised how much more throughput you can achieve over 8Gbps bandwidth than over 4Gbps bandwidth.
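
    A quick back-of-the-envelope calculation shows why the bandwidth matters. Assuming an idealised link with no protocol overhead and no disk bottleneck (both generous assumptions), the best-case transfer times for a hypothetical 10TB dataset work out as follows:

    ```python
    # Back-of-the-envelope comparison of 4Gbps vs 8Gbps links: ideal transfer time
    # for a given dataset, ignoring protocol overhead and disk limits (an assumption).
    def transfer_hours(dataset_tb: float, link_gbps: float) -> float:
        bits = dataset_tb * 1e12 * 8              # terabytes -> bits
        return bits / (link_gbps * 1e9) / 3600    # seconds -> hours

    for speed in (4, 8):
        print(f"10 TB over {speed} Gbps: ~{transfer_hours(10, speed):.1f} hours")
    # 10 TB over 4 Gbps: ~5.6 hours; over 8 Gbps: ~2.8 hours (best case)
    ```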

    Implementing Microsoft Exchange environments on virtualised hardware becomes very feasible, especially if you can achieve guaranteed end-to-end data paths, from virtual server to storage, as if your virtual environment were a physical one.

    Hosting for multiple Government Agencies starts to wander into the realm of the possible as well. If all the Agencies in a county were to pool their IT, great things could happen to the overall cost of running Government IT.

    Sharing knowledge and space wherever possible would seem a good strategy to follow, especially now that the public is intent on reducing Government expenditure, increasing the success of Government IT projects and, last but not least, enforcing the reduction of carbon footprint, a goal the Government itself also supports.

    Overall, a good many ways exist to increase the capabilities of storage, backup and restore, and archiving. It is time the IT industry became creative in this area.

  • DLM Technology to Achieve ILM

    Alec Bruce, solutions manager, Hitachi Data Systems UK, explains what is currently possible with ILM and what resellers need to tell their customers about achieving true ILM.

    Information Lifecycle Management (ILM) has been hyped in the last few years and is often seen as a panacea for all business and IT challenges that can be implemented immediately.

    The reality is different, as true ILM is still many years away.

    A SNIA survey found that one of the most common ways of losing information is not being able to interpret it properly – a problem ILM is intended to overcome.

    The key lies in the difference between information and data. Data is defined as the raw codes that make up any document or application.

    This data becomes information when it is put into context – its value and meaning can change depending on that context.

    An IT system works with data. Information is a much more subjective concept – something that is simple for humans to understand but not easy for machines. Establishing rules and processes that govern business and IT operations based on the value of information is correspondingly complex.

    Data Lifecycle Management (DLM) is the combination of solutions that helps CIOs and IT managers deliver data management services to any given application environment. This includes protecting data, moving it around, and presenting it to that environment – activities that are tightly connected with managing the different storage resource profiles.

    Information cannot exist without the data that underpins it, so ILM relies on DLM processes to effectively fit in with the IT infrastructure while also addressing changing business priorities.

    General management practices put in place around storage mean that many IT departments have deployed DLM at least partially. It has become widespread because it enables better alignment of data storage and management practices with key enterprise applications, helping to drive IT towards business process management objectives – an important aim for all CIOs and part of the eventual ILM vision.

    ILM has generated hype because it enables IT to drive better efficiency and business performance but it may be five to ten years before we are able to realise true ILM. What most of the industry sees as ILM at the moment is in fact DLM – controlling the movement of data across the storage hierarchy depending on its value to the business.

    Traditionally content is moved down the storage hierarchy as it ages, but in fact the most important piece of information in any organisation is the one needed for the next business evolution. DLM ensures that wherever the answer is, it is easily accessible when required.

    By introducing rules that relate the movement of data to application demands, companies are incorporating a link with business process management as well, but this is still not equivalent to ILM practices. While DLM can relate to the business at the level of application requirements, ILM will do so at the level of business information.

    In summary, managing information is much more complex than managing data. While the industry should be looking towards ILM as a future goal, the technology available today means that DLM is currently more achievable and should be approached as the first step in the process.

  • Virtualisation – Back-Up And Recovery Strategies

    With the wave of virtualisation sweeping across the business IT infrastructure, Mark Galpin, product marketing manager of Quantum, encourages IT managers to embrace the advantages of virtualisation after fully considering the impact on the back-up and recovery infrastructure.

    There can be no doubt that virtualisation is the technology trend of the moment.

    Google the term and more than 30 million links offering expertise in the area will appear in milliseconds – and this is not just more technology hype.

    The virtualisation trend is having an impact on the business IT landscape.

    Drivers for virtualisation range from hardware, power and space savings through to increased manageability and data protection.
    Analyst group Forrester reports that 23 per cent of European firms are today using server virtualisation, and an additional 12 per cent are piloting the process as a means of reducing costs.

    IDC also predicts that the proportion of servers shipped that are virtualised will rise to 15 per cent in 2010, compared with 5 per cent in 2005.

    And with the recent flotation of virtualisation leader VMware at a market value of £9 billion, many investors as well as IT experts are betting their business on this trend becoming accepted everyday best practice.

    Virtualisation brings benefits

    Virtualisation has brought us new ways of doing things from managing desktop operating systems to consolidating servers.
    What’s also interesting is that virtualisation has become a conceptual issue – a way to deconstruct fixed and relatively inflexible architectures and reassemble them into dynamic, flexible and scalable infrastructures.

    Today’s powerful x86 computer hardware was originally designed to run only a single operating system and a single application, but virtualisation breaks that bond, making it possible to run multiple operating systems and multiple applications on the same computer at the same time, increasing the utilisation and flexibility of hardware.

    In essence, virtualisation lets you transform hardware into software to create a fully functional virtual machine that can run its own operating system and applications just like a “real” computer.

    Multiple virtual machines share hardware resources without interfering with each other so that you can safely run several operating systems and applications at the same time on a single computer.

    The VMware approach to virtualisation inserts a thin layer of software directly on the computer hardware or on a host operating system. This software layer creates virtual machines and contains a virtual machine monitor or “hypervisor” that allocates hardware resources dynamically and transparently so that multiple operating systems can run concurrently on a single physical computer without even knowing it.

    However, virtualising a single physical computer is just the beginning. A robust virtualisation platform can scale across hundreds of interconnected physical computers and storage devices to form an entire virtual infrastructure.

    By decoupling the entire software environment from its underlying hardware infrastructure, virtualisation enables the aggregation of multiple servers, storage infrastructure and networks into shared pools of resources that can be delivered dynamically, securely and reliably to applications as needed. This pioneering approach enables organisations to build a computing infrastructure with high levels of utilisation, availability, automation and flexibility using building blocks of inexpensive industry-standard servers.

    Benefits can come with initial increased complexity

    One of the great strengths of virtualisation is its apparent simplicity and its ability to simplify and increase flexibility within the IT infrastructure. However, as time passes, some important lessons are emerging from early adopters’ experience that are worth considering.

    IT managers looking to unleash virtualisation technology in their production networks should anticipate a major overhaul of their management strategies as well. That’s because as virtualisation adds flexibility and mobility to server resources, it also increases the complexity of the environment in which the technology lives. Virtualisation requires new thinking and new management approaches, particularly around back-up and recovery of storage in a virtualised environment.

    Virtual servers have different management needs and have capabilities that many traditional tools cannot cope with. They can disappear by being suspended or be deleted entirely, and they can move around and assume new physical addresses.

    As a result, some existing infrastructures need to become more compatible with virtual machines in areas such as back-up and recovery.

    Many of the virtualisation deployments to date have been implemented on application or file servers where unstructured data is the key information. In these environments, VMware tools for back-up and recovery work well. Copies of the virtual machine images can be taken once a week, moved out to a proxy server and then saved onto tape in a traditional manner.

    Real returns available through virtualising structured data

    But the real returns on investment for business from virtualisation will come in its ability to virtualise the structured data of its key applications such as Oracle, SQL or Exchange. Many of these areas have been avoided to date because of the complexity of protecting these critical business applications in a virtualised environment.

    The standard VMware replication tools take a point-in-time snapshot image and do not provide a consistent state for recovery and rebuild of structured data.

    The answer for critical applications, where recovery times need to be seconds rather than hours, is to build expensive, highly available configurations. This addresses the risk of system or site loss, but protection is still required against data corruption, accidental deletion and virus attack.

    Less critical systems also need to be protected and data sets retained for compliance and regulatory purposes. In most data centres, traditional backup and recovery will be performing these functions today using familiar software tools that integrate with the database and tape or disk targets for the data.

    So the obvious solution is to continue to back up everything as before; but in a virtualised environment the increased load on the network infrastructure would very quickly become unbearable, with machines grinding to a halt and applications groaning.

    Tape systems, with their high bandwidth and intolerance of small data streams, are also unsuitable as targets, since more flexibility is needed to schedule back-ups to multiple devices.

    The answer is disk-based back-up appliances

    With structured data, the answer is to use new disk-based back-up appliances to protect data. Using a Quantum DXi solution, for example, businesses can combine enterprise disk back-up features with data de-duplication and replication technology to provide data centre protection and anchor a comprehensive data protection strategy for virtualised environments.

    DXi solutions bring a number of additional benefits. As well as being useful for storing structured data, they are also effective at storing virtual machine disk format (VMDK) images and unstructured data, meaning users can benefit from a single point of data management. A benefit of storing VMDK images on de-duplicated disk is that all VMDK images are very much alike and so achieve an exceptionally high de-duplication ratio. This means much larger volumes of data can be stored on limited disk space.
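
    Quantum’s patented de-duplication algorithm is not spelled out here, but a generic block-level sketch illustrates why near-identical VMDK images de-duplicate so well: identical blocks are stored once and every image simply references them. The 4MB block size and file names are assumptions for the example.

    ```python
    # Generic illustration of block-level de-duplication (not Quantum's actual algorithm):
    # identical blocks across similar VMDK images are stored only once.
    import hashlib

    BLOCK = 4 * 1024 * 1024   # 4 MB blocks; the block size is an assumption for the example

    def dedupe(image_paths, store=None):
        """Return a dict of unique blocks keyed by hash, plus a per-image block recipe."""
        store = {} if store is None else store
        recipes = {}
        for path in image_paths:
            recipe = []
            with open(path, "rb") as f:
                while block := f.read(BLOCK):
                    digest = hashlib.sha256(block).hexdigest()
                    store.setdefault(digest, block)   # only previously unseen blocks consume space
                    recipe.append(digest)
            recipes[path] = recipe
        return store, recipes

    store, recipes = dedupe(["vm1.vmdk", "vm2.vmdk"])   # illustrative file names
    print(f"unique blocks stored: {len(store)}")
    ```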

    The DXi range leverages Quantum’s patented data de-duplication technology to dramatically increase the role that disk can play in the protection of critical data. With the DXi solutions, users can retain 10 to 50 times more back-up data on fast recovery disk than with conventional arrays.

    With remote replication of back-up data also providing automated disaster recovery protection, DXi users can transmit back-up data from a single remote site or multiple remote sites equipped with any other DXi-Series model to a central, secure location to reduce or eliminate media handling. DXi-Series replication is asynchronous, automated, and operates as a background process.

    Whether businesses are looking to increase return on investment from their virtualisation implementations or planning a virtualised environment, the lessons are clear. To make the most of this latest technological innovation, IT managers must plan their recovery and back-up strategies to cater for the virtual new world.

  • Tips for email management and archiving

    With only 20 per cent of companies demonstrating good control of email management, Dave Hunt, CEO of C2C, comments on the state of email management and archiving, and notes what resellers can do to position themselves as protectors of companies’ most used and most valuable communication method.

    Although around 30 per cent of organisations have some form of archiving in place, most consider that this would not constitute adequate control.

    A recent survey by C2C found that 65 per cent of respondents had set mailbox capacity limits, meaning, in effect, that end users were responsible for managing their own mailboxes.

    Just how bad does it get?

    In practice, this self-regulation probably results in significant lost productivity and constitutes a poor strategy for managing and discovering data.

    Here we consider the top five questions being asked by resellers interested in recommending email management:

    1. Is email control a management or an archiving issue?

    It is a management issue, and archiving is part of the solution. Resellers should look for a solution that identifies unnecessary emails, handles attachments and provides automated quota management as part of a strategic ‘cradle to grave’ management of email. It isn’t a case of archiving email merely to reduce the live storage footprint, but of a well thought-out strategy, designed hand-in-hand with the customer, that aids productivity and time management and that can be implemented by an IT department simply and economically.

    2. What is the biggest problem for email management – storage costs, ‘loss’ of information or compliance issues?

    All of these are problems. Some will cost your customers on a daily basis; others could result in huge fines or liability. Failure to preserve email properly could have many consequences, including brand damage, high third-party costs to review or search for data, court sanctions, or even instructions to a jury that it may view a defendant’s failure to produce data as evidence of culpability.

    3. What guidelines should be in place for mailbox quotas – and how can these be made more business friendly?

    Most specialists in email management agree that mailbox quotas are a bad idea. The only use would be a quota for automatic archiving, whereby, on reaching a specific mailbox threshold, email is archived automatically (and invisibly to the user) until a lower threshold is reached. Our C2C survey also found that those who self-manage email to stay within quotas frequently delete messages, delete attachments, and/or create a PST file. The over-reliance on PST files as a means to offload email creates several challenges when companies must meet legal requirements, since PST files do not have a uniform location and cannot be searched centrally for content with traditional technologies. Resellers can explain that reliance on PST files is poor practice.
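
    The high/low watermark behaviour described above can be sketched in a few lines. The mailbox and message classes below are hypothetical stand-ins rather than any real email API, and the thresholds are illustrative.

    ```python
    # Sketch of threshold-based automatic archiving: once a mailbox crosses an upper
    # limit, the oldest messages are moved to the archive until a lower limit is reached.
    from dataclasses import dataclass, field

    @dataclass
    class Message:
        msg_id: str
        size_mb: float
        received: int                 # e.g. a Unix timestamp; smaller means older

    @dataclass
    class Mailbox:
        messages: list = field(default_factory=list)
        def size_mb(self) -> float:
            return sum(m.size_mb for m in self.messages)

    def auto_archive(box: Mailbox, archive: list,
                     high_mb: float = 900, low_mb: float = 600) -> None:
        """Move the oldest messages to the archive once the mailbox exceeds high_mb."""
        if box.size_mb() < high_mb:
            return
        for msg in sorted(box.messages, key=lambda m: m.received):
            if box.size_mb() <= low_mb:
                break
            box.messages.remove(msg)
            archive.append(msg)       # invisible to the user; still searchable centrally
    ```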

    4. Once retention schedules and compliance have been met, does the email need to be destroyed – and if so, how should resellers recommend companies go about this?

    In some instances it is necessary to delete emails once the retention period has passed, in others it is only an option. Deletion also depends on the industry type, for instance, does it have to be guaranteed destruction, such as to US DoD standards, or is a simple removal of the email sufficient?

    5. What would your top tips be for email management?

    Resellers that wish to add true value should consider the whole picture of email data management, from the instant an email is sent to the time it is finally destroyed.

  • Ten Criteria For Enterprise Business Continuity Software

    Jerome Wendt, president and lead analyst of DCIG Inc, an independent storage analyst and consulting firm, outlines 10 criteria for selecting the right enterprise business continuity software.

    The pressures to implement business continuity software that can span the enterprise and recover application servers grow with each passing day.

    Disasters come in every form and shape, from regional disasters (earthquakes, floods, lightning strikes) to terrorist attacks to brown-outs to someone accidentally unplugging the wrong server.

    Adding to the complexity, the number of application servers and virtual machines is on the rise while IT headcounts are flat or shrinking.

    Despite these real-world situations, companies often still buy business continuity software that is based on centralized or stand-alone computing models that everyone started abandoning over a decade ago.

    Distributed computing is now almost universally used for hosting mission critical applications in all companies.

    However, business continuity software that can easily recover and restore data in distributed environments is still based on 10-year-old models.

    This puts businesses in a situation where they end up purchasing business continuity software that can only recover a subset of their application data.

    Organizations now need a new set of criteria that accounts for the complexities of distributed systems environments.

    Today’s business continuity software must be truly enterprise and distributed in its design.

    Here are 10 features that companies now need to identify when selecting business continuity software so it meets the needs of their enterprise distributed environment:

    • Heterogeneous server and storage support.
    • Accounts for differences in performance.
    • Manages replication over WAN links.
    • Multiple ways to replicate data.
    • Application integration.
    • Provides multiple recovery points.
    • Introduces little or no overhead on the host server.
    • Replicates data at different points in the network (host, network or storage system).
    • Centrally managed.
    • Scales to manage replication for tens, hundreds or even thousands of servers.

    The requirements for providing higher, faster and easier means of enterprise business continuity have escalated dramatically in the last decade, while the criteria for selecting the software remain rooted in yesterday’s premises and assumptions.

    Today’s corporations need to re-evaluate not only what software they are using to perform these tasks but also the criteria on which they base these decisions.

    The 10 criteria listed here should provide you with a solid starting point for picking business continuity software that meets the requirements of today’s enterprise distributed environments while still providing the central control and enterprise-wide recoverability that companies need to recover their business.

    To read the full criteria please go to DCIG Inc.

  • New High Speed Camera Memory Stick

    Sony model ideal upgrade for high performance digital cameras and HD camcorders

    As files get bigger, so the pressure for flash memory grows.

    The latest offering from Sony Recording Media & Energy is one solution for users needing high capacity and high speed data transfer.
    The Memory Stick PRO-HG Duo HX comes with 4GB or 8GB capacity and a read speed of 20MB/second (15MB/second write).

    This makes it more than capable of coping even with the strain of HD video.

    When used with the supplied USB adaptor for maximum speed, it can shorten data transfer time by one-third compared to Sony’s Memory Stick PRO Duo (Mark 2).

    The provision of a USB adaptor as a standard accessory also makes it very simple to transfer data onto a PC or notebook.

    Also useful is the free, downloadable Memory Stick Data Rescue Service which can quickly recover deleted photographs and files.

    The Memory Stick PRO-HG Duo HX uses an 8-bit parallel interface to achieve this level of performance and comes with a 10 year warranty.

    It will be available from October 2008.

  • Hitachi Aims to Repeat Robust Growth

    Hitachi Data Systems hits 45 per cent growth in 2007 and hopes to keep momentum going in current year

    Hitachi Data Systems (HDS), part of the Hitachi Storage Solutions Group, is looking to continue its robust growth in the Asean region in its fiscal year that started in April 2008.

    Ravi Rajendran, Asean general manager of HDS, said that in fiscal 2007, the company achieved 45 per cent year-on-year growth in Asean.

    “It’s a fantastic revenue scenario to be in,” he said. “We believe we can keep the growth momentum going during the current fiscal year.”

    Hitachi Storage Solutions Group, which apart from HDS comprises Hitachi’s storage business in Japan, recorded an 8 per cent increase in revenue to 361 billion yen (USD $4.61 billion) in fiscal 2007.

    Rajendran said that last year the company won a record number of new customers in Asean.

    “What’s important is that we grew much faster than the market and we believe we improved our market share substantially,” he said.

    Software and services revenue constitutes 48 per cent of HDS’ total revenue – and this is an area of growth for HDS in the Asean region.

    Rajendran said that Hitachi continues to invest in innovation and will sink USD $2 billion worldwide in fiscal 2008 in storage solutions.

  • No Black Hole for CERN Data

    The largest scientific instrument on the planet will produce roughly 15 Petabytes (15 million Gigabytes) of data annually when it begins operations

    System crashes and the ensuing data loss may be most IT managers’ idea of the end of the world.

    Yet spare a thought for the folk running the LHC Computing Grid (LCG) designed by CERN to handle the massive amounts of data produced by the Large Hadron Collider (LHC).

    Many people believe the USD $4bn high-energy particle accelerator, which crisscrosses the border between France and Switzerland, is a Doomsday Machine that will create micro black holes and strangelets when it is switched on tomorrow.

    While that is, hopefully, pure fantasy, what is more of a nightmare is how to deal with the colossal amounts of data that the 27km-long LHC is going to produce.

    The project is expected to generate 27 TB of raw data per day, plus 10 TB of "event summary data", which represents the output of calculations done by the CPU farm at the CERN data center.

    The LHC is CERN’s new flagship research facility, which is expected to provide new insights into the mysteries of the universe.

    It will produce beams seven times more energetic than any previous machine, and around 30 times more intense when it reaches design performance, probably by 2010.

    Once stable circulating beams have been established, they will be brought into collision, and the final step will be to commission the LHC’s acceleration system to boost the energy to 5 TeV, taking particle physics research to a new frontier.

    CERN director general, Robert Aymar, said: “The LHC will enable us to study in detail what nature is doing all around us.
    “The LHC is safe, and any suggestion that it might present a risk is pure fiction.”

    Originally standing for Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), CERN was where the World Wide Web began: Sir Tim Berners-Lee proposed it in 1989, building on his earlier ENQUIRE project, and was soon joined by Robert Cailliau.

    Berners-Lee and Cailliau were jointly honored by the ACM in 1995 for their contributions to the development of the World Wide Web.

    Appropriately, sharing data around the world is the goal of the LCG project.

    Since it is the world’s largest physics laboratory, CERN’s main site at Meyrin has a large computer center containing very powerful data processing facilities primarily for experimental data analysis.

    Its mission has been to build and maintain a data storage and analysis infrastructure for the entire high energy physics community that will use the LHC.

    And because of the need to make the data available to researchers around the world to access and analyse, it is a major wide area networking hub.

    The data from the LHC experiments will be distributed according to a four-tiered model. A primary backup will be recorded on tape at CERN, the “Tier-0” center of LCG.

    After initial processing, this data will be distributed to a series of Tier-1 centers, large computer centers with sufficient storage capacity and with round-the-clock support for the Grid.

    The Tier-1 centers will make data available to Tier-2 centers, each consisting of one or several collaborating computing facilities, which can store sufficient data and provide adequate computing power for specific analysis tasks.

    Individual scientists will access these facilities through Tier-3 computing resources, which can consist of local clusters in a University Department or even individual PCs, and which may be allocated to LCG on a regular basis.
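
    As a rough illustration only, and not CERN’s actual grid middleware, the four-tier flow described above can be modelled as a simple ordered hierarchy:

    ```python
    # Simple model of the four-tiered LCG distribution described above; the roles are
    # paraphrased from the text and the structure is purely illustrative.
    TIERS = {
        "Tier-0": "CERN: primary tape backup and initial processing",
        "Tier-1": "large centres with mass storage and round-the-clock Grid support",
        "Tier-2": "collaborating computing facilities for specific analysis tasks",
        "Tier-3": "local university clusters and individual PCs",
    }

    def downstream(tier):
        """Return the tier that receives data from the given tier, or None for Tier-3."""
        order = ["Tier-0", "Tier-1", "Tier-2", "Tier-3"]
        i = order.index(tier)
        return order[i + 1] if i + 1 < len(order) else None

    print(downstream("Tier-0"))   # Tier-1
    ```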

    A live webcast of the event will be broadcast tomorrow. What are your thoughts on LHC – will it reveal the secrets of the universe or a gaping black hole?