Author: admin

  • Date announced for Xperia X1 release

    Sony Ericsson announces firm date for first countries to receive the XPERIA X1

    The UK, Germany and Sweden will be the inaugural launch sites for Sony Ericsson’s Xperia X1 when it is released on 30 September.

    The Windows Mobile 6.1 smartphone will then be rolled out around the world with 32 more countries in Asia, Europe, the Middle East, Africa, and Latin America expected to get the handset by the fourth quarter of the year.

    North America, China, Australia, and Russia are also listed as committed launch regions, though dates for those are still to be announced.

    There is to be a “live global webcast” hosted by Sony Ericsson on 15 September to demonstrate the Xperia X1 “in-depth”.

    This will be followed by the first of the nine episodes of the Johnny X reality thriller. A Q&A session will also be held on Sony Ericsson’s Premiere website.

    Rikko Sakaguchi, senior VP and head of creation and development at Sony Ericsson, said the in-depth demonstration on the webcast would showcase how the handset was “truly unique”.

    He said the nine-panel ecosystem put the user in total control of the primary experiences available on the phone and allowed consumers to personalize the panel interface to suit their needs and lifestyle.

    “The Xperia X1 has the highest quality screen on the market, four-way navigation keys and an optical joystick to give a stress-less browsing experience and, with its super-fast processor and network speed, the Xperia X1 really bridges the gap between personal, entertainment and work mobile needs,” he said.

    The first device from Sony Ericsson’s Xperia sub-brand, the X1 has many high-end features, including: a 3-inch TFT touchscreen display with accelerometer and a wide 800 x 480 pixel resolution, a panel-based interface that works with Windows Mobile 6.1, global GSM and HSDPA connectivity, a full QWERTY keyboard, internal GPS and A-GPS, Wi-Fi and a full HTML browser, a 3.2 MP autofocus camera with flash and video recording, lots of multimedia-oriented goodies and 400MB of expandable memory.

    Before the actual availability of the new handset, Sony Ericsson will showcase it at Tent London, between September 18 and September 21, during the London Design Week. No price has yet been released for the Xperia X1.

    A full list of the countries earmarked for Xperia X1 release in 2008:

    – Europe: Austria, Belgium, Czech Republic, Denmark, France, Hungary, Italy, The Netherlands, Norway, Poland, Portugal, Spain and Switzerland

    – Asia: Bangladesh, Cambodia, India, Indonesia, Malaysia, Philippines, Singapore, Taiwan, Thailand and Vietnam

    – The Middle East: Kuwait, Saudi Arabia and UAE

    – Latin America: Argentina, Chile, Bolivia, Paraguay and Uruguay

    – Africa: South Africa.

  • Leading VoIP Developer In Agreement with Codima

    Selects Codima VoIP Solution For Japanese Governmental NGN Project

    Codima Inc, a global provider of best practice software tools for VoIP and IT Asset Management, has announced it has entered a partnership agreement with Artiza Networks, Japan’s leading Network and VoIP Testing Solution developer.

    To supply the Next Generation Network (NGN) project, Artiza Networks has selected the comprehensive VoIP management solution Codima Toolbox.

    Artiza Networks will provide the solution offered by Codima to enable their customers, including leading Japanese IT corporations, to build the new Japanese IP-based infrastructure.

    The Japanese Next Generation Network (NGN) project requires fundamental investments in infrastructure technology.

    Analyst IDC Japan values the network management market alone at US$7.75 billion over the next three years.

    To address this significant demand, Artiza Networks has selected the Codima VoIP Solution and prioritized the project by forming a dedicated NGN Group.

    The group will target a customer base that includes major system integrators, telecom carriers and telecommunications manufacturers.

    Artiza Networks required a comprehensive end-to-end solution to monitor and troubleshoot SIP-based VoIP systems in real time with options for pre-assessment testing and engineer’s kits.

    Codima delivers a suite of products that is ideal for these purposes. Of particular interest for cost-efficient VoIP network management are Codima’s flagship products: autoVoIP, which provides robust post-deployment monitoring and troubleshooting to ensure Quality of Service (QoS), and autoMap, which can map and visualize IT networks directly in Microsoft Office Visio.

    The groundbreaking governmental initiative, involving Japan’s leading IT corporations NEC, Fujitsu, OKI and Hitachi, transforms Japan’s analogue telephone network into an IP-based infrastructure for unified communications.

    According to the Ministry of Internal Affairs and Communications, 100 per cent of the Japanese population will have access to broadband connections by 2010.

    The Japan External Trade Organization (JETRO) estimates that the converged network market in Japan was worth 59.3 trillion yen (US$485 billion) in 2007 and will balloon to 87.6 trillion yen (US$740 billion) by 2010.

    Christer Mattsson, CEO of Codima, said: “We are proud to form a partnership with Artiza Networks, an experienced industry leader in Japan.
    “The local partnership will open up channels to resell our products to the NGN project, a governmental initiative of a magnitude that holds unparalleled business opportunities for companies like Codima.”

    Takashi Tokonami, CEO of Artiza Networks, said: “Codima Toolbox met our requirements for a comprehensive VoIP management solution.

    “Having been a supplier to the Japanese network market for almost twenty years, we see the market shifting toward the Next Generation Network, and that boosts the prospects for Codima Toolbox.”

  • Williams to distribute Syspine VoIP Phone System in Canada


    Quanta Computer has announced that Williams Telecommunications Corp will be serving as a master distributor of the Syspine Digital Operator Phone System throughout Canada.

    Syspine, an advanced IP phone system designed for small businesses with up to 50 employees, was created for ease of use, low costs and integration with other technologies.

    Featuring Microsoft Response Point technology, Syspine has a powerful voice-recognition system that can be linked with a company’s internal phone directory, as well as an individual’s Microsoft Office Outlook address book.

    Robert Gordon, director of sales and marketing at Syspine, said Williams Telecommunications was a well-known and respected name in the telecom industry.

    “It’s a win-win situation. Williams’ network of dealers and resellers will be able to introduce Syspine to an untapped market, while this is an equally great opportunity for Williams to deliver an affordable VoIP phone system with Microsoft Response Point technology to small businesses in Canada,” he said.

    Headquartered in Mississauga, Ontario, Williams distributes products from many of the major manufacturers, as well as a complete line of peripheral products.

    Jim Williams, president and CEO of Williams, said VoIP is growing exponentially and Williams had been there right from the beginning.

    “We have always been inspired by the latest technology and its opportunities and are excited to partner with Syspine, to distribute their Digital Operator Phone System, and Microsoft, which supplies their Response Point technology to the system,” he said.

    “This new product is easily and rapidly deployed and offers customers flexibility, stability and scalability, as well as the latest features.”

  • Intel PCs to wake up for VoIP phone calls

    A wake-up call for the PC: Intel-powered computers to snap out of sleep when you phone them

    Intel is unveiling new technology that will let computers wake up from their power-saving sleep state when they receive a phone call over the Internet.

    Current computers have to be fully “on” to receive a call, making them impractical, energy-wasting replacements for the telephone.

    The new Intel component will let computers automatically return to a normal, full-powered state when a call comes in. The computer can activate its microphone and loudspeaker to alert the user, then connect the call.

    Trevor Healy, chief executive of Jajah, which will be the first Internet telephone company to utilize the feature, said: “This certainly helps the PC become a much better center of communications in the home.”

    Joe Van De Water, director of consumer product marketing for Intel, said the first Intel motherboards with the Remote Wake capability will be shipping in the next month.

    These components, which are at the heart of every computer, will most likely be used by smaller computer manufacturers. Bigger names like Dell Inc. and Hewlett-Packard Co. use their own motherboard solutions, but Intel is working to supply them with the technology as well.

    The four initial Remote Wake motherboards will be for desktop computers and will need an Internet connection via Ethernet cable, as Wi-Fi doesn’t work in sleep mode.

    Van De Water said the computer will know to wake up only for calls from services to which the user has subscribed, so computer-waking prank calls should be impossible.
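
    The article does not describe the wire protocol behind Remote Wake. As a rough illustration of the general idea of waking a sleeping machine over an Ethernet connection, the Python sketch below sends a standard Wake-on-LAN “magic packet” – a related but distinct mechanism – to a placeholder MAC address.

        # Minimal Wake-on-LAN sketch: broadcast six 0xFF bytes followed by the
        # target MAC address repeated 16 times. The MAC address in the example
        # is a placeholder, not a real device.
        import socket

        def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
            mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
            packet = b"\xff" * 6 + mac_bytes * 16
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                sock.sendto(packet, (broadcast, port))

        if __name__ == "__main__":
            send_magic_packet("00:11:22:33:44:55")  # placeholder MAC address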

  • Data Center Expertise Increasingly Valued

    Computer data center experts are being shown new respect, according to The New York Times, and the trend is set to continue.

    In Silicon Valley, mechanical engineers who design and run computer data centers have traditionally been regarded as little more than blue-collar workers in the high-tech world.

    For years, the mission of data center experts was to keep the computing power plants humming, while scant thought was given to rising costs and energy consumption.

    Today, they are no longer taken for granted as data centers grow to keep pace with the demands of Internet-era computing, according to a report in The New York Times.

    As a result of their immense need for electricity and their inefficient use of that energy, data centers pose environmental, energy and economic challenges.

    That means people with the skills to design, build and run a data center that does not endanger the power grid are suddenly in demand.

    Their status is growing, as are their salaries – climbing more than 20 per cent in the US in the last two years, into six figures for experienced engineers.

    Jonathan G. Koomey, a consulting professor of environmental engineering at Stanford University, said: “The data center energy problem is growing fast, and it has an economic importance that far outweighs the electricity use.

    “So that explains why these data center people, who haven’t gotten a lot of glory in their careers, are in the spotlight now.”

    Chandrakant Patel, a mechanical engineer at Hewlett-Packard Labs, said that data centers can be made 30 per cent to 50 per cent more efficient just by applying current technology.

    Patel, who has worked in Silicon Valley for 25 years, said that at one time, “we were seen as sheet metal jockeys”.
    “But now we have a chance to change the world for the better, using engineering and basic science,” he said.

    No letup in demand for data center computing

    Digital Realty Trust, a data center landlord with more than 70 facilities, said that customer demand for new space is running 50 per cent ahead of its capacity to build and equip data centers for the next two years.

    For every new center, new data center administrators need to be hired.

    Indeed, some data managers with only a degree from a two-year college can command a US$100,000 salary.

    Trade and professional conferences for data center experts, unheard of years ago, are now commonplace.

    Five-figure signing bonuses, retention bonuses and generous stock grants have become ingredients in the compensation packages of data center experts today.

    The pace of the data center buildup is the result of the surging use of server computers, which in the United States rose to 11.8 million in 2007, from 2.6 million a decade earlier, according to IDC, a research firm.

    Worldwide, the 10-year pattern is similar, with the server population increasing more than fourfold to 30.3 million by 2007.

    Based on current trends, by 2011 data center energy consumption will nearly double again, requiring the equivalent of 25 power plants. The world’s data centers, according to a recent study from McKinsey & Company, could well surpass the airline industry as a greenhouse gas polluter by 2020.

    Because the task ahead, analysts say, is not just building new data centers, but also overhauling the old ones, the managers who know how to cut energy consumption are at a premium.

    Most of the 6,600 data centers in America, analysts say, will be replaced or retrofitted with new equipment over the next several years.

    They apparently have little choice. Analysts point to surveys that show 30 per cent of American corporations are deferring new technology initiatives because of data center limitations.

    Mechanical and electrical engineers with experience in data center design, air-flow modeling and power systems management are in demand.

    Now that costs and energy consumption are priorities, the data center gurus are getting a hearing and new respect.

  • Creativity the Key to Secure Data Backup

    Guus Leeuw jr, president & CEO of ITPassion Ltd, urges creativity in the way data is stored.

    Any piece of electronic information needs to be stored somewhere and somehow. This should guarantee access to that piece of information over the years.

    You want that information backed up, in case disaster strikes, so that you can restore and access it again. For some information there is a need to keep it for a long period of time, such as three or seven years.

    Let’s focus on backup and restore for a moment. Often, a system or its data is backed up for disaster recovery purposes.

    Tapes are then eventually sent off-site for safe storage. Such tapes must be re-introduced to a restore environment. What happens with the tape while it is in secure storage is often unknown to the Enterprise.

    A tape that is sent for off-site storage contains some form of catalogue to identify the tape and its contents.
    This catalogue, in extreme cases, must hold enough information to retrieve the stored data, even if one had to re-install a new backup environment due to disaster.

    Backup solutions conforming to the NDMP standard could utilise a prescribed recipe to store the data on the tape, in the form of well-defined storage records. Anybody with a conforming reader application could then retrieve the data from the tape and inspect it.

    This is a potential security risk, especially in light of recent events involving lost data and the concern that caused among the general public. It would be good if backups were duly encrypted so that even a good hacker cannot crack the contents of the tape – particularly important considering that a lot of Government Agencies deal with private data.
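
    As an illustration of the kind of encryption being suggested, the Python sketch below encrypts a backup file with AES-256-GCM before it goes to tape, using the widely available cryptography library; the file names and the key handling are assumptions for demonstration only.

        # Hedged sketch: encrypt a backup file so a lost or stolen tape is unreadable.
        # Requires the 'cryptography' package; file names are illustrative.
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def encrypt_backup(plain_path, enc_path, key):
            nonce = os.urandom(12)                    # unique nonce per backup
            with open(plain_path, "rb") as f:
                data = f.read()                       # read whole file for brevity
            ciphertext = AESGCM(key).encrypt(nonce, data, None)
            with open(enc_path, "wb") as f:
                f.write(nonce + ciphertext)           # store nonce with the ciphertext

        if __name__ == "__main__":
            key = AESGCM.generate_key(bit_length=256)  # in practice, keep this in a key vault
            encrypt_backup("backup.tar", "backup.tar.enc", key)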

    Equally important is the fraud we hear about so often in the news lately: thrown-away computers that get shipped to some far-away location, where the hard disks are inspected for private data such as credit card details and other “useful” information. It would be good if a PC had a little program that wipes all data securely off the disk before people turn it off one last time.
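
    As a rough sketch of such a wiping tool, the snippet below overwrites a file with random data several times before deleting it; the pass count and file name are illustrative, and this is not a certified implementation of any particular wiping standard.

        # Hedged sketch: multi-pass overwrite of a file before deletion.
        import os

        CHUNK = 1024 * 1024  # write in 1MB chunks so large files fit in memory

        def wipe_file(path, passes=3):
            size = os.path.getsize(path)
            with open(path, "r+b") as f:
                for _ in range(passes):
                    f.seek(0)
                    remaining = size
                    while remaining > 0:
                        n = min(CHUNK, remaining)
                        f.write(os.urandom(n))        # overwrite with random bytes
                        remaining -= n
                    f.flush()
                    os.fsync(f.fileno())              # push each pass to the device
            os.remove(path)

        if __name__ == "__main__":
            wipe_file("old_customer_records.db")      # hypothetical file name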

    Governments have done what it takes to support this kind of security: Air Force System Security Instruction 5020, CESG and the German VSITR standard, to name a few. Tools are not hard to find; however, they are generally not free, and in my opinion Governments could do more to publicise the availability of this type of product.

    Talking of storage, let’s focus on the part of the storage infrastructure that is mostly “forgotten”, but very critical: the fibre optical network between the server equipment and the actual storage equipment.

    With the current trend to reduce carbon footprint and hence save the planet, there is another aspect of virtualisation that is actually more critical to business than the reduction of carbon footprint alone: cost savings. Did you know that you can slash your annual IT costs by at least 40 per cent by opting for virtualised server environments alone? You need less hardware, which is the biggest cost, and overall you would spend less on power and cooling.

    As these virtualised environments support more and more guest environments, simply because the underlying physical layer gets more powerful, a faster and better access to the back-end storage systems is required.

    Speeds of up to 8Gbps are not unheard of in the industry for your storage network, and even storage devices are starting to support 8Gbps connection speeds. Do you need it? Not always. If you’re supporting several I/O-intensive guest servers, though, you might be surprised how much more throughput you can achieve over 8Gbps bandwidth versus 4Gbps bandwidth.

    Implementing Microsoft Exchange environments on virtualised hardware becomes entirely feasible, especially if you can achieve guaranteed end-to-end data paths, from virtual server to storage, as if your virtual environment were a physical one.

    Hosting for multiple Government Agencies also starts to wander into the realm of the possible. If all Agencies in a county were to put their IT together, great things could happen to the overall cost of running government IT.

    Sharing knowledge and space wherever possible would seem a good strategy to follow, especially now that the public is intent on reducing Government Expenditure, increasing the success of Government IT Projects and, last but not least, enforcing the reduction of carbon footprint, which is also supported by the Government itself.

    Overall a good many ways exist to increase the capabilities of storage, backup and restore, and archiving. It is time that the IT industry becomes creative in this area.

  • DLM Technology to Achieve ILM

    Alec Bruce, solutions manager, Hitachi Data Systems UK, explains what is currently possible with ILM and what resellers need to tell their customers about achieving true ILM.

    Information Lifecycle Management (ILM) has been hyped in the last few years and is often seen as a panacea for all business and IT challenges that can be implemented immediately.

    The reality is different, as true ILM is still many years away.

    A SNIA survey found that one of the most common ways of losing information is not being able to interpret it properly – a problem ILM is intended to overcome.

    The key lies in the difference between information and data. Data is defined as the raw codes that make up any document or application.

    This data becomes information when it is put into context – its value and meaning can change depending on that context.

    An IT system works with data. Information is a much more subjective concept – something that is simple for humans to understand but not easy for machines. Establishing rules and processes that govern business and IT operations based on the value of information is correspondingly complex.

    Data Lifecycle Management (DLM) is the combination of solutions that helps CIOs and IT managers deliver data management services to any given application environment. This includes protecting data, moving it around, and presenting it to that environment – activities that are tightly connected with managing the different storage resource profiles.

    Information cannot exist without the data that underpins it, so ILM relies on DLM processes to effectively fit in with the IT infrastructure while also addressing changing business priorities.

    General management practices put in place around storage mean that many IT departments have deployed DLM at least partially. It has become widespread because it enables better alignment of data storage and management practices with key enterprise applications, helping to drive IT towards business process management objectives – an important aim for all CIOs and part of the eventual ILM vision.

    ILM has generated hype because it enables IT to drive better efficiency and business performance but it may be five to ten years before we are able to realise true ILM. What most of the industry sees as ILM at the moment is in fact DLM – controlling the movement of data across the storage hierarchy depending on its value to the business.

    Traditionally, content is moved down the storage hierarchy as it ages, but in fact the most important piece of information in any organisation is the one needed for the next business evolution. DLM ensures that, wherever that information resides, it is easily accessible when required.

    By introducing rules to relate the movement of data to application demands, companies are incorporating a link with business process management as well, though this is not equivalent to ILM practices. While DLM can be related to the business at an application-requirement level, ILM will do so at a business-information level.
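
    As a simple illustration of the kind of policy-driven data movement described above, the sketch below applies an age-based tiering rule; the mount points and age thresholds are assumptions, and a real deployment would rely on the vendor’s own policy engine.

        # Hedged sketch of DLM-style tiering: move files down the storage
        # hierarchy as they age. Paths and thresholds are illustrative only.
        import os
        import shutil
        import time

        # (age in days, destination tier), checked from coldest to warmest
        TIER_RULES = [(365, "/mnt/archive"), (90, "/mnt/nearline")]

        def tier_for(path):
            age_days = (time.time() - os.path.getmtime(path)) / 86400
            for threshold, destination in TIER_RULES:
                if age_days >= threshold:
                    return destination
            return None                               # young data stays on primary storage

        def apply_policy(primary="/mnt/primary"):     # hypothetical mount point
            for name in os.listdir(primary):
                source = os.path.join(primary, name)
                if not os.path.isfile(source):
                    continue
                destination = tier_for(source)
                if destination:
                    shutil.move(source, os.path.join(destination, name))

        if __name__ == "__main__":
            apply_policy()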

    In summary, managing information is much more complex than managing data. While the industry should be looking towards ILM as a future goal, the technology available today means that DLM is currently more achievable and should be approached as the first step in the process.

  • Virtualisation – Back-Up And Recovery Strategies

    With the wave of virtualisation sweeping across the business IT infrastructure, Mark Galpin, product marketing manager of Quantum, encourages IT managers to embrace the advantages of virtualisation after fully considering the impact on the back-up and recovery infrastructure.

    There can be no doubt that virtualisation is the technology trend of the moment.

    Google the term and more than 30 million links offering expertise in the area will appear in milliseconds – and this is not just more technology hype.

    The virtualisation trend is having an impact on the business IT landscape.

    Drivers for virtualisation range from hardware, power and space savings through to increased manageability and data protection.
    Analyst group Forrester reports that 23 per cent of European firms are today using server virtualisation, and an additional 12 per cent are piloting the process as a means of reducing costs.

    IDC also predicts that the proportion of servers shipped that are virtualised will rise to 15 per cent in 2010, compared with 5 per cent in 2005.

    And with the recent flotation of virtualisation leader VMware at a market value of £9 billion, many investors as well as IT experts are betting their business on this trend becoming accepted everyday best practice.

    Virtualisation brings benefits

    Virtualisation has brought us new ways of doing things from managing desktop operating systems to consolidating servers.
    What’s also interesting is that virtualisation has become a conceptual issue – a way to deconstruct fixed and relatively inflexible architectures and reassemble them into dynamic, flexible and scalable infrastructures.

    Today’s powerful x86 computer hardware was originally designed to run only a single operating system and a single application, but virtualisation breaks that bond, making it possible to run multiple operating systems and multiple applications on the same computer at the same time, increasing the utilisation and flexibility of hardware.

    In essence, virtualisation lets you transform hardware into software to create a fully functional virtual machine that can run its own operating system and applications just like a “real” computer.

    Multiple virtual machines share hardware resources without interfering with each other so that you can safely run several operating systems and applications at the same time on a single computer.

    The VMware approach to virtualisation inserts a thin layer of software directly on the computer hardware or on a host operating system. This software layer creates virtual machines and contains a virtual machine monitor or “hypervisor” that allocates hardware resources dynamically and transparently so that multiple operating systems can run concurrently on a single physical computer without even knowing it.

    However, virtualising a single physical computer is just the beginning. A robust virtualisation platform can scale across hundreds of interconnected physical computers and storage devices to form an entire virtual infrastructure.

    By decoupling the entire software environment from its underlying hardware infrastructure, virtualisation enables the aggregation of multiple servers, storage infrastructure and networks into shared pools of resources that can be delivered dynamically, securely and reliably to applications as needed. This pioneering approach enables organisations to build a computing infrastructure with high levels of utilisation, availability, automation and flexibility using building blocks of inexpensive industry-standard servers.

    Benefits can come with initial increased complexity

    One of the great strengths of virtualisation is its apparent simplicity and its ability to simplify and increase flexibility within the IT infrastructure. However, as time passes there are some important lessons emerging from early adopters’ experience which are important to consider.

    IT managers looking to unleash virtualisation technology in their production networks should anticipate a major overhaul to their management strategies as well. That’s because as virtualisation adds flexibility and mobility to server resources, it also increases the complexity of the environment in which the technology lives. Virtualisation requires new thinking and new ways of being managed, particularly in the back-up and recovery areas of storage in a virtualised environment.

    Virtual servers have different management needs and have capabilities that many traditional tools cannot cope with. They can disappear by being suspended or be deleted entirely, and they can move around and assume new physical addresses.

    As a result, some existing infrastructures need to become more compatible with virtual machines in areas such as back-up and recovery.

    Many of the virtualisation deployments to date have been implemented on application or file servers where unstructured data is the key information. In these environments, VMware tools for back-up and recovery work well. Copies of the virtual machine images can be taken once a week, moved out to a proxy server and then saved onto tape in a traditional manner.

    Real returns available through virtualising structured data

    But the real returns on investment for business from virtualisation will come in its ability to virtualise the structured data of its key applications such as Oracle, SQL or Exchange. Many of these areas have been avoided to date because of the complexity of protecting these critical business applications in a virtualised environment.

    The standard VMware replication tools take a snapshot image in time and do not provide a consistent state for recovery and rebuild of structured data.

    The answer for critical applications, where recovery times need to be seconds rather than hours, is to build expensive highly available configurations. This addresses the risk of system or site loss, but protection is still required against data corruption, accidental deletion and virus attack.

    Less critical systems also need to be protected and data sets retained for compliance and regulatory purposes. In most data centres, traditional backup and recovery will be performing these functions today using familiar software tools that integrate with the database and tape or disk targets for the data.

    So the obvious solution is to continue to back up everything as before, but in a virtualised environment the increased load on the network infrastructure would become unbearable very quickly, with machines grinding to a halt and applications groaning.

    Tape systems, with their high bandwidth requirements and intolerance of small data streams, are also unsuitable as targets, as more flexibility is needed to schedule back-ups to multiple devices.

    The answer is disk-based back-up appliances

    With structured data, the answer is to use new disk-based back-up appliances to protect data. Using a Quantum DXi solution, for example, businesses can combine enterprise disk back-up features with data de-duplication and replication technology to provide data centre protection and anchor a comprehensive data protection strategy for virtualised environments.

    DXi solutions bring a number of additional benefits. As well as being useful for storing structured data, they are also effective at storing virtual machine disk format (VMDK) images and unstructured data, meaning users can benefit from a single point of data management. A benefit of storing VMDK images on de-duplicated disk is that all VMDK images are very much alike and so achieve an exceptionally high de-duplication ratio. This means much larger volumes of data can be stored on limited disk space.
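
    A simple way to see why near-identical VMDK images de-duplicate so well is to hash fixed-size blocks and count how many are unique; the sketch below estimates a de-duplication ratio in that way. The block size and file names are illustrative, and commercial appliances such as the DXi use their own, more sophisticated chunking schemes.

        # Hedged sketch: estimate a de-duplication ratio by hashing fixed-size blocks.
        import hashlib

        BLOCK_SIZE = 64 * 1024  # 64KB blocks, an illustrative choice

        def dedup_ratio(image_paths):
            seen = set()
            logical_blocks = 0
            for path in image_paths:
                with open(path, "rb") as f:
                    while block := f.read(BLOCK_SIZE):
                        logical_blocks += 1
                        seen.add(hashlib.sha256(block).hexdigest())
            return logical_blocks / max(len(seen), 1)  # logical blocks per stored block

        if __name__ == "__main__":
            print(dedup_ratio(["vm1.vmdk", "vm2.vmdk"]))  # hypothetical image files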

    The DXi range leverages Quantum’s patented data de-duplication technology to dramatically increase the role that disk can play in the protection of critical data. With the DXi solutions, users can retain 10 to 50 times more back-up data on fast recovery disk than with conventional arrays.

    With remote replication of back-up data also providing automated disaster recovery protection, DXi users can transmit back-up data from a single remote site or multiple remote sites equipped with any other DXi-Series model to a central, secure location to reduce or eliminate media handling. DXi-Series replication is asynchronous, automated, and operates as a background process.

    Whether businesses are looking to increase return on investment from their virtualisation implementations or planning a virtualised environment, the lessons are clear. To make the most of this latest technological innovation, IT managers must plan their recovery and back-up strategies to cater for the virtual new world.

  • Tips for email management and archiving

    With only 20 per cent of companies demonstrating good control on email management, Dave Hunt, CEO of C2C, comments on the state of email management and archiving and notes what resellers can do to position themselves as protectors of companies’ most used and valuable communication method.

    Although around 30 per cent of organisations have some form of archiving in place, most consider that this would not constitute adequate control.

    A recent survey by C2C found that 65 per cent of respondents had set mailbox capacity limits, meaning, in effect, that end users were responsible for managing their own mailboxes.

    Just how bad does it get?

    In practice, this self regulation probably results in significant lost productivity and constitutes a poor strategy for managing and discovering data.

    We consider the top five questions being asked by resellers interested in recommending email management:

    1. Is Email control a management or archive issue?

    It is a management issue, and archiving is part of the solution. Resellers should look for a solution that identifies unnecessary emails, handles attachments and provides automated quota management as part of a strategic ‘cradle to grave’ management of email. It isn’t a case of archiving email merely to reduce the live storage footprint, but part of a well thought-out strategy, designed hand-in-hand with the customer, that aids productivity and time management and that can be implemented by an IT department simply and economically.

    2. What is the biggest problem for email management – storage costs, ‘loss’ of information or compliance issues?

    All of these are problems. Some will cost your customers on a daily basis; others could result in huge fines or liability. Failure to preserve email properly could have many consequences, including brand damage, high third-party costs to review or search for data, court sanctions, or even instructions to a jury that it may view a defendant’s failure to produce data as evidence of culpability.

    3. What guidelines should be in place for mailbox quotas – and how can these be made more business friendly?

    Most specialists in email management agree that mailbox quotas are a bad idea. The only use would be a quota for automatic archiving, whereby, on reaching a specific mailbox threshold, email is archived automatically (and invisibly to the user) until a lower threshold is reached. Our C2C survey also found that those who self-manage email to stay within quotas frequently delete messages, delete attachments, and/or create a PST file. The over-reliance on PST files as a means to offload email creates several challenges when companies must meet legal requirements, since PST files do not have a uniform location and cannot be searched centrally for content with traditional technologies. Resellers can explain that reliance on PST files is poor practice.
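
    As an illustration of the threshold-driven auto-archiving described above, the sketch below moves the oldest messages from a mailbox to an archive once an upper size limit is exceeded, stopping when a lower limit is reached; the data structures and thresholds are illustrative rather than any particular product’s API.

        # Hedged sketch of high/low-watermark mailbox auto-archiving.
        from dataclasses import dataclass, field

        @dataclass
        class Message:
            msg_id: str
            size: int          # bytes
            received: float    # epoch seconds

        @dataclass
        class Mailbox:
            messages: list = field(default_factory=list)
            archive: list = field(default_factory=list)

            def size(self):
                return sum(m.size for m in self.messages)

            def auto_archive(self, high, low):
                """Archive oldest messages once 'high' is exceeded, stopping at 'low'."""
                if self.size() <= high:
                    return
                for msg in sorted(self.messages, key=lambda m: m.received):
                    if self.size() <= low:
                        break
                    self.messages.remove(msg)
                    self.archive.append(msg)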

    4. Once retention schedules and compliance requirements have been met, does the email need to be destroyed – and if so, how should resellers recommend companies go about this?

    In some instances it is necessary to delete emails once the retention period has passed, in others it is only an option. Deletion also depends on the industry type, for instance, does it have to be guaranteed destruction, such as to US DoD standards, or is a simple removal of the email sufficient?

    5. What would your top tips be for email management?

    Resellers that wish to add true value should consider the whole picture of email data management, from the instant an email is sent to the time it is finally destroyed.

  • Ten Criteria For Enterprise Business Continuity Software

    Jerome Wendt, president and lead analyst of DCIG Inc, an independent storage analyst and consulting firm, outlines 10 criteria for selecting the right enterprise business continuity software.

    The pressures to implement business continuity software that can span the enterprise and recover application servers grow with each passing day.

    Disasters come in every form and shape, from regional disasters (earthquakes, floods, lightning strikes) to terrorist attacks to brown-outs to someone accidentally unplugging the wrong server.

    Adding to the complexity, the number of application servers and virtual machines is on the rise while IT headcounts are flat or shrinking.

    Despite these real-world situations, companies often still buy business continuity software that is based on centralized or stand-alone computing models that everyone started abandoning over a decade ago.

    Distributed computing is now almost universally used for hosting mission critical applications in all companies.

    However, business continuity software that can easily recover and restore data in distributed environments is still based on 10-year-old models.

    This puts businesses in a situation where they end up purchasing business continuity software that can only recover a subset of their application data.

    Organizations now need a new set of criteria that accounts for the complexities of distributed systems environments.

    Today’s business continuity software must be truly enterprise and distributed in its design.

    Here are 10 features that companies now need to identify when selecting business continuity software so it meets the needs of their enterprise distributed environment:

    • Heterogeneous server and storage support.
    • Accounts for differences in performance.
    • Manages replication over WAN links.
    • Multiple ways to replicate data.
    • Application integration.
    • Provides multiple recovery points.
    • Introduces little or no overhead on the host server.
    • Replicates data at different points in the network (host, network or storage system).
    • Centrally managed.
    • Scales to manage replication for tens, hundreds or even thousands of servers.

    The requirements for providing better, faster and easier means of enterprise business continuity have escalated dramatically in the last decade, while the criteria for selecting the software remain rooted in yesterday’s premises and assumptions.

    Today’s corporations not only need to re-evaluate what software they are using to perform these tasks but even what criteria on which they should base these decisions.

    The 10 criteria listed here should provide you with a solid starting point for picking business continuity software that meets the requirements of today’s enterprise distributed environments while still providing companies with the central control and enterprise-wide recoverability that they need to recover their business.

    To read the full criteria, please go to DCIG Inc.