Tag: virtualisation

  • Vembu Launches Online Backup on Amazon Web Services


    Vembu Technologies has released the production version of StoreGrid Cloud AMI, an online backup "virtual appliance" that runs on Amazon Web Services.

    The company says that with the StoreGrid Cloud AMI and the Amazon Web Services infrastructure, it is now possible for service providers to offer a scalable, secure and highly redundant online backup service to their small and medium business (SMB) customers without any upfront capital investment in a data center.

    Online backup service providers can now configure the StoreGrid Cloud AMI virtual appliance to run as a backup server in the Amazon Elastic Compute Cloud (Amazon EC2).

    StoreGrid Cloud AMI will use the Amazon Simple Storage Service (Amazon S3) to store backup data from client machines at remote locations.

    The StoreGrid Cloud AMI virtual appliance also leverages Amazon Elastic Block Store (Amazon EBS) to store metadata in a MySQL relational database.
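
    The data flow described above, a backup server in EC2 pushing client data into S3 while keeping catalogue metadata in a MySQL database on an EBS volume, can be illustrated with a short sketch. This is a minimal illustration only, not StoreGrid's actual implementation; it uses the modern boto3 and PyMySQL libraries, and the bucket name, table, credentials and paths are hypothetical.

```python
# Minimal sketch of the EC2-to-S3 backup pattern described above (not StoreGrid code).
# Assumptions: AWS credentials are available on the EC2 instance, the bucket
# "example-backup-bucket" exists, and a MySQL server on an EBS-backed volume
# holds a "backups" catalogue table. All names here are hypothetical.
import hashlib
from datetime import datetime, timezone

import boto3    # AWS SDK for Python
import pymysql  # MySQL client for the metadata catalogue

S3_BUCKET = "example-backup-bucket"

def backup_file(local_path: str, client_id: str) -> None:
    """Upload one client file to S3 and record its metadata in MySQL."""
    s3 = boto3.client("s3")
    key = f"{client_id}/{local_path.lstrip('/')}"
    s3.upload_file(local_path, S3_BUCKET, key)

    # Checksum the file so the catalogue can verify later restores.
    with open(local_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    # Record what was backed up, where it went and when, in the MySQL
    # database stored on the EBS volume.
    conn = pymysql.connect(host="localhost", user="backup",
                           password="secret", database="backup_catalog")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO backups (client_id, s3_key, sha256, backed_up_at) "
                "VALUES (%s, %s, %s, %s)",
                (client_id, key, digest,
                 datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")),
            )
        conn.commit()
    finally:
        conn.close()
```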

    Steve Rabuchin, director of Developer Relations and Business Development for Amazon Web Services, said AWS is designed to help alleviate for its customers the cost and effort associated with building, operating and scaling technology infrastructure.

    "We are pleased that the StoreGrid Cloud AMI is able to leverage Amazon Web Services to extend this service to their customers," he said.

    Even service providers who want to keep backup data in their own data centers can use the StoreGrid Cloud AMI virtual appliance as a replication server. This deployment would enable them to replicate the backup data into the Amazon S3 storage cloud, thus offering more redundancy to the data.

    Sekar Vembu, CEO of Vembu Technologies, said that investing in, managing and scaling server and storage infrastructure is one of the most complex tasks for any online backup service provider.

    "StoreGrid Cloud AMI for Amazon Web Services eliminates this complexity by virtualizing the computing and storage infrastructure in a cloud," he said.

    Vembu released the Beta version of StoreGrid Cloud AMI in December 2008, and since then more than 50 service providers have been testing it.

    This production release incorporates feedback from these Beta partners, including the enhancement to use Amazon EBS as a temporary cache before uploading backup data to Amazon S3.

    StoreGrid Cloud AMI is available for purchase now and is priced as an annual subscription per StoreGrid backup client, with USD $30 for desktops and USD $60 for servers.

  • EMC Unveils Virtual Data Centre With High End Storage


    EMC Symmetrix V-Max is the latest breakthrough technology from EMC. It provides for a virtual data center with high-end storage and scales up to 2 PB of usable protected capacity, writes Samantha Sai for storage.biz-news.

    Unlike alternative arrays, it enables customers to consolidate workloads into a comparatively small footprint.

    These systems will be available immediately.

    Joe Tucci, EMC Chairman, President and CEO, said: "The shift from physical to virtual computing is being driven by efficiency gains too compelling to ignore.

    "Virtualization’s ability to maximize resources and automate complex and repetitive manual tasks is overtaking the server world and is now happening in the storage world.

    "EMC is leading the way with the biggest breakthrough in new high-end storage design in nearly two decades, enabling storage customers to deploy a flexible, dynamic, energy-efficient information infrastructure and get the maximum value for their investment."

    The new architecture can be deployed with flash, Fibre Channel and Serial Advanced Technology Attachment (SATA) drives.
    Virtualized and physical servers are supported, including open systems, mainframes and system hosts.

    Virtual LUN (logical unit number) technology moves data to the right tiers and RAID (redundant array of independent disks) types at the right time.

    Virtual provisioning efficiently allocates, grows and reclaims storage.

    Extended distance protection replicates data over long distances and can achieve zero data loss.

    Information-centric security with advanced RSA security technology is built in to keep data safe, reduce risk and improve compliance. The high-end storage array uses multi-core processors to lower power consumption and cost per IOPS.

  • Cisco Reveals More Details On UCS Platform


    Cisco has revealed more details on its Unified Computing System (UCS) for virtualized data centers a month after it was first announced.

    Company executives used a live Internet TV broadcast to provide further insight into pricing, processing power and memory capacity.

    The networking company’s UCS is a mainstream data center computing platform that promises to seamlessly integrate processor, storage and network systems in a virtualised architecture.

    It offers medium and large enterprises a single architecture that links all data centre resources together, so overcoming the "assembly-required" nature of distinct virtualisation environments.

    Starting in the second quarter of 2009, Cisco plans to offer complete systems of up to 320 compute nodes housed in 40 chassis, with data flowing across 10 gigabit Ethernet.

    When first revealed in March, details on the UCS were limited, largely because the system is based on Intel’s Nehalem-class Xeon 5500 series of server chips, which wasn’t released until March 30.

    This week, Soni Jiandani, vice president of marketing, and David Lawler, vice president of product marketing, both with the Cisco Server Access Virtualization Group, provided some more details.

    They revealed performance-test results showing UCS placing either first or second against competing systems in VMmark and SPEC benchmark trials, with full results to be published soon.

    Cisco also added new details around its Memory Extension Technology, a core component of UCS, which Cisco said enables the CPU to access four times the amount of memory compared to typical blade systems.

    The company said its memory extension can cut memory costs by 33 per cent to 60 per cent in 64-GB, 96-GB and 144-GB deployments, while expanding available memory to 192 GB and 384 GB.

    They said this solves the problem of users running out of memory before running out of CPU availability.
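
    A rough way to see where such savings could come from: reaching a given capacity with many low-density DIMMs is usually cheaper per gigabyte than using a few high-density DIMMs, and the extended-memory design exposes far more DIMM slots to the CPU. The sketch below works through that arithmetic with purely hypothetical prices and slot counts; it is an illustration, not Cisco's pricing.

```python
# Illustrative arithmetic only: DIMM prices and slot counts below are
# hypothetical, chosen to show why filling many cheap, low-density DIMMs
# can undercut a few expensive, high-density DIMMs at the same capacity.
def memory_cost(capacity_gb: int, dimm_gb: int, dimm_price: float, slots: int) -> float:
    dimms_needed = capacity_gb // dimm_gb
    if dimms_needed > slots:
        raise ValueError("not enough DIMM slots for this configuration")
    return dimms_needed * dimm_price

TARGET_GB = 192

# Conventional two-socket blade with 12 slots: high-density DIMMs are required.
conventional = memory_cost(TARGET_GB, dimm_gb=16, dimm_price=1200.0, slots=12)

# Extended-memory blade exposing 48 slots: cheaper low-density DIMMs suffice.
extended = memory_cost(TARGET_GB, dimm_gb=4, dimm_price=150.0, slots=48)

saving = 1 - extended / conventional
print(f"conventional: ${conventional:,.0f}, extended: ${extended:,.0f}, saving: {saving:.0%}")
# With these made-up prices the saving works out to about 50 per cent,
# in the same ballpark as the 33-60 per cent range quoted above.
```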

  • Replication and Cloud Computing Are Inseparable


    Cloud-based computing is coming of age. The practice is emerging as a computing model that offers flexibility in infrastructure and investment.

    At its core, the service is a utility backed by loosely coupled infrastructure that is self-healing, geographically dispersed and designed for user self-service, writes Samantha Sai for storage.biz-news.

    The infrastructure is instantly scalable and adjustable to the ebb and flow of business. The services are accessible across IP based networks and all management issues are handled by the cloud provider.

    Users can instantly demand raw compute or storage capacity, or full-blown application services.

    Cloud storage is seen as a solution to the ever present need for cost effective storage.

    Cloud based systems provide easily accessible, affordable disaster recovery options for large enterprises that need to implement off site protection for new projects.

    Small and medium enterprises find this highly affordable and an interesting alternative to expensive investment in storage hardware.

    Recovery point and recovery time objectives of small, medium and large enterprises can be met by cloud storage providers, who make storage space available on a pay-as-you-go basis while taking on the management of the secondary-location storage infrastructure.

    Amazon Web Services, with the Elastic Compute Cloud (EC2), and GoGrid, with GoGrid Cloud Hosting, are among the cloud service providers that make compute cycles and storage capacity available for immediate deployment.

    Replication creates copies of enterprise data at these sites and allows key applications to be restarted and run at the remote location in the event of a disaster. Interestingly, there is no capital expenditure, only operational expenditure, until disaster recovery actually takes place.

    Replication technology is available in storage arrays, in network-based appliances and through host-based software.

    Array-based replication and network-based appliances require a similar setup at both the source and the target locations. Host-based replication, on the other hand, uses block-based or file-based approaches to replicate virtual machines in real time.

    Host-based replication can also be combined with cloud-based infrastructure at a nominal cost to extend protection further down the hierarchy in the organization, and the replication can happen in real time.
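
    As a minimal sketch of the file-based, host-side approach, the snippet below walks a directory, detects changed files by checksum and pushes only the changes to a cloud object store. It is a simple scheduled pass rather than the continuous, real-time replication commercial products provide, and the bucket name, state file and use of boto3/S3 are assumptions made for illustration.

```python
# Minimal sketch of host-based, file-level replication to cloud storage.
# Assumptions: boto3 with configured AWS credentials, an existing bucket
# named "example-dr-bucket", and a local JSON file tracking known checksums.
import hashlib
import json
import os
from pathlib import Path

import boto3

BUCKET = "example-dr-bucket"
STATE_FILE = Path("replication_state.json")

def sha256(path: Path) -> str:
    """Checksum a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate(source_dir: str) -> None:
    """Upload only the files whose contents changed since the last run."""
    s3 = boto3.client("s3")
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}

    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            path = Path(root) / name
            key = str(path.relative_to(source_dir))
            digest = sha256(path)
            if state.get(key) != digest:        # new or modified file
                s3.upload_file(str(path), BUCKET, key)
                state[key] = digest

    STATE_FILE.write_text(json.dumps(state, indent=2))

if __name__ == "__main__":
    replicate("/data/protected")  # hypothetical directory to protect
```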

    Replication and cloud computing have certainly matured and are being considered as an effective alternative to local backup.

    Eric Burgener, a senior analyst and consultant with the Taneja Group research and consulting firm, said replication and cloud computing can also be considered as an alternative to local backup.

    "Disk-based backup has a lot to offer companies, including faster backups, faster restores, and more reliable recovery (relative to tape-based infrastructures)," he said.

    "If you’re considering moving to disk, don’t overlook the fact that it gives you access to replication technology.

    "For data sets that require stringent RPOs/RTOs, replication can be used to kill two birds with one stone: data is quickly and easily available for file- and even system-level restores from the remote location, but the fact that the location is remote provides the resilience demanded by a DR plan."

  • Optimizing Virtualization On a Single Architecture: Dell Announces iSCSI SAN Storage Arrays


    Dell has announced that as part of its data center strategy it will focus on combining servers, storage and services optimized for virtualization in a single architecture.

    As a step in this direction, Dell has come out with a new series of EqualLogic iSCSI SAN arrays with faster processors, more cache, additional Ethernet ports and support for solid-state disk drives, writes Samantha Sai for storage.biz-news.

    The series of storage arrays is named PS6000 and will be available in five models – PS6000E, PS6500E, PS6000X, PS6000XV and PS6000S.
    This product is distinguished by its speed.

    With redesigned controllers, faster processors and a fourth Gigabit Ethernet port for connectivity, the new arrays are expected to be 9.1 per cent faster than their predecessors for sequential write workloads and 29 per cent faster for sequential read workloads.

    The arrays can be scaled up to 16 TB.

    The PS6000S adds dedicated controllers for its SSDs so that performance scales linearly.

    The SSDs are 50 GB each, and the array is available in 400 GB and 800 GB dual-controller configurations. The array is designed for low-latency, high-IOPS applications.

    An expanded management software suite, unveiled alongside the hardware, complements the PS6000 launch.

    The software supports RAID 6, Microsoft Hyper-V Smart Copy snapshots, and enhanced integration with Microsoft Exchange, SQL Server, VMware and Citrix XenCenter.

    The technology aims to use storage resources efficiently in a networked environment.

    It effectively exploits hypervisor resources in virtual machines.

    SAN HeadQuarters is a centralized dashboard that helps administrators monitor the performance and events of dozens of PS Series groups.

    The software is available free of charge under warranty or a service agreement.

    The arrays are priced at USD $17,000, while the SSD-based models are priced at USD $25,000.

    The series is available from Dell and its PartnerDirect channel partners.

    An IDC report says that Dell EqualLogic currently holds 31 per cent of the iSCSI SAN market.

  • Cloudera Aims To Capture Data Center Market With Hadoop Cloud Solution


    A startup software vendor is bringing cloud-computing technology used by the likes of Yahoo, Facebook and Google to regular enterprise data centers.

    Silicon Valley-based Cloudera plans to make big data-processing capabilities accessible and affordable for all companies, writes Samantha Sai for storage.biz-news.

    Mike Olson, CEO of Cloudera, said Hadoop is a cloud-computing technology used to store and process petabytes of data on systems consisting of hundreds or even thousands of servers.

    "Processing this kind of big data has been too expensive or too technically difficult for all but the most sophisticated IT organizations until now," he said.

    IDC forecasts that global IT expenditure on cloud services will expand approximately threefold in the next two to three years, when it is estimated to total USD $42 billion and account for close to 9 per cent of revenues in five important market sectors.

    IDC also predicts that expenditure on cloud computing will pick up pace throughout the next 2-3 years, and will most likely secure 25 per cent of IT spending growth in 2012.

    This share is expected to grow the following year, capturing at least a third of IT spending growth.

    David Smith, a vice president at Gartner, thinks that cloud computing still has some way to go and that the competition is just starting.

    "Cloudera is not the only company supporting Hadoop. HP is doing a lot of work with Hadoop, as is Yahoo," he said.

    However, there is a major difference between Cloudera and the others like Yahoo.

    Cloudera has set itself up as a one-stop shop for the free Java software framework that currently powers much of the cloud.

    Christophe Bisciglia, Cloudera’s founder and former manager of Google’s Hadoop cluster, said that in listening to the community he consistently hears that Hadoop installation, configuration and deployment need to be easier.

    "That’s the primary reason why we built the Cloudera distribution for Hadoop," he said.

    "But furthermore, a distribution fosters community growth by providing a common platform to share code, experience and, most importantly, innovation."

    Cloudera’s latest Web-based configuration tool will enable enterprises to produce custom-tailored packages that meet their exact needs.

    In addition, Cloudera is making a preconfigured VMware image freely available for evaluation and for use with the company’s free online training.

    "The Cloudera distribution of Hadoop gives you the same tools you already know to provide standardized packaging and automatic configuration," said Bisciglia.

    He said that Cloudera’s distribution of Hadoop has always been founded on an established, reliable code base.

    "We enable users to limit upgrades to major project milestones built on code that is tried, trusted, and proven reliable," he said.

    Finally, there will always be a few users who need assistance in setting up and using the software for critical ventures, and this is where Cloudera will make its money.

    "These enterprises need a company to stand behind the package, and help them find and fix problems when they come up," said Olson.

  • Cisco Transforms Data Center With UCS


    Cisco has launched a mainstream data center computing platform – Unified Computing System (UCS) – that promises to seamlessly integrate processor, storage and network systems in a virtualised architecture.

    The move pits the networking equipment market leader against the world’s largest systems vendors, including HP, IBM, Dell, Fujitsu and others.

    UCS offers medium and large enterprises a single architecture that links all data centre resources together, so overcoming the "assembly-required" nature of distinct virtualisation environments.

    Prem Jain, senior vice president of Server Access and Virtualization Business Unit at Cisco, said UCS unites compute, network, storage access and virtualization resources in a single energy-efficient system that unleashes the power of virtualization.

    "By delivering and supporting Microsoft operating systems for the Unified Computing System, we’re offering a familiar Windows platform to help our customers integrate this revolutionary new architecture into existing data center environments so they can quickly realize the benefits of unified computing," he said.

    Virtualisation has transformed the structure of server and storage environments in data centres. It is now extending to network virtualisation.

    With UCS, Cisco is positioning itself so as to have a controlling role across all three levels of virtual technology.

    Starting in the second quarter of 2009, it plans to offer complete systems of up to 320 compute nodes housed in 40 chassis, with data flowing across 10 gigabit Ethernet.

    Critical to its challenge will be its ability to draw on the expertise of key partners. These include:

    • Its compute capabilities, the UCS B-Series blades, will be based on Intel Nehalem processors, with the follow-on generation based on future Intel Xeon chips
    • VMware will supply the critical virtualisation software
    • BMC will enable "a single management environment for all data centre devices"
    • EMC and NetApp will be responsible for the storage system units
    • Emulex and QLogic will contribute storage networking technology
    • Oracle will deliver middleware
    • Key systems software will come from Microsoft and Red Hat.

    John Chambers, Cisco’s CEO, said UCS could do a lot for Cisco’s bottom line.

    He said it gives Cisco access to about a quarter of the many billions spent inside the data center, up from less than 10 per cent presently.
    That’s the principal reason why Cisco is reinventing the basic building block of the data center.

  • DataCore SAN Software Boosts Server Virtualization Support


    The latest versions of DataCore’s SANmelody 3.0 and SANsymphony 7.0 storage virtualization software were previewed at the recent VMworld Europe 2009.

    The products are due to ship later this month with a 64-bit software architecture and various new features for virtual servers, writes Samantha Sai for storage.biz-news.

    The company says SANsymphony is aimed at enterprises looking to virtualize their storage area networks, while SANmelody is for small Fibre Channel and iSCSI SANs of up to 32 TB.

    Virtual disk pooling, synchronous mirroring for high availability, load balancing, thin provisioning and other advanced features will be welcomed by enterprises using the software.

    The 64-bit controller software supports a large cache on the physical server, up to a theoretical limit of 1 TB, whereas earlier versions supported only 20 GB of cache.

    Jack Fegreus, CEO of Southborough, Mass.-based OpenBench Labs, points out that a terabyte of cache is "at the far edge of reality for most normal sites today", but given Moore’s Law, "1TB of cache may well be average".

    Many organizations today use as much as 256 GB of cache.

    The increased cache allows denser consolidation of servers into virtual machines, and the performance of VM backups may improve because I/O to disk is minimized.

    The Transporter Option that comes with SANmelody and SANsymphony can also perform conversions between physical and virtual servers.

    This feature is significant because a server can be converted from a physical Windows box to a Microsoft Hyper-V image and then to a VMware ESX image.

    It can then be converted back to a logical unit number (LUN) mapped to a physical server. This feature could be an advantage for people who are running multiple virtual servers with different operating systems.

    Themis Tokkaris, systems engineer with Arizona-based pest control company Truly Nolen, says that "it is also an open idea", adding: "If I’m not happy with ESX in the future, I’m not stuck with it."

    DataCore offers its users the option of using a new free plug-in for VMware Inc’s Virtual Infrastructure Client.

    James Price, vice president of product and channel marketing at DataCore, claims that it will offer "cleaner visibility and easier-to-understand mappings and paths".

    He further points out that the upgrades will provide a way to reclaim free capacity on volumes using thin provisioning.

    DataCore is not alone in the storage virtualization space.

    Many of the features offered by DataCore, such as 64 bit support, thin provisioning and so on, have been included in the packages of other vendors.

    Symantec Corp, Double-Take Software Inc and others offer 64-bit support in the data protection space.

    Compellent Technologies Inc also offered free space recovery for thin provisioning about a year ago.

    However, Fegreus says DataCore’s combining these features into a server-centric approach looks like the wave of the future for networked storage as integration increases between SANs and servers.

  • Nexsan Launches iSCSI SAN Aimed at Standalone or Virtualised IT Environments


    Nexsan has introduced its first iSCSI SAN, which has been specifically designed and priced to give SMBs and SMEs a new value-alternative in implementing the protocol.

    The Nexsan iSeries is intended as a complete, easy-to-implement, enterprise-class SAN that is ideal for use in standalone or fully virtualised IT environments.

    It is available in two configurations, both of which include additional storage expansion to meet customers’ growing data storage requirements.

    Bob Woolery, Nexsan’s senior vice president of marketing, said the iSeries gives customers flexibility and value.

    "We’ve designed the iSeries to give customers a solution with all the storage services, data protection and scalability they need at a price they can afford," he said.

    "And, we’re giving our channel partners a new value alternative in this growing market segment. We’ve truly changed the game with value."

    The iSeries offers iSCSI, Fibre Channel and NAS configurations from the same system and provides up to 1PB of storage.

    It also includes a complete suite of easy-to-use enterprise storage services, including virtualisation.

    Woolery said the iSeries was being sold for a single up-front price to affordably accommodate a company’s increasing storage needs.

    "This holistic approach simplifies pricing, removes hidden costs and licensing fees associated with competitive products and smoothly accommodates an organisation’s IT requirements as they change," he said.

    Other benefits offered by the Nexsan iSeries include:

    • Virtualised storage for flexibility and intelligent automation of routine tasks
    • High performance for running multiple demanding applications from a single high-density system
    • Simultaneous use of SAS and/or SATA disk drives in the same storage chassis for application flexibility and low-cost scalability
    • VMware-certified, ensuring high performance in both physical and virtual environments
    • Easy to deploy and manage with wizard-based setup, administration and central management of volumes, snapshots, HA/data replication, mirroring and data migration
    • High-performance with up to four RAID engines per storage system
    • Nexsan’s AutoMAID™ energy-saving technology reduces energy costs by up to 60% without compromising application performance
    • Key application support: storage pooling, virtual servers, archiving
    • High-speed, highly responsive – HyperTransport bus, real-time OS, enterprise-class network chip set
    • Fully redundant and designed for 99.999% availability
    • Available immediately through Nexsan’s global network of authorized value-added resellers.

  • Virtualisation Provides Effective Disaster Recovery Solution


    SecurStore has warned that companies need to ensure a reliable IT disaster recovery plan is in place when times are tough.

    The online, automated and managed data backup and recovery specialists said that the current economic difficulties meant that customer service is going to be the all important differentiator throughout 2009.

    It said the list of what can go wrong in the working environment is extensive, so ensuring that services can still be provided to clients in the event of power failures, natural disasters or sabotage is paramount.

    Alexander Eiriksson, COO of SecurStore, said many companies were living dangerously, operating without a reliable backup and recovery plan, which he described as a major risk.

    He said companies that are using 30-year-old tape backup technologies are just as insecure, since tapes notoriously suffer from reliability issues.

    Virtualisation technology is increasingly being seen as a cost-effective and immediate disaster recovery plan.

    Using virtualisation, companies need fewer servers, leading to a reduction in hardware maintenance and in IT staff time.

    It also simplifies IT management, minimises space and saves power, all leading to reduced costs.

    "Gartner recently published a report that ranked Virtualisation as third in a list of 10 technologies that CIOs will focus on to realise value from existing assets," said Eiriksson.

    He said specialists such as SecurStore offered a simple, cost-effective, agentless online backup and recovery solution that enabled organisations to maximise their virtualisation strategy while achieving superior data protection and recovery management without performance degradation.