Tag: data-center

  • Alibaba Looks Set to Launch Its Cloud Services in the U.S.


    Chinese e-commerce firm Alibaba is set to launch its cloud computing services in the U.S., a move made clear by its announcement of a data center in Silicon Valley.

    The Silicon Valley data center (exact location undisclosed for security reasons) is Alibaba’s first outside China. The Chinese giant claims more than 1.4 million cloud services customers in China, and the U.S. move is in line with its plan to expand into Southeast Asia and Europe before the end of the year.

    Microsoft, Google, and Amazon Web Services are among the leading cloud players in the United States, so it will be interesting to see how Aliyun (Alibaba’s cloud computing unit) fares, since it will offer services similar to those of the incumbents.

    Notable services on offer will include data processing and cloud storage, load balancing for companies running websites, data security, and virtual servers.

    Aliyun has yet to announce its prices but says its services will be “cost effective”. Its first target will be Chinese companies with American business interests, but it plans to develop and ramp up its services to appeal to international companies as well by the end of the year.

  • Efficient Data Center Design Can Lead to 300% Capacity Growth in 60% Less Space

    Emerging trends in data center design mean that new data centers will be able to provide a 300 percent growth in capacity in 60 percent less space than existing data centers, according to Gartner. New data centers are being designed to be efficient in terms of power utilization, space allocation and capital expenditure.
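    As a quick sanity check of what those headline figures imply for density (treating “300 percent growth” as four times the original capacity):

```python
# Gartner's projection: 300 percent capacity growth in 60 percent less space.
old_capacity = 1.0   # normalized capacity of an existing data center
old_space = 1.0      # normalized floor space

new_capacity = old_capacity * 4   # 300% growth = 4x the original capacity
new_space = old_space * 0.4       # 60% less space = 40% of the footprint

density_gain = round((new_capacity / new_space) / (old_capacity / old_space), 6)
print(density_gain)  # 10.0, i.e. ten times the capacity per unit of floor space
```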

    “There is a real and growing desire to increase productivity in data centers,” said Dave Cappuccio, chief of infrastructure research at Gartner. “Organizations are starting to take a serious look at consumption ratios of compute power to energy consumed and then compare them against estimated productivity of applications and the equipment to deliver that application. Couple this with the realization that most IT assets are underutilized — for example, x86 servers are running at 12 percent utilization, racks are populated to 50 to 60 percent capacity, floor space is ‘spread out’ to disperse the heat load — it becomes clear that an efficiently designed and implemented data center can yield significant improvements.”

    Traditionally, organizations would mitigate power and cooling issues in data centers by spreading the physical infrastructure across a larger floor space, but this practice is coming to an end as more servers are needed and floor space is at a premium. This is forcing organizations to populate existing server racks more densely, which in turn drives an increase in localized power and cooling demand.

    Cappuccio said the trend toward higher-density cabinets and racks will continue unabated through 2012, increasing both the density of compute resources on the data center floor and the density of the power and cooling required to support them. For the past few years, IT managers have focused solely on solving the power and cooling issue with hot and cold aisles, distributed equipment placement, specialty cooling and self-contained environments.

    Gartner said the issue will move up the corporate food chain as executives realize that today’s substantial energy costs for IT are only a fraction of what future costs will be at current growth rates. At current pricing, the operating expense (that is, energy) to support an x86 server will exceed the cost of the server itself within three years.

    Given current trends, it is likely that the operating costs of servers could equal their capital costs within the first few years, putting severe strain on IT organizations to fully utilize the equipment they have while using only the equipment that is absolutely necessary. “The days of idle machines sitting on the data center floor during off peak hours will be a thing of the past. At current energy rates a 40kW rack could cost upward of $5,400 per server, per year,” Mr. Cappuccio said.
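    The quoted per-server figure depends on the electricity rate and on how many servers share the rack, neither of which the article specifies. A back-of-the-envelope sketch with illustrative assumptions:

```python
# Back-of-the-envelope check of the quoted cost. The rack power figure is
# from the article; the electricity rate and servers-per-rack are assumed
# here purely to show how a per-server number like $5,400 arises.
rack_kw = 40              # rack power draw cited in the article (40 kW)
hours_per_year = 24 * 365
rate_per_kwh = 0.15       # assumed electricity rate, USD per kWh
servers_per_rack = 10     # assumed number of servers sharing the rack

annual_rack_cost = rack_kw * hours_per_year * rate_per_kwh
annual_cost_per_server = annual_rack_cost / servers_per_rack
print(round(annual_cost_per_server))  # 5256, in the ballpark of the quote
```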

    “The new data centers are not like the old ones. Organizations need to make a break with the past and realize that innovation in data center design will yield both reduced capital and operating expenditure,” said Mr. Cappuccio. “Think small, think dense – the objective is the highest compute performance per kilowatt.”

    There are actions that can be taken today to reduce power consumption and thereby improve overall efficiencies in data centers. They include:

    1) Implementing row- and rack-based cooling for higher-density equipment can reduce energy consumption by up to 15 percent while making the data center more scalable.

    2) Rightsizing the new data center by building and provisioning only what is needed — and then expanding only when needed — can reduce the long-term operating expenses by 10 to 30 percent.

    3) Using air economizers in certain geographies is a simple step with sizable rewards. Gartner said that many data centers actually have air handlers with economizer modes on existing equipment but have it disabled from the early years when energy was not the issue it is today.

    4) Paying particular attention to floor layouts, not only with respect to hot aisle/cold aisle factors, but with regard to overall air movement (distance) to reduce workloads on your air handling equipment.

    5) Virtualizing as much as possible, especially on x86 equipment. The average x86 server runs at very low utilization yet draws a large share of its maximum power. Pushing these systems to higher utilization levels reduces overall energy consumption, frees floor space and makes more efficient use of IT assets.
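    The consolidation arithmetic behind point 5 can be made concrete. In this sketch only the 12 percent utilization figure comes from Gartner; the fleet size and per-server wattage are illustrative assumptions:

```python
import math

physical_servers = 100   # assumed fleet size (illustrative)
avg_util_pct = 12        # Gartner's cited average x86 utilization
target_util_pct = 60     # utilization the consolidated hosts are driven to
watts_per_server = 400   # assumed per-server draw (illustrative)

# Total useful work, in "percent of one server" units, then the number of
# more fully loaded hosts needed to carry it after virtualization.
total_work = physical_servers * avg_util_pct
hosts_needed = math.ceil(total_work / target_util_pct)

power_before = physical_servers * watts_per_server
power_after = hosts_needed * watts_per_server
print(hosts_needed, power_before - power_after)  # 20 hosts, 32000 W saved
```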

    Gartner said that energy consumption will be the dominant trend in data centers over the next five years, from both an efficiency and a monitoring/management standpoint. Reduction in energy consumption will take many forms, from introducing ‘green’ technologies, such as chilled water or refrigerant cooling at the device level, to real-time infrastructure management, which allows resources to be moved based on workloads and time of day. With potential regulatory involvement in data center efficiency, IT and facilities managers will be required to show continuous improvement in how resources are utilized.

  • BTI Receives Brocade Data Center Ready Status for Storage Networking Modules


    BTI Systems has announced additional client services modules for the BTI 7000 Series, verified by Brocade Communications Systems as compatible with Brocade-based SAN infrastructure.

    The Brocade Data Center Ready program is a testing and configuration initiative designed to foster end-to-end SAN interoperability. As part of the program, testing is conducted in SAN configurations that include a heterogeneous mix of servers, storage systems, Brocade switches, SAN management and enterprise applications, and other SAN technologies.

    Vendors receive Brocade Data Center Ready qualification after completing tests confirming that their products meet interoperability guidelines.

    “Storage area networks are being deployed worldwide at a rapid rate as a scalable, high-performance networking foundation for storage environments. The Brocade Data Center Ready program is an example of our continued commitment to delivering end-to-end interoperability to customers,” said Ben Taft, Senior Director of Strategic Alliances at Brocade.

    BTI Systems certified the Dual 4G Multiprotocol Transponder (1, 2, 4G Fibre Channel), Dual 10G Multiprotocol Transponder (10G Fibre Channel), and the 10-port Multiprotocol Muxponder (1, 2, 4G Fibre Channel), complementing the Dual 1G and 2.5G Multiprotocol Transponders previously certified.

    Jason Smith, Solutions Marketing and Certifications Program Manager at BTI Systems, said the Brocade Data Center Ready certification is a key industry stamp that is important to the company’s customers.

    “They know that BTI Systems’ platforms meet the high standards of Brocade’s rigorous testing and they can be confident about the performance we deliver,” he said.

    The BTI 7000 Series delivers the capabilities of large core network platforms in the industry’s most compact, modular, low-power, easy-to-use packet optical network system.

    BTI’s Intelligent Service Edge solutions provide wavelength and packet-level delivery of high capacity services such as video, storage, wired data, wireless data and voice, and media to the network edge.

  • Texas Memory Systems Delivers Record 5-Million IOPS Flash-based SSD System

    Texas Memory Systems has launched the RamSan-6200 SSD system, which offers up to 100 terabytes of Flash-based storage in a 40U rack configuration and can sustain a record 5 million input/output operations per second (IOPS) with 60 gigabytes per second of throughput while drawing a little over 6 kilowatts of power.

    Achieving an equivalent level of performance with hard disk-based storage arrays would require several thousand 15,000 RPM hard disk drives.

    The RamSan-6200 is a scaled-up system that combines twenty RamSan-620 solid state disks in a single data center rack and uses Texas Memory Systems’ TeraWatch software to provide unified management and monitoring from a single GUI console. The system uses enterprise-grade Single Level Cell (SLC) Flash as well as multiple levels of RAID and advanced Flash management algorithms.

    A single RamSan-620 unit provides 5TB of Single Level Cell (SLC) Flash with 250,000 sustained IOPS for random reads and random writes. Each RamSan-620 unit can support 2 to 8 Fibre Channel or up to 4 InfiniBand links.
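    The system-level numbers follow directly from the per-unit figures:

```python
units = 20                 # RamSan-620 units combined in one RamSan-6200 rack
capacity_tb_per_unit = 5   # 5 TB of SLC Flash per unit
iops_per_unit = 250_000    # sustained random read/write IOPS per unit

total_capacity_tb = units * capacity_tb_per_unit
total_iops = units * iops_per_unit
print(total_capacity_tb, total_iops)  # 100 TB and 5,000,000 IOPS
```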

    At the chip level, TMS uses only SLC Flash memory. Each Flash chip incorporates an Error Checking and Correction (ECC) data field within the chip to check and correct single-bit errors.

    At the board level, each set of Flash chips is organized as a board-level RAID, thereby eliminating any single chip failure from corrupting data.

    At the system level, the RamSan-620 allows one of the cards inside the system to be designated as an active spare that works hand-in-hand with the board-level RAID on each card. If a card experiences a failure that degrades its RAID protection, the system immediately migrates that card’s data to the hot spare to return to a fully redundant state.
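    The active-spare behaviour can be sketched as follows. The class and method names here are hypothetical illustrations of the described failover flow, not TMS’s actual firmware interfaces:

```python
# Toy sketch of the active-spare failover described above (names are
# hypothetical, not Texas Memory Systems' actual API).
class FlashCard:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.raid_degraded = False

class RamSanArray:
    def __init__(self, cards, spare):
        self.cards = cards
        self.spare = spare          # active spare kept empty until needed

    def report_failure(self, card):
        """A failure degrades a card's on-board RAID protection; the
        system immediately migrates its data to the hot spare."""
        card.raid_degraded = True
        self.spare.data.update(card.data)   # copy data off the failed card
        card.data.clear()
        # The spare takes the failed card's slot; array is redundant again.
        self.cards[self.cards.index(card)] = self.spare
        self.spare = None                   # until the failed card is replaced

cards = [FlashCard(f"card{i}") for i in range(4)]
cards[2].data = {"lba0": b"\x00" * 512}
array = RamSanArray(cards, spare=FlashCard("spare"))
array.report_failure(cards[2])
print(array.cards[2].name)  # the spare now holds card2's data
```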

  • China VoIP & Digital Telecom Virtualization Project Receives Governmental Funds


    Jinan Yinquan Technology has announced that its data center virtualization technology project has received 500,000 yuan from the Shandong Economic and Information Technology Committee.

    Following the award, the wholly owned subsidiary of China VoIP & Digital Telecom said it is well positioned to take full advantage of the tremendous economic growth currently being experienced in China.

    The company is currently marketing its NP Soft Switch system in China and is in the testing stages of other IT products.

    The Chinese government established the fund to award outstanding energy-saving industrial technology projects in Shandong Province in 2009.

    Li Kunwu, chairman and CEO of CVDT, said the virtualization technology solution provided by Yinquan is in full compliance with the conservation-oriented society and Green IT concepts advocated by the country.

    "Yinquan maintains a leading position in virtualization technology, and the government’s endorsement will accelerate our expansion in the virtualization market in 2009," he said.

    "Also, with the government’s support, industrializing virtualization in China will be expedited. I believe Yinquan should have bright growth prospects in virtualization."

  • IT Execs Doubt Virtualization is Data Recovery Remedy


    Many companies are not using separate backup data center locations to provide a complete data-recovery system, according to research.

    Instead they are relying on failover to separate storage arrays and servers within the same physical building.

    Market researcher Harris Interactive said this is the Achilles heel of many virtualized IT environments.

    Three-quarters of IT executives surveyed believe virtualization by itself can play a major role in an enterprise disaster recovery plan.

    But they said it in no way represents a complete answer to a DR strategy, according to a "State of Disaster Recovery" survey released by Harris.

    While many IT decision-makers say they have deployed virtualization in a production setting, survey data indicated that most have not yet utilized it in a disaster-recovery situation.

    A full-fledged disaster-recovery system using virtualization replicates the system and all its data to an off-site location away from the main enterprise data center.

    If the main data center goes offline, virtual machines replicated at the backup location keep workloads running smoothly, with little or no disruption to daily production.

    However, many companies are not able to deploy separate backup data center locations to provide the complete data-recovery system, relying instead on failover to separate storage arrays and servers within the same physical building.

    Seventy-four per cent of survey respondents indicated that virtualization can play a major role but is not a total solution for disaster recovery plans.

    One-quarter of IT respondents said they would never include virtualization technologies in their disaster recovery plans.

    Sixty per cent of respondents said they have virtualization in place now as a recovery tool from unplanned outages; only 29 per cent said they have used it successfully.

    Eight per cent said they used virtualization but that it didn’t work to their satisfaction.

    Another 29 per cent of IT decision-makers say they have deployed virtualization but not yet used it as a tool for disaster recovery.

    The survey said that over the next two years, half of IT decision-makers say they will be looking into virtualization as an option for managing unplanned outages and disaster recovery.

    About a quarter of IT executives say they will be looking into cloud computing and grid networking as potential options.

    The survey was commissioned by SunGard Availability Services, which provides disaster recovery services, managed IT services, information availability consulting services and business continuity management software to more than 10,000 customers in North America and Europe.

  • Online Data Backup Center Aims For Zero Carbon Footprint


    A UK-based provider of online data backups is building a technically advanced data center powered solely by renewable energy.

    WorldBackups has purchased a 2,600 sq. ft. former BT exchange in North Wiltshire, where it will develop its own on-site renewable energy generation, passing the reduced power costs on to its customers.

    The company says it is bypassing the carbon offset route and is instead working towards a time when it won’t have a carbon footprint to counter.

    The data center is due to go live in the second quarter of 2010.

    Roland Scott, managing director of WorldBackups, said pioneering technology isn’t solely the preserve of Silicon Valley or hi-tech hotspots in the UK.

    He said the company was building a completely self-sufficient data center and proving that green investment can be good business.

    "Our ethos is two-fold: a duty to make use of renewable and clean energy when we can, and also to be ready for the arrival of future environmental and compliance laws," he said.

    WorldBackups is developing a system that will shut down its servers at different times throughout the day and night to conserve power.

    In the case of most other data centers providing services such as website hosting and telecoms, the servers need to be turned on around the clock.

    WorldBackups will be able to switch off unwanted servers, even during peak times, thanks to an application that recognises when extra resources are required and can bring the necessary capacity online within one minute.
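    WorldBackups has not published how its application decides which servers to power on, so the following is only a sketch of the idea; the per-server capacity figure and headroom factor are assumptions:

```python
import math

JOBS_PER_SERVER = 50   # assumed concurrent backup jobs one server handles

def servers_required(active_jobs, headroom=1.2):
    """Servers to keep powered on for the current load, with a small
    buffer so extra capacity can be brought up before it is exhausted
    (the article says spare servers start within one minute)."""
    if active_jobs == 0:
        return 1  # keep one server awake to accept new connections
    return math.ceil(active_jobs * headroom / JOBS_PER_SERVER)

fleet = 10
for jobs in (0, 40, 200, 480):
    powered_on = min(servers_required(jobs), fleet)
    print(f"{jobs:>3} jobs -> {powered_on} of {fleet} servers on")
```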

    The resulting cut in the power required to run the center will make generating its energy on-site from renewable sources a viable option.

    "We intend to use renewable energy during the day and sell the excess back to a supplier of electricity that comes solely from renewable energy sources," said Scott.

    "We’ll then purchase back that renewable energy if and when it’s needed."

    Scott said that in a recession, there’s always a concern that green businesses will suffer as companies and consumers look to cut costs.

    "We believe that eco-enterprise and commercial success don’t have to be mutually exclusive.

    "’Going green’ will make us more competitive, and our customers won’t have to sacrifice backing up their valuable data during the crunch."

    WorldBackups’ software can be installed directly onto a server, laptop or PC to protect data such as email systems, databases, directories, office documents and virtually all other files against fire, theft, corruption and other disasters.

    Scott said the company’s online backups are easy to implement and secure, and offer an affordable, reliable alternative to tapes. Customers are guaranteed 24/7 access to their data from anywhere with an internet connection, and don’t have to worry about mislaid tapes or IT team holidays.

    All data is encrypted before it leaves the customer’s systems to be replicated to WorldBackups’ data center, giving customers complete control over their data’s security.

    In addition, data can be backed up or mirrored to a server on the customer’s own site, providing quick and easy local access should a restore be required.

  • PANDUIT Launches Fiber Optic System To Address Data Center Demands


    PANDUIT has introduced next-generation, high-speed data transport capabilities for the data center.

    The high-performance fiber optic system, which connects server, storage, and network systems, is aimed at meeting ever-increasing bandwidth and application requirements.

    Rick Pimpinella, fiber research manager at PANDUIT, said that as virtualization, consolidation, and convergence initiatives continue to grow more pervasive, so do the demands placed on the physical infrastructure.

    He said PANDUIT was launching its OM4 Fiber Optic System to meet the needs for faster processing speeds and greater storage capabilities, as well as long-reach and cross-connect deployments.

    "As a result of our continued research into multimode fiber performance and our active participation in standards committees, we can now offer the next progression in high performance optical connectivity," he said.

    "PANDUIT is able to offer a fiber system that exceeds the bandwidth specification being proposed in the draft standard for OM4."

    Pimpinella said the company’s OM4 Fiber Optic System offers high performance and seamless integration of 10 Gb/s Ethernet and 8 Gb/s Fibre Channel network capabilities and beyond, to minimize physical infrastructure risk in the data center.

    He said it integrates multi-fiber low loss MTP and single fiber connectivity solutions with premium grade high performance laser optimized multimode fiber (with a minimum EMB of 5000 MHz·km) to deliver consistent performance and reliability of critical systems.

    The modular system includes pre-terminated cassettes, interconnect assemblies, equipment cords and harnesses.

  • Rackable Systems Announces First Quarter Fiscal 2009 Financial Results


    Rackable Systems this week announced its financial results for the first quarter of fiscal year 2009.

    The ecological server and storage product provider reported Q1 revenue of USD $44.4 million, up 14 percent sequentially, including the delivery of two ICE Cube containerized data centers.

    Total revenue for the first quarter ending April 4, 2009, was USD $44.4 million, compared to USD $38.8 million for the fourth quarter of 2008 and USD $67.8 million in the first quarter of 2008.

    Mark J. Barrenechea, president and CEO of Rackable Systems, said he was pleased with its revenue and working capital progress quarter over quarter.

    But he admitted dissatisfaction with the overall results.

    "Although the economic turmoil will remain a challenge in 2009, we are focused on accelerating innovative products to market, controlling expenses and completing the acquisition of Silicon Graphics’ assets, enabling us to achieve better gross margins and customer diversification," he said.

    Rackable Systems ended the first quarter of 2009 with USD $181.2 million in cash, cash equivalents, long-term and short-term investments, compared to USD $180.6 million at the end of last quarter.

    The company’s lower gross margin was attributed to three factors:

    • reducing high-cost inventories of certain components through aggressive pricing
    • the significant revenue mix of its large Internet data center business
    • increased competitive pressure from various server vendors offering aggressive deals during the quarter

    Rackable Systems has received court approval to acquire substantially all the assets of Silicon Graphics, Inc. for USD $42.5 million in cash, plus the assumption of certain liabilities associated with the acquired assets.

    The acquisition is anticipated to be completed in May, subject to the satisfaction of closing conditions.

  • Vembu Launches Online Backup on Amazon Web Services


    Vembu Technologies has released for production the StoreGrid Cloud AMI, an online backup "virtual appliance" on Amazon Web Services.

    The company says that with the StoreGrid Cloud AMI and the Amazon Web Services infrastructure, it is now possible for service providers to offer a scalable, secure and highly redundant online backup service to their small and medium business (SMB) customers without any upfront capital investment in a data center.

    Online backup service providers can now configure the StoreGrid Cloud AMI virtual appliance to run as a backup server in the Amazon Elastic Compute Cloud (Amazon EC2).

    StoreGrid Cloud AMI will use the Amazon Simple Storage Service (Amazon S3) to store backup data from client machines at remote locations.

    The StoreGrid Cloud AMI virtual appliance also leverages Amazon Elastic Block Store (Amazon EBS) to store meta-data information in the MySQL relational database.
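    The described flow (Amazon EBS as fast staging storage, Amazon S3 as the durable store) can be simulated with a toy model. This mirrors the architecture described above, not StoreGrid’s actual implementation:

```python
# Toy simulation of the storage flow described above: backup blocks land in
# a fast local cache (playing the role of an Amazon EBS volume), then are
# flushed to durable object storage (playing the role of Amazon S3). All
# names here are illustrative, not StoreGrid's real code.
class BackupServer:
    def __init__(self):
        self.ebs_cache = {}   # fast block store: temporary staging area
        self.s3_bucket = {}   # durable object store: long-term copies
        self.metadata = {}    # per-file bookkeeping (MySQL on EBS in the
                              # real deployment)

    def receive_backup(self, client, filename, blob):
        key = f"{client}/{filename}"
        self.ebs_cache[key] = blob
        self.metadata[key] = {"size": len(blob), "flushed": False}

    def flush_to_s3(self):
        for key, blob in list(self.ebs_cache.items()):
            self.s3_bucket[key] = blob          # durable upload
            self.metadata[key]["flushed"] = True
            del self.ebs_cache[key]             # staging entry released

server = BackupServer()
server.receive_backup("smb-client-1", "payroll.db", b"\x01\x02\x03")
server.flush_to_s3()
print(server.metadata["smb-client-1/payroll.db"])
```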

    Steve Rabuchin, director of Developer Relations and Business Development for Amazon Web Services, said AWS is designed to relieve its customers of the cost and effort associated with building, operating and scaling technology infrastructure.

    "We are pleased that the StoreGrid Cloud AMI is able to leverage Amazon Web Services to extend this service to their customers," he said.

    Even service providers who want to keep backup data in their own data centers can use the StoreGrid Cloud AMI virtual appliance as a replication server. This deployment would enable them to replicate the backup data into the Amazon S3 storage cloud, thus offering more redundancy to the data.

    Sekar Vembu, CEO of Vembu Technologies, said that investing in, managing and scaling server and storage infrastructure is one of the most complex tasks for any online backup service provider.

    "StoreGrid Cloud AMI for Amazon Web Services eliminates this complexity by virtualizing the computing and storage infrastructure in a cloud," he said.

    Vembu released the Beta version of StoreGrid Cloud AMI in December 2008, and since then more than 50 service providers have been testing it.

    This production release incorporates feedback from these Beta partners, including the enhancement to use Amazon EBS as a temporary cache before uploading backup data to Amazon S3.

    StoreGrid Cloud AMI is available for purchase now and is priced as an annual subscription per StoreGrid backup client: USD $30 for desktops and USD $60 for servers.