ADVA Optical Networking leads with qualification for new version of IBM InfiniBand-based GDPS mainframe clustering solution

New protocol for lowest latency enhances enterprise disaster recovery/business continuity solutions

ADVA Optical Networking has successfully completed interoperability testing and qualification of its FSP 3000 platform for a new version of IBM’s InfiniBand-based mainframe clustering protocol for IBM System z™ Geographically Dispersed Parallel Sysplex (GDPS®) environments. Parallel Sysplex InfiniBand (PSIFB) has been introduced as the next-generation high-bandwidth, low-latency interconnect to further enhance synchronization of System z servers and their successors. The ADVA FSP 3000 platform can extend these links over distances of up to 100km; the extension is transparent to the hosts and supports auto-negotiation between single data rate (SDR, 2.5Gbit/s) and double data rate (DDR, 5.0Gbit/s) link speeds.
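
For context, a note on how the quoted speeds break down (general InfiniBand figures, not taken from the announcement): SDR and DDR rates are per-lane signaling rates, and InfiniBand uses 8b/10b line encoding, so the effective data rate per lane works out to 2.5Gbit/s × 8/10 = 2.0Gbit/s for SDR and 5.0Gbit/s × 8/10 = 4.0Gbit/s for DDR.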

“This qualification is the result of close cooperation with IBM,” explained Christian Illmer, senior director of enterprise application solution management at ADVA Optical Networking. “ADVA Optical Networking has always focused on introducing technologies that support high-bandwidth applications with the lowest latency for areas like real-time stock trading, medical image analysis, server clustering and more. For these areas, InfiniBand is a cost-effective solution, available today, with proven technology. This work represents investment protection for customers who already have ADVA Optical Networking solutions in IBM mainframe environments.”

Today’s announcement comes at a pivotal time, as customers worldwide continue to invest in the IBM System z mainframe, which powers the top 50 banks worldwide. Customer demand has enabled IBM’s System z to nearly double its share of the high-end server market this decade, according to IDC’s quarterly high-end server tracker.

The ADVA FSP 3000 platform is now qualified to carry PSIFB connections between two mainframes across distances of up to 100km, an advance in throughput over the 2Gbit/s data rate used by prior-generation coupling links. Previously, IBM used PSIFB solely inside the mainframe; connecting multiple mainframes required conversion to another protocol, such as InterSystem Channel (ISC) links. Extending PSIFB directly over the optical network eliminates the latency caused by protocol conversion and enables more efficient data transfer, more timely delivery and more accurate synchronization of information between mainframes.

A GDPS-based installation is designed to be a comprehensive disaster recovery and business continuity solution for large multi-site enterprises. It combines high availability with near-continuous operations to deliver high levels of service. It is based on geographical clusters and data mirroring (Peer-to-Peer Remote Copy, PPRC, also known as Metro Mirror), including the latest 4Gbit/s Fibre Channel (FC) and 10Gbit/s FC Inter-Switch Link (ISL) technology. These technologies form the backbone of a GDPS/PPRC solution, which is designed to manage and protect IT services by handling both planned and unplanned exception conditions. GDPS/PPRC can also help maintain data integrity across multiple volumes and storage subsystems.
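
A rough, back-of-the-envelope illustration of why link latency matters at these distances (standard fiber-optics estimates, not figures from the announcement): light propagates through single-mode fiber at roughly 200,000 km/s, or about 5 microseconds per kilometer, so a 100km link contributes approximately 100 km ÷ 200,000 km/s ≈ 0.5ms of one-way propagation delay, or about 1ms per round trip. Because synchronous mirroring such as PPRC waits for the remote site to acknowledge every write, this round-trip delay is incurred on each mirrored I/O, which is why removing any additional latency from protocol conversion is significant.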

“InfiniBand-based links provide the ideal combination of high performance and low latency for our GDPS/PPRC environment,” explained Dr. Casimer DeCusatis, Distinguished Engineer in the IBM Systems and Technology Group. “These are must-have benefits when it comes to synchronous applications for high-end clustering, business continuity, disaster recovery and grid computing, all of which are increasingly important services for the New Enterprise Data Center.”

InfiniBand was introduced into the data center in the early 2000s and has been used mainly in high-performance computing environments. With the steady adoption of more powerful business continuity, disaster recovery and grid computing applications, however, many enterprises have turned to InfiniBand as an enabler of latency-intolerant, bandwidth-intensive applications across Wavelength Division Multiplexing (WDM) optical networks. Today, within the server, InfiniBand enables transport speeds of up to 40Gbit/s; the speed is anticipated to increase to 120Gbit/s within two years.
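
As a point of reference (standard InfiniBand link-width arithmetic, not from the announcement): the 40Gbit/s figure corresponds to a 4x quad data rate (QDR) link, i.e. 4 lanes × 10Gbit/s = 40Gbit/s, while the anticipated 120Gbit/s corresponds to a 12x QDR link, i.e. 12 lanes × 10Gbit/s = 120Gbit/s.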
