How Low can you go?

Mellanox has just announced their new ConnectX HCAs, which provide 1.2 microsecond (µs) MPI ping latency. Other features include 10 or 20Gb/s InfiniBand ports, CPU offload of transport operations, end-to-end QoS and congestion control, hardware-based I/O virtualization, and TCP/UDP/IP stateless offload. The press release follows.
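First, a quick aside on what that number means in practice: MPI ping latency is normally reported from a ping-pong microbenchmark, where two ranks bounce a small message back and forth many times and half the average round-trip time is taken as the one-way latency. Below is a minimal sketch in C, assuming any standard MPI implementation is installed (such as the MVAPICH versions cited in the press release footnotes); the 1-byte message size and iteration counts are arbitrary illustrative choices, not the exact benchmark configuration Mellanox used.

```c
/* pingpong.c - minimal MPI ping-pong latency sketch (illustrative only).
 * Build: mpicc -O2 pingpong.c -o pingpong
 * Run:   mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

#define WARMUP   100
#define ITERS    10000
#define MSG_SIZE 1          /* 1-byte payload, as in typical latency tests */

int main(int argc, char **argv)
{
    int rank, i;
    char buf[MSG_SIZE];
    double t0 = 0.0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Warm-up plus timed loop: rank 0 sends, rank 1 echoes. */
    for (i = 0; i < WARMUP + ITERS; i++) {
        if (i == WARMUP) {              /* start timing after warm-up */
            MPI_Barrier(MPI_COMM_WORLD);
            t0 = MPI_Wtime();
        }
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        /* One-way latency is half the average round-trip time. */
        printf("one-way latency: %.2f usec\n",
               (t1 - t0) * 1e6 / (2.0 * ITERS));
    }

    MPI_Finalize();
    return 0;
}
```

Run it with one rank on each of two InfiniBand-connected nodes; running both ranks on a single node measures shared-memory latency instead of the wire.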

New Mellanox ConnectX IB Adapters Unleash Multi-core Processor Performance

Ultra-Low 1 Microsecond Application Latency and 20Gb/s Bandwidth Set the Bar for High-Performance Computing, Data Center Agility, and Extreme Transaction Processing

SANTA CLARA, CA and YOKNEAM, ISRAEL – March 26, 2007 – Mellanox™ Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of semiconductor-based high-performance interconnect products, today announced the availability of the industry’s only 10 and 20Gb/s InfiniBand I/O adapters that deliver ultra-low 1 microsecond (µs) application latencies. The ConnectX IB fourth-generation InfiniBand Host Channel Adapters (HCAs) provide unparalleled I/O connectivity performance for servers, storage, and embedded systems optimized for high throughput and latency-sensitive clusters, grids and virtualized environments.

“Today’s servers integrate multiple dual and quad-core processors with high bandwidth memory subsystems, yet the I/O limitations of Gigabit Ethernet and Fibre Channel effectively degrade the system’s overall performance,” said Eyal Waldman, chairman, president and CEO of Mellanox Technologies. “ConnectX IB 10 and 20Gb/s InfiniBand adapters balance I/O performance with powerful multi-core processors responsible for executing mission-critical functions that range from applications which optimize Fortune 500 business operations to those that enable the discovery of new disease treatments through medical and drug research.”

Building on the success of the widely deployed Mellanox InfiniHost adapter products, ConnectX IB HCAs extend InfiniBand’s value with new performance levels and capabilities:

- Leading performance: the industry’s only 10 and 20Gb/s I/O adapters with ultra-low 1µs RDMA write latency, 1.2µs MPI ping latency [1], and a high uni-directional MPI message rate of 25 million messages per second [2]. The InfiniBand ports connect to the host processor through a PCI Express x8 interface.
- Extended network processing offload and optimized traffic and fabric management: new capabilities including hardware reliable multicast, enhanced atomic operations, hardware-based congestion control, and granular quality of service.
- Increased TCP/IP application performance: integrated stateless-offload engines relieve the host processor of compute-intensive protocol stack processing, improving application execution efficiency.
- Higher scalability: scalable and reliable connected transport services and shared receive queues extend high-performance applications to tens of thousands of nodes (a small illustration follows this list).
- Hardware-based I/O virtualization: support for virtual service end-points, virtual address translation/DMA remapping, and per-virtual-machine isolation and protection, bringing native InfiniBand performance to applications running in virtual servers for EDC agility and service-oriented architectures (SOA).
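On the scalability point: shared receive queues (SRQs) are the main lever for keeping receive-buffer memory flat as node counts grow, because many reliable-connected QPs draw buffers from one shared pool instead of each connection pre-posting its own. The sketch below shows the basic SRQ setup through the standard OpenFabrics verbs API (libibverbs); it is generic verbs code with assumed buffer counts and sizes, error handling is abbreviated, and the QPs that would attach to the SRQ are omitted.

```c
/* srq_sketch.c - illustrative shared receive queue setup with libibverbs.
 * Build: gcc srq_sketch.c -libverbs -o srq_sketch
 */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NUM_BUFS 64
#define BUF_SIZE 4096

int main(void)
{
    int num, i;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* One registered memory region backs all receive buffers. */
    char *pool = malloc((size_t)NUM_BUFS * BUF_SIZE);
    struct ibv_mr *mr = ibv_reg_mr(pd, pool, (size_t)NUM_BUFS * BUF_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE);

    /* Create the shared receive queue; QPs from many peers can attach to it. */
    struct ibv_srq_init_attr srq_attr;
    memset(&srq_attr, 0, sizeof(srq_attr));
    srq_attr.attr.max_wr  = NUM_BUFS;
    srq_attr.attr.max_sge = 1;
    struct ibv_srq *srq = ibv_create_srq(pd, &srq_attr);

    /* Post every buffer once; completions would be replenished the same way. */
    for (i = 0; i < NUM_BUFS; i++) {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)(pool + (size_t)i * BUF_SIZE),
            .length = BUF_SIZE,
            .lkey   = mr->lkey,
        };
        struct ibv_recv_wr wr  = { .wr_id = i, .sg_list = &sge, .num_sge = 1 };
        struct ibv_recv_wr *bad;
        if (ibv_post_srq_recv(srq, &wr, &bad)) {
            fprintf(stderr, "post_srq_recv failed at %d\n", i);
            break;
        }
    }

    printf("SRQ ready with %d shared receive buffers\n", NUM_BUFS);

    ibv_destroy_srq(srq);
    ibv_dereg_mr(mr);
    free(pool);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

Because every attached connection consumes buffers from this one pool, total receive memory scales with the message arrival rate rather than with the number of peers.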

Leading OEM Support

“Our high-performance BladeSystem c-Class customer applications are increasingly relying on lower interconnect latency to improve performance and keep costs in check,” said Mark Potter, vice president of the BladeSystem Division at HP. “With the promise of even better application latency, HP's c-Class blades featuring the forthcoming Mellanox ConnectX IB HCAs will further enhance HP's industry-leading 4X DDR InfiniBand capability, bringing new dimensions to how Fortune 500 companies deploy clusters and improve ROI.”

“Clearly InfiniBand is reaching market maturity with this fourth-generation server host chip and adapter-level interface technology from Mellanox,” said Bill Erdman, marketing director of Cisco Systems Server Virtualization Business Unit. “As we bring these host interface cards to market over the next several calendar quarters, fully integrated with our scalable Server Fabric Switching product line, customers will see significant latency improvements and greater end-to-end delivery reliability, especially when scaling large computing clusters with thousands of high-end compute nodes.”

“Scaling high-performance applications and clusters without compromising performance is becoming a critical need, driven by ever-increasing computation needs,” said Andy Bechtolsheim, chief architect and senior vice president for Sun Microsystems. “ConnectX IB HCAs offer novel scalability features that complement our vision for delivering compelling solutions to our end users.”

“IT organizations in industries ranging from HPC to financial services are continually looking at ways to get the most out of their critical software applications,” said Patrick Guay, senior vice president of marketing at Voltaire. “The increased bandwidths and lower latencies delivered in Mellanox’s ConnectX InfiniBand adapters combined with Voltaire’s multi-service switching platforms will bring significantly greater application acceleration benefits to our customers.”

I/O as a Competitive Advantage

The performance and capabilities of ConnectX IB HCAs support the most demanding high-performance computing applications while at the same time reducing research and development budgets.

“Today’s science demands continue to outpace the number of available engineers and their associated budgets, driving the need for more productivity per scientist,” said Shawn Hansen, director of marketing, Windows Server Division at Microsoft Corporation. “Technologies that improve I/O latencies and message rates, like ConnectX IB adapters, enhance the ability of Windows Compute Cluster Server to deliver high performance computing for the mainstream researcher and engineer.”

In addition, the volume of transactions and data transferred in Fortune 500 companies is increasing exponentially, jeopardizing profits and competitiveness for IT infrastructures that cannot scale to address the additional load.

“Extremely high volumes of concurrent users and increasingly complex transactions are making access to data one of the greatest bottlenecks to performance in grid computing,” said Geva Perry, chief marketing officer at GigaSpaces. “ConnectX IB InfiniBand HCAs offer leading latency, throughput and reliable performance that can help eliminate interconnect-related data latency degradations, and are therefore a perfect complement to GigaSpaces’ products for increasing overall application performance and scalability.”

Enhanced Virtual Infrastructure Performance and ROI

ConnectX IB InfiniBand HCAs offer Channel I/O Virtualization (CIOV), which creates virtualized service end-points for virtual machines and SOA deployments. CIOV enables virtualized provisioning of all I/O services, including clustering, communications, storage and management, and its hardware-based acceleration of I/O virtualization complements the CPU and memory virtualization technologies from Intel and AMD.

“When used with the Xen virtualization technology inside of SUSE Linux Enterprise Real Time, ConnectX IB InfiniBand adapters can lower I/O costs and improve I/O utilization,” said Holger Dyroff, vice president of SUSE Linux Enterprise product management at Novell. “Service-oriented architectures demand native I/O performance from virtual machines and Mellanox’s I/O virtualization architecture perfectly complements Novell's technical leadership in delivering mission-critical operating systems to our customers.”

Software Compatibility

ConnectX IB InfiniBand HCAs deliver leading performance while maintaining compatibility with operating systems and networking software stacks. For high-performance remote direct memory access (RDMA) based operations, the adapters are fully backward compatible with the OpenFabrics (www.openfabrics.org) Enterprise Distribution (OFED) and Microsoft WHQL-certified Windows InfiniBand (WinIB) protocol stacks, requiring only a device driver upgrade. RDMA and InfiniBand hardware transport offload are proven to deliver software-transparent application performance improvements. For traditional TCP/IP-based applications, the adapters support standard operating system stacks, including stateless-offload and Intel QuickData technology enhancements.
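To make the “device driver upgrade only” claim concrete: existing OFED applications reach the adapter through the standard libibverbs API, so generic verbs code keeps working when the HCA underneath changes. The sketch below simply lists the adapters the driver exposes and each port’s state, LID, and active link width/speed codes; it is ordinary verbs code, not ConnectX-specific, with minimal error handling.

```c
/* ib_query.c - list InfiniBand adapters and port link parameters via libibverbs.
 * Build: gcc ib_query.c -libverbs -o ib_query
 */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num, i, p;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr)) {
            ibv_close_device(ctx);
            continue;
        }
        printf("%s: firmware %s, max QPs %d\n",
               ibv_get_device_name(devs[i]), dev_attr.fw_ver, dev_attr.max_qp);

        /* Port numbering starts at 1 in the verbs API. */
        for (p = 1; p <= dev_attr.phys_port_cnt; p++) {
            struct ibv_port_attr port_attr;
            if (ibv_query_port(ctx, p, &port_attr))
                continue;
            printf("  port %d: state %d, LID %d, width code %d, speed code %d\n",
                   p, port_attr.state, port_attr.lid,
                   port_attr.active_width, port_attr.active_speed);
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```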

“PCI Express and Intel QuickData technology provide a low disruption path to scaling I/O by respectively increasing bandwidth and efficiencies for I/O in Intel-based servers,” said Jim Pappas, Director of Technology Initiatives for Intel’s Digital Enterprise Group. “With innovative implementation of these technologies by companies like Mellanox, I/O on Intel’s enterprise platforms continues to be accelerated for the demanding multi-core application needs of today and the future.”

Pricing and Availability

10K volume pricing for ConnectX IB HCA silicon devices is $165 (dual-port 10Gb/s) and $215 (dual-port 10 or 20Gb/s). 10K volume pricing for ConnectX IB HCA adapter cards is $369 (dual-port 10Gb/s) and $479 (dual-port 10 or 20Gb/s). ConnectX IB InfiniBand HCA silicon devices and PCI Express-based adapter cards are sampling today, and general availability is expected in the second quarter of 2007. Value-added adapter solutions from OEM channels are expected soon after.

About Mellanox

Mellanox Technologies is a leading supplier of semiconductor-based, high-performance, InfiniBand interconnect products that facilitate data transmission between servers, communications infrastructure equipment, and storage systems. The company’s products are an integral part of a total solution focused on computing, storage and communication applications used in enterprise data centers, high-performance computing and embedded systems. In addition to supporting InfiniBand, Mellanox's next generation of products supports the industry-standard Ethernet interconnect specification. Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information, visit Mellanox at www.mellanox.com.

[1] The performance data was measured with MVAPICH 0.9.7 MPI on Intel® quad-core Xeon™ 5300 series (Bensley) servers.
[2] 8-core, uni-directional MVAPICH 0.9.9 MPI message rate benchmark on Intel® quad-core Xeon™ 5300 series (Bensley) servers.

Safe Harbor Statement under the Private Securities Litigation Reform Act of 1995: All statements included or incorporated by reference in this release, other than statements or characterizations of historical fact, are forward-looking statements. These forward-looking statements are based on our current expectations, estimates and projections about our industry and business, management's beliefs, and certain assumptions made by us, all of which are subject to change. Forward-looking statements can often be identified by words such as "anticipates," "expects," "intends," "plans," "predicts," "believes," "seeks," "estimates," "may," "will," "should," "would," "could," "potential," "continue," "ongoing," similar expressions, and variations or negatives of these words. These forward-looking statements are not guarantees of future results and are subject to risks, uncertainties and assumptions that could cause our actual results to differ materially and adversely from those expressed in any forward-looking statement. The risks and uncertainties that could cause our results to differ materially from those expressed or implied by such forward-looking statements include our ability to react to trends and challenges in our business and the markets in which we operate; our ability to anticipate market needs or develop new or enhanced products to meet those needs; the adoption rate of our products; our ability to establish and maintain successful relationships with our distribution partners; our ability to compete in our industry; fluctuations in demand, sales cycles and prices for our products and services; our ability to protect our intellectual property rights; general political, economic and market conditions and events; and other risks and uncertainties described more fully in our documents filed with or furnished to the Securities and Exchange Commission. More information about the risks, uncertainties and assumptions that may impact our business are set forth in our Form 10-K filed with the SEC on March 23, 2007, including “Risk Factors”. All forward-looking statements in this press release are based on information available to us as of the date hereof, and we assume no obligation to update these forward-looking statements.

Mellanox is a registered trademark of Mellanox Technologies, and ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are trademarks of Mellanox Technologies. All other trademarks are property of their respective owners.