Good PCI-X Chipsets -- exist?
patrick at myri.com
Wed Mar 19 13:00:39 EST 2003
On Wed, 2003-03-19 at 03:00, Steffen Persvold wrote:
> On 18 Mar 2003, Patrick Geoffray wrote:
> > On Tue, 2003-03-18 at 04:37, Steffen Persvold wrote:
> > > I agree, impressive numbers ! What is the wire speed on these cards ?
> > This is the PCIXD card, with one link (2+2 Gb/s). The PCIXE card will
> > share the same PCI DMA chip but have 2 links on the faceplate, acting as
> > one virtual link (2x(2+2) Gb/s).
> Cool, so you still use the same switches (and cables) for these new cards
> or are there newer ones with higher bisection bandwidth ?
The link is the same, so the cables and the switches are the same. The
new switches will be based on a new crossbar wider than the current one
(32x32 instead of 16x16), with other small goodies. So the difference is
the number of ports you can fit on a blade (8 -> 16) and in one box
(128 -> 256). The other difference will be the enclosure design (vertical
blades, back/front airflow, hot-plug power). The link itself is the
same; 4X links are not coming before 2004.
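A quick sanity check on the numbers quoted in this thread. The 8b/10b line-coding explanation is my gloss on the "wrongly measures 10 Gb/s" remark below, not something stated in the thread:

```python
# Per-link figures as given in the thread: "2+2 Gb/s" = 2 Gb/s each direction.
link_each_way = 2                      # Gb/s per direction, one link

pcixd_each_way = 1 * link_each_way     # PCIXD: one link  -> 2 Gb/s per direction
pcixe_each_way = 2 * link_each_way     # PCIXE: two links -> 4 Gb/s per direction
bonded_pair    = 2 * pcixe_each_way    # two bonded PCIXE cards -> 8 Gb/s

# InfiniBand 4X advertises 10 Gb/s, but that is the signaling rate; with
# 8b/10b line coding only 8 of every 10 bits on the wire are payload.
ib_4x_data = 10 * 8 // 10              # -> 8 Gb/s of actual data

print(pcixe_each_way, bonded_pair, ib_4x_data)
```

So two bonded PCIXE cards and InfiniBand 4X both land at 8 Gb/s of payload bandwidth, which is the comparison being made.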
> > For the rare rich people that would require 8 Gb/s (what Infiniband
> > wrongly measures as 10 Gb/s), it would be done by bonding 2 PCIXE cards.
> On which level is this "bonding" done? Is it at HW level, GM level or
> MPI level?
In the PCIXE cards, with 2 links, this is done at the firmware level:
the decision to push a packet onto one of the 2 links is taken at the
last minute, depending on each link's load and availability.
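A minimal sketch of that last-minute dispatch. All names and the load metric here are hypothetical; the real decision lives in the NIC firmware, not host code:

```python
def pick_link(links):
    """Choose a link for the next packet at the last possible moment.

    `links` is a list of hypothetical link descriptors carrying an `up`
    flag and a `queued_bytes` load counter; the actual firmware state
    and policy are not described in the thread.
    """
    candidates = [l for l in links if l["up"]]
    if not candidates:
        raise RuntimeError("no link available")
    # Least-loaded candidate wins.
    return min(candidates, key=lambda l: l["queued_bytes"])

links = [
    {"id": 0, "up": True, "queued_bytes": 4096},
    {"id": 1, "up": True, "queued_bytes": 512},
]
assert pick_link(links)["id"] == 1      # lighter-loaded link chosen

links[1]["up"] = False
assert pick_link(links)["id"] == 0      # only remaining live link
```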
For NIC bonding, this is combined driver/firmware support, below the
public API; it won't be in GM or GM-2.
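For illustration only, "below the public API" might take roughly this shape: one logical port striping over two NICs, with the application never seeing the split. Everything here (class names, the round-robin policy) is hypothetical, not Myricom's actual driver design:

```python
class Nic:
    """Stand-in for one physical NIC handle (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.sent = []

    def send(self, packet):
        self.sent.append(packet)

class BondedPort:
    """One logical port striping over two NICs, round-robin.

    A GM application would only ever hold the logical port; the driver
    and firmware would split traffic underneath the public API.
    """
    def __init__(self, nic_a, nic_b):
        self.nics = (nic_a, nic_b)
        self._next = 0

    def send(self, packet):
        nic = self.nics[self._next]
        self._next ^= 1          # alternate between the two NICs
        nic.send(packet)

a, b = Nic("nic0"), Nic("nic1")
port = BondedPort(a, b)
for i in range(4):
    port.send(f"pkt{i}")
assert a.sent == ["pkt0", "pkt2"]
assert b.sent == ["pkt1", "pkt3"]
```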
Patrick Geoffray, PhD
Beowulf mailing list, Beowulf at beowulf.org