Petabits/sec, and the like

Don Holmgren djholm at
Thu Nov 6 07:39:13 EST 2003

On Thu, 6 Nov 2003, John Hearns wrote:

> On Thu, 6 Nov 2003, Chris Samuel wrote:
> > To take an example, I know of a local group doing work at CERN. Apparently
> > they are participating in experiments that generate around 1TB a day of data
> > (no idea if that's compressed/uncompressed or how compressible it would be).
> >
> I THINK (though don't quote me) that this is the raw data rate.
> What happens in an HEP experiment is that raw data comes from the
> detector.
> It is passed through three levels of trigger processors, from
> a very simple (are 1st level inside the detector at LHC???) to a third
> level, which is run on PCs.
> I guess this 1TB rate is the raw event rate after the level 3 trigger.
> The data is then sent to a reconstruction farm, where the raw levels
> are combined into tracks and energy deposits, using the physical
> data and calibrations of the detector.
> The physicists then work on the resulting DST - data summary tape,
> which is much less data than the raw data.
> I'm not sure of the plans for processing raw data at LHC -
> maybe all is processed at the main site, maybe some is shipped off
> to the Tier 1 centres. I really don't know the answer here.

I was part of the team that implemented the level 3 trigger at the CDF
experiment at FNAL.  The order-of-magnitude data rate out of the
detector is 1 TByte/sec - collisions at O(1 MHz), O(1 million) data
channels, O(1 byte/channel).  That rate gets reduced through Level 1, 2,
and 3 triggers.  The level 3 trigger Linux computers do event building
(assembling full events from event fragments sent via an ATM switch) and
reconstruction (full events distributed via fast ethernet, data
"inverted" to produce particle tracks and energies).  Here were the
specifications we worked from in 1997 for L3:

- event rate into L3:  250 to 1000 Hz
- event size: 250 KB avg
- accept rate: 72 Hz

The accept rate translates into 18 MB/sec, written to mass storage.
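The arithmetic behind those figures can be sketched like this (a
back-of-the-envelope check using the order-of-magnitude numbers quoted
above, not official CDF specifications):

```python
# Order-of-magnitude rate check for the CDF numbers quoted above.
# These are rough figures from memory, not official specifications.

# Raw rate out of the detector: O(1 MHz) collisions, O(1 million)
# channels, O(1 byte) per channel.
collisions_per_sec = 1e6
channels = 1e6
bytes_per_channel = 1
raw_bytes_per_sec = collisions_per_sec * channels * bytes_per_channel
print(f"raw detector rate: {raw_bytes_per_sec / 1e12:.0f} TB/sec")  # 1 TB/sec

# Level 3 accept rate: 72 Hz at an average event size of 250 KB.
accept_rate_hz = 72
event_size_bytes = 250e3
accept_bytes_per_sec = accept_rate_hz * event_size_bytes
print(f"L3 output to storage: {accept_bytes_per_sec / 1e6:.0f} MB/sec")  # 18 MB/sec
```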

At this 18 MB/sec (set by the tape budget, BTW), CDF currently writes ~
1.5 TB/day to tape.  The D0 experiment at Fermilab is writing a similar
amount.  On typical days, the Fermilab mass storage system moves tens of
TB/day - I think the record is something like 35 TB/day.
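As a sanity check on those daily volumes (decimal units assumed
throughout):

```python
# Integrate the 18 MB/sec tape rate over a day, and convert the
# 35 TB/day record back to a sustained rate (decimal units assumed).
seconds_per_day = 86400

tape_bytes_per_sec = 18e6
daily_bytes = tape_bytes_per_sec * seconds_per_day
print(f"daily tape volume: {daily_bytes / 1e12:.2f} TB/day")  # ~1.56 TB/day

record_bytes_per_day = 35e12
record_rate = record_bytes_per_day / seconds_per_day
print(f"record sustained rate: {record_rate / 1e6:.0f} MB/sec")  # ~405 MB/sec
```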

I'm not sure of LHC design numbers, but believe they are more like 1
GB/sec to storage.

Don Holmgren
Beowulf mailing list, Beowulf at