[Linux-ia64] Itanium gets supercomputing software

Mikhail Kuzminsky kus at free.net
Mon Jul 23 12:44:16 EDT 2012

According to Alan Scheinine
>    I do not think there was a promise that getting efficiency would
> be easier with EPIC.  My understanding of the situation is that
> the logic of dynamic allocation of resources, that is, the various
> tricks done in silicon, could not scale to a large number of
> processing units on a chip.  That is, the complexity grows faster
> than linear, much faster.
  I believe you are absolutely right. One of the main reasons
for IA-64/EPIC development was precisely the difficulty of designing
the hardware logic for superscalar out-of-order execution.
But please look at the current (and near-future) IA-64 chips.
The number of execution units doesn't increase: the main advantage
of McKinley over Itanium (in the microarchitectural sense) was
allowing more parallel/simultaneous instructions in a pair
of bundles (eliminating a set of restrictions present in Itanium),
plus, of course, cache, frequency, etc. The number of execution
units in Madison will be, as I understand, the same. The next IA-64
chips will have more than one microprocessor core, which means, in my
opinion, that every microprocessor core will again have the same number
of execution units. It looks like Intel increases cache size and
frequency, inserts simultaneous multi-threading, etc., but I don't
see any increase in the number of execution units.
This means that some potential advantages of IA-64/EPIC
remain unrealized. IMHO, it may simply be because of a compiler
problem. If the compiler can't achieve a high average IPC
(instructions per cycle) value for real applications,
why would I add new execution units?
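To make the point concrete, here is a toy sketch of my own (not any real
compiler or Intel scheduler) of a greedy list scheduler: it packs "ready"
instructions into issue slots each cycle. With a 6-wide machine (two
3-instruction bundles per cycle, as on McKinley-class chips), independent
code fills the slots, but a serial dependency chain caps IPC at 1 no
matter how many execution units you add. The `schedule` function and its
dependency-list encoding are my own illustration, not a model of the
actual hardware.

```python
# Toy list scheduler: a minimal sketch showing why extra execution
# units don't help when the code lacks instruction-level parallelism.
# deps[i] lists the instructions that instruction i depends on.

def schedule(deps, width):
    """Greedily issue up to `width` ready instructions per cycle;
    return the achieved IPC (instructions per cycle)."""
    n = len(deps)
    done = set()
    cycles = 0
    while len(done) < n:
        # An instruction is ready when all its dependencies are done.
        ready = [i for i in range(n)
                 if i not in done and all(d in done for d in deps[i])]
        done.update(ready[:width])
        cycles += 1
    return n / cycles

# Eight independent instructions: a 6-wide machine finishes in 2 cycles.
independent = [[] for _ in range(8)]

# A serial chain of 8 instructions: one issues per cycle, IPC = 1,
# regardless of machine width.
chain = [[i - 1] if i > 0 else [] for i in range(8)]

print(schedule(independent, width=6))  # 4.0
print(schedule(chain, width=6))        # 1.0
```

If real applications look more like `chain` than `independent` after
compilation, widening the machine buys nothing, which is the situation
I suspect the IA-64 compilers are in.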

Mikhail Kuzminsky
Zelinsky Institute of Organic Chemistry
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
