[Beowulf] Pretty High Performance Computing

Prentice Bisbal prentice at ias.edu
Wed Sep 24 08:49:37 EDT 2008


Middleware is something that goes in between, hence the name. In the case
of HPC, I would call ROCKS or Platform OCS middleware. These software
packages sit between the administrator and the actual cluster
software/OS configuration to make things easier to configure. Because
they stand between the administrator and the software that does the real
work (SGE, MPI, the operating system), they are properly termed
middleware.


This discussion has been about unnecessary services that have nothing to
do with cluster operations. Sendmail/Postfix, printing daemons, etc.,
have nothing to do with clustering, so they are not middleware, since
they are not in the middle of anything. They are just on the side.

Prentice



Ellis Wilson wrote:
> I guess I don't quite understand why you disagree Prentice.  With the 
> exception that middleware doesn't strive to be a classification per se, 
> just a solution, it still consists of a "style of computing where you 
> sacrifice absolute high performance because of issues relating to any 
> combination of convenience, laziness, or lack of knowledge."
> 
> This assumes my understanding of middleware is correct in that it is a 
> package or entire system that simplifies things by being somewhat 
> blackboxed and ready to go.  Anything canned like tuna is bound to 
> contain too much salt.
> 
> Ellis
> 
> Prentice Bisbal wrote:
>> Vincent Diepeveen wrote:
>>> I'd argue we might know this already as middleware.
>> That makes absolutely no sense.
>>> Best regards from a hotel in Beijing,
>>> Vincent
>>>
>>> On Sep 23, 2008, at 10:32 PM, Jon Forrest wrote:
>>>
>>>> Given the recent discussion of whether running
>>>> multiple services and other such things affects
>>>> the running of a cluster, I'd like to propose
>>>> a new classification of computing.
>>>>
>>>> I call this Pretty High Performance Computing (PHPC).
>>>> This is a style of computing where you sacrifice
>>>> absolute high performance because of issues relating
>>>> to any combination of convenience, laziness, or lack
>>>> of knowledge.
>>>>
>>>> I know I've been guilty of all three but the funny
>>>> thing is that science seems to get done anyway.
>>>> There's no doubt computations would get done a little faster
>>>> if I or the scientists spent more time worrying
>>>> about microsecond latency, parallel barriers,
>>>> or XML overhead but reality always gets in the way.
>>>> In the future I hope to sin less often but it's a
>>>> growing experience. Reading this, and other, email
>>>> lists sometimes helps.
>>>>
>>>> Cordially,
>>>>
>>>> -- Jon Forrest
>>>> Research Computing Support
>>>> College of Chemistry
>>>> 173 Tan Hall
>>>> University of California Berkeley
>>>> Berkeley, CA
>>>> 94720-1460
>>>> 510-643-1032
>>>> jlforrest at berkeley.edu
>>>> _______________________________________________
>>>> Beowulf mailing list, Beowulf at beowulf.org
>>>> To change your subscription (digest mode or unsubscribe) visit 
>>>> http://www.beowulf.org/mailman/listinfo/beowulf
>>>>

-- 
Prentice Bisbal
Linux Software Support Specialist/System Administrator
School of Natural Sciences
Institute for Advanced Study
Princeton, NJ