[Beowulf] large scratch space on cluster

Scott Atchley atchley at myri.com
Tue Sep 29 13:39:05 EDT 2009


On Sep 29, 2009, at 1:13 PM, Scott Atchley wrote:

> On Sep 29, 2009, at 10:09 AM, Jörg Saßmannshausen wrote:
>
>> However, I was wondering whether it makes any sense to somehow 'export'
>> that scratch space to other nodes (4 cores only). The idea behind that
>> is, if I need a vast amount of scratch space, I could use the one in the
>> 8-core node (the one I mentioned above). I could do that with NFS, but I
>> have the feeling it will be too slow. Also, I only have GB Ethernet at
>> hand, so I cannot use any other networks here. Is there a good way of
>> doing that? Words like iSCSI and cluster FS come to mind, but to be
>> honest, up to now I have never really worked with them.
>>
>> Any ideas?
>>
>> All the best
>>
>> Jörg
>
> I am under the impression that NFS can saturate a gigabit link.
>
> If for some reason it cannot, you might want to try PVFS2
> (http://www.pvfs.org) over Open-MX (http://www.open-mx.org).
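As a sketch of the plain-NFS route, exporting the big node's scratch disk could look roughly like this (hostnames, paths, and option values are illustrative, not from the original thread; tune rsize/wsize for your kernel):

```shell
# On the 8-core node (here called "bignode"): entry for /etc/exports.
# 'async' trades safety for throughput, which is often acceptable for
# scratch space; 'no_subtree_check' avoids per-request path checks.
#   /scratch  node1(rw,async,no_subtree_check) node2(rw,async,no_subtree_check)

# Re-export after editing /etc/exports:
exportfs -ra

# On each 4-core client: mount with large transfer sizes to help
# fill the gigabit link (values are a starting point, not a tuning result).
mount -t nfs -o rsize=32768,wsize=32768 bignode:/scratch /scratch
```

Whether this saturates the link depends mostly on the server's disk speed and the NFS transfer sizes, so it is worth benchmarking before trying anything more elaborate.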

I should add that PVFS2 is meant to separate the metadata from the I/O and
to have multiple I/O servers. You can run it on a single server with both
metadata and I/O, but then it may not be much different from NFS.
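For reference, a single-server PVFS2 setup of the kind described above might be sketched as follows (command names are from the PVFS2 distribution as I understand it; exact flags, the default port, and the mount syntax may vary by version):

```shell
# Generate a config file; the interactive tool asks which hosts act as
# metadata and I/O servers -- for a single-server setup, name the same
# host for both roles.
pvfs2-genconfig fs.conf

# Initialize the server's storage space (first run only), then start
# the server daemon with the same config.
pvfs2-server fs.conf -f
pvfs2-server fs.conf

# On clients with the pvfs2 kernel module loaded, mount the file system
# (hostname and mount point are illustrative).
mount -t pvfs2 tcp://bignode:3334/pvfs2-fs /mnt/pvfs2
```

Adding more I/O servers later is then mostly a matter of regenerating the config with additional hosts, which is where PVFS2 would start to pull ahead of a single NFS server.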

Scott
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf