[Beowulf] Parallel file systems

Joe Landman landman at scalableinformatics.com
Tue Jan 19 18:51:16 EST 2010


Jess Cannata wrote:
> 
> 
> On 01/13/2010 06:40 AM, tegner at renget.se wrote:
>> While starting to investigate different storage solutions I came across
>> gluster (www.gluster.com). I did a search on beowulf.org and came up with
>> nothing. gpfs, pvfs and lustre on the other hand resulted in lots of hits.
>>
>> Anyone with experience of gluster in HPC?
>>
>>    
> Yes, we've been using GlusterFS on one of our lightly used Infiniband 
> clusters (32 nodes, 256 cores). We have found it to be pretty easy to 
> configure and we have liked its performance. If you want more 
> information, you should e-mail Joe Landman, who is also on the list. 
> He's used it in several large setups.

How did I not see this ... mea culpa

Yes, we are using GlusterFS at multiple sites with multiple users. 
We are getting excellent performance out of it, as long as the IB can 
keep up. Long story; ask me over a beer some day ...
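For anyone curious what a basic setup looks like: a minimal sketch of
bringing up a GlusterFS volume over IB using the gluster CLI. The
hostnames (node1, node2), brick paths, and volume name here are all
hypothetical placeholders, and the RDMA transport option assumes your
IB stack (OFED) is already working:

```shell
# On node1: add the second server to the trusted pool
gluster peer probe node2

# Create a distributed volume across two bricks, using RDMA
# transport over InfiniBand (hostnames/paths are examples only)
gluster volume create scratch transport rdma \
    node1:/data/brick1 node2:/data/brick1

# Start the volume
gluster volume start scratch

# On a client: mount it via the FUSE client
mount -t glusterfs node1:/scratch /mnt/scratch
```

Whether RDMA or plain TCP over IPoIB performs better depends on the
workload and the GlusterFS release; worth benchmarking both.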

We are generating multiple quotes/RFP responses with it (one is going 
out literally right now).

Bug me offline if you'd like.

Joe

>> Regards,
>>
>> /jon
>>
>>    
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit 
> http://www.beowulf.org/mailman/listinfo/beowulf


-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615