From: samuel at unimelb.edu.au (Christopher Samuel)
Date: Tue, 05 Jun 2012 16:47:42 +1000
Subject: [Beowulf] Forward: RE:

On 03/05/12 22:40, Douglas Eadline wrote:

> I know Penguin runs the list, but I'm not sure who to contact, I'll
> forward it to the list. Hopefully someone will be able to provide
> an answer.

No answer, other than to confirm it's still down from here. :-(

My only contact at Penguin moved to Apple a year or two back, so I
don't know anyone to contact there these days - anyone else?

cheers,
Chris
--
Christopher Samuel - Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: samuel at unimelb.edu.au   Phone: +61 (0)3 903 55545
http://www.vlsci.unimelb.edu.au/


From: j.wender at science-computing.de (Jan Wender)
Date: Tue, 05 Jun 2012 09:23:28 +0200
Subject: Re: [Beowulf] Forward: RE:

Hi all,

I asked Arend Dittmer, who works at Penguin, whether he can help.

Cheerio,
Jan
From: ntmoore at gmail.com (Nathan Moore)
Date: Tue, 5 Jun 2012 12:48:36 -0500
Subject: [Beowulf] Desktop fan recommendation

All,

This is barely beowulf related...

New desktop machine is a Shuttle SX79R5,
http://us.shuttle.com/barebone/Models/SX79R5.html

In the past, Shuttles have been very quiet, but this one has a fairly
loud variable-speed fan on the CPU heat exchanger. I normally buy
replacement parts from vendors like Newegg, but their selection of
90mm case fans mainly seems to be described by CFM and by whether the
fan has LED lights mounted in it (FYI, that is not a selling point).

So, is there an engineer's version of Newegg that y'all know about?
There must be a super quiet 90mm fan out there that I can pick up for
$10...

Nathan Moore
Physics, Winona State University


From: john.hearns at mclaren.com (Hearns, John)
Date: Tue, 5 Jun 2012 19:08:23 +0100
Subject: Re: [Beowulf] Desktop fan recommendation

Lenovo's workstation fans are extremely quiet.
I was told by a Lenovo engineer that they are designed to resemble an
owl's wings. Owls are pretty silent beasts - as they have to be to
swoop on those unsuspecting mice.


From: ntmoore at gmail.com (Nathan Moore)
Date: Tue, 5 Jun 2012 13:21:40 -0500
Subject: Re: [Beowulf] Desktop fan recommendation

I wonder if this is Lenovo's vendor?
http://www.quietpcusa.com/Noctua-NF-B9-Vortex-Control-92mm-Quiet-Computer-Fan-P398C67.aspx

On Tue, Jun 5, 2012 at 1:08 PM, Hearns, John wrote:
> Lenovo's workstation fans are extremely quiet. [...]

--
Nathan Moore
Winona, MN


From: Daniel.Pfenniger at unige.ch (Daniel Pfenniger)
Date: Wed, 06 Jun 2012 11:38:24 +0200
Subject: Re: [Beowulf] Desktop fan recommendation

Nathan Moore wrote:
> So, is there an engineer's version of Newegg that y'all know about?
> There must be a super quiet 90mm fan out there that I can pick up
> for $10...

I recall ads for quiet and more efficient rotor-less fans for PCs, but
I cannot find such products anymore.

The idea was to maximize the airflow area by displacing the central
motor to the blade edges. Not only would the larger central area allow
a lower, quieter blade speed, but the blades, driven at their tips by
the circular motor, would be mechanically more stable and less subject
to vibrations. My guess is that such fans, although technically
better, were too expensive relative to their advantages.

The Dyson bladeless and silent fans are based on a different
principle: a thin cylindrical air layer carries along the inner air
column, so the air flow is laminar
(http://www.dyson.com/store/fans.asp).
Dan


From: diep at xs4all.nl (Vincent Diepeveen)
Date: Wed, 6 Jun 2012 13:38:03 +0200
Subject: Re: [Beowulf] Desktop fan recommendation

On Jun 5, 2012, at 7:48 PM, Nathan Moore wrote:
> In the past, Shuttles have been very quiet, but this one has a
> fairly loud variable-speed fan on the CPU heat exchanger.

Are you sure this one is easy to replace?

It seems it doesn't have a cooler for the CPU at all; as you say, it's
some sort of cheapskate arrangement with tubing that pumps liquid
through the socket, and then seemingly one fan does the cooling for
both the PSU and the CPU at once. From the inside it pushes that air
out through a tiny grill, which I'd guess also restricts the airflow
considerably.

So whatever you do, you need a fan that delivers at least the same CFM
and the same air pressure. Most likely the 9 cm replacement fan will
still be something of at least 5000 RPM or so - very noisy - and don't
believe all the manufacturer specs there: they usually 'overclock'
fans nowadays, effectively running a far higher RPM, and *that* CFM is
what they put on the box.

With a somewhat bigger fan it's easy to get quiet, but a bigger fan
seems difficult to fit in here. Maybe it would work if you remove the
grill and build a kind of cardboard duct that channels the air into
the tiny PSU/heat exchanger. Tape alone won't do, I'd guess, as it
will loosen after a while, so some glue is also needed; then it would
be a lot easier to get it quiet.

It's a tiny fan for a CPU/PSU combo, and 9 cm is just not much. At
12 cm there is a wonderful fan I'm very happy with: the Aerocool
Shark. It only comes in a red version. Nowadays fans without LEDs are
either more expensive or move less air, so you'll have no escape from
getting one with LEDs, since those are the ones produced in volume.

You'll have to measure, though, whether a bigger fan fits inside and,
if so, what its maximum dimensions are and how it affects your
cardboard contraption.
From: diep at xs4all.nl (Vincent Diepeveen)
Date: Wed, 6 Jun 2012 13:49:07 +0200
Subject: Re: [Beowulf] Desktop fan recommendation

Nathan, on closer look it seems there is also a tiny fan on the
outside, for the PSU. Probably that's the thing making the big noise.
Possibly it's a 15k RPM fan or something - at least 65 decibels, I'd
guess, if you measure correctly. It has to deliver enough airflow to
cool the PSU part.

Maybe you could do the same thing I'm doing: just put a huge fan on
the outside, rewire the 120-230 volt wires, and deliver that power in
a different manner. It might be able to suck out enough air,
especially if you get rid of the grill in front of the 15k RPM fan, so
it's easier for it to pull air out there. The same goes for the
restrictive grill on the heat exchanger: cut it out and it might work.
You can simply test whether it works; it should be OK, I'd guess. You
do need a capable fan outside, though. How much must it cool -
probably a high-clocked Socket 2011 that's crunching full time in AVX
will eat 260 watts or so from the power tap?


From: prentice at ias.edu (Prentice Bisbal)
Date: Wed, 06 Jun 2012 08:42:48 -0400
Subject: Re: [Beowulf] Desktop fan recommendation

On 06/06/2012 05:38 AM, Daniel Pfenniger wrote:
> I recall ads for quiet and more efficient rotor-less fans for PCs,
> but I cannot find such products anymore. [...]

I had one of these fans on one of my CPU heatsinks a few years ago. It
was much quieter than the fan it replaced, but still not all that
quiet compared to a Dell or HP tower. I forget the name of the
manufacturer and the model. The last time I looked, I couldn't find
them anywhere.

--
Prentice


From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C))
Date: Wed, 6 Jun 2012 13:24:50 +0000
Subject: Re: [Beowulf] Desktop fan recommendation

I used to work for a company that made fans...

-> A very, very tiny fraction of the power into a fan goes into
acoustic noise, so noise is not a big driver of efficiency.

-> The acoustics of fans are a black art. There are things that you
know make it worse, but once you avoid those, there's a lot of
empiricism: very tough to model accurately, even with a big cluster,
and the price premium for a quieter fan isn't worth it.

-> Do not have the number of blades be the same as the number of
support struts or other things in the way. In general, an odd number
of blades is better. Taken to an extreme, this is how you build a
siren (two plates with holes, one spinning).

-> A big fan turning slowly makes less noise than a small fan turning
fast.

-> Noise is strongly dependent on the air speed. In HVAC design, the
usual rule of thumb is to keep the airspeed below 1000 linear feet per
minute (yeah, a non-SI unit, but it's just duct CFM divided by duct
cross-sectional area).

-> Having the fan blade tips close to the surrounding shroud makes the
fan more efficient AND quieter, but requires tighter mechanical
tolerances in manufacturing.

-> The spacing between blades is important, and a real challenge in
any rotating fan. Near the hub, the trailing edge of one blade is
closer to the leading edge of the next, AND the tangential velocity of
the blade through the air is different at the hub (root) than at the
tip. Fans with large hubs are easier to optimize (smaller variation),
BUT you give up airflow area for given outside dimensions.

-> Funky notches and swoops in the blades sometimes help, sometimes
don't. I think mostly they're for patent protection: if I sell a fan
with 3 asymmetric notches in each blade, and a container load of
Chinese copies shows up at the port, it's easier to say that they
infringe my patent.

-> Blade balance is important, not only in terms of rotating mass but
in terms of aerodynamic balance. If the blade pitch is slightly
different on each blade, the fan will be noisier.

-> Well-designed inlet and outlet vanes (particularly the latter) seem
to make fans quieter, but I don't know why.

On 6/6/12 2:38 AM, "Daniel Pfenniger" wrote:
> My guess is that such fans, although technically better, were too
> expensive relative to their advantages.

Yes. Fans are a very cost-sensitive product. For a lot of
applications, nobody cares how noisy the fan is.

> The Dyson bladeless and silent fans are based on a different
> principle: a thin cylindrical air layer carries along the inner air
> column, so the air flow is laminar
> (http://www.dyson.com/store/fans.asp).

But you still need a fan to generate the pressurized air for the slit.
However, that fan can be hidden inside the base and can be baffled for
noise reduction.


From: diep at xs4all.nl (Vincent Diepeveen)
Date: Wed, 6 Jun 2012 15:36:08 +0200
Subject: Re: [Beowulf] Desktop fan recommendation

On Jun 6, 2012, at 2:42 PM, Prentice Bisbal wrote:
> I had one of these fans on one of my CPU heatsinks a few years ago.

How much airflow per square centimeter do they generate?

As for the cluster here, plenty of space is available. Renting office
space costs around 50 euro per square meter per year over here - not
sure about there. So the cluster, some cardboard, and huge fans of 14
and 18 cm are doing the job of cooling the nodes and the switch
(Mellanox, of course).
As I understand it, the floor space reserved for datacenters is always
far too limited, which makes the space each node eats important as
well - but that's not the problem here in my office.

The thing that worries me more is the airflow to the outside (and
inside). Usually only a limited number of square centimeters of tube
is available there, and the 'industrial' fans that move massive
airflow are very, very noisy.

I'm already wondering about using some massive cardboard box, blowing
air in with 8 fans (@ 100 CFM each) or so, and then behind them a
second layer of around 6 fans @ 100 CFM, creating a massive
overpressure - hoping that this will generate enough air pressure to
blow air in and out through some meters of tubing. But it doesn't seem
like a perfect solution to me.


From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C))
Date: Wed, 6 Jun 2012 14:56:22 +0000
Subject: Re: [Beowulf] Desktop fan recommendation

On 6/6/12 6:36 AM, "Vincent Diepeveen" wrote:
> How much airflow per square centimeter do they generate?

That's not typically how fans are rated. You'll have a curve of volume
per time (e.g. cubic feet per minute or cubic meters per hour) for a
given back pressure (usually in "inches of water column").

Fan ratings at zero backpressure are almost worthless. There's huge
variation from the freeflow number to a backpressure number. You need
the number at some decent backpressure (0.25" water column, for
instance).

An ebm-papst 4182 NX is nominally 105.9 CFM; at 0.1" it's about
90 CFM, at 0.2" it's about 50, and so on up to the max backpressure
for that fan.

> The thing that worries me more is the airflow to the outside (and
> inside). Usually only a limited number of square centimeters of tube
> is available there, and the 'industrial' fans that move massive
> airflow are very, very noisy.

Not true... You can get VERY quiet fans that push a lot of air through
a large duct. It's all about the air speed.

You might want to look at a centrifugal blower rather than an axial
fan. Axial fans don't do as well against high static pressures, and if
you're doing a scheme with ducting, a centrifugal fan is usually a
better choice.

> I'm already wondering about using some massive cardboard box,
> blowing air in with 8 fans (@ 100 CFM each) or so, and then behind
> them a second layer of around 6 fans @ 100 CFM [...]

That sort of works, but the problem is that unless your "taper" is
very, very long, you're basically just creating a pressurized plenum,
and the fans will be inefficient working against that backpressure.
What you are trying to do is combine multiple low-speed flows into one
high-speed flow, and that's a tricky aerodynamics problem. That said,
it does allow you to put a noisy fan somewhere else.

In general, high-pressure fans are noisier than low-pressure fans for
the same flow or horsepower rating.

Stacking fans doesn't work very well. The flow coming off a fan is
twisting (unless you've got vanes to recover the rotational energy),
so the second fan in the stack is working against a spiraling flow.
Counter-rotating sequential fans do work, but are trickier to design,
and there are far fewer fans available with reverse rotation.
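To make the fan-curve point concrete, here is a minimal Python sketch
of finding the operating point: intersect the fan curve with an
assumed quadratic system-resistance curve. The first three fan-curve
points are the 4182 NX figures quoted above; the 0.3" zero-flow stall
point and the chassis constant k are invented purely for illustration:

    import numpy as np

    # fan curve samples: backpressure (in. H2O) vs delivered flow (CFM);
    # the 0.3" / 0 CFM stall point is an assumed placeholder
    fan_dp  = np.array([0.0, 0.1, 0.2, 0.3])
    fan_cfm = np.array([105.9, 90.0, 50.0, 0.0])

    k = 2.5e-5                      # assumed chassis impedance: dp = k * Q**2
    q = np.linspace(0.0, 105.9, 1000)
    fan = np.interp(q, fan_cfm[::-1], fan_dp[::-1])  # pressure the fan sustains
    sys = k * q ** 2                                 # pressure the chassis demands
    q_op = q[np.argmin(np.abs(fan - sys))]
    print("operating point: ~%.0f CFM at %.2f in. H2O" % (q_op, k * q_op ** 2))
    # -> roughly 75 CFM at ~0.14", not 105.9: the free-flow rating flatters the fan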
>>> >>> The idea was to maximize the air flow area by displacing the >>> central motor >>> to the blade edges. Not only the larger central area would allow >>> a lower, >>> quieter blade speed, but the blades being accelerated at their >>> extremities >>> by the circular motor would be mechanically more stable, less >>> subject to >>> vibrations. My guess is that such fans, although technically >>> better, were >>> too expensive in regard of the advantages. >>> >> I had one of these fans on one of my CPU heatsinks a few years ago. It >> was much quieter than the fan it replaced,but still not all that quiet >> when compared to a Dell or HP tower. I forget the name of the >> manufacturer or the model. The last time I looked, I couldn't find >> them >> anywhere. >> >> -- >> Prentice >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin >> Computing >> To change your subscription (digest mode or unsubscribe) visit >> http://www.beowulf.org/mailman/listinfo/beowulf > >_______________________________________________ >Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >To change your subscription (digest mode or unsubscribe) visit >http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From mathog at caltech.edu Wed Jun 6 11:17:47 2012 From: mathog at caltech.edu (mathog) Date: Wed, 06 Jun 2012 08:17:47 -0700 Subject: [Beowulf] Desktop fan reccommendation In-Reply-To: References: Message-ID: <12a71f10d82f8c03a7c37e6003fb566f@saf.bio.caltech.edu> This isn't that hard a problem. Visit the major fan manufacturers sites, buy the one that will fit, moves at least as much air as the original, and is much quieter. The manufacturers all list their products size (ie, the ones that will fit) and then check cfm and noise. For instance, I have bought fans from these guys a couple of times: http://www.dynatron-corp.com/en/product_list.aspx?cv=20-72 Generally you have to go through a distributor and not buy direct, but that is no big deal. Jim Lux wrote: > -> acoustics of fans are a black art. Especially when they fail. We had a 20mm fan go bad in a sort of scanner recently. This itty bitty fan barely moves any air at the best of times (it cools a 486, which really doesn't need to be cooled) and under normal circumstances the fan is completely inaudible. The users contacted me and told me that scanner was making horrible mechanical failure sounds, as if the scan stage was scraping on something. I didn't measure it, but the sound was really loud, I'm guessing at least 85 decibels, and it really did sound like the end of the world. The sound came in bursts, with no noise in between. I'm guessing a bearing moving around in the fan, between a noisy position and a quiet one, or maybe it had developed some sort of resonance. All that racket was from one tiny fan. 
From: Daniel.Pfenniger at unige.ch (Daniel Pfenniger)
Date: Wed, 06 Jun 2012 19:33:20 +0200
Subject: Re: [Beowulf] Desktop fan recommendation

holway at th.physik.uni-frankfurt.de wrote:
>> The Dyson bladeless and silent fans are based on a different
>> principle: a thin cylindrical air layer carries along the inner air
>> column, so the air flow is laminar
>> (http://www.dyson.com/store/fans.asp).
>
> Which is not good if you're trying to cool stuff.....

Well, the fans we are discussing expel air *out* of the box, so the
heat carried by the air doesn't care about the downstream laminar or
turbulent state of the airflow.

However, noise generation does depend on the airflow state, since the
acoustic power is proportional to the 8th power of the turbulence eddy
speed (Lighthill 1952, 1954). This is why jet planes are noisy: their
turbulence is almost sonic. Airplane or helicopter propeller tips, or
fan blade ends, move closest to the sound speed, so most of the sound
is generated there.

The conclusion is that to keep a computer quiet one does best to use
large fans rotating at low speed. For the same air/heat output one
gets much less noise, especially if the airflow is laminar.

Dan
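The practical force of that v^8 scaling is easy to check with a
one-liner: acoustic power W ~ v^8 means the sound level shifts by
80*log10(v2/v1) dB, so halving the eddy (roughly, blade-tip) speed is
worth about 24 dB. A sketch with illustrative speed ratios only:

    import math

    def delta_db(speed_ratio):
        # W ~ v**8  =>  level change = 10*log10(ratio**8) = 80*log10(ratio)
        return 80.0 * math.log10(speed_ratio)

    print(delta_db(0.5))   # ~ -24.1 dB for half the tip speed
    print(delta_db(2.0))   # ~ +24.1 dB for double the tip speed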
From: diep at xs4all.nl (Vincent Diepeveen)
Date: Wed, 6 Jun 2012 18:56:20 +0200
Subject: Re: [Beowulf] Desktop fan recommendation

On Jun 6, 2012, at 4:56 PM, Lux, Jim (337C) wrote:
>> How much airflow per square centimeter do they generate?
>
> That's not typically how fans are rated.

Yeah, that was a creative way of meaning airspeed, it seems :)

For a small-diameter tube one needs a massive airspeed to still push
some hundreds of CFM through it.
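Vincent's worry in numbers: volume flow equals duct area times air
speed, so a few hundred CFM through a roughly 10 cm tube implies an
air speed far above the ~1000 linear-feet-per-minute quiet-duct rule
of thumb Jim mentioned earlier. A sketch with illustrative values:

    import math

    def airspeed(cfm, diameter_m):
        q = cfm * 0.000471947                    # CFM -> m^3/s
        area = math.pi * (diameter_m / 2.0) ** 2
        return q / area                          # m/s

    v = airspeed(200.0, 0.10)
    print("%.1f m/s = %.0f ft/min" % (v, v * 196.85))
    # -> ~12 m/s, ~2370 LFM: well above the ~1000 LFM rule of thumb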
Note that the new generation of fans has really improved a lot. I'm
very happy with that 12 cm Aerocool Shark. It's 7 euro a piece in
shops here (including 19% VAT, which by the way soon becomes 21%
here).

Do you happen to have a link for the type of fan you mean - one that
fits a small tube of around 10 cm diameter, is centrifugal, and has
big CFM and low noise? It will be interesting to toy with!

V-sign! (for the non-insiders - the 6th of June is D-Day)


From: ntmoore at gmail.com (Nathan Moore)
Date: Wed, 6 Jun 2012 12:00:15 -0500
Subject: Re: [Beowulf] Desktop fan recommendation

> Are you sure this one is easy to replace?

Yes, very easy to replace - about 8 Phillips screws. Unfortunately,
though, the shroud is fixed at 92mm or so, so a bigger, slower fan is
not possible.

> It seems it doesn't have a cooler for the CPU at all; as you say,
> it's some sort of cheapskate arrangement with tubing that pumps
> liquid through the socket, and then seemingly one fan does the
> cooling for both the PSU and the CPU at once.

Sort of. I think the "heat pipe" is essentially 4-5 copper tubes that
run to a fine-finned radiator, and the fan vents the radiator. It is
actually a fairly elegant, compact, and reliable design.


From: diep at xs4all.nl (Vincent Diepeveen)
Date: Wed, 6 Jun 2012 19:10:45 +0200
Subject: Re: [Beowulf] Desktop fan recommendation

How many RPM is that 9 cm fan? And how about that small tiny fan for
the PSU - isn't that one very noisy?
Cooling a PSU that has to deliver 220 watts or so surely needs lots of
CFM, and the tiny fans I know from rackmounts that produce 20+ CFM are
all rated 50+ decibels; add some aluminium around them and it's 65
decibels...


From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C))
Date: Wed, 6 Jun 2012 17:28:00 +0000
Subject: Re: [Beowulf] Desktop fan recommendation

I'm not sure that the acoustic noise from fans comes from actual
aerodynamic noise (i.e., it's not like a jet engine, or pressure/shock
waves from the blades). The blade tips are probably operating in a
low-speed incompressible flow regime.

For the low-speed fans typical of this application, noise comes much
more from incidental flow behavior and mechanical transmission (e.g.
the airflow from a blade hitting a stationary object and creating a
pulsed flow, which then hits the package side and makes it vibrate).
There's also surprisingly high noise in some fans from the DC
brushless motor: a cheap controller drives the windings with
square-edged pulses, so the torque has pulses, which are then
mechanically transmitted to the housing - a nice "whine" source for a
little 6000 RPM motor with a lot of poles.
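Both whine mechanisms Jim describes are tonal, so they show up at a
predictable frequency: events per revolution times revolutions per
second. A quick estimate; the RPM figures and event counts below are
invented examples, not measurements:

    def tone_hz(rpm, events_per_rev):
        # blade-pass or torque-ripple fundamental frequency
        return rpm / 60.0 * events_per_rev

    print(tone_hz(3000, 7))    # 7 blades past one strut at 3000 RPM -> 350 Hz
    print(tone_hz(6000, 12))   # 12 torque pulses/rev at 6000 RPM   -> 1200 Hz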
Actually, not all fans are set up to suck air out of the box. Blowing
in works better for heat transfer (you're pushing cold, dense air
rather than sucking warm, less dense air); most test equipment uses
the "suck in through a filter and pressurize the box" design approach.
I think PCs evolved the other way because the single fan was in the
power supply, and you didn't want to blow hot air, preheated by the
power supply, through the rest of the system. So it is set up as an
"exhaust from the PS box" fan.

And a lot of higher-performance PCs (like the Dell sitting on my desk)
use centrifugal fans (with variable speed, to boot).

Jim Lux


From: ntmoore at gmail.com (Nathan Moore)
Date: Wed, 6 Jun 2012 12:36:29 -0500
Subject: Re: [Beowulf] Desktop fan recommendation

> Actually, not all fans are set up to suck air out of the box.

Ha!

--
Nathan Moore
Winona, MN


From: diep at xs4all.nl (Vincent Diepeveen)
Date: Wed, 6 Jun 2012 19:45:45 +0200
Subject: Re: [Beowulf] Desktop fan recommendation

On Jun 6, 2012, at 7:28 PM, Lux, Jim (337C) wrote:
> Blowing in works better for heat transfer (you're pushing cold,
> dense air rather than sucking warm, less dense air) [...]

Exhausting was the most effective for PCs with what you'd probably
call 'low airspeed' fans when I measured this some years ago on a dual
K7 machine. It was far more effective than blowing in air.

The ballgame changes when you blow in at some massive, merciless CFM,
since getting the colder air to the CPU sooner starts to make a
difference. That is not so interesting for computers, though: I blew
in air at well past moped sound levels using some Delta fans, and by
then the machine was already cooled so well that the difference was
hard to measure. At that huge blow-in rate it was very effective
indeed, but only as total overkill. Actually, the machine's thin
aluminium started to bend under that huge air pressure, but I figured
that out only long after the experiment - a story for another time :)

> And a lot of higher-performance PCs (like the Dell sitting on my
> desk) use centrifugal fans (with variable speed, to boot).

When I googled centrifugal fans, I saw huge prices in the hundreds of
dollars. That would mean a centrifugal fan costs more than the entire
cluster, which seems a tad odd. So it's going to be the cheapskate
cardboard solution, with some duct tape, glue, and relatively cheap
fans.
From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C))
Date: Wed, 6 Jun 2012 20:07:10 +0000
Subject: Re: [Beowulf] Desktop fan recommendation

> When I googled centrifugal fans, I saw huge prices in the hundreds
> of dollars.

The fan in my Dell is plastic and cheap - I've seen them surplus for
under $5. What you want is often sold as a "squirrel cage blower".

The advantages are:
a) good performance against backpressure;
b) lots of very small blades, so the "blade repetition rate" noise is
high frequency, low amplitude, and easily absorbed;
c) decent performance at low rotation rates (500-1000 RPM).

They are the dominant device used in, for instance, heating and air
conditioning.

A good cheap source is automotive scrap yards. The blower that pushes
air through the heater core and all the various ducts in a car is well
suited to pushing a lot of air through a lot of loss, and typically
runs on 12 VDC. Make sure you get the housing too, not just the
squirrel cage and motor; this may require a bit of hacksawing on
modern cars.

Ones from upscale cars are quieter than ones from more downscale cars.
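For context (the thread asserts but never spells this out): the
standard fan affinity laws are why a large blower at 500-1000 RPM
wins - flow scales with RPM, pressure with RPM^2, and shaft power with
RPM^3, with measured noise typically falling steeply with speed. A
sketch with made-up baseline figures:

    def scale_fan(rpm_ratio, cfm, dp_in_h2o, watts):
        return (cfm * rpm_ratio,             # flow     ~ rpm
                dp_in_h2o * rpm_ratio ** 2,  # pressure ~ rpm^2
                watts * rpm_ratio ** 3)      # power    ~ rpm^3

    # run a hypothetical 220 CFM / 0.5" / 30 W blower at half speed:
    print(scale_fan(0.5, 220.0, 0.5, 30.0))  # -> (110.0, 0.125, 3.75)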
So find that Mercedes scrap, not the stuff from the DDR (Do Trabants even have heaters, or do you wear your good socialist overcoat)

Here's a typical item on eBay http://www.ebay.com/itm/Squirrel-Cage-Shaded-Pole-Blower-Fan-220-CFM-Dayton-60-available-/190685633240?pt=LH_DefaultDomain_0&hash=item2c65bfdad8

Here's a 12VDC one http://www.ebay.com/itm/454-CFM-12-VOLT-DC-SPAL-007-A42-32D-3-SPEED-CAB-FAN-BLOWER-16-1406-/270917027943?_trksid=p4340.m1982&_trkparms=aid%3D555000%26algo%3DPW.CURRENT%26ao%3D1%26asc%3D10%26meid%3D8950398135996031222%26pid%3D100009%26prg%3D1005%26rk%3D1

I've also seen somewhat larger versions of this as an appliance.. plastic housing, designed to be set on the floor to blow air for cooling or helping to dry recently mopped floors or wet carpets.

Here's one from a computer http://www.surpluscenter.com/item.asp?item=16-1151&catname=electric

I should point out that they make these in huge sizes (as in 1 million CFM) for applications like underground mine ventilation

_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From prentice at ias.edu Wed Jun 6 16:18:42 2012 From: prentice at ias.edu (Prentice Bisbal) Date: Wed, 06 Jun 2012 16:18:42 -0400 Subject: [Beowulf] Desktop fan reccommendation In-Reply-To: References: <4FCF2510.80502@unige.ch> <53bed0488afa16de22a4e9a658ec1290.squirrel@th.physik.uni-frankfurt.de> <4FCF9460.10500@unige.ch> <451F3CB3-B159-47D2-BAF1-0E62264A72EB@xs4all.nl> Message-ID: <4FCFBB22.7050608@ias.edu> On 06/06/2012 04:07 PM, Lux, Jim (337C) wrote: > > > > I've also seen somewhat larger versions of this as an appliance.. plastic housing, designed to be set on the floor to blow air for cooling or helping to dry recently mopped floors or wet carpets. > > You should be able to get one of these plastic housing ones from a janitorial supply company or an emergency services supply company. Janitors use them to dry wet floors, and firefighters use them to vent a house when CO limits are too high, or there's too much smoke in a house. You can get one from McMaster-Carr for $346.67 or $451.67. Click on "portable blowers" in the link below: http://www.mcmaster.com/#standard-air-blowers/ -- Prentice _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From diep at xs4all.nl Thu Jun 7 02:47:50 2012 From: diep at xs4all.nl (Vincent Diepeveen) Date: Thu, 7 Jun 2012 08:47:50 +0200 Subject: [Beowulf] Desktop fan reccommendation In-Reply-To: References: <4FCF2510.80502@unige.ch> <53bed0488afa16de22a4e9a658ec1290.squirrel@th.physik.uni-frankfurt.de> <4FCF9460.10500@unige.ch> <451F3CB3-B159-47D2-BAF1-0E62264A72EB@xs4all.nl> Message-ID: <586C6315-BB53-4599-8AE0-85044C5D4AC6@xs4all.nl>

This is all huge-decibel junk, man; any centrifugal fan is out of the question if it produces that much noise right next to my desk :) Generally speaking, I don't understand why so many people accept such huge sound levels from manufacturers. The fans I have here, the 18 cm ones, are 700 RPM @ 19 decibels; you don't hear them.
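A quick aside on how such figures combine (standard acoustics, not from the thread): sound levels from independent sources add on a power basis, not linearly,

    L_{total} = 10 \log_{10} \sum_i 10^{L_i/10}

so two identical 19 dB fans come to about 19 + 10\log_{10} 2 \approx 22 dB, while a single 60 dB blower swamps any number of 19 dB case fans.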
The 1500 RPM 12 cm Aerocools you do hear, but with their own rubber mounts, at 1.5 meters it's acceptable noise, though that depends a lot on the heatsinks you have; I also had to buy some sound-absorbing floor insulation material to quiet things down further. Requirement 1 is that it must be low noise, of course. It would be very bad to have something at 60-100 decibels next to professional sound equipment.

On Jun 6, 2012, at 10:07 PM, Lux, Jim (337C) wrote: > > > > -----Original Message----- > From: Vincent Diepeveen [mailto:diep at xs4all.nl] > Sent: Wednesday, June 06, 2012 10:46 AM > To: Lux, Jim (337C) > Cc: Daniel.Pfenniger at unige.ch; holway at th.physik.uni-frankfurt.de; > Beowulf Mailing List > Subject: Re: [Beowulf] Desktop fan reccommendation > > > On Jun 6, 2012, at 7:28 PM, Lux, Jim (337C) wrote: >> And a lot of higher performance PCs (like the Dell sitting on my >> desk) use centrifugal fans (with variable speed, to boot) >> > > When i googled on centrifugal fans, i saw huge prices in the hundreds > of dollars. > > Would mean the centrifugal fans are more expensive than the entire > cluster which seems a tad odd. > > So it's gonna be the cheapskate cardboard solution with some duct > tape and glue and relative cheap fans. > > --- > > The fan in my dell is plastic and cheap.. I've seen them surplus > for under $5.. > > But what you want is often sold as a "squirrel cage blower".. > > The advantages are: > a) good performance against backpressure > b) lots of very small blades, so the "blade repetition rate" noise > is high frequency, low amplitude and easily absorbed > c) they give decent performance at low rotation rates (500-1000 RPM) > > they are the dominant device used in, for instance, heating and > airconditioning. > > A good cheap source is from automotive scrap yards. The blower > that pushes the air through the heater core and all the various > ducts in a car is well suited to pushing a lot of air through a lot > of loss. 12VDC typically. Make sure you get the housing too, not > just the squirrel cage and motor. This may require a bit of > hacksawing on modern cars. > > Ones from upscale cars are quieter than more downscale cars. So > find that Mercedes scrap, not the stuff from the DDR (Do Trabants > even have heaters, or do you wear your good socialist overcoat) > > > Here's a typical item on eBay > http://www.ebay.com/itm/Squirrel-Cage-Shaded-Pole-Blower-Fan-220- CFM-Dayton-60-available-/190685633240? pt=LH_DefaultDomain_0&hash=item2c65bfdad8 > > Here's a 12VDC one > http://www.ebay.com/itm/454-CFM-12-VOLT-DC-SPAL-007-A42-32D-3-SPEED- CAB-FAN-BLOWER-16-1406-/270917027943? _trksid=p4340.m1982&_trkparms=aid%3D555000%26algo%3DPW.CURRENT%26ao% 3D1%26asc%3D10%26meid%3D8950398135996031222%26pid%3D100009%26prg% 3D1005%26rk%3D1 > > > I've also seen somewhat larger versions of this as an appliance.. > plastic housing, designed to be set on the floor to blow air for > cooling or helping to dry recently mopped floors or wet carpets.
> > Here's one from a computer > http://www.surpluscenter.com/item.asp?item=16-1151&catname=electric > > I should point out that they make these in huge sizes (as in 1 > million CFM) for applications like underground mine ventilation _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From bill at cse.ucdavis.edu Fri Jun 8 20:06:19 2012 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Fri, 08 Jun 2012 17:06:19 -0700 Subject: [Beowulf] Torrents for HPC Message-ID: <4FD2937B.6010408@cse.ucdavis.edu>

I've built Myrinet, SDR, DDR, and QDR clusters (no FDR yet), but I still have users whose use cases and budgets still only justify GigE.

I've set up a 160TB Hadoop cluster that is working well, but haven't found justification for the complexity/cost related to Lustre. I have high hopes for Ceph, but it seems not quite ready yet. I'd be happy to hear otherwise.

A new user on one of my GigE clusters submits batches of 500 jobs that need to randomly read a 30-60GB dataset. They aren't the only user of said cluster, so each job will be waiting in the queue with a mix of others.

As you might imagine, that hammers a central GigE-connected NFS server pretty hard. This cluster has 38 compute nodes/304 cores/608 threads.

I thought a torrent might be a good way to publish such a dataset to the compute nodes (thus avoiding the GigE bottleneck). So I wrote a small/simple bittorrent client and made a 16GB example data set and measured the performance pushing it to 38 compute nodes: http://cse.ucdavis.edu/bill/btbench-2.png

The slow ramp-up is partially because I'm launching torrent clients with a crude for i in { ssh $i launch_torrent.sh }.

I get approximately 2.5GB/sec sustained when writing to 38 compute nodes. So 38 nodes * 16GB = 608GB to distribute @ 2.5 GB/sec = 240 seconds or so.

The clients definitely see MUCH faster performance when accessing a local copy instead of a small share of the performance/bandwidth of a central file server.

Do you think it's worth bundling up for others to use?

This is how it works: 1) User runs publish before they start submitting jobs. 2) The publish command makes a torrent of that directory and starts seeding that torrent. 3) The user submits an arbitrary number of jobs that need that directory. Inside the job they "$ subscribe " 4) The subscribe command launches one torrent client per node (not per job) and blocks until the directory is completely downloaded 5) /scratch// has the user's data

Not nearly as convenient as having a fast parallel filesystem, but seems potentially useful for those who have large read-only datasets, GigE and NFS.

Thoughts?

_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From jlb17 at duke.edu Mon Jun 11 13:49:23 2012 From: jlb17 at duke.edu (Joshua Baker-LePain) Date: Mon, 11 Jun 2012 13:49:23 -0400 (EDT) Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu> References: <4FD2937B.6010408@cse.ucdavis.edu> Message-ID: On Fri, 8 Jun 2012 at 5:06pm, Bill Broadley wrote > Do you think it's worth bundling up for others to use? > > This is how it works: > 1) User runs publish before they start submitting > jobs.
> 2) The publish command makes a torrent of that directory and starts > seeding that torrent. > 3) The user submits an arbitrary number of jobs that needs that > directory. Inside the job they "$ subscribe " > 4) The subscribe command launches one torrent client per node (not per j > job) and blocks until the directory is completely downloaded > 5) /scratch// has the users data > > Not nearly as convenient as having a fast parallel filesystem, but seems > potentially useful for those who have large read only datasets, GigE and > NFS. > > Thoughts?

I would definitely be interested in a tool like this. Our situation is about as you describe -- we don't have the budget or workload to justify any interconnect higher-end than GigE, but have folks who pound our central storage to get at DBs stored there.

-- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From beckerjes at mail.nih.gov Mon Jun 11 14:02:43 2012 From: beckerjes at mail.nih.gov (Jesse Becker) Date: Mon, 11 Jun 2012 14:02:43 -0400 Subject: [Beowulf] Torrents for HPC In-Reply-To: References: <4FD2937B.6010408@cse.ucdavis.edu> Message-ID: <20120611180243.GN38490@mail.nih.gov> On Mon, Jun 11, 2012 at 01:49:23PM -0400, Joshua Baker-LePain wrote: >On Fri, 8 Jun 2012 at 5:06pm, Bill Broadley wrote > >> Do you think it's worth bundling up for others to use? >> >> This is how it works: >> 1) User runs publish before they start submitting >> jobs. >> 2) The publish command makes a torrent of that directory and starts >> seeding that torrent. >> 3) The user submits an arbitrary number of jobs that needs that >> directory. Inside the job they "$ subscribe " >> 4) The subscribe command launches one torrent client per node (not per j >> job) and blocks until the directory is completely downloaded >> 5) /scratch// has the users data >> >> Not nearly as convenient as having a fast parallel filesystem, but seems >> potentially useful for those who have large read only datasets, GigE and >> NFS. >> >> Thoughts? > >I would definitely be interested in a tool like this. Our situation is >about as you describe -- we don't have the budget or workload to justify >any interconnect higher-end than GigE, but have folks who pound our >central storage to get at DBs stored there.

I looked into doing something like this on a 50-node cluster to synchronize several hundred GB of semi-static data used in /scratch. I found that the time to build the torrent files--calculating checksums and such--was *far* more time consuming than the actual file distribution. This is on top of the rather severe IO hit on the "seed" box as well.

I fought with it for a while, but came to the conclusion that *for _this_ data*, and how quickly it changed, torrents weren't the way to go--largely because of the cost of creating the torrent in the first place.

However, I do think that similar systems could be very useful, if perhaps a bit less strict in their tests. The peer-to-peer model is useful, and (in some cases) a simple size/date check could be enough to determine when to (re)copy a file.

One thing torrents don't handle is file deletions, which opens up a few new problems.

Eventually, I moved to a distributed rsync tree, which worked for a while, but was slightly fragile.
Eventually, we dropped the whole thing when we purchased a sufficiently fast storage system. -- Jesse Becker NHGRI Linux support (Digicon Contractor) _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From landman at scalableinformatics.com Mon Jun 11 14:10:35 2012 From: landman at scalableinformatics.com (Joe Landman) Date: Mon, 11 Jun 2012 14:10:35 -0400 Subject: [Beowulf] Torrents for HPC In-Reply-To: <20120611180243.GN38490@mail.nih.gov> References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov> Message-ID: <4FD6349B.6000604@scalableinformatics.com> On 06/11/2012 02:02 PM, Jesse Becker wrote: > I looked into doing something like this on 50-node cluster to > synchronize several hundred GB of semi-static data used in /scratch. > I found that the time to build the torrent files--calculating checksums > and such--was *far* more time consuming than the actual file > distribution. This is on top of the rather severe IO hit on the "seed" > box as well. > A long while ago, we developed 'xcp' which did data distribution from 1 machine to many machines, and was quite fast (non-broadcast). Specifically for moving some genomic/proteomic databases to remote nodes. Didn't see much interest in it, so we shelved it. It worked like this xcp file remote_path [--nodes node1[,node2....]] [--all] We were working on generalizing it for directories and other things as well, but as I noted, people were starting to talk (breathlessly at the time) about torrents for distribution, so we pushed it off and forgot about it. > I fought with it for a while, but came to the conclusion that *for > _this_ data*, and how quickly it changed, torrents weren't the way to > go--largely because of the cost of creating the torrent in the first > place. > > However, I do think that similar systems could be very useful, if > perhaps a bit less strict in their tests. The peer-to-peer model is > uselful, and (in some cases) simple size/date check could be enough to > determine when (re)copying a file. > > One thing torrent's don't handle are file deletions, which opens up a > few new problems. > > Eventually, I moved to a distrbuted rsync tree, which worked for a > while, but was slightly fragile. Eventually, we dropped the whole > thing when we purchased a sufficiently fast storage system. This is one of the things that drove us to building fast storage systems. Data motion is hard, and a good fast storage unit with some serious data movement cannons and high power storage can solve the problem with greater ease/elegance. -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics Inc. 
email: landman at scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From bernard at vanhpc.org Mon Jun 11 14:17:53 2012 From: bernard at vanhpc.org (Bernard Li) Date: Mon, 11 Jun 2012 11:17:53 -0700 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD6349B.6000604@scalableinformatics.com> References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov> <4FD6349B.6000604@scalableinformatics.com> Message-ID:

Hi all: I'd also like to point you guys to pcp: http://www.theether.org/pcp/ It's a bit old, but should still build on modern systems. It would be nice if somebody picks up development after all these years (hint hint) :-) Cheers, Bernard

On Mon, Jun 11, 2012 at 11:10 AM, Joe Landman wrote: > On 06/11/2012 02:02 PM, Jesse Becker wrote: > >> I looked into doing something like this on 50-node cluster to >> synchronize several hundred GB of semi-static data used in /scratch. >> I found that the time to build the torrent files--calculating checksums >> and such--was *far* more time consuming than the actual file >> distribution. This is on top of the rather severe IO hit on the "seed" >> box as well. >> > > A long while ago, we developed 'xcp' which did data distribution from 1 > machine to many machines, and was quite fast (non-broadcast). > Specifically for moving some genomic/proteomic databases to remote > nodes. Didn't see much interest in it, so we shelved it. It worked > like this > >     xcp file remote_path [--nodes node1[,node2....]] [--all] > > We were working on generalizing it for directories and other things as > well, but as I noted, people were starting to talk (breathlessly at the > time) about torrents for distribution, so we pushed it off and forgot > about it. >> I fought with it for a while, but came to the conclusion that *for >> _this_ data*, and how quickly it changed, torrents weren't the way to >> go--largely because of the cost of creating the torrent in the first >> place. >> >> However, I do think that similar systems could be very useful, if >> perhaps a bit less strict in their tests. The peer-to-peer model is >> uselful, and (in some cases) simple size/date check could be enough to >> determine when (re)copying a file. >> >> One thing torrent's don't handle are file deletions, which opens up a >> few new problems. >> >> Eventually, I moved to a distrbuted rsync tree, which worked for a >> while, but was slightly fragile. Eventually, we dropped the whole >> thing when we purchased a sufficiently fast storage system. > This is one of the things that drove us to building fast storage > systems. Data motion is hard, and a good fast storage unit with some > serious data movement cannons and high power storage can solve the > problem with greater ease/elegance. > > > -- > Joseph Landman, Ph.D > Founder and CEO > Scalable Informatics Inc. > email: landman at scalableinformatics.com > web : http://scalableinformatics.com >       http://scalableinformatics.com/sicluster > phone: +1 734 786 8423 x121 > fax : +1 866 888 3112 > cell : +1 734 612 4615 > > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From stewart at serissa.com Mon Jun 11 17:37:03 2012 From: stewart at serissa.com (Lawrence Stewart) Date: Mon, 11 Jun 2012 17:37:03 -0400 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD6349B.6000604@scalableinformatics.com> References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov> <4FD6349B.6000604@scalableinformatics.com> Message-ID:

Another one of these file distribution things is "sbcast" from the slurm resource manager. It was amazingly fast to distribute a modest size file to all 972 nodes of the large Sicortex machine. I didn't try it with large files.

_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From skylar.thompson at gmail.com Mon Jun 11 20:34:35 2012 From: skylar.thompson at gmail.com (Skylar Thompson) Date: Mon, 11 Jun 2012 17:34:35 -0700 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu> References: <4FD2937B.6010408@cse.ucdavis.edu> Message-ID: <4FD68E9B.6010204@gmail.com> On 6/8/2012 5:06 PM, Bill Broadley wrote: > > I've built Myrinet, SDR, DDR, and QDR clusters ( no FDR yet), but I > still have users whose use cases and budgets still only justify GigE. > > I've setup a 160TB hadoop cluster is working well, but haven't found > justification for the complexity/cost related to lustre. I have high > hopes for Ceph, but it seems not quite ready yet. I'd happy to hear > otherwise. > > A new user on one of my GigE clusters submits batches of 500 jobs that > need to randomly read a 30-60GB dataset. They aren't the only user of > said cluster so each job will be waiting in the queue with a mix of others. > > As you might imagine that hammers a central GigE connected NFS server > pretty hard. This cluster has 38 computes/304 cores/608 threads. > > I thought torrent might be a good way to publish such a dataset to the > compute nodes (thus avoiding the GigE bottleneck). So I wrote a > small/simple bittorrent client and made a 16GB example data set and > measured the performance pushing it to 38 compute nodes: > http://cse.ucdavis.edu/bill/btbench-2.png > > The slow ramp up is partially because I'm launching torrent clients with > a crude for i in { ssh $i launch_torrent.sh }. > > I get approximately 2.5GB/sec sustained when writing to 38 compute > nodes. So 38 nodes * 16GB = 608GB to distribute @ 2.5 GHz sec = 240 > seconds or so. > > The clients definitely see MUCH faster performance when access a local > copy instead of a small share of the performance/bandwidth of a central > file server. > > Do you think it's worth bundling up for others to use? > > This is how it works: > 1) User runs publish before they start submitting > jobs.
> 2) The publish command makes a torrent of that directory and starts > seeding that torrent. > 3) The user submits an arbitrary number of jobs that needs that > directory. Inside the job they "$ subscribe " > 4) The subscribe command launches one torrent client per node (not per j > job) and blocks until the directory is completely downloaded > 5) /scratch// has the users data > > Not nearly as convenient as having a fast parallel filesystem, but seems > potentially useful for those who have large read only datasets, GigE and > NFS. > > Thoughts?

We've run into a similar need for a solution at $WORK. I work in a large genomics research department and we have cluster users who want to copy large data files (20GB-500GB) to hundreds of cluster nodes at once. Since the people that need this tend to run MPI anyways, I wrote an MPI utility that copies a file once to every node in the job, taking care to make sure each node only gets one copy of the file and to copy the file only if its SHA1 hash changes.

Skylar _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From atp at piskorski.com Tue Jun 12 01:54:10 2012 From: atp at piskorski.com (Andrew Piskorski) Date: Tue, 12 Jun 2012 01:54:10 -0400 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD6349B.6000604@scalableinformatics.com> References: <4FD6349B.6000604@scalableinformatics.com> Message-ID: <20120612055410.GA45268@piskorski.com> On Mon, Jun 11, 2012 at 02:10:35PM -0400, Joe Landman wrote: > A long while ago, we developed 'xcp' which did data distribution from 1 > machine to many machines, and was quite fast (non-broadcast). Sounds very similar to nettee. Can you compare/contrast the two? http://saf.bio.caltech.edu/nettee.html > We were working on generalizing it for directories and other things as > well, Ah. Nettee can only handle that sort of thing by playing games with tar, which isn't terribly user friendly. -- Andrew Piskorski http://www.piskorski.com/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From atp at piskorski.com Tue Jun 12 02:37:35 2012 From: atp at piskorski.com (Andrew Piskorski) Date: Tue, 12 Jun 2012 02:37:35 -0400 Subject: [Beowulf] Torrents for HPC In-Reply-To: <20120611180243.GN38490@mail.nih.gov> References: <20120611180243.GN38490@mail.nih.gov> Message-ID: <20120612063735.GB45268@piskorski.com> On Mon, Jun 11, 2012 at 02:02:43PM -0400, Jesse Becker wrote: > I found that the time to build the torrent files--calculating checksums > and such--was *far* more time consuming than the actual file > distribution. This is on top of the rather severe IO hit on the "seed" > box as well.

Hm, I wonder if zsync does better: http://zsync.moria.org.uk/

Just now with zsync v0.6.1 (from 2009), running zsyncmake on a 696 MB *.iso file took 9.7 seconds on my (rather pedestrian) desktop. That was reading from and writing to the same SATA disk, and it used one cpu core at about 80% the whole time. When I ran two zsyncmakes at once, each one took twice as long and only used 40% cpu, so that 70 MB/s clearly seems to be limited by disk I/O on this machine, not CPU.
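To reproduce a measurement like Andrew's, a small timing harness is enough. A sketch, assuming only that zsyncmake is on the PATH and is given a single file argument (its most basic invocation; it emits a .zsync control file of block checksums):

    import os
    import subprocess
    import sys
    import time

    def make_zsync(path):
        # Time a single zsyncmake run and report rough throughput.
        start = time.time()
        subprocess.check_call(["zsyncmake", path])
        elapsed = time.time() - start
        size_mb = os.path.getsize(path) / 1e6
        print("%s: %.0f MB in %.1f s -> %.0f MB/s"
              % (path, size_mb, elapsed, size_mb / elapsed))

    if __name__ == "__main__":
        for f in sys.argv[1:]:   # e.g. python ztime.py big.iso
            make_zsync(f)

Running two of these at once against the same disk is how one would confirm Andrew's observation that the bottleneck is the disk, not the checksumming itself.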
-- Andrew Piskorski http://www.piskorski.com/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From dnlombar at ichips.intel.com Tue Jun 12 11:19:08 2012 From: dnlombar at ichips.intel.com (David N. Lombard) Date: Tue, 12 Jun 2012 08:19:08 -0700 Subject: [Beowulf] Torrents for HPC In-Reply-To: References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov> <4FD6349B.6000604@scalableinformatics.com> Message-ID: <20120612151908.GA18824@nlxcldnl2.cl.intel.com> On Mon, Jun 11, 2012 at 11:17:53AM -0700, Bernard Li wrote: > Hi all: > > I'd also like to point you guys to pcp: > > http://www.theether.org/pcp/ > > It's a bit old, but should still build on modern systems. It would be > nice if somebody picks up development after all these years (hint > hint) :-) >

+1 for pcp. It's one of my /favorites/ from the past. As it did a pipeline file transfer over a tree, it was only a tad slower than a single point-to-point copy. Brent Chun (the author) also wrote a related, amazingly fast parallel execution utility, gexec.

-- David N. Lombard, Intel, Irvine, CA I do not speak for Intel Corporation; all comments are strictly my own. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From ellis at cse.psu.edu Tue Jun 12 13:56:22 2012 From: ellis at cse.psu.edu (Ellis H. Wilson III) Date: Tue, 12 Jun 2012 13:56:22 -0400 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu> References: <4FD2937B.6010408@cse.ucdavis.edu> Message-ID: <4FD782C6.7050704@cse.psu.edu> On 06/08/12 20:06, Bill Broadley wrote: > A new user on one of my GigE clusters submits batches of 500 jobs that > need to randomly read a 30-60GB dataset. They aren't the only user of > said cluster so each job will be waiting in the queue with a mix of others.

With a 160TB cluster and only a 30-60GB dataset, is there any reason why the user isn't simply storing their dataset in HDFS? Does the data change frequently via a non-MapReduce framework such that it needs to be pulled from NFS before every job? If the dataset is in a few dozen files and in HDFS in the cluster, there is no reason why MapReduce shouldn't spawn its tasks directly "on" the data, without need (most of the time) for moving all of the data to every node as you mention.

> The clients definitely see MUCH faster performance when access a local > copy instead of a small share of the performance/bandwidth of a central > file server.

This makes perfect sense, and is in fact exactly what Hadoop already attempts to do by trying to co-locate MapReduce tasks with pre-placed data in HDFS. Hadoop tries to move the computation to the data in this case, rather than what you are trying to do: move the data to the computation, which tends to be /way/ harder unless you've got killer storage. All of this said, it is unclear from your email whether this user is using Hadoop or if that was just a side-note and they are operating in a totally different cluster with a different framework (MPI?).
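For concreteness, the HDFS operations involved here are ordinary shell-level commands. A rough sketch, not from the thread (the paths and the replication factor of 10 are made up; the right factor depends on the cluster):

    import subprocess

    DATASET = "/user/alice/dataset"   # hypothetical HDFS path

    def run(cmd):
        print("+ " + " ".join(cmd))
        subprocess.check_call(cmd)

    # Load the read-mostly dataset into HDFS once.
    run(["hadoop", "fs", "-put", "/nfs/exports/dataset", DATASET])

    # Raise the replication factor so most nodes hold a local copy;
    # -w blocks until the extra replicas have actually been written.
    run(["hadoop", "fs", "-setrep", "-w", "10", DATASET])

    # Sanity-check where the blocks actually landed.
    run(["hadoop", "fsck", DATASET, "-files", "-blocks", "-locations"])

With replication widened like this, the scheduler's move-computation-to-data behavior Ellis describes has many more nodes to choose from.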
Best, ellis _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From orion at cora.nwra.com Tue Jun 12 17:06:19 2012 From: orion at cora.nwra.com (Orion Poplawski) Date: Tue, 12 Jun 2012 15:06:19 -0600 Subject: [Beowulf] Torrents for HPC In-Reply-To: References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov> <4FD6349B.6000604@scalableinformatics.com> Message-ID: <4FD7AF4B.7030500@cora.nwra.com> On 06/11/2012 12:17 PM, Bernard Li wrote: > Hi all: > > I'd also like to point you guys to pcp: > > http://www.theether.org/pcp/ > > It's a bit old, but should still build on modern systems. It would be > nice if somebody picks up development after all these years (hint > hint) :-)

Hmm, the home page indicates it went into ganglia, but it's not there now. Anyone know what happened?

-- Orion Poplawski Technical Manager 303-415-9701 x222 NWRA, Boulder Office FAX: 303-415-9702 3380 Mitchell Lane orion at nwra.com Boulder, CO 80301 http://www.nwra.com _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From bernard at vanhpc.org Tue Jun 12 17:27:41 2012 From: bernard at vanhpc.org (Bernard Li) Date: Tue, 12 Jun 2012 14:27:41 -0700 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD7AF4B.7030500@cora.nwra.com> References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov> <4FD6349B.6000604@scalableinformatics.com> <4FD7AF4B.7030500@cora.nwra.com> Message-ID: Hi Orion: On Tue, Jun 12, 2012 at 2:06 PM, Orion Poplawski wrote: > Hmm, the home page indicates it went into ganglia, but it's not there now. > Anyone know what happened? The code is here: http://ganglia.svn.sf.net/viewvc/ganglia/trunk/gexec/pcp/ Perhaps Brent could update the page with the direct link? Thanks, Bernard _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From bill at cse.ucdavis.edu Tue Jun 12 18:42:47 2012 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Tue, 12 Jun 2012 15:42:47 -0700 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu> References: <4FD2937B.6010408@cse.ucdavis.edu> Message-ID: <4FD7C5E7.3020803@cse.ucdavis.edu>

Many thanks for the online and offline feedback. I've been reviewing the mentioned alternatives. From what I can tell, none of them allow nodes to join/leave at random. Our problem is that a user might submit 500-50,000 jobs that depend on a particular dataset and have a variable number of jobs/nodes running at any given time.

So ideally each node that a job lands on would do something like: 1) Is this node subscribed to this dataset? If not, start a client. 2) Is the dataset completely downloaded? If not, wait. (A rough sketch of this per-node logic appears below.)

Because of the node churn we didn't want the send approach. We also wanted to handle multiple file transfers of multiple directories for multiple users at once. From what I can tell, most (all?) other approaches assume a mostly idle network and don't robustly handle cases where 1/3rd of the nodes have highly contended links. Because we are using the links for MPI, NFS, and torrents we didn't want to use an approach that wasn't robust with highly variable per-node bandwidth.

Any comments on how well the various alternatives work with a busy network? Seems like any tree-based approach would have problems.

As for the torrent creation process: my small 5-disk RAID manages 300-400MB/sec and sustains around 80% of that when creating torrents. The tool looks single-threaded, but the work is parallel-friendly and easy to parallelize. From what I can tell, though, torrent creation is I/O limited, at least for us. I already have some parallel checksumming code around for another project; I could likely tweak it to create torrents if people out there think this is a real bottleneck. I like the torrent behavior of guaranteed file integrity and self-healing files.

Using MPI does make quite a bit of sense for clusters with high-speed interconnects, although I suspect being network-bound for IO is less of a problem there. I'd consider it though; I do have SDR/DDR/QDR clusters around, but so far (knock on wood) they're not IO limited. I've done a fair bit of MPI programming, but I'm not sure it's easy/possible to have nodes dynamically join/leave. Worst case I guess you could launch a thread/process for each pair of peers that wanted to trade blocks and still use TCP for swapping metadata about which peers to connect to and which blocks to trade.

_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From skylar.thompson at gmail.com Tue Jun 12 18:47:14 2012 From: skylar.thompson at gmail.com (Skylar Thompson) Date: Tue, 12 Jun 2012 15:47:14 -0700 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD7C5E7.3020803@cse.ucdavis.edu> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD7C5E7.3020803@cse.ucdavis.edu> Message-ID: <4FD7C6F2.20003@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 06/12/2012 03:42 PM, Bill Broadley wrote: > Using MPI does make quite a bit of sense for clusters with high > speed interconnects. Although I suspect that being network bound > for IO is less of a problem. I'd consider it though, I do have > sdr/ddr/qdr clusters around, but so far (knock on wood) not IO > limited. I've done a fair bit of MPI programming, but I'm not sure > it's easy/possible to have nodes dynamically join/leave. Worst > case I guess you could launch a thread/process for each pair of > peers that wanted to trade blocks and still use TCP for swapping > metadata about what peers to connect to and block to trade.

We manage this by having users run this in the same Grid Engine parallel environment they run their job in. This means they're guaranteed to run the sync job on the same nodes their actual job runs on. The copied files change so slowly that even a 1GbE network is rarely a bottleneck, since we only transfer files that are changed.
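Picking up the two per-node checks Bill describes above (subscribe if not already subscribed; block until the dataset is complete), a minimal sketch of what that logic could look like. The scratch path and the launch_torrent.sh helper are hypothetical, borrowed from his crude-launch example; nothing here is his actual implementation:

    import os
    import subprocess
    import time

    SCRATCH = "/scratch"  # assumed local scratch root

    def subscribe(user, dataset, torrent_url, poll=10):
        """Idempotent per-node subscribe: one client per node per
        dataset, then block until the download is complete."""
        workdir = os.path.join(SCRATCH, user, dataset)
        lock = workdir + ".client-started"
        done = workdir + ".complete"
        os.makedirs(workdir, exist_ok=True)

        # 1) Is this node already subscribed? If not, start a client.
        #    O_CREAT|O_EXCL makes check-and-start atomic across the
        #    many jobs that may land on this node at once.
        try:
            fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            subprocess.Popen(["launch_torrent.sh", torrent_url, workdir])
        except FileExistsError:
            pass  # another job on this node beat us to it

        # 2) Is the dataset completely downloaded? If not, wait.
        #    Assumes the client drops a marker file once it starts seeding.
        while not os.path.exists(done):
            time.sleep(poll)
        return workdir

Because the lock and marker live on the node's local disk, any number of queued jobs can call subscribe() concurrently and exactly one client runs per node per dataset, which is the churn-tolerant behavior Bill is after.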
Skylar -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk/XxvAACgkQsc4yyULgN4b6dACfb5KIcql9wAbcudIKiO+IMrHX xS4An1caTjSp0MOCgb4Ach6h8ynQe7CF =LE07 -----END PGP SIGNATURE----- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From bill at cse.ucdavis.edu Tue Jun 12 18:59:46 2012 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Tue, 12 Jun 2012 15:59:46 -0700 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD7C6F2.20003@gmail.com> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD7C5E7.3020803@cse.ucdavis.edu> <4FD7C6F2.20003@gmail.com> Message-ID: <4FD7C9E2.4020506@cse.ucdavis.edu> On 06/12/2012 03:47 PM, Skylar Thompson wrote: > We manage this by having users run this in the same Grid Engine > parallel environment they run their job in. This means they're > guaranteed to run the sync job on the same nodes their actual job runs > on. The copied files change so slowly that even on 1GbE network is > rarely a bottleneck, since we only transfer files that are changed.

Our problem is we have many users and don't want 50,000 30-minute jobs to turn into one giant job that defeats the priority system while running. With an array job users can get 100% of the cluster if it's idle and quickly decay to their fair share when other higher priority jobs run. That way we can have the cluster 100% utilized, but new jobs (from users using less than their fair share) can get through the queue (which might well be months long) quickly.

_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From prentice at ias.edu Wed Jun 13 09:16:23 2012 From: prentice at ias.edu (Prentice Bisbal) Date: Wed, 13 Jun 2012 09:16:23 -0400 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu> References: <4FD2937B.6010408@cse.ucdavis.edu> Message-ID: <4FD892A7.4010207@ias.edu>

Bill, Thanks for sharing this. I've often wondered if using BitTorrent in this way would be useful for HPC. Thanks for answering that question! Prentice

On 06/08/2012 08:06 PM, Bill Broadley wrote: > I've built Myrinet, SDR, DDR, and QDR clusters ( no FDR yet), but I > still have users whose use cases and budgets still only justify GigE. > > I've setup a 160TB hadoop cluster is working well, but haven't found > justification for the complexity/cost related to lustre. I have high > hopes for Ceph, but it seems not quite ready yet. I'd happy to hear > otherwise. > > A new user on one of my GigE clusters submits batches of 500 jobs that > need to randomly read a 30-60GB dataset. They aren't the only user of > said cluster so each job will be waiting in the queue with a mix of others. > > As you might imagine that hammers a central GigE connected NFS server > pretty hard. This cluster has 38 computes/304 cores/608 threads. > > I thought torrent might be a good way to publish such a dataset to the > compute nodes (thus avoiding the GigE bottleneck).
So I wrote a > small/simple bittorrent client and made a 16GB example data set and > measured the performance pushing it to 38 compute nodes: > http://cse.ucdavis.edu/bill/btbench-2.png > > The slow ramp up is partially because I'm launching torrent clients with > a crude for i in { ssh $i launch_torrent.sh }. > > I get approximately 2.5GB/sec sustained when writing to 38 compute > nodes. So 38 nodes * 16GB = 608GB to distribute @ 2.5 GHz sec = 240 > seconds or so. > > The clients definitely see MUCH faster performance when access a local > copy instead of a small share of the performance/bandwidth of a central > file server. > > Do you think it's worth bundling up for others to use? > > This is how it works: > 1) User runs publish before they start submitting > jobs. > 2) The publish command makes a torrent of that directory and starts > seeding that torrent. > 3) The user submits an arbitrary number of jobs that needs that > directory. Inside the job they "$ subscribe " > 4) The subscribe command launches one torrent client per node (not per j > job) and blocks until the directory is completely downloaded > 5) /scratch// has the users data > > Not nearly as convenient as having a fast parallel filesystem, but seems > potentially useful for those who have large read only datasets, GigE and > NFS. > > Thoughts? > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From bs_lists at aakef.fastmail.fm Wed Jun 13 09:40:09 2012 From: bs_lists at aakef.fastmail.fm (Bernd Schubert) Date: Wed, 13 Jun 2012 15:40:09 +0200 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu> References: <4FD2937B.6010408@cse.ucdavis.edu> Message-ID: <4FD89839.2040904@aakef.fastmail.fm> On 06/09/2012 02:06 AM, Bill Broadley wrote: > > I've built Myrinet, SDR, DDR, and QDR clusters ( no FDR yet), but I > still have users whose use cases and budgets still only justify GigE. > > I've setup a 160TB hadoop cluster is working well, but haven't found > justification for the complexity/cost related to lustre. I have high > hopes for Ceph, but it seems not quite ready yet. I'd happy to hear > otherwise. >

What about an easy-to-set-up cluster file system such as FhGFS? As one of its developers I'm a bit biased of course, but then I'm also familiar with Lustre, and I think FhGFS is far easier to set up. We also do not have a problem with running clients and servers on the same node, and some of our customers make heavy use of that, using their compute nodes as storage servers. That should provide the same or better throughput than your torrent system.
Cheers, Bernd _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From landman at scalableinformatics.com Wed Jun 13 09:55:39 2012 From: landman at scalableinformatics.com (Joe Landman) Date: Wed, 13 Jun 2012 09:55:39 -0400 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD89839.2040904@aakef.fastmail.fm> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> Message-ID: <4FD89BDB.4050100@scalableinformatics.com> On 06/13/2012 09:40 AM, Bernd Schubert wrote: > On 06/09/2012 02:06 AM, Bill Broadley wrote: >> >> I've built Myrinet, SDR, DDR, and QDR clusters ( no FDR yet), but I >> still have users whose use cases and budgets still only justify GigE. >> >> I've setup a 160TB hadoop cluster is working well, but haven't found >> justification for the complexity/cost related to lustre. I have high >> hopes for Ceph, but it seems not quite ready yet. I'd happy to hear >> otherwise. >> > > What about an easy to setup cluster file system such as FhGFS? As one of > its developers I'm a bit biased of course, but then I'm also familiar > with Lustre, an I think FhGFS is far more easiy to setup. We also do not > have the problem to run clients and servers on the same node and so of > our customers make heavy use of that and use their compute nodes as > storage servers. That should a provide the same or better throughput as > your torrent system.

I'd like to chime in and note that we have customers re-implementing storage with FhGFS. Ceph will be good. You can build a reasonable system today with xfs as the backing store. The RADOS device is an excellent basis for building reliable systems.

Generally speaking none of the cluster file systems will solve the specific problem in the original post, though some of the cluster file systems (and various implementation details) will make the problem described much less of a problem.

-- Joseph Landman, Ph.D Founder and CEO Scalable Informatics Inc. email: landman at scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From prentice at ias.edu Wed Jun 13 10:04:47 2012 From: prentice at ias.edu (Prentice Bisbal) Date: Wed, 13 Jun 2012 10:04:47 -0400 Subject: [Beowulf] Status of beowulf.org? Message-ID: <4FD89DFF.9020708@ias.edu>

I know this came up recently. I just wanted to see if any new information has surfaced.

Does anyone know what the status of beowulf.org is? I will be starting a new job in a few weeks, and I'm in the process of unsubscribing from all the mailing lists I subscribe to at my current job. Following the link to the beowulf.org mailman page to control my subscription results in

The connection has timed out The server at www.beowulf.org is taking too long to respond.

Looks like I'll be unsubscribing through e-mail commands, but I'm worried about how difficult it will be to re-subscribe once I start the new job.

-- Prentice _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From landman at scalableinformatics.com Wed Jun 13 10:11:08 2012 From: landman at scalableinformatics.com (Joe Landman) Date: Wed, 13 Jun 2012 10:11:08 -0400 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FD89DFF.9020708@ias.edu> References: <4FD89DFF.9020708@ias.edu> Message-ID: <4FD89F7C.2070302@scalableinformatics.com> On 06/13/2012 10:04 AM, Prentice Bisbal wrote: > I know this came up recently. I just wanted to see if any new > information has surfaced. > > Does anyone know what the status of beowulf.org is? I will be starting a

This is part of Penguin Computing, and may have withered a bit since Don Becker left.

> new job in few weeks, and I'm in the process of unsubscribing from all > the mailing lists I subscribe to at my current job. Following the link > to the beowulf.org mailman page to control my subscription results in > > The connection has timed out > The server at www.beowulf.org is taking too long to respond. > > > Looks like I'll be unsubscribing through e-mail commands, but I'm > worried about how difficult it will be to re-subscribe once I start the > new job.

If Penguin doesn't want to handle hosting it anymore, please let us know (and feel free to contact me offline, we'd be happy to either host it, or set it up on EC2 or sumthin).

-- Joseph Landman, Ph.D Founder and CEO Scalable Informatics Inc. email: landman at scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From eagles051387 at gmail.com Wed Jun 13 10:13:33 2012 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Wed, 13 Jun 2012 16:13:33 +0200 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FD89F7C.2070302@scalableinformatics.com> References: <4FD89DFF.9020708@ias.edu> <4FD89F7C.2070302@scalableinformatics.com> Message-ID: <72779032-28C9-447F-A852-E38D3529D9CF@gmail.com>

I too am willing to host it, granted I am only a lurker; clustering is something that still highly interests me.

Regards Jonathan Aquilina

On 13 Jun 2012, at 16:11, Joe Landman wrote: > On 06/13/2012 10:04 AM, Prentice Bisbal wrote: >> I know this came up recently. I just wanted to see if any new >> information has surfaced. >> >> Does anyone know what the status of beowulf.org is? I will be starting a > > This is part of Penguin Computing, and may have whithered a bit since > Don Becker left. > >> new job in few weeks, and I'm in the process of unsubscribing from all >> the mailing lists I subscribe to at my current job. Following the link >> to the beowulf.org mailman page to control my subscription results in >> >> The connection has timed out >> The server at www.beowulf.org is taking too long to respond. >> >> >> Looks like I'll be unsubscribing through e-mail commands, but I'm >> worried about how difficult it will be to re-subscribe once I start the >> new job. > > If Penguin doesn't want to handle hosting it anymore, please let us know > (and feel free to contact me offline, we'd be happy to either host it, > or set it up on EC2 or sumthin). > >> > > > -- > Joseph Landman, Ph.D > Founder and CEO > Scalable Informatics Inc. > email: landman at scalableinformatics.com > web : http://scalableinformatics.com > http://scalableinformatics.com/sicluster > phone: +1 734 786 8423 x121 > fax : +1 866 888 3112 > cell : +1 734 612 4615 > > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

From j.wender at science-computing.de Wed Jun 13 10:28:19 2012 From: j.wender at science-computing.de (Jan Wender) Date: Wed, 13 Jun 2012 16:28:19 +0200 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <72779032-28C9-447F-A852-E38D3529D9CF@gmail.com> References: <4FD89DFF.9020708@ias.edu> <4FD89F7C.2070302@scalableinformatics.com> <72779032-28C9-447F-A852-E38D3529D9CF@gmail.com> Message-ID: <4FD8A383.9030904@science-computing.de>

Hi all, I tried again to reach Arend at Penguin, now using another email address. Will keep you posted. Cheerio, Jan

-- ---- Company Information ---- Vorstandsvorsitzender: Gerd-Lothar Leonhart Vorstand: Dr. Bernd Finkbeiner, Dr. Arno Steitz, Dr. Ingrid Zech Vorsitzender des Aufsichtsrats: Philippe Miltin Sitz: Tuebingen Registergericht: Stuttgart Registernummer: HRB 382196 -- Mailscanner: Clean -------------- next part -------------- A non-text attachment was scrubbed... Name: j_wender.vcf Type: text/x-vcard Size: 340 bytes Desc: not available URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

From pc7 at sanger.ac.uk Wed Jun 13 10:59:58 2012 From: pc7 at sanger.ac.uk (Peter) Date: Wed, 13 Jun 2012 15:59:58 +0100 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD782C6.7050704@cse.psu.edu> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD782C6.7050704@cse.psu.edu> Message-ID: <4FD8AAEE.8060103@sanger.ac.uk> On 12/06/12 18:56, Ellis H. Wilson III wrote: > On 06/08/12 20:06, Bill Broadley wrote: >> A new user on one of my GigE clusters submits batches of 500 jobs that >> need to randomly read a 30-60GB dataset. They aren't the only user of >> said cluster so each job will be waiting in the queue with a mix of others. > With a 160TB cluster and only a 30-60GB dataset, is there any reason why > the user isn't simply storing their dataset in HDFS? Does the data > change frequently via a non-MapReduce framework such that it needs to be > pulled from NFS before every job? If the dataset is in a few dozen > files and in HDFS in the cluster, there is no reason why MapReduce > shouldn't spawn it's tasks directly "on" the data, without need (most of > the time) for moving all of the data to every node as you mention.
From experience this can have varied results and still requires careful management/thought. With HDFS, if the replication number is 3 (often the default) and the 30-node cluster has 500 jobs, then either an initial step is required to replicate the data to all the other cluster nodes before performing the analysis (which imposes the expected network/disk IO impact on top of the job start-up latency already in place), or, alternatively, the replication is kept at 3 (or a.n.other defined number) and the number of jobs is limited to the resources where the data replicas already exist. The challenge is finding the sweet spot for the work in progress, and as always nothing is ever free.

So HDFS does not remove the replication process, although it helps to hide the processes involved.

The other joy we encountered with HDFS is that it can be less than stable in a multi-user environment; this has been confirmed by various others, so as always care is required during testing.

There are alternatives to HDFS which can be used in conjunction with Hadoop, but I'm afraid I'm not able to recommend any in particular as it's been a while since I last kicked the tyres. Is this something that others have more recent experience with and can recommend an alternative?

Pete

-- The Wellcome Trust Sanger Institute is operated by Genome Research Limited, a charity registered in England with number 1021457 and a company registered in England with number 2742969, whose registered office is 215 Euston Road, London, NW1 2BE. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean

From pc7 at sanger.ac.uk Wed Jun 13 11:13:01 2012 From: pc7 at sanger.ac.uk (Peter) Date: Wed, 13 Jun 2012 16:13:01 +0100 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD89839.2040904@aakef.fastmail.fm> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> Message-ID: <4FD8ADFD.4070707@sanger.ac.uk> > What about an easy to setup cluster file system such as FhGFS? As one of > its developers I'm a bit biased of course, but then I'm also familiar > with Lustre, an I think FhGFS is far more easiy to setup. We also do not > have the problem to run clients and servers on the same node and so of > our customers make heavy use of that and use their compute nodes as > storage servers. That should a provide the same or better throughput as > your torrent system. > > Cheers, > Bernd

An interesting idea. There is at least one storage vendor which has more cores on its controllers than are required to provide access to the disk subsystems. They have made various inroads in placing a virtualisation layer over these and making them available for other tasks... compute, iRODS, etc. Adding this to something like the above, or Stork (http://stork.cse.buffalo.edu/), could be interesting.

Pete

-- The Wellcome Trust Sanger Institute is operated by Genome Research Limited, a charity registered in England with number 1021457 and a company registered in England with number 2742969, whose registered office is 215 Euston Road, London, NW1 2BE.
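A back-of-envelope way to see the replication trade-off Peter describes in his earlier message (standard reasoning, not from the thread): with r replicas of a block spread over N nodes, the chance that a randomly chosen node holds a copy of that block is roughly

    \Pr[\text{local}] \approx r / N

so r = 3 on 30 nodes gives about 1 in 10, while pushing r toward N buys near-total locality at the cost of exactly the replication traffic one was trying to avoid.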
_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From ellis at cse.psu.edu Wed Jun 13 07:21:58 2012 From: ellis at cse.psu.edu (Ellis H. Wilson III) Date: Wed, 13 Jun 2012 07:21:58 -0400 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD8AAEE.8060103@sanger.ac.uk> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD782C6.7050704@cse.psu.edu> <4FD8AAEE.8060103@sanger.ac.uk> Message-ID: <4FD877D6.5030404@cse.psu.edu> On 06/13/12 10:59, Peter wrote: > On 12/06/12 18:56, Ellis H. Wilson III wrote: >> On 06/08/12 20:06, Bill Broadley wrote: >>> A new user on one of my GigE clusters submits batches of 500 jobs that >>> need to randomly read a 30-60GB dataset. They aren't the only user of >>> said cluster so each job will be waiting in the queue with a mix of others. >> With a 160TB cluster and only a 30-60GB dataset, is there any reason why >> the user isn't simply storing their dataset in HDFS? Does the data >> change frequently via a non-MapReduce framework such that it needs to be >> pulled from NFS before every job? If the dataset is in a few dozen >> files and in HDFS in the cluster, there is no reason why MapReduce >> shouldn't spawn it's tasks directly "on" the data, without need (most of >> the time) for moving all of the data to every node as you mention. > > From experience this can have varied results and still requires careful > management/thought. With HDFS if the replicate number is 3 (often the > default case) and the 30 node cluster has 500 jobs then either an > initial step is required to replicate the data to all other cluster > nodes and then perform the analysis (this imposes the expected network / > disk IO impact and job start up latency already in place). > It really shouldn't require much management, nor initial data movement at all. BTW, I understood 500 jobs to be totally agnostic about each other, as if they were calculating different things using the same dataset. If these are 500 tasks within the same job, well, that's an entirely different matter. If they are just jobs, it really doesn't matter if there are 5 or 500, as by default with Hadoop 0.20 at least jobs are executed in FIFO order. Further, if the user programmed his or her application to be configurable for number of mappers and reducers, it is trivial to match the number of mappers to the slots in the system and reducers similarly (though often reducers is something much lower, like 1 per node). Assuming the 30GB dataset is in 30 1GB files, which shouldn't be hard to guarantee or achieve, each node will get 1 of these files. Therefore the user simply specifies that he or she wants (let's assume 2 map slots per node) 60 map tasks, and Hadoop will silently try to make sure each task ends up on one of the three nodes (assuming default triplication) that have a local data copy. > Alternatively keep the replication at 3 (or a.n.other defined number) > and limit the number of jobs to the available resources where the data > replicates pre-exist. The challenge is finding the sweet spot for the > work in progress and as always nothing is ever free. With only 30 nodes and 30 to 60GB of data, I think it is safe to assume the data exists /everywhere/ in the cluster. 
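As a concrete illustration of "the user simply specifies 60 map tasks": with the Hadoop 0.20-era streaming interface this is a -D hint on the command line. A sketch only; the jar path and script names are placeholders, and mapred.map.tasks is a hint that the framework weighs against the input's block layout:

    import subprocess

    # Hypothetical job: one wave of map tasks across 30 nodes with
    # 2 map slots each, reading the dataset already stored in HDFS.
    subprocess.check_call([
        "hadoop", "jar", "hadoop-streaming.jar",  # path varies by install
        "-D", "mapred.map.tasks=60",
        "-D", "mapred.reduce.tasks=30",
        "-input", "/user/alice/dataset",          # made-up HDFS paths
        "-output", "/user/alice/out",
        "-mapper", "analyze.py",                  # placeholder scripts
        "-reducer", "merge.py",
        "-file", "analyze.py",
        "-file", "merge.py",
    ])

The scheduler then tries to place each of those tasks on a node holding a replica of its input split, which is the locality behavior described above.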
Even if Hadoop were stupid and randomly selected a node there would be a 1/10 chance the data was already there, and it's not stupid, so it will check all three of the nodes with replicas before spawning the task elsewhere. Now if there are 1000 nodes and just 30GB of data, then Hadoop will make sure your tasks are prioritized on the nodes that have your data or, at least, in the same rack as the nodes that have it. > So HDFS does not remove the replication process although it helps to > hide the processes involved. As I've said, if you set things up properly, there shouldn't be much, if any, replication, and Hadoop doesn't help to hide the replication -- it totally obscures the process. You have no hand in doing so. > The other joy encountered with HDFS is that we found it can be less than > stable in a multi user environment, this has been confirmed by various > others so as always care is required during testing. I'll concede that initial configuration can be tough, but I've assisted with management of an HDFS instance that stored ~60TB of data and over 10 million files, both as scratch and for users' home dirs. It is certainly stable enough for day-to-day use. > There are alternatives to HDFS which can be used in conjunction with > Hadoop but I'm afraid I'm not able to recommend any in particular as > it's been a while since I last kicked the tyres. Is this something that > others have more recent experience with and can recommend an alternative ? I'm working on an alternative to HDFS as we speak, which bypasses HDFS entirely and allows people using MapReduce to run directly against multiple NAS boxes as if they were a single federated storage system. I'll be sending something out to this list about the source when I release it. Best, ellis _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From alscheinine at tuffmail.us Wed Jun 13 11:30:57 2012 From: alscheinine at tuffmail.us (Alan Louis Scheinine) Date: Wed, 13 Jun 2012 10:30:57 -0500 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FD89F7C.2070302@scalableinformatics.com> References: <4FD89DFF.9020708@ias.edu> <4FD89F7C.2070302@scalableinformatics.com> Message-ID: <4FD8B231.8010407@tuffmail.us> The message archive at the web site would be valuable for those interested in Beowulf clusters. I've read almost every message for many years, but when a problem or question arises I need to go back to the archive to get details. -- Alan Scheinine 200 Georgann Dr., Apt. E6 Vicksburg, MS 39180 Email: alscheinine at tuffmail.us Mobile phone: 225 288 4176 http://www.flickr.com/photos/ascheinine _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eagles051387 at gmail.com Wed Jun 13 11:40:01 2012 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Wed, 13 Jun 2012 17:40:01 +0200 Subject: [Beowulf] Easy clustering Message-ID: Is there something out there that is GUI-based that can be run from one's Linux, Mac or Windows box to easily manage a Linux cluster? -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From pc7 at sanger.ac.uk Wed Jun 13 11:43:53 2012 From: pc7 at sanger.ac.uk (Peter) Date: Wed, 13 Jun 2012 16:43:53 +0100 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD877D6.5030404@cse.psu.edu> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD782C6.7050704@cse.psu.edu> <4FD8AAEE.8060103@sanger.ac.uk> <4FD877D6.5030404@cse.psu.edu> Message-ID: <4FD8B539.4060007@sanger.ac.uk> On 13/06/12 12:21, Ellis H. Wilson III wrote: > On 06/13/12 10:59, Peter wrote: >> On 12/06/12 18:56, Ellis H. Wilson III wrote: >>> On 06/08/12 20:06, Bill Broadley wrote: >>>> A new user on one of my GigE clusters submits batches of 500 jobs that >>>> need to randomly read a 30-60GB dataset. They aren't the only user of >>>> said cluster so each job will be waiting in the queue with a mix of others. >>> With a 160TB cluster and only a 30-60GB dataset, is there any reason why >>> the user isn't simply storing their dataset in HDFS? Does the data >>> change frequently via a non-MapReduce framework such that it needs to be >>> pulled from NFS before every job? If the dataset is in a few dozen >>> files and in HDFS in the cluster, there is no reason why MapReduce >>> shouldn't spawn it's tasks directly "on" the data, without need (most of >>> the time) for moving all of the data to every node as you mention. >> From experience this can have varied results and still requires careful >> management/thought. With HDFS if the replicate number is 3 (often the >> default case) and the 30 node cluster has 500 jobs then either an > > initial step is required to replicate the data to all other cluster > > nodes and then perform the analysis (this imposes the expected network / > > disk IO impact and job start up latency already in place). > > > > It really shouldn't require much management, nor initial data movement > at all. BTW, I understood 500 jobs to be totally agnostic about each > other, as if they were calculating different things using the same > dataset. If these are 500 tasks within the same job, well, that's an > entirely different matter. If they are just jobs, it really doesn't > matter if there are 5 or 500, as by default with Hadoop 0.20 at least > jobs are executed in FIFO order. Further, if the user programmed his or > her application to be configurable for number of mappers and reducers, > it is trivial to match the number of mappers to the slots in the system > and reducers similarly (though often reducers is something much lower, > like 1 per node). > > Assuming the 30GB dataset is in 30 1GB files, which shouldn't be hard to > guarantee or achieve, each node will get 1 of these files. Therefore > the user simply specifies that he or she wants (let's assume 2 map slots > per node) 60 map tasks, and Hadoop will silently try to make sure each > task ends up on one of the three nodes (assuming default triplication) > that have a local data copy. > >> Alternatively keep the replication at 3 (or a.n.other defined number) >> and limit the number of jobs to the available resources where the data >> replicates pre-exist. The challenge is finding the sweet spot for the >> work in progress and as always nothing is ever free. 
> With only 30 nodes and 30 to 60GB of data, I think it is safe to assume > the data exists /everywhere/ in the cluster. Even if Hadoop was stupid > and randomly selected a node there would be a 1/10 chance the data was > already there, and it's not stupid, so it will check all three of the > nodes with replicas before spawning the task elsewhere. Now if there > are 1000 nodes and just 30GB of data, then Hadoop will make sure your > tasks are prioritized on the nodes that have your data or at least, in > the same rack as the nodes that have it. > >> So HDFS does not remove the replication process although it helps to >> hide the processes involved. > As I've said, if you set things up properly, there shouldn't be much, if > any, replication, and Hadoop doesn't help to hide the replication -- it > totally obscures the process. You have no hand in doing so. > >> The other joy encountered with HDFS is that we found it can be less than >> stable in a multi user environment, this has been confirmed by various >> others so as always care is required during testing. > I'll concede that original configuration can be tough, but I've assisted > with management of an HDFS instance that stored ~60TB of data and over > 10 million files, both as scratch and for users home dirs. It is > certainly stable enough for day to day use. > >> There are alternatives to HDFS which can be used in conjunction with >> Hadoop but I'm afraid I'm not able to recommend any in particular as >> it's been a while since I last kicked the tyres. Is this something that >> others have more recent experience with and can recommend an alternative ? > I'm working on an alternative to HDFS as we speak, which bypasses HDFS > entirely and allows people using MapReduce to run directly against > multiple NAS boxes as if they were a single federated storage system. > I'll be sending something out to this list about the source when I > release it. > > Best, > Many thanks for your comments Ellis. I read the initial Q as saying the full data set may be required by any job, so an upgrade to my personal filters may be required :). If this were the case then, post job submission, it becomes a wait until a node with the data becomes available, or alternatively a copy to a.n.other node needs to take place before it can be used for the task at hand. At this point it's sort of a balance between how many nodes are available immediately for the task and how long you wish to wait, either for the FIFO tasks to complete on a subset of available nodes or for the copy to take place. Given that 30-60GB is small enough to copy everywhere, that sort of takes things full circle to the rsync-to-local-disk options (and variants) previously discussed. Although I apologise if I'm misinterpreting the above. The comment regarding obscuring the replication process was directed more towards the user experience: they don't need to know it automagically happens, BUT behind the scenes the copies are happening all the same, with the expected impact incurred on IO etc. So HDFS doesn't make the process impact-free. If you are able to send more to the list regarding HDFS plan B that would be great; it's certainly something I'd be interested in hearing more about. Do you have a blog or similar with references regarding any of the above ? If so that would be much appreciated. Thanks again and good luck with the multiple NAS option.
Pete -- The Wellcome Trust Sanger Institute is operated by Genome Research Limited, a charity registered in England with number 1021457 and a company registered in England with number 2742969, whose registered office is 215 Euston Road, London, NW1 2BE. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From landman at scalableinformatics.com Wed Jun 13 11:54:32 2012 From: landman at scalableinformatics.com (Joe Landman) Date: Wed, 13 Jun 2012 11:54:32 -0400 Subject: [Beowulf] Easy clustering In-Reply-To: References: Message-ID: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> Bright computing product. Uses their own cluster tools. -- Sent from an android device. Please excuse brevity and typos Jonathan Aquilina wrote: Is there something out there that is gui based that can be run from ones linux mac or win box to easily manage a linux cluster? -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From ellis at cse.psu.edu Wed Jun 13 08:07:07 2012 From: ellis at cse.psu.edu (Ellis H. Wilson III) Date: Wed, 13 Jun 2012 08:07:07 -0400 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD8B539.4060007@sanger.ac.uk> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD782C6.7050704@cse.psu.edu> <4FD8AAEE.8060103@sanger.ac.uk> <4FD877D6.5030404@cse.psu.edu> <4FD8B539.4060007@sanger.ac.uk> Message-ID: <4FD8826B.3050903@cse.psu.edu> On 06/13/12 11:43, Peter wrote: > I read the initial Q that the full data set may be required by any job > so an upgrade to my personal filters may be required :). If this were No, you are correct about that, or at least, that's what I understood it to mean as well. So for instance, Job1 has Task1-30 and the 30GB DataSet has Chunk1-30, each 1GB in size, spread over the entire cluster. Hadoop just matches Task1 to the chunk it wants to work on. Yes, this means there at least must be parts of the process that are emb. parallel, but that's pretty much taken for granted with big data computation. The serial parts are typically handled by the shuffle and reduce phases at the end. > Given that 30-60Gb is small enough copy everywhere, that sort of takes I wouldn't expect much performance improvement going from 3 to all 30 chunks on a given node, unless you are incredibly unlucky or something is terribly misconfigured with your Hadoop instance. While 30GB isn't too bad to copy elsewhere, it's incredibly poor use of storage resources, having 30 copies of the data all over. > The comment regarding the obscuring the replication process was directed > more towards the user experience, they don't need to know it > automagically happens BUT behind the scenes the copies are happening all > the same, with the expected impact incurred on IO etc. So HDFS doesn't > make the process impact free. Making 30 copies of a 30GB dataset composed of 30 1GB files is quite different than 3 copies of each file, in size and work passed onto the user to manage. 
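(A quick back-of-the-envelope check of that claim -- an illustration only, not from the thread; real HDFS placement is rack-aware rather than uniformly random:)

    #!/usr/bin/env python
    # With 30 chunks x 3 replicas scattered over 30 nodes, how many nodes
    # end up holding no replica of any chunk? Uniform-random placement is
    # a simplification of HDFS's actual rack-aware policy.
    import random

    NODES, CHUNKS, REPLICAS, TRIALS = 30, 30, 3, 10000
    empty = 0
    for _ in range(TRIALS):
        holders = set()
        for _ in range(CHUNKS):
            holders.update(random.sample(range(NODES), REPLICAS))
        empty += NODES - len(holders)
    # Analytically: (27/30)**30 ~ 4% of nodes, i.e. ~1.3 of 30 on average,
    # so with 3x replication nearly every node already has local data.
    print("avg nodes with no local chunk: %.2f of %d" % (empty / float(TRIALS), NODES))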
Even if you get unlucky and one of your tasks does require remote data, Hadoop handles streaming it to the task while it needs it and cleans up afterwards. It's going to be far more considerate about storage resources than any human being will be. > If you are able to send more to the list regarding HDFS plan B that > would be great and certainly something I'd be interested in hearing more > about. Do you have a blog or similar with references regarding any of > the above ? If so that would be much appreciated. Not yet. Working on a website as well -- will let you know as soon as that completes. Best, ellis _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From Greg at Keller.net Wed Jun 13 12:25:12 2012 From: Greg at Keller.net (Greg Keller) Date: Wed, 13 Jun 2012 11:25:12 -0500 Subject: [Beowulf] Beowulf Digest, Vol 100, Issue 10 In-Reply-To: References: Message-ID: > What about an easy to setup cluster file system such as FhGFS? As one of > its developers I'm a bit biased of course, but then I'm also familiar > with Lustre, an I think FhGFS is far more easiy to setup. We also do not > have the problem to run clients and servers on the same node and so of > our customers make heavy use of that and use their compute nodes as > storage servers. That should a provide the same or better throughput as > your torrent system. > > Cheers, > Bernd We've been curious about FhGFS but the licensing did not leave us confident we would always have access to it if we integrated it into our business and made available to our users. Serious success could essentially cause an epic failure if the license made it expensive to us (as commercial users) suddenly. As a "cloud" based hpc provider I thought it was too risky and have been happy with Lustre and it's affiliates. Specifically this clause could be a problem: 3.2 LICENSEE may NOT: ... - rent or lease the LICENSED SOFTWARE and DOCUMENTATION to any third party ... Does anyone think the license was intended to block cloud providers making it available as part of a cloud based HPC solution? Am I mis-interpreting this? Not looking for a legal-ese battle but I am wondering if other licenses commonly used in cloud contexts have similar language. Anyone think the FS is fantastic enough that I should fight (spend money on lawyers and licenses) to put it in front of "Cloud" HPC users? Cheers! Greg -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From bs_lists at aakef.fastmail.fm Wed Jun 13 13:17:18 2012 From: bs_lists at aakef.fastmail.fm (Bernd Schubert) Date: Wed, 13 Jun 2012 19:17:18 +0200 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD89BDB.4050100@scalableinformatics.com> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> <4FD89BDB.4050100@scalableinformatics.com> Message-ID: <4FD8CB1E.2090103@aakef.fastmail.fm> On 06/13/2012 03:55 PM, Joe Landman wrote: > On 06/13/2012 09:40 AM, Bernd Schubert wrote: >> On 06/09/2012 02:06 AM, Bill Broadley wrote: >>> >>> I've built Myrinet, SDR, DDR, and QDR clusters ( no FDR yet), but I >>> still have users whose use cases and budgets still only justify GigE. >>> >>> I've setup a 160TB hadoop cluster is working well, but haven't found >>> justification for the complexity/cost related to lustre. I have high >>> hopes for Ceph, but it seems not quite ready yet. I'd happy to hear >>> otherwise. >>> >> >> What about an easy to setup cluster file system such as FhGFS? As one of >> its developers I'm a bit biased of course, but then I'm also familiar >> with Lustre, an I think FhGFS is far more easiy to setup. We also do not >> have the problem to run clients and servers on the same node and so of >> our customers make heavy use of that and use their compute nodes as >> storage servers. That should a provide the same or better throughput as >> your torrent system. Arg, so many mistakes, why do I never notice those before sending the mail? :( > > I'd like to chime in and note that we have customers re-implementing > storage with FhGFS. > > Ceph will be good. You can build a reasonable system today with xfs as > the backing store. The RADOS device is an excellent basis for building > reliable systems. While the op does not need IB, most clusters nowadays do have IB. I think Ceph still does not support that, does it? Cheers, Bernd _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From deadline at eadline.org Wed Jun 13 15:35:29 2012 From: deadline at eadline.org (Douglas Eadline) Date: Wed, 13 Jun 2012 15:35:29 -0400 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FD89DFF.9020708@ias.edu> References: <4FD89DFF.9020708@ias.edu> Message-ID: <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> This is a question that is going to need an answer sooner rather than later, and some input from Penguin would be nice -- nudge. Certainly there are others that can help with this effort if Penguin is too busy or does not have the resources. -- Doug > I know this came up recently. I just wanted to see if any new > information has surfaced. > > Does anyone know what the status of beowulf.org is? I will be starting a > new job in few weeks, and I'm in the process of unsubscribing from all > the mailing lists I subscribe to at my current job. Following the link > to the beowulf.org mailman page to control my subscription results in > > The connection has timed out > The server at www.beowulf.org is taking too long to respond.
> > > Looks like I'll be unsubscribing through e-mail commands, but I'm > worried about how difficult it will be to re-subscribe once I start the > new job. > > -- > Prentice > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > > -- > Mailscanner: Clean > -- Doug -- Mailscanner: Clean _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From bill at cse.ucdavis.edu Wed Jun 13 17:59:16 2012 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Wed, 13 Jun 2012 14:59:16 -0700 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD89839.2040904@aakef.fastmail.fm> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> Message-ID: <4FD90D34.5090501@cse.ucdavis.edu> On 06/13/2012 06:40 AM, Bernd Schubert wrote: > What about an easy to setup cluster file system such as FhGFS? Great suggestion. I'm all for a generally useful parallel file system instead of a torrent solution with a very narrow use case. > As one of > its developers I'm a bit biased of course, but then I'm also familiar I think this list is exactly the place where a developer should jump in and suggest/explain their solutions as they relate to use in HPC clusters. > with Lustre, an I think FhGFS is far more easiy to setup. We also do not > have the problem to run clients and servers on the same node and so of > our customers make heavy use of that and use their compute nodes as > storage servers. That should a provide the same or better throughput as > your torrent system. I found the wiki, the "view flyer", FAQ, and related. I had a few questions; I found this link http://www.fhgfs.com/wiki/wikka.php?wakka=FAQ#ha_support but was not sure of the details. What happens when a metadata server dies? What happens when a storage server dies? If either of the above means data loss/failure/unreadable files, is there a description of how to protect against this with drbd+heartbeat or equivalent? Sounds like source is not available, and only binaries for CentOS? Looks like it does need a kernel module; does that mean only old 2.6.X CentOS kernels are supported? Does it work with mainline ofed on qlogic and mellanox hardware? From a sysadmin point of view I'm also interested in: * Do blocks auto-balance across storage nodes? * Is managing disk space, inodes (or equiv) and related capacity planning complex? Or does df report useful/obvious numbers? * Can storage nodes be added/removed easily by migrating on/off of hardware? * Does FhGFS handle 100% of the distributed file system responsibilities, or does it layer on top of xfs/ext4 or related (like ceph)? * With large files does performance scale reasonably with storage servers? * With small files does performance scale reasonably with metadata servers? BTW, if anyone is current on any other parallel file system, I (and I suspect others on the list) would find it very valuable. I run a hadoop cluster, but I suspect there are others on the list that could provide a better answer than I. My lustre knowledge is second hand and dated.
_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From samuel at unimelb.edu.au Thu Jun 14 02:11:25 2012 From: samuel at unimelb.edu.au (Christopher Samuel) Date: Thu, 14 Jun 2012 16:11:25 +1000 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD8CB1E.2090103@aakef.fastmail.fm> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> <4FD89BDB.4050100@scalableinformatics.com> <4FD8CB1E.2090103@aakef.fastmail.fm> Message-ID: <4FD9808D.6010602@unimelb.edu.au> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 14/06/12 03:17, Bernd Schubert wrote: > While the op does not need IB, most cluster nowadays do have IB. I > think Ceph still does not support that, does it? Well, if it works over an IP network then it should work with IPoIB, even if it doesn't have native IB. - -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk/ZgI0ACgkQO2KABBYQAh8rsACeKtEMTjdR7Ldt8Us+vQd444lr SCcAoIGCpmh0sf7jhpwAVzCZ2hI2Bxq9 =PGcV -----END PGP SIGNATURE----- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eagles051387 at gmail.com Thu Jun 14 04:03:57 2012 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Thu, 14 Jun 2012 10:03:57 +0200 Subject: [Beowulf] Easy clustering In-Reply-To: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> References: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> Message-ID: <41A8BCA1-03EF-4825-A986-D608F6EC4268@gmail.com> What i was thinking is there an easy front end UI that one can install lets say on their normal mac pc to manage their cluster and all sorts of aspects of the cluster. Regards Jonathan Aquilina On 13 Jun 2012, at 17:54, Joe Landman wrote: > Bright computing product. Uses their own cluster tools. > -- > Sent from an android device. Please excuse brevity and typos > > Jonathan Aquilina wrote: > Is there something out there that is gui based that can be run from ones linux mac or win box to easily manage a linux cluster? > -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From reuti at staff.uni-marburg.de Thu Jun 14 06:26:40 2012 From: reuti at staff.uni-marburg.de (Reuti) Date: Thu, 14 Jun 2012 12:26:40 +0200 Subject: [Beowulf] Easy clustering In-Reply-To: <41A8BCA1-03EF-4825-A986-D608F6EC4268@gmail.com> References: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> <41A8BCA1-03EF-4825-A986-D608F6EC4268@gmail.com> Message-ID: <10695FDD-DF83-4AC9-B9EC-316028BD845B@staff.uni-marburg.de> Am 14.06.2012 um 10:03 schrieb Jonathan Aquilina: > What i was thinking is there an easy front end UI that one can install lets say on their normal mac pc to manage their cluster and all sorts of aspects of the cluster. How do you define "manage"? Remote KVM, installation by PXE, control by ipmitools, queue control,... -- Reuti > Regards > > Jonathan Aquilina > > > > On 13 Jun 2012, at 17:54, Joe Landman wrote: > >> Bright computing product. Uses their own cluster tools. >> -- >> Sent from an android device. Please excuse brevity and typos >> >> Jonathan Aquilina wrote: >> Is there something out there that is gui based that can be run from ones linux mac or win box to easily manage a linux cluster? >> > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eagles051387 at gmail.com Thu Jun 14 08:29:36 2012 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Thu, 14 Jun 2012 14:29:36 +0200 Subject: [Beowulf] Easy clustering In-Reply-To: <10695FDD-DF83-4AC9-B9EC-316028BD845B@staff.uni-marburg.de> References: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> <41A8BCA1-03EF-4825-A986-D608F6EC4268@gmail.com> <10695FDD-DF83-4AC9-B9EC-316028BD845B@staff.uni-marburg.de> Message-ID: Reuti, what i mean I have used webmin, but I hear mixed reviews about in terms of security vulnerabilities. I wonder how a python based web framework would work in this type of environment. Has anyone tried out in ubuntu 12.04 the Metal as a service (MAAS) stuff? Regards Jonathan Aquilina On 14 Jun 2012, at 12:26, Reuti wrote: > Am 14.06.2012 um 10:03 schrieb Jonathan Aquilina: > >> What i was thinking is there an easy front end UI that one can install lets say on their normal mac pc to manage their cluster and all sorts of aspects of the cluster. > > How do you define "manage"? Remote KVM, installation by PXE, control by ipmitools, queue control,... > > -- Reuti > > >> Regards >> >> Jonathan Aquilina >> >> >> >> On 13 Jun 2012, at 17:54, Joe Landman wrote: >> >>> Bright computing product. Uses their own cluster tools. >>> -- >>> Sent from an android device. Please excuse brevity and typos >>> >>> Jonathan Aquilina wrote: >>> Is there something out there that is gui based that can be run from ones linux mac or win box to easily manage a linux cluster? 
>>> >> >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf > -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From landman at scalableinformatics.com Thu Jun 14 11:24:18 2012 From: landman at scalableinformatics.com (Joe Landman) Date: Thu, 14 Jun 2012 11:24:18 -0400 Subject: [Beowulf] Easy clustering In-Reply-To: References: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> <41A8BCA1-03EF-4825-A986-D608F6EC4268@gmail.com> <10695FDD-DF83-4AC9-B9EC-316028BD845B@staff.uni-marburg.de> Message-ID: <4FDA0222.9050609@scalableinformatics.com> On 06/14/2012 08:29 AM, Jonathan Aquilina wrote: > Reuti, what i mean > > I have used webmin, but I hear mixed reviews about in terms of security > vulnerabilities. I wonder how a python based web framework would work in > this type of environment. Has anyone tried out in ubuntu 12.04 the Metal > as a service (MAAS) stuff? I think there are several different things being mixed in here. Clustering as in Beowulf clustering? Clustering as in building/managing a group of related machines, but not necessarily beowulf? Then you asked about Ubuntu. Ok ... I think we need clarification on what sort of cluster you are talking about ... but I can answer the ubuntu question. We are currently running 2x Ubuntu 12.04 servers in the amazon cloud to handle mail and web for us. Started right before our move to our new digs (c.f. http://scalableinformatics.com/location ), and we are continuing to run it there. Basically this is for seamless continuity more than anything else. Once we get our second network line into the facility, we'll probably "retire" one of these, and use the other as a smaller instance for mail forwarding. Since we are doing this as virtualized instances, we wouldn't do serious/significant resource intensive computing on it. Works great for a web/mail server though. If we were doing hard core computing, we'd go with one of the other instance types. I manage these through CLI. Certificate based ssh access. > > Regards > > Jonathan Aquilina > > > > On 14 Jun 2012, at 12:26, Reuti wrote: > >> Am 14.06.2012 um 10:03 schrieb Jonathan Aquilina: >> >>> What i was thinking is there an easy front end UI that one can >>> install lets say on their normal mac pc to manage their cluster and >>> all sorts of aspects of the cluster. >> >> How do you define "manage"? Remote KVM, installation by PXE, control >> by ipmitools, queue control,... >> >> -- Reuti >> >> >>> Regards >>> >>> Jonathan Aquilina >>> >>> >>> >>> On 13 Jun 2012, at 17:54, Joe Landman wrote: >>> >>>> Bright computing product. Uses their own cluster tools. >>>> -- >>>> Sent from an android device. Please excuse brevity and typos >>>> >>>> Jonathan Aquilina >>> > wrote: >>>> Is there something out there that is gui based that can be run from >>>> ones linux mac or win box to easily manage a linux cluster? 
>>>> >>> _______________________________________________ >>> Beowulf mailing list, Beowulf at beowulf.org >>> sponsored by Penguin Computing >>> To change your subscription (digest mode or unsubscribe) visit >>> http://www.beowulf.org/mailman/listinfo/beowulf >> > -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics Inc. email: landman at scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From bs_lists at aakef.fastmail.fm Thu Jun 14 12:30:45 2012 From: bs_lists at aakef.fastmail.fm (Bernd Schubert) Date: Thu, 14 Jun 2012 18:30:45 +0200 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD89839.2040904@aakef.fastmail.fm> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> Message-ID: <4FDA11B5.8010403@aakef.fastmail.fm> [I'm moving that from your digest answer to the general discussion thread] > On 06/13/2012 06:25 PM, Greg Keller wrote:> >> What about an easy to setup cluster file system such as FhGFS? As one of >> its developers I'm a bit biased of course, but then I'm also familiar >> with Lustre, an I think FhGFS is far more easiy to setup. We also do not >> have the problem to run clients and servers on the same node and so of >> our customers make heavy use of that and use their compute nodes as >> storage servers. That should a provide the same or better throughput as >> your torrent system. >> >> Cheers, >> Bernd > > We've been curious about FhGFS but the licensing did not leave us > confident we would always have access to it if we integrated it into our > business and made available to our users. Serious success could > essentially cause an epic failure if the license made it expensive to us > (as commercial users) suddenly. As a "cloud" based hpc provider I > thought it was too risky and have been happy with Lustre and it's > affiliates. > > Specifically this clause could be a problem: > > 3.2 LICENSEE may NOT: > > ... > > - rent or lease the LICENSED SOFTWARE and DOCUMENTATION to any third party > > ... > > Does anyone think the license was intended to block cloud providers making > it available as part of a cloud based HPC solution? Am I mis-interpreting this? > Not looking for a legal-ese battle but I am wondering if other licenses commonly > used in cloud contexts have similar language. Anyone think the FS is fantastic > enough that I should fight (spend money on lawyers and licenses) to put it in > front of "Cloud" HPC users? Arg, such issues are exactly the reason why I don't like contracts and laws written by lawyers. Instead of writing with 'normal' words understandable by everyone, they have their own language, which nobody else can understand and which is entirely unclear. I'm not sure if they themselves understand what they have written... Given the high number of useless lawsuits, probably not. This clause is about charging for the licensed software (i.e. fhgfs), not about services around fhgfs. Neither this clause nor any other clause in the EULA is intended to prohibit, or does prohibit, you providing fhgfs to cloud users. So this particular clause just says that you are not allowed to charge money for allowing people to use fhgfs.
So it actually protects users from paying for software which is in fact free for everyone to use, no matter whether the user is commercial or not. On the other hand, you are still free to charge customers for services around fhgfs; e.g. you might charge your cloud customers for installing fhgfs or maintaining it or something like that - if that's what you have in mind. Please let us know if this is sufficient for you to consider FhGFS in the future, or if we should again work with our Fraunhofer lawyers to improve the license. Thanks, Bernd _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From bs_lists at aakef.fastmail.fm Thu Jun 14 12:14:27 2012 From: bs_lists at aakef.fastmail.fm (Bernd Schubert) Date: Thu, 14 Jun 2012 18:14:27 +0200 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD90D34.5090501@cse.ucdavis.edu> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> <4FD90D34.5090501@cse.ucdavis.edu> Message-ID: <4FDA0DE3.9060803@aakef.fastmail.fm> On 06/13/2012 11:59 PM, Bill Broadley wrote: > On 06/13/2012 06:40 AM, Bernd Schubert wrote: >> What about an easy to setup cluster file system such as FhGFS? > > Great suggestion. I'm all for a generally useful parallel file systems > instead of torrent solution with a very narrow use case. > >> As one of >> its developers I'm a bit biased of course, but then I'm also familiar > > I think this list is exactly the place where a developer should jump in > and suggest/explain their solutions as it related to use in HPC clusters. > >> with Lustre, an I think FhGFS is far more easiy to setup. We also do not >> have the problem to run clients and servers on the same node and so of >> our customers make heavy use of that and use their compute nodes as >> storage servers. That should a provide the same or better throughput as >> your torrent system. > > I found the wiki, the "view flyer", FAQ, and related. > > I had a few questions, I found this link > http://www.fhgfs.com/wiki/wikka.php?wakka=FAQ#ha_support but was not > sure of the details. > > What happens when a metadata server dies? > > What happens when a storage server dies? Right, those two issues we are actively working on at present. So the current release relies on hardware RAID, but later this year there will be meta-data mirroring. After that, data mirroring will follow. > > If either above is data loss/failure/unreadable files is there a > description of how to improve against this with drbd+heartbeat or > equivalent? During the next weeks we will test fhgfs-ocf scripts for an HA (pacemaker) installation. As we are going to be paid for the installation, I do not know yet when we will make those scripts publicly available. Generally, drbd+heartbeat as a mirroring solution is possible. > > Sounds like source is not available, and only binaries for CentOS? Well, RHEL5 / RHEL6 based, SLES10 / SLES11 and Debian. And sorry, the server daemons are not open source yet. I think the more people ask to open it, the faster this process will be. Especially if those people are also going to buy support contracts :) > > Looks like it does need a kernel module, does that mean only old 2.6.X > CentOS kernels are supported? Oh, on the contrary. We basically support any kernel from 2.6.16 onwards. Even support for the most recent vanilla kernels is usually added within a few weeks of their release.
> > Does it work with mainline ofed on qlogic and mellanox hardware? Definitely works with both, with RDMA (ibverbs) transfers. As QLogic has some problems with ibverbs, we had a cooperation with QLogic to improve performance on their hardware. Recent QLogic OFED stacks do include performance fixes. Please also see http://www.fhgfs.com/wiki/wikka.php?wakka=NativeInfinibandSupport for (QLogic) tuning advice. > > From a sysadmin point of view I'm also interested in: > * Do blocks auto balance across storage nodes? Actually, files are balanced. The default file stripe count is 4, but it can be adjusted by the admin. So assuming you had only one target per server, a large file would be distributed over 4 nodes. The default chunk size is 512kB. For files smaller than that size there is no stripe overhead. > * Is managing disk space, inodes (or equiv) and related capacity > planning complex? Or does df report useful/obvious numbers? Hmm, right now (unix) "df -i" does not report inode usage for fhgfs yet. We will fix that in later releases. At least for traditional storage servers we recommend ext4 on meta-data partitions for performance reasons. For storage partitions we usually recommend XFS, again for performance. Also, storage and meta-data can be on the very same partition; you just need to configure the path where to find those data in the corresponding config files. If you are going to use all your client nodes as fhgfs servers and those already have XFS as the scratch partition, XFS is probably also fine. However, due to a severe XFS performance issue, you either need a kernel that has this issue fixed or you should disable meta-data-as-xattr (in fhgfs-meta.conf: storeUseExtendedAttribs = false). Also please see here for a discussion and benchmarks http://oss.sgi.com/archives/xfs/2011-08/msg00233.html Christoph Hellwig then fixed the unlink issue later on and this patch should be in all recent linux-stable kernels. I have not checked RHEL5/RHEL6, though. Anyway, if you are going to use ext4 on your meta-data partition, you need to make sure yourself that you have sufficient inodes available. Our wiki has recommendations for mkfs.ext4 options. > * Can storage nodes be added/removed easily by migrating on/off of > hardware? Adding storage nodes on the fly works perfectly fine. Our fhgfs-ctl tool also has a mode to migrate files off a storage node. However, right now we really recommend not to do that while clients are writing to the file system. The reason is that we do not yet lock files-in-migration, and a client might then write to unlinked files, which would result in silent data loss. We have on-the-fly data migration on our todo list, but I cannot say yet when that is going to come. If you are going to use your clients as storage nodes, you could specify that system as the preferred system to write files to. That would make it easy to remove that node... > * Is FhGFS handle 100% of the distributed file system responsibilities > or does it layer on top of xfs/ext4 or related? (like ceph) Like ceph, it sits on top of other file systems, such as xfs or ext4. > * With large files does performance scale reasonably with storage > servers? Yes, and you may also adjust the stripe count to your needs. The default stripe count is 4, which approximately provides the performance of 4 storage targets.
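(To make the striping arithmetic concrete -- a toy model of round-robin chunk placement, not FhGFS's actual target-selection logic; the target names are placeholders:)

    #!/usr/bin/env python
    # Toy model of file striping: which storage target serves a given byte
    # offset, for stripe count 4 and 512 kB chunks. This is a simplification,
    # not FhGFS's real allocation algorithm.
    CHUNK = 512 * 1024
    TARGETS = ["storage01", "storage02", "storage03", "storage04"]  # placeholders

    def target_for(offset):
        return TARGETS[(offset // CHUNK) % len(TARGETS)]

    # A 4 MB sequential read touches every target twice, which is why
    # large-file bandwidth scales roughly with the stripe count.
    for off in range(0, 4 * 1024 * 1024, CHUNK):
        print("offset %8d -> %s" % (off, target_for(off)))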
> * With small files does performance scale reasonably with metadata > servers? Striping over different meta data servers is done on a per-directory basis. As most users and applications work in different directories, meta data performance usually scales linearly with the number of metadata servers. Please note: our wiki has tuning advice for meta data performance, and with our next major release we should also see greatly improved meta data performance. Hope it helps, and please let me know if you have further questions! Cheers, Bernd PS: We have a GUI, which should help you to just try it out within a few minutes. Please see here: http://www.fhgfs.com/wiki/wikka.php?wakka=GUIbasedInstallation _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From Greg at Keller.net Thu Jun 14 19:17:17 2012 From: Greg at Keller.net (Greg Keller) Date: Thu, 14 Jun 2012 18:17:17 -0500 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FDA11B5.8010403@aakef.fastmail.fm> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> <4FDA11B5.8010403@aakef.fastmail.fm> Message-ID: On Thu, Jun 14, 2012 at 11:30 AM, Bernd Schubert wrote: > [I'm moving that from your digest answer to the general discussion thread] > Doh, Thanks! > > On 06/13/2012 06:25 PM, Greg Keller wrote:> >> >>> What about an easy to setup cluster file system such as FhGFS? As one of >>> its developers I'm a bit biased of course, but then I'm also familiar >>> with Lustre, an I think FhGFS is far more easiy to setup. We also do not >>> have the problem to run clients and servers on the same node and so of >>> our customers make heavy use of that and use their compute nodes as >>> storage servers. That should a provide the same or better throughput as >>> your torrent system. >>> >>> Cheers, >>> Bernd >>> >> >> We've been curious about FhGFS but the licensing did not leave us >> confident we would always have access to it if we integrated it into our >> business and made available to our users. Serious success could >> essentially cause an epic failure if the license made it expensive to us >> (as commercial users) suddenly. As a "cloud" based hpc provider I >> thought it was too risky and have been happy with Lustre and it's >> affiliates. >> >> Specifically this clause could be a problem: >> >> 3.2 LICENSEE may NOT: >> >> ... >> >> - rent or lease the LICENSED SOFTWARE and DOCUMENTATION to any third party >> >> ... >> >> Does anyone think the license was intended to block cloud providers making >> it available as part of a cloud based HPC solution? Am I >> mis-interpreting this? >> Not looking for a legal-ese battle but I am wondering if other licenses >> commonly >> used in cloud contexts have similar language. Anyone think the FS is >> fantastic >> > > enough that I should fight (spend money on lawyers and licenses) to put > it in > > front of "Cloud" HPC users? > > Arg, such issues are exactly the reason why I don't like contracts and > laws written by lawyers. Instead of writing with 'normal' words > understandable by everyone, they have their own language, which nobody can > understand is entirely unclear. I'm not sure if they do understand > themselves what they have written... Given the high number of useless > lawsuits probably not. > > This clause is about charging for the licensed software (i.e. fhgfs), not > about services around fhgfs.
Neither this clause nor any other clause in > the EULA is intended or prohibits that you provide fhgfs to cloud users. > That's good to hear. The license has evolved and simplified a lot since I first read it long ago. > > So this particular clause just says that you are not allowed to charge > money for allowing people to use fhgfs. So it actually protects users from > paying for a software, which is in fact free to use for everyone, no matter > if it's a commercial user or not. > > On the other hand, you are still free to charge customers for services > around fhgfs, e.g. you might charge your cloud customers for installing > fhgfs or maintaining it or something like that - if that's what you have in > mind. > We generally just charge for CPU hours, and bundle as much in as we can for that price (Network, Disk, etc). We hate "Gotcha" pricing models and our customers live comfortably in the "Best Effort" support we can offer on free software. If we ever have exotic requirements (100+TB) we work out something special. Any ISV or software licensing is usually passed through or handled directly between the user and the IP owner, and we host whatever is required to keep the licensing people happy :) Our parallel file-system choices have been limited because our customers are usually not long term committed, so paying annual licenses or buying dedicated storage systems rarely makes sense financially. It's always scratch and backed up at their location, so we can skate on the edge without much risk. And if they really like it they may put it on their internal systems. > Please let us know if this is sufficient for you to consider FhGFS in the > future or if we again should work with our Fraunhofer lawyers to improve > the license. > We will do some initial testing as time permits and get back on the licensing piece if need be then. I appreciate that the intent of the licensing language is difficult to communicate, and look forward to learning more. Cheers! Greg > > > -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From j.wender at science-computing.de Fri Jun 15 15:25:33 2012 From: j.wender at science-computing.de (Jan Wender) Date: Fri, 15 Jun 2012 21:25:33 +0200 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> Message-ID: <4FDB8C2D.9030707@science-computing.de> Hi all, Arend from Penguin replied and they are looking for the list. They would like to continue hosting the list, but would ask for some volunteers to administrate it. Cheerio, Jan -- ---- Company Information ---- Vorstandsvorsitzender: Gerd-Lothar Leonhart Vorstand: Dr. Bernd Finkbeiner, Dr. Arno Steitz, Dr. Ingrid Zech Vorsitzender des Aufsichtsrats: Philippe Miltin Sitz: Tuebingen Registergericht: Stuttgart Registernummer: HRB 382196 -- Mailscanner: Clean -------------- next part -------------- A non-text attachment was scrubbed...
Name: j_wender.vcf Type: text/x-vcard Size: 340 bytes Desc: not available URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From bernard at vanhpc.org Fri Jun 15 15:30:39 2012 From: bernard at vanhpc.org (Bernard Li) Date: Fri, 15 Jun 2012 12:30:39 -0700 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDB8C2D.9030707@science-computing.de> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> Message-ID: Hi Jan: On Fri, Jun 15, 2012 at 12:25 PM, Jan Wender wrote: > Arend from Penguin replied and they are looking for the list. They would > like to continue hosting the list, but would ask for some volunteers to > administrate it. Do you think you can elaborate on what they need help with? Moderating emails? Thanks, Bernard _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eagles051387 at gmail.com Fri Jun 15 17:22:53 2012 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Fri, 15 Jun 2012 23:22:53 +0200 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> Message-ID: I would love to help moderate the list :) Regards Jonathan Aquilina On 15 Jun 2012, at 21:30, Bernard Li wrote: > Hi Jan: > > On Fri, Jun 15, 2012 at 12:25 PM, Jan Wender > wrote: > >> Arend from Penguin replied and they are looking for the list. They would >> like to continue hosting the list, but would ask for some volunteers to >> administrate it. > > Do you think you can elaborate on what they need help with? Moderating emails? > > Thanks, > > Bernard > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From samuel at unimelb.edu.au Fri Jun 15 18:28:20 2012 From: samuel at unimelb.edu.au (Chris Samuel) Date: Sat, 16 Jun 2012 08:28:20 +1000 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDB8C2D.9030707@science-computing.de> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> Message-ID: <201206160828.20892.samuel@unimelb.edu.au> On Saturday 16 June 2012 05:25:33 Jan Wender wrote: > Hi all, Hi Jan, > Arend from Penguin replied and they are looking for the list. They > would like to continue hosting the list, but would ask for some > volunteers to administrate it. I've been (and still am) the list owner/admin of various Mailman lists for many years, happy to help out if need be. 
Did they say anything about the beowulf.org website ? cheers, Chris -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From bill at cse.ucdavis.edu Fri Jun 15 18:49:27 2012 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Fri, 15 Jun 2012 15:49:27 -0700 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDB8C2D.9030707@science-computing.de> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> Message-ID: <4FDBBBF7.1020802@cse.ucdavis.edu> On 06/15/2012 12:25 PM, Jan Wender wrote: > Hi all, > > Arend from Penguin replied and they are looking for the list. They would > like to continue hosting the list, but would ask for some volunteers to > administrate it. Well if they are doing such a poor job and aren't willing to administrate it we should move it elsewhere. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From landman at scalableinformatics.com Fri Jun 15 19:10:26 2012 From: landman at scalableinformatics.com (Joe Landman) Date: Fri, 15 Jun 2012 19:10:26 -0400 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDBBBF7.1020802@cse.ucdavis.edu> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> <4FDBBBF7.1020802@cse.ucdavis.edu> Message-ID: <4FDBC0E2.9090706@scalableinformatics.com> On 06/15/2012 06:49 PM, Bill Broadley wrote: > On 06/15/2012 12:25 PM, Jan Wender wrote: >> Hi all, >> >> Arend from Penguin replied and they are looking for the list. They would >> like to continue hosting the list, but would ask for some volunteers to >> administrate it. > > Well if they are doing such a poor job and aren't willing to > administrate it we should move it elsewhere. Hmmm ... I pinged my contact within Penguin and was told they were working on it. This said, I seem to remember that beowulf.org was Scyld property before the acquisition by Penguin. Looking at the whois output somewhat confirms ownership. If this is the case, "we" can't move it "elsewhere" without the owners (Penguin's) permission. I think that part of why its fallen by the wayside at Penguin is due to Don taking up residence at Nvidia, and no one either stepping up to it or being assigned to it. All of this said, if a reasonable proposal is made to Penguin about helping to run/administer it, I think they might be willing to consider it. If, on the other hand, it is approached in a somewhat more brusque manner, I wouldn't hold a refusal to consider proposals against them. So far, we, Chris Samuel, Doug Eadline, Jon A, and a few others have indicated a willingness to help. I can't say I like mailman very much (set many up, royal PIA to deal with IMO), but Chris Samuel has good mailman-foo. Might make sense to enable admin by Chris and a small group of mailman-gurus. I've got (whether I like it or not) web-foo ... and mail server foo ... 
and would be happy to help there. Doug/Jon/... have foo of all sorts, and would certainly help out. If we needed distributed carbon-bots for moderation, this is doable (Chris might be able to comment on this). We would (my company) be happy to set up/donate a small server with storage to run this if Penguin wants to get out completely. We could host it as well at our site. Could also run it on EC2, though I can tell you that this is not nearly as cheap as Amazon might wish you to think. The cost-benefit doesn't really work so well for this ... Lots of possibilities. Seems to me, though, that one of the natural leaders of this would be Doug Eadline. Don't know where ClusterMonkey sits, but that is a well-run site. Just sayin... -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics Inc. email: landman at scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From lbickley at bickleywest.com Fri Jun 15 19:22:24 2012 From: lbickley at bickleywest.com (Lyle Bickley) Date: Fri, 15 Jun 2012 16:22:24 -0700 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDBBBF7.1020802@cse.ucdavis.edu> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> <4FDBBBF7.1020802@cse.ucdavis.edu> Message-ID: <20120615162224.3f0cd6ca@core2.bcwi.net> On Fri, 15 Jun 2012 15:49:27 -0700 Bill Broadley wrote: > On 06/15/2012 12:25 PM, Jan Wender wrote: > > Hi all, > > > > Arend from Penguin replied and they are looking for the list. They > > would like to continue hosting the list, but would ask for some > > volunteers to administrate it. > > Well, if they are doing such a poor job and aren't willing to > administer it, we should move it elsewhere. I'll second that! Cheers, Lyle -- Lyle Bickley Bickley Consulting West Inc. http://bickleywest.com "Black holes are where God is dividing by zero" _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From samuel at unimelb.edu.au Fri Jun 15 20:08:53 2012 From: samuel at unimelb.edu.au (Chris Samuel) Date: Sat, 16 Jun 2012 10:08:53 +1000 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDBC0E2.9090706@scalableinformatics.com> References: <4FD89DFF.9020708@ias.edu> <4FDBBBF7.1020802@cse.ucdavis.edu> <4FDBC0E2.9090706@scalableinformatics.com> Message-ID: <201206161008.53854.samuel@unimelb.edu.au> On Saturday 16 June 2012 09:10:26 Joe Landman wrote: > If we needed distributed carbon-bots for moderation, this is doable > (Chris might be able to comment on this). Quite doable: you can list a number of people as moderators (or admins) of Mailman lists. Admins can attend to moderation requests too (so they don't have to be listed twice).
-- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From bernard at vanhpc.org Fri Jun 15 20:47:12 2012 From: bernard at vanhpc.org (Bernard Li) Date: Fri, 15 Jun 2012 17:47:12 -0700 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <201206161008.53854.samuel@unimelb.edu.au> References: <4FD89DFF.9020708@ias.edu> <4FDBBBF7.1020802@cse.ucdavis.edu> <4FDBC0E2.9090706@scalableinformatics.com> <201206161008.53854.samuel@unimelb.edu.au> Message-ID: Hi all: Before we get too deep into this discussion regarding moderation, I'd like to ask two questions: 1) Is this list moderated? And if so, for what specifically? 2) Is it still necessary to moderate the list, moving forward? Thanks, Bernard On Fri, Jun 15, 2012 at 5:08 PM, Chris Samuel wrote: > On Saturday 16 June 2012 09:10:26 Joe Landman wrote: > >> If we needed distributed carbon-bots for moderation, this is doable >> (Chris might be able to comment on this). > > Quite doable: you can list a number of people as moderators (or > admins) of Mailman lists. Admins can attend to moderation requests > too (so they don't have to be listed twice). > > -- > Christopher Samuel - Senior Systems Administrator > VLSCI - Victorian Life Sciences Computation Initiative > Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 > http://www.vlsci.unimelb.edu.au/ > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From samuel at unimelb.edu.au Sat Jun 16 02:35:44 2012 From: samuel at unimelb.edu.au (Chris Samuel) Date: Sat, 16 Jun 2012 16:35:44 +1000 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: References: <4FD89DFF.9020708@ias.edu> <201206161008.53854.samuel@unimelb.edu.au> Message-ID: <201206161635.44453.samuel@unimelb.edu.au> On Saturday 16 June 2012 10:47:12 Bernard Li wrote: > Hi all: > > Before we get too deep into this discussion regarding moderation, I'd > like to ask two questions: > > 1) Is this list moderated? And if so, for what specifically? No idea; it used to be that new subscribers' posts were delayed (as if moderated) and at some point that would magically disappear and your posts would go straight through. It's something that's very easy to do with Mailman. > 2) Is it still necessary to moderate the list, moving forward? That's another question altogether...
:-) -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eugen at leitl.org Sat Jun 16 05:19:43 2012 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 16 Jun 2012 11:19:43 +0200 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> Message-ID: <20120616091943.GW17120@leitl.org> On Fri, Jun 15, 2012 at 11:22:53PM +0200, Jonathan Aquilina wrote: > I would love to help moderate the list :) As I'm already moderating a bunch of lists, another one wouldn't be a problem for me. > Regards > > Jonathan Aquilina > > On 15 Jun 2012, at 21:30, Bernard Li wrote: > > > Hi Jan: > > > > On Fri, Jun 15, 2012 at 12:25 PM, Jan Wender > > wrote: > > > >> Arend from Penguin replied and they are looking for the list. They would > >> like to continue hosting the list, but would ask for some volunteers to > >> administrate it. > > > > Do you think you can elaborate on what they need help with? Moderating emails? > > > > Thanks, > > > > Bernard > > _______________________________________________ > > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From herbert.fruchtl at st-andrews.ac.uk Sat Jun 16 06:50:38 2012 From: herbert.fruchtl at st-andrews.ac.uk (Herbert Fruchtl) Date: Sat, 16 Jun 2012 11:50:38 +0100 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: References: Message-ID: <4FDC64FE.4040201@st-andrews.ac.uk> As a lurker of many years' standing, I am vehemently opposed to moderation. It slows down traffic (in the rare case that I do pose a question, it's because I'm desperate and want a response IMMEDIATELY!), is open to abuse, and it comes with a legal minefield (if, let's say, a corporate lawyer at Intel/AMD/NVIDIA thinks they have been unfairly slagged off, they may go after the list owner). Having said that, the current situation is obscure to say the least. I know a colleague at a British university who is on the list but whose posts always bounce. His attempts at contacting any list owners via the mailman interface were never answered. Back to lurking in my cave...
Herbert On 16/06/12 01:47, beowulf-request at beowulf.org wrote: > Send Beowulf mailing list submissions to > beowulf at beowulf.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://www.beowulf.org/mailman/listinfo/beowulf > or, via email, send a message with subject or body 'help' to > beowulf-request at beowulf.org > > You can reach the person managing the list at > beowulf-owner at beowulf.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of Beowulf digest..." > > > Today's Topics: > > 1. Re: Status of beowulf.org? (Jan Wender) > 2. Re: Status of beowulf.org? (Bernard Li) > 3. Re: Status of beowulf.org? (Jonathan Aquilina) > 4. Re: Status of beowulf.org? (Chris Samuel) > 5. Re: Status of beowulf.org? (Bill Broadley) > 6. Re: Status of beowulf.org? (Joe Landman) > 7. Re: Status of beowulf.org? (Lyle Bickley) > 8. Re: Status of beowulf.org? (Chris Samuel) > 9. Re: Status of beowulf.org? (Bernard Li) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 15 Jun 2012 21:25:33 +0200 > From: Jan Wender > Subject: Re: [Beowulf] Status of beowulf.org? > To: Beowulf Mailing List > Message-ID:<4FDB8C2D.9030707 at science-computing.de> > Content-Type: text/plain; charset="iso-8859-1" > > Hi all, > > Arend from Penguin replied and they are looking for the list. They would > like to continue hosting the list, but would ask for some volunteers to > administrate it. > > Cheerio, Jan -- Herbert Fruchtl Senior Scientific Computing Officer School of Chemistry, School of Mathematics and Statistics University of St Andrews -- The University of St Andrews is a charity registered in Scotland: No SC013532 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eugen at leitl.org Sat Jun 16 07:26:59 2012 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 16 Jun 2012 13:26:59 +0200 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDC64FE.4040201@st-andrews.ac.uk> References: <4FDC64FE.4040201@st-andrews.ac.uk> Message-ID: <20120616112659.GG17120@leitl.org> On Sat, Jun 16, 2012 at 11:50:38AM +0100, Herbert Fruchtl wrote: > As a lurker of many years' standing, I am vehemently opposed to moderation. It > slows down traffic (in the rare case that I do pose a question, it's because I'm > desperate and want a response IMMEDIATELY!), is open to abuse, and it comes with > a legal minefield (if, let's say, a corporate lawyer at Intel/AMD/NVIDIA thinks > they have been unfairly slagged off, they may go after the list owner). Moderation for Mailman typically means that new members are moderated by default, and unmoderated after their first post that is not spam. Only chronic offenders are typically put back on moderation. So there is no delay for list traffic, only for new subscribers. Typically, this takes a day or two. > Having said that, the current situation is obscure to say the least. I know a > colleague at a British university who is on the list but whose posts always > bounce. His attempts at contacting any list owners via the mailman interface > were never answered. This is not how this is supposed to work. > Back to lurking in my cave...
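For concreteness, the behaviour described above maps onto a handful of stock Mailman 2.1 list attributes; a minimal sketch of those knobs as they could be applied with Mailman's config_list tool (the attribute names are from a stock 2.1 install, while the values are illustrative assumptions, not this list's known settings):

# moderation.cfg -- apply with: bin/config_list -i moderation.cfg beowulf
default_member_moderation = True   # new subscribers start out moderated
member_moderation_action = 0       # 0 = hold a moderated member's posts for approval
generic_nonmember_action = 3       # 0=accept, 1=hold, 2=reject, 3=discard non-subscriber mail
emergency = False                  # the list-wide "emergency moderation" backstop

Unmoderating a member after their first clean post is then a single checkbox on the membership admin page, or a one-liner such as mlist.setMemberOption(addr, mm_cfg.Moderate, 0) run from a bin/withlist session.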
_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From landman at scalableinformatics.com Sat Jun 16 12:33:22 2012 From: landman at scalableinformatics.com (Joe Landman) Date: Sat, 16 Jun 2012 12:33:22 -0400 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDC64FE.4040201@st-andrews.ac.uk> References: <4FDC64FE.4040201@st-andrews.ac.uk> Message-ID: <4FDCB552.1000506@scalableinformatics.com> On 06/16/2012 06:50 AM, Herbert Fruchtl wrote: > As a lurker of many years' standing, I am vehemently opposed to moderation. It > slows down traffic (in the rare case that I do pose a question, it's because I'm > desperate and want a response IMMEDIATELY!), is open to abuse, and it comes with > a legal minefield (if, let's say, a corporate lawyer at Intel/AMD/NVIDIA thinks > they have been unfairly slagged off, they may go after the list owner). so ... moderation stops this (going after the list owner) ... how? I am not generally a huge fan of moderation. However, I've seen some cases where various other list participants generally contribute nothing substantive to discussions, and only serve to annoy and inflame discussions with regular participants. If these people cannot respect the list and its participants, they can choose either leaving or being moderated. This said, in the past, on another list, I've been personally threatened with moderation, and had it enforced. The list owners (wrongly IMO) felt I had done them a grievous insult, and enforced moderation on me. To call their reaction silly would be kind in my book (and no, I won't say who they were/are, so don't ask ... though others in that same situation with that list and those owners contacted me later to commiserate). It was their list, and they had the right to take any action they wished, which they did, no matter how right- or wrong-headed it was/is. A well-moderated list (i.e. a very light touch) will have a rich variety of users and be mostly spam- and idiot-free. A poorly moderated list will turn into a sycophantic echo chamber. One of the side effects of a well-moderated list is a stable or growing population of participants. Conversely, a poorly moderated list tends to lose many of the voices one needs for a diverse exchange of views (as it tends towards echo chamber mode). > Having said that, the current situation is obscure to say the least. I know a > colleague at a British university who is on the list but whose posts always > bounce. His attempts at contacting any list owners via the mailman interface > were never answered. I've heard this from a number of folks. Some simply cannot post to the list for whatever reason. Likely RBL/DUL blocking on email servers. We build email annotation pipelines that do a much better job than DUL/RBL lists. DUL/RBL are daisy cutters*, annotation pipelines are scalpels. Chances are these people are the collateral damage associated with using RBL/DUL. * A "daisy cutter" is a euphemism for a very large explosive device in which the shockwave, traversing a large field, could remove flowers from their stalks. Have a look at youtube (http://www.youtube.com/watch?v=_upy14pesi4) for an example. > Back to lurking in my cave...
> > Herbert -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics Inc. email: landman at scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From peter.st.john at gmail.com Sat Jun 16 15:21:13 2012 From: peter.st.john at gmail.com (Peter St. John) Date: Sat, 16 Jun 2012 15:21:13 -0400 Subject: [Beowulf] list moderation Message-ID: In the old days, we used to have pairs of lists: one un-moderated (world-writable), the other moderated (world readable). You subscribe to either or both; the standards for off-topic witticisms would be easier (depending on taste) at the former, but there'd be more spam. When an interesting and informative post appears on the open list, someone who subscribes to both forwards it to the moderated list. It's extra work to get a persistent troll banned from the open list, and it's extra work to get a new person permed for the moderated list, both require admin attention. Peter -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From herbert.fruchtl at st-andrews.ac.uk Sun Jun 17 12:22:04 2012 From: herbert.fruchtl at st-andrews.ac.uk (Herbert Fruchtl) Date: Sun, 17 Jun 2012 17:22:04 +0100 Subject: [Beowulf] Status of beowulf.org?
In-Reply-To: References: Message-ID: <4FDE042C.3070707@st-andrews.ac.uk> Joe Landman wrote: >> As a lurker of many years' standing, I am vehemently opposed to moderation. It >> slows down traffic (in the rare case that I do pose a question, it's because I'm >> desperate and want a response IMMEDIATELY!), is open to abuse, and it comes with >> a legal minefield (if, let's say, a corporate lawyer at Intel/AMD/NVIDIA thinks >> they have been unfairly slagged off, they may go after the list owner). > so ... moderation stops this (going after the list owner) ... how? No. It's the lack of moderation that should at least provide some safeguards. The legal argument (occasionally challenged, and depending on your jurisdiction, but broadly accepted) is that if you are moderating, you take responsibility. If you don't, you are equivalent to a phone provider or the post office, who are not responsible for the content they deliver. Herbert _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From samuel at unimelb.edu.au Mon Jun 18 03:02:06 2012 From: samuel at unimelb.edu.au (Christopher Samuel) Date: Mon, 18 Jun 2012 17:02:06 +1000 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <20120616112659.GG17120@leitl.org> References: <4FDC64FE.4040201@st-andrews.ac.uk> <20120616112659.GG17120@leitl.org> Message-ID: <4FDED26E.1090508@unimelb.edu.au> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 16/06/12 21:26, Eugen Leitl wrote: > Moderation for Mailman typically means that new members are > moderated by default, and unmoderated after their first post that is > not spam. This is certainly how the list seemed to operate, although with a much longer window between when you started posting and when your posts began going through unapproved. > Only chronic offenders are typically put back on moderation. So > there is no delay for list traffic, only for new subscribers. > Typically, this takes a day or two. There is also Mailman's "emergency moderation" switch for a list, but that's something of a last resort (it's useful for announcement-only type lists where, even with all your carefully chosen rules listing who is allowed to send, you still want a backstop so you can check one last time before it goes out).
cheers, Chris - -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk/e0m0ACgkQO2KABBYQAh+ckgCdGyVjgzoTuvROBNyOuzvMUU7K tdIAnR5YXWFm+ZhiTj7ojS9P4sccTfUP =qxqX -----END PGP SIGNATURE----- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From john.hearns at mclaren.com Mon Jun 18 10:31:15 2012 From: john.hearns at mclaren.com (Hearns, John) Date: Mon, 18 Jun 2012 15:31:15 +0100 Subject: [Beowulf] Caption Competition Message-ID: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com> The Register has a good shot of Sequoia under construction: http://regmedia.co.uk/2012/06/17/ibm_sequoia_llnl.jpg There must be some funny caption for this! As an aside, are those very deep false floors? John Hearns | CFD Hardware Specialist | McLaren Racing Limited McLaren Technology Centre, Chertsey Road, Woking, Surrey GU21 4YH, UK T: +44 (0) 1483 262000 D: +44 (0) 1483 262352 F: +44 (0) 1483 261928 E: john.hearns at mclaren.com W: www.mclaren.com The contents of this email are confidential and for the exclusive use of the intended recipient. If you receive this email in error you should not copy it, retransmit it, use it or disclose its contents but should return it to the sender immediately and delete your copy. -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From james.p.lux at jpl.nasa.gov Mon Jun 18 10:48:41 2012 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Mon, 18 Jun 2012 14:48:41 +0000 Subject: [Beowulf] Caption Competition In-Reply-To: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com> Message-ID: I've worked in places with everything from 12"-18" under the floor to ones where you could stand up underneath the floor. The latter are more pleasant to work in, although it really needs two people then, unless you want to get very good at climbing up and down the ladder. For the former, you just pull up all the tiles except the ones the equipment is standing on, and step from hole to hole. A lot more bending down and threading stuff between holes. From: "Hearns, John" > Date: Mon, 18 Jun 2012 15:31:15 +0100 To: "beowulf at beowulf.org" > Subject: [Beowulf] Caption Competition The Register has a good shot of Sequoia under construction: http://regmedia.co.uk/2012/06/17/ibm_sequoia_llnl.jpg There must be some funny caption for this! As an aside, are those very deep false floors? John Hearns | CFD Hardware Specialist | McLaren Racing Limited McLaren Technology Centre, Chertsey Road, Woking, Surrey GU21 4YH, UK T: +44 (0) 1483 262000 D: +44 (0) 1483 262352 F: +44 (0) 1483 261928 E: john.hearns at mclaren.com W: www.mclaren.com The contents of this email are confidential and for the exclusive use of the intended recipient. 
If you receive this email in error you should not copy it, retransmit it, use it or disclose its contents but should return it to the sender immediately and delete your copy. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From hahn at mcmaster.ca Mon Jun 18 11:39:12 2012 From: hahn at mcmaster.ca (Mark Hahn) Date: Mon, 18 Jun 2012 11:39:12 -0400 (EDT) Subject: [Beowulf] Caption Competition In-Reply-To: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com> References: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com> Message-ID: > http://regmedia.co.uk/2012/06/17/ibm_sequoia_llnl.jpg > > There must be some funny caption for this! "do these cables clash with my safety vest?" "as you can see, your DC TCO will improve once you hire cabling gnomes." "down here is where we store the pron." "since this cluster will melt the polar icecaps, it's built on stilts!" > As an aside, are those very deep false floors? we have a location with a raised floor of ~4 ft. I'm not sure how that was chosen, but I also can't think of any reason why not. I mean, in general, raised floors are a chilled-air plenum, so it's clearly good to avoid narrow ones. (the DC I sit next to has about 16", and 2-8" of that is consumed by cables.) in general, I would advocate engineering DCs with airflow, both hot and cold, as unobstructed as possible, and trying hard to keep cables out of the way of either. I'd love to see some CFD simulations of alternative DC layouts. for instance, is it a good design to have no raised floor, but sealed H/C aisles fed by separate sets of ducts? how about a "linear" DC, where there is just a row of chillers aligned with a single row of racks? _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From rigved.sharma123 at gmail.com Mon Jun 18 14:58:36 2012 From: rigved.sharma123 at gmail.com (rigved sharma) Date: Tue, 19 Jun 2012 00:28:36 +0530 Subject: [Beowulf] qsub flag for reservation Message-ID: Hi, we are using Torque and Maui. We have 3 dedicated reservations for user john (john.0, john.1, john.2) on different nodes. Now we want john to use the ADVRES flag while submitting jobs. We are aware of the flag for a single reservation id, but not for multiple reservation ids. How do we specify that?
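For anyone hitting the same question in the archive: Maui's resource-manager extension for this is the ADVRES job flag, and as documented it takes at most one reservation name per job. A minimal sketch under Torque, reusing the reservation names from the question (job.sh is just a placeholder script, and this is recalled from the Maui admin guide rather than tested against this setup):

qsub -W x=FLAGS:ADVRES:john.0 job.sh   # bind the job to one named reservation
qsub -W x=FLAGS:ADVRES job.sh          # no name given: job may only run on reservations it can access

So for a job that may float across john.0/john.1/john.2, the usual route is the bare ADVRES form combined with reservation ACLs that admit user john (which reservations created with setres -u john should already have); worth verifying against your Maui version.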
_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From deadline at eadline.org Mon Jun 18 18:01:42 2012 From: deadline at eadline.org (Douglas Eadline) Date: Mon, 18 Jun 2012 18:01:42 -0400 Subject: [Beowulf] Some history and my theory (was Status of beowulf.org) In-Reply-To: <4FDC64FE.4040201@st-andrews.ac.uk> References: <4FDC64FE.4040201@st-andrews.ac.uk> Message-ID: <06fb57baefa1dc6f4b69db7600a6674f.squirrel@mail.eadline.org> All, When Don was running the list, moderation was there to eliminate spam; ever notice how clean this list has been? That is, there was a list of white hats who could always post (old-timers mostly); everything else was moderated to check for spam. I assume the list is now running on autopilot (actually with no pilot), where over-moderation is the rule to catch spam and no one has assumed Don's role of releasing the few true Beowulf messages from the sea of spam (see below). You may find this helpful (from my list archives, Wed, February 8, 2006, 3:54 pm): ----- After having a near-perfect record of keeping out spam and virus email, one slipped through yesterday. It's a good example of why mailing lists can't be auto-moderated. The current elaborate system requires heavy human moderation, and this message still slid past everything and was automatically approved. The message appeared to come from a subscribed user, so it passed the first check. (This is actually common: spammers and viruses use pairs of addresses from the same source, so evil mail is likely to come from someone you have heard of.) The message passed both ClamAV and SpamAssassin (although a compressed zip file should have triggered something). It didn't have any of the keywords that are configured in Mailman's "hold" rules. And finally, that user was approved for auto-post for messages that passed all of the previous rules. Please keep this event in mind before you complain that your message was held for moderation. 95-99% (depending on the day) of inbound mail to the mailing lists is immediately discarded as obvious viruses and spam. Only very low-scoring mail from approved subscribers is eligible for auto-approval. The rest is held for manual moderation. Only about 2% of those held messages are valid postings. That means about 50 messages manually discarded for each manually approved posting. And except for a few weeks scattered over the history of the list, I've been the sole or primary moderator. The bottom line is that we are considering a message board format to replace the mailing list. It would require logins to post, and retroactive moderation to delete advertising and trolls. Any opinions?
-- Donald Becker -- Doug -- Mailscanner: Clean _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From samuel at unimelb.edu.au Mon Jun 18 21:14:45 2012 From: samuel at unimelb.edu.au (Christopher Samuel) Date: Tue, 19 Jun 2012 11:14:45 +1000 Subject: [Beowulf] Caption Competition In-Reply-To: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com> References: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com> Message-ID: <4FDFD285.4030801@unimelb.edu.au> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 19/06/12 00:31, Hearns, John wrote: > There must be some funny caption for this! "This isn't what I was expecting when they said I'd be a support engineer!" > As an aside, are those very deep false floors? They are, a fair bit deeper than what we have for our BG/Q, but then they have 24X more racks than us and so need a lot more plumbing and cabling. cheers! Chris - -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk/f0oUACgkQO2KABBYQAh9XxQCglUljL9dt+zkCSHULNQPrjtTZ 9SsAnA4BlEenTRKF9a2dUyUmogpi2HYJ =hfyX -----END PGP SIGNATURE----- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From samuel at unimelb.edu.au Mon Jun 18 21:19:25 2012 From: samuel at unimelb.edu.au (Christopher Samuel) Date: Tue, 19 Jun 2012 11:19:25 +1000 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <3FBA67A9A790594FA9F74DD91A869A7812A695@046-CH1MPN1-081.046d.mgd.msft.net> References: <4FD89DFF.9020708@ias.edu> <4FDBBBF7.1020802@cse.ucdavis.edu> <4FDBC0E2.9090706@scalableinformatics.com> <201206161008.53854.samuel@unimelb.edu.au> <3FBA67A9A790594FA9F74DD91A869A7812A695@046-CH1MPN1-081.046d.mgd.msft.net> Message-ID: <4FDFD39D.1080002@unimelb.edu.au> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 18/06/12 22:47, Pierce, Thomas H (H) wrote: > Hi All, Hiya, > As a "forced" lurker for the last few years, I have seen posts > "moderated" and lost. I agree with Joe L. that moderation "kills" > more discussions than "rescues" diversions. I think this is true for a moderated list with no active moderator (as we have now), but with a (very) light hand and a rapid transition from moderated to unmoderated for new users, it may not be too bad. That said, I've never had to set moderation for new users on any lists I've run before; but then, they weren't as public as this, and Doug has already presented evidence that spammers have got stuff through to the list before now... > Alas, USENET seems to be gone and "free" newsgroups and mailing > lists are less popular. Very sad (though inevitable for USENET I think). I *really* dislike web-based forums... > If one can vote in a colleague, I would like to see Doug Eadline or > Joe L. host the mailing list.
The evidence is that the hosting itself is OK (in that the list is continuing to work despite the website being inaccessible); the problem is the lack of list admins and the website. cheers, Chris - -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk/f05wACgkQO2KABBYQAh9g3gCcCP11KhNy4aGbBHwzIJGS8I+d Y9gAnRgtephRetWHArav8vEVcaKwFLgF =Ur5d -----END PGP SIGNATURE----- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From lindahl at pbm.com Tue Jun 19 03:18:09 2012 From: lindahl at pbm.com (Greg Lindahl) Date: Tue, 19 Jun 2012 00:18:09 -0700 Subject: [Beowulf] Caption Competition Message-ID: <20120619071809.GC23616@bx9.net> On Mon, Jun 18, 2012 at 11:39:12AM -0400, Mark Hahn wrote: > we have a location with a raised floor of ~4 ft. I'm not sure how > that was chosen, but I also can't think of any reason why not. > I mean, in general, raised floors are a chilled-air plenum, > so it's clearly good to avoid narrow ones. (the DC I sit next to > has about 16", and 2-8" of that is consumed by cables.) Beats me how people design these things, but yeah, deep floors aren't that unusual, although I suppose I've heard of them mostly in places where data cabling is under the floor. The standard in Silicon Valley these days is to run data cables above the racks, and power cables under the floor, if possible. As for DCs which don't have raised floors at all, they are common in Silicon Valley. They tell me that they use momentum to get the cold air from the duct in the ceiling to the floor. As far as I can tell, that works fine; i.e., in the one data center I've had nodes in that had inadequate cooling and no raised floor, the coolest nodes were still at the bottom. -- greg p.s. we should grab the archives and move the list to a server that "we" control. Just sayin' _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From john.hearns at mclaren.com Tue Jun 19 04:44:19 2012 From: john.hearns at mclaren.com (Hearns, John) Date: Tue, 19 Jun 2012 09:44:19 +0100 Subject: [Beowulf] Caption Competition References: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com> Message-ID: <207BB2F60743C34496BE41039233A8090EAF50F0@MRL-PWEXCHMB02.mil.tagmclarengroup.com> "Dang, Earl, was that there knit one, purl one or knit two, purl one?" The contents of this email are confidential and for the exclusive use of the intended recipient. If you receive this email in error you should not copy it, retransmit it, use it or disclose its contents but should return it to the sender immediately and delete your copy.
_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eugen at leitl.org Tue Jun 19 08:34:46 2012 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 19 Jun 2012 14:34:46 +0200 Subject: [Beowulf] building a 96-core Ubuntu ARM solar-powered cluster Message-ID: <20120619123446.GE17120@leitl.org> (ob caveat phoronix) http://www.phoronix.com/scan.php?page=article&item=mit_cluster_build&num=1 Building A 96-Core Ubuntu ARM Solar-Powered Cluster Published on June 19, 2012 Written by Michael Larabel Last week I shared results from the Phoronix 12-core ARM Linux mini cluster that was constructed out of six PandaBoard ES development boards. Over the weekend, a 96-core ARM cluster succeeded this build. While packing nearly 100 cores and running Ubuntu Linux, the power consumption was just a bit more than 200 Watts. This array of nearly 100 processor cores was even powered up by a solar panel. This past weekend I was out at the Massachusetts Institute of Technology (MIT) where this build took place. A massive ARM build-out had been in the plans for a few months, including getting it running off a solar panel. The build was a success and by Sunday the goals were realized. Due to my past ARM Linux benchmarking on Phoronix that they have followed, their use of the Phoronix Test Suite, and my experience with Linux benchmarking and performance testing in general, I was invited over to MIT to help with this 96-core ARM build after having collaborated with them for a few months. This cluster / super-computer was built around 48 PandaBoards. The bulk of the PandaBoards were not the ES model (I brought my collection of PandaBoard ES models as back-ups for the PandaBoard nodes that failed), but just the vanilla model. The non-ES model packs a Texas Instruments OMAP4430 with a dual-core 1.0GHz Cortex-A9 processor. The GPU and CPU of the PandaBoard ES with its OMAP4460 are at higher clock speeds, but aside from that it is very similar to the OMAP4430 model. For maximum density and to make it easier to transport, the PandaBoards ended up being stacked vertically. The enclosure for the 48 PandaBoards was an industrial trashcan. Rather than using AC adapters, the PandaBoards were running off a USB power source. The power consumption on the original PandaBoard is similar to that of the PandaBoard ES, or perhaps slightly lower when using the more efficient USB power source. My PandaBoard ES testing usually indicates about a 3 Watt idle per board, 5 Watts under load, or 6 Watts under extreme load. This MIT 96-core cluster would idle at just under 170 Watts, and for the loads we hit it with over the weekend it usually went just a bit above 200 Watts. Overall, it was a fairly interesting weekend project! On the software side was a stock Ubuntu 12.04 ARM OMAP4 installation across all 48 PandaBoards on the SD cards. As far as any benchmark results, MIT sent in some numbers for the Green500 and some other performance tests are still being worked out. The benchmarks I ran on the hardware differed a bit from my expectations based upon what I was achieving with my 12-core PandaBoard ES cluster, so for the moment, until all kinks in the new build are worked out, I will refrain from sharing any numbers. Such many-core ARM clusters though are showing great potential in performance-per-Watt scenarios.
For now, see my 12-core ARM cluster results. I will also have more numbers on the way shortly from the Phoronix build. Over the weekend, there also was not much time for performance tuning. Ubuntu 12.10 presents some very impressive performance gains, as Phoronix results from earlier this month have indicated. MIT will be putting out a video, a couple of papers, and some other information on this 96-core / 48-PandaBoard cluster, so stay tuned for much more information. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From alscheinine at tuffmail.us Tue Jun 19 11:23:49 2012 From: alscheinine at tuffmail.us (Alan Louis Scheinine) Date: Tue, 19 Jun 2012 10:23:49 -0500 Subject: [Beowulf] Some history and my theory (was Status of beowulf.org) In-Reply-To: <06fb57baefa1dc6f4b69db7600a6674f.squirrel@mail.eadline.org> References: <4FDC64FE.4040201@st-andrews.ac.uk> <06fb57baefa1dc6f4b69db7600a6674f.squirrel@mail.eadline.org> Message-ID: <4FE09985.2030402@tuffmail.us> I remember that post from Donald Becker but did not have a copy. Thank you very much for reminding us, in particular potential volunteers, how much time is involved. The end result has been of high quality over the years, thanks to Don. Regards, Alan Douglas Eadline wrote: > All, > > When Don was running the list, moderation was there to > eliminate spam; ever notice how clean this list has been? > That is, there was a list of white hats who could always > post (old-timers mostly); everything else was moderated to check for > spam. I assume the list is now running on autopilot (actually > with no pilot), where over-moderation is the rule to catch spam > and no one has assumed Don's role of releasing the few > true Beowulf messages from the sea of spam (see below). > > You may find this helpful (from my list archives, Wed, > February 8, 2006, 3:54 pm): > > ----- > > After having a near-perfect record of keeping out spam and virus > email, one slipped through yesterday. > > It's a good example of why mailing lists can't be auto-moderated. > The current elaborate system requires heavy human moderation, and this > message still slid past everything and was automatically approved. > > The message appeared to come from a subscribed user, so it passed the > first check. (This is actually common: spammers and viruses use pairs of > addresses from the same source, so evil mail is likely to come from > someone you have heard of.) > > The message passed both ClamAV and SpamAssassin (although a compressed > zip file should have triggered something). It didn't have any of the > keywords that are configured in Mailman's "hold" rules. And finally, that > user was approved for auto-post for messages that passed all of the > previous rules. > > Please keep this event in mind before you complain that your message was > held for moderation. 95-99% (depending on the day) of inbound mail to > the mailing lists is immediately discarded as obvious viruses and spam. > Only very low-scoring mail from approved subscribers is eligible for > auto-approval. The rest is held for manual moderation. Only about 2% of > those held messages are valid postings. That means about 50 messages > manually discarded for each manually approved posting. And except for a > few weeks scattered over the history of the list, I've been the sole or > primary moderator.
> The bottom line is that we are considering a message board format to > replace the mailing list. It would require logins to > post, and retroactive moderation to delete advertising and trolls. > Any opinions? > > -- > Donald Becker > > -- > Doug -- Alan Scheinine 200 Georgann Dr., Apt. E6 Vicksburg, MS 39180 Email: alscheinine at tuffmail.us Mobile phone: 225 288 4176 http://www.flickr.com/photos/ascheinine _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From hahn at mcmaster.ca Fri Jun 29 15:50:03 2012 From: hahn at mcmaster.ca (Mark Hahn) Date: Fri, 29 Jun 2012 15:50:03 -0400 (EDT) Subject: [Beowulf] water cooling Message-ID: Hi all, I'm involved in some planning that tries to evaluate large HPC datacenter designs for a few years out. One really fundamental issue that seems unclear is whether direct water cooling will be fairly prevalent by then. One train of thought is that power densities will increase to 30 kW/rack or so, necessitating water. But will it be rack-back radiators (far less efficient, but fairly routine today), or will designs obtain much higher efficiency by skipping the air-cooling step (like Aquasar, SuperMUC, the K machine, etc.)? so, how commodity will direct water cooling be? for extra points, what kW/rack density are you planning? (by "commodity", I mean "available from vendors like HP/Dell/IBM as well as parts vendors like Supermicro.") thanks, mark. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean
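For scale, a quick back-of-the-envelope on that 30 kW figure, under assumed rack geometries rather than any particular vendor's layout:

30 kW / 42 x 1U nodes ~= 714 W per node
30 kW / 84 half-width nodes (2U, four-node chassis) ~= 357 W per node

Per-node draws in that range put a lot of work on the usual front-to-back air path, which is exactly why the rear-door-radiator versus direct-water question arises.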
> > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > > -- - - - - - - - - - - - - - - - - - - - - - Nathan Moore Winona, MN - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From Daniel.Pfenniger at unige.ch Wed Jun 6 05:38:24 2012 From: Daniel.Pfenniger at unige.ch (Daniel Pfenniger) Date: Wed, 06 Jun 2012 11:38:24 +0200 Subject: [Beowulf] Desktop fan reccommendation In-Reply-To: References: Message-ID: <4FCF2510.80502@unige.ch> Nathan Moore wrote: > All, > > This is barely beowuf related... > > New desktop machine is a Shuttle SX79R5, > http://us.shuttle.com/barebone/Models/SX79R5.html > > In the past, shuttles have been very quiet, but this one has a fairly loud > variable speed fan on the CPU heat exchanger. I normally buy replacement parts > from vendors like newegg, but their selection of 90mm case fans mainly seems to > be described by CFM and whether the fan has LED lights mounted in it (FYI, that > is not a selling point). > > So, is there an engineer's version of newegg that ya'll know about? There must > be a super quiet 90mm fan out there that I can pick up for $10... I remind ads for quiet and more efficient rotor-less fans for PC's but cannot find such products anymore. The idea was to maximize the air flow area by displacing the central motor to the blade edges. Not only the larger central area would allow a lower, quieter blade speed, but the blades being accelerated at their extremities by the circular motor would be mechanically more stable, less subject to vibrations. My guess is that such fans, although technically better, were too expensive in regard of the advantages. The Dyson bladeless and silent fans are based om a different principle, a cylindrical thin air layer carries along the inner air column, the air flow is then laminar (http://www.dyson.com/store/fans.asp). Dan _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From diep at xs4all.nl Wed Jun 6 07:38:03 2012 From: diep at xs4all.nl (Vincent Diepeveen) Date: Wed, 6 Jun 2012 13:38:03 +0200 Subject: [Beowulf] Desktop fan reccommendation In-Reply-To: References: Message-ID: <31570B96-9227-4CF1-BB79-910547DC1F94@xs4all.nl> On Jun 5, 2012, at 7:48 PM, Nathan Moore wrote: > All, > > This is barely beowuf related... > > New desktop machine is a Shuttle SX79R5, http://us.shuttle.com/ > barebone/Models/SX79R5.html > > In the past, shuttles have been very quiet, but this one has a > fairly loud variable speed fan on the CPU heat exchanger. You sure this one is easy to replace? 
It seems that it doesn't have a cooler at all for the CPU; as you say, it's some sort of cheapskate tubing that pumps liquid through the socket, with one fan seemingly doing the cooling for both the PSU and the CPU at once. From inside it then pushes that air to the outside, where it goes through a tiny grill, which I'd guess also impacts the airflow big time. So whatever you do, you need a fan that delivers at least the same CFM and the same air pressure. Most likely the 9cm replacement fan will still be something of at least 5000 RPM or so, so very noisy, and don't believe all the manufacturer specs there; they usually 'overclock' fans nowadays, effectively generating a far higher RPM, and *that* CFM is what they put on the box.

If you got a tad bigger fan it would be easy to make it quieter, but it seems difficult to fit one in. Maybe if you remove the grill and build a kind of cardboard duct that channels the air into the tiny PSU/heat exchanger. Just tape won't do, I'd guess, as after a while it'll loosen too much, so some glue is also needed; then it would be a lot easier to get it quiet.

It's a tiny fan for a CPU/PSU combo. Just 9cm is not much. At 12cm there is a wonderful fan I'm very happy with, the Aerocool Shark. It only comes in a red incarnation. Nowadays the fans that do not have LEDs are either more expensive or produce less CFM, so you'll have no escape except to get one with LEDs, as those are mass-produced. You'll have to measure, though, whether a bigger fan fits inside, and if so what its maximum dimensions are and how it impacts your cardboard contraption.

From diep at xs4all.nl Wed Jun 6 07:49:07 2012
From: diep at xs4all.nl (Vincent Diepeveen)
Date: Wed, 6 Jun 2012 13:49:07 +0200
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: References: Message-ID: <307F35D1-83C8-426A-A877-3185B61BD09A@xs4all.nl>

Nathan, at closer look it seems there is a tiny fan on the outside as well, for the PSU. Probably that's the thing making the big noise. Possibly it's a 15k RPM fan or something? Hah, at least 65 decibels, I'd guess, if you measure correctly. It'll have to deliver enough airflow to cool the PSU part.

Maybe you could do the same thing I'm doing: just put a huge fan on the outside, rewire the 120-230 volt wires and deliver that power in a different manner. It might be able to suck out enough, especially if you get rid of the grill of the 15k RPM fan; it'll be easier for it to suck air out there. Also the limiting grill of the heat exchanger: if you cut it out it might work. You can just test
whether it works. Should work OK, I guess. You need a capable fan outside, though. How much must it cool? A high-clocked Socket 2011 chip that's crunching AVX full-time will probably eat 260 watts or so at the wall.

From prentice at ias.edu Wed Jun 6 08:42:48 2012
From: prentice at ias.edu (Prentice Bisbal)
Date: Wed, 06 Jun 2012 08:42:48 -0400
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: <4FCF2510.80502@unige.ch>
References: <4FCF2510.80502@unige.ch>
Message-ID: <4FCF5048.1020602@ias.edu>

On 06/06/2012 05:38 AM, Daniel Pfenniger wrote:
> I recall ads for quiet and more efficient rotor-less fans for PCs but
> cannot find such products anymore. [...]

I had one of these fans on one of my CPU heatsinks a few years ago. It was much quieter than the fan it replaced, but still not all that quiet when compared to a Dell or HP tower. I forget the name of the manufacturer or the model. The last time I looked, I couldn't find them anywhere.
--
Prentice

From james.p.lux at jpl.nasa.gov Wed Jun 6 09:24:50 2012
From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C))
Date: Wed, 6 Jun 2012 13:24:50 +0000
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: <4FCF2510.80502@unige.ch> Message-ID:

I used to work for a company that made fans...

-> a very, very tiny fraction of the power into the fan goes into acoustic noise, so it's not a big driver of efficiency

-> acoustics of fans are a black art. There are things that you know make it worse.. but once you avoid those, there's a lot of empiricism: very tough to model accurately, even with a big cluster, and the price premium for a quieter fan isn't worth it.

-> do not have the number of blades be the same as the number of support struts or things in the way. In general, an odd number of blades is better. Taken to an extreme, this is how you build a siren (two plates with holes, one spinning).

-> a big fan turning slowly makes less noise than a small fan turning fast

-> noise is strongly dependent on the air speed. In HVAC design, the usual rule of thumb is to keep the airspeed below 1000 linear feet per minute (yeah, a non-SI unit, but it's duct CFM divided by duct cross-sectional area)

-> having the fan blade tips close to the surrounding shroud makes it more efficient AND quieter, but requires tighter mechanical tolerances in manufacturing

-> the spacing between blades is important, and a real challenge in any rotating fan. Near the hub, the trailing edge of one blade is closer to the leading edge of the next. AND the tangential velocity of the blade through the air is different at the hub (root) than at the tip. Fans with large hubs are easier to optimize (smaller variation), BUT you give up airflow area for given outside dimensions.

-> funky notches and swoops in the blades sometimes help, sometimes don't. I think mostly they're for patent protection. If I sell a fan with 3 asymmetric notches in each blade, and a container load of Chinese copies shows up at the port, it's easier to say that they infringe my patent.

-> blade balance is important, not only in terms of rotating mass but in terms of aerodynamic balance. If the blade pitch is slightly different on each blade, then it will be noisier.

-> well-designed input and output vanes (particularly the latter) seem to make it quieter, but I don't know why.

On 6/6/12 2:38 AM, "Daniel Pfenniger" wrote:
> The idea was to maximize the airflow area by displacing the central motor
> to the blade edges.
> My guess is that such fans, although technically better, were
> too expensive in regard of the advantages.

Yes.. fans are a very cost-sensitive product. For a lot of applications, nobody cares how noisy the fan is.

> The Dyson bladeless and silent fans are based on a different principle [...]

But you still need a fan to generate the pressurized air for the slit. However, that fan can be hidden inside the base and can be baffled for noise reduction.

From diep at xs4all.nl Wed Jun 6 09:36:08 2012
From: diep at xs4all.nl (Vincent Diepeveen)
Date: Wed, 6 Jun 2012 15:36:08 +0200
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: <4FCF5048.1020602@ias.edu>
References: <4FCF2510.80502@unige.ch> <4FCF5048.1020602@ias.edu>
Message-ID:

How much airflow per square centimeter do they generate?

As for the cluster here, plenty of space is available. Renting office space is around 50 euro per square meter per year over here; not sure about there. So the cluster, some cardboard, and huge fans of 14 and 18 cm are doing the job of cooling the nodes and switch (Mellanox, of course). As I understand it, the square meters they reserve for datacenters are always far too limited, making the space each node eats important as well, yet that's not the problem here in my office.

The thing that worries me more is the airflow to outside (and inside). Usually there's only a limited number of square centimeters of tube there, and the 'industrial' fans that have massive airflow are very, very noisy.

I'm already wondering about using some massive cardboard box and blowing in air there using 8 fans (@ 100 CFM each) or so, and then behind them a second layer of around 6 fans @ 100 CFM, creating a massive overpressure, hoping that this will generate more air pressure, enough to blow in and out through some meters of tubing, but that doesn't seem like a perfect solution to me.

On Jun 6, 2012, at 2:42 PM, Prentice Bisbal wrote: [...]
From james.p.lux at jpl.nasa.gov Wed Jun 6 10:56:22 2012
From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C))
Date: Wed, 6 Jun 2012 14:56:22 +0000
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: Message-ID:

On 6/6/12 6:36 AM, "Vincent Diepeveen" wrote:
>How much airflow per square centimeter do they generate?

That's not typically how fans are rated. You'll have a curve of volume/time (e.g. cubic feet per minute or cubic meters per hour) for a given back pressure (usually in "inches of water column"). Fan ratings at zero backpressure are almost worthless. There's huge variation from the freeflow number to a backpressure number. You need the number at some decent backpressure (0.25" water column, for instance).

An EBM Papst 4182 NX is nominally 105.9 CFM; at 0.1" it's about 90 CFM, at 0.2" it's about 50, and zero at the max backpressure for that fan.

>The thing that worries me more is the airflow to outside (and inside).
>Usually there's only a limited number of square centimeters of tube
>there, and the 'industrial' fans that have massive airflow are very,
>very noisy.

Not true... You can get VERY quiet fans that push a lot of air through a large duct. It's all about the air speed.

You might want to look at a centrifugal blower rather than an axial fan. Axial fans don't do as well against high static pressures, and if you're doing a scheme with ducting, a centrifugal fan is usually a better choice.
>I'm already wondering about using some massive cardboard box and blowing
>in air there using 8 fans (@ 100 CFM each) or so, and then behind them a
>second layer of around 6 fans @ 100 CFM, creating a massive overpressure [...]

That sort of works, but the problem is that unless your "taper" is very, very long, you're basically just creating a pressurized plenum, and the fans will be inefficient working against that backpressure. What you are trying to do is combine multiple low-speed flows into one high-speed flow, and that's a tricky aerodynamics problem. That said, it does allow you to put a noisy fan somewhere else.

In general, high-pressure fans are noisier than low-pressure fans for the same flow or horsepower rating.

Stacking fans doesn't work very well. The flow coming off the fan is twisting (unless you've got vanes to recover the rotational energy), so the second fan in the stack is working against a spiraling flow. Counter-rotating sequential fans do work, but they are trickier to design, and there are a lot fewer fans available with reverse rotation.
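To make the fan-curve point above concrete, here is a minimal sketch of finding the operating point where the quoted 4182 NX curve meets a duct's system curve. The linear interpolation between the cited points, the assumed shutoff at 0.25" of water, and the duct-resistance coefficient K are illustrative assumptions, not vendor data.

# Back-of-envelope operating point for a fan against a duct.
# Fan curve samples: (static pressure in inches of water, flow in CFM),
# taken from the EBM Papst 4182 NX numbers quoted above, with an assumed
# shutoff (zero flow) at 0.25".
FAN_CURVE = [(0.00, 105.9), (0.10, 90.0), (0.20, 50.0), (0.25, 0.0)]

def fan_flow(p):
    """Linearly interpolate delivered CFM at back pressure p (in. H2O)."""
    for (p0, q0), (p1, q1) in zip(FAN_CURVE, FAN_CURVE[1:]):
        if p0 <= p <= p1:
            return q0 + (q1 - q0) * (p - p0) / (p1 - p0)
    return 0.0

K = 2.5e-5  # assumed duct resistance: p = K * q**2 (in. H2O per CFM^2)

# Bisect on pressure: the operating point is where the fan curve meets
# the system curve, i.e. where K * fan_flow(p)**2 == p.
lo, hi = 0.0, 0.25
for _ in range(50):
    mid = (lo + hi) / 2
    if K * fan_flow(mid) ** 2 > mid:  # fan still beats the duct loss here
        lo = mid
    else:
        hi = mid

p_op = (lo + hi) / 2
print(f"operating point: {fan_flow(p_op):.0f} CFM at {p_op:.2f} in. H2O")

With this made-up duct the fan delivers roughly 74 CFM at 0.14", well below the 105.9 CFM free-flow figure, which is exactly the gap between datasheet headline and installed performance being described here.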
From mathog at caltech.edu Wed Jun 6 11:17:47 2012
From: mathog at caltech.edu (mathog)
Date: Wed, 06 Jun 2012 08:17:47 -0700
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: References: Message-ID: <12a71f10d82f8c03a7c37e6003fb566f@saf.bio.caltech.edu>

This isn't that hard a problem. Visit the major fan manufacturers' sites and buy the one that will fit, moves at least as much air as the original, and is much quieter. The manufacturers all list their products' sizes (i.e., the ones that will fit), and then you check CFM and noise. For instance, I have bought fans from these guys a couple of times:

http://www.dynatron-corp.com/en/product_list.aspx?cv=20-72

Generally you have to go through a distributor and not buy direct, but that is no big deal.

Jim Lux wrote:
> -> acoustics of fans are a black art.

Especially when they fail. We had a 20mm fan go bad in a sort of scanner recently. This itty bitty fan barely moves any air at the best of times (it cools a 486, which really doesn't need to be cooled), and under normal circumstances the fan is completely inaudible. The users contacted me and told me that the scanner was making horrible mechanical failure sounds, as if the scan stage was scraping on something. I didn't measure it, but the sound was really loud, I'm guessing at least 85 decibels, and it really did sound like the end of the world. The sound came in bursts, with no noise in between. I'm guessing a bearing moving around in the fan, between a noisy position and a quiet one, or maybe it had developed some sort of resonance. All that racket was from one tiny fan.

Regards,
David Mathog
mathog at caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech

From Daniel.Pfenniger at unige.ch Wed Jun 6 13:33:20 2012
From: Daniel.Pfenniger at unige.ch (Daniel Pfenniger)
Date: Wed, 06 Jun 2012 19:33:20 +0200
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: <53bed0488afa16de22a4e9a658ec1290.squirrel@th.physik.uni-frankfurt.de>
References: <4FCF2510.80502@unige.ch> <53bed0488afa16de22a4e9a658ec1290.squirrel@th.physik.uni-frankfurt.de>
Message-ID: <4FCF9460.10500@unige.ch>

holway at th.physik.uni-frankfurt.de wrote:
>> The Dyson bladeless and silent fans are based on a different principle [...]
>
> Which is not good if you're trying to cool stuff.....
Well, the fans we are discussing expel air *out* of the box, so the heat carried by the air doesn't care about the downstream laminar or turbulent state of the airflow.

However, noise generation does depend on the airflow state, since the acoustic power is proportional to the 8th power of the turbulence eddy speed (Lighthill 1952, 1954). This is why jet planes are noisy: their turbulence is almost sonic. Airplane or helicopter propeller tips, or the ends of fan blades, move closer to the sound speed, so most of the sound is generated there.

The conclusion is that to keep a computer quiet it is advantageous to use large fans rotating at low speed. For the same air/heat output one gets much less noise, especially if the airflow is laminar.

Dan

From diep at xs4all.nl Wed Jun 6 12:56:20 2012
From: diep at xs4all.nl (Vincent Diepeveen)
Date: Wed, 6 Jun 2012 18:56:20 +0200
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: References: Message-ID: <158B5F81-672F-4A09-B3E6-F66DD85E7E8A@xs4all.nl>

On Jun 6, 2012, at 4:56 PM, Lux, Jim (337C) wrote:
> That's not typically how fans are rated..

Yeah, that was a creative way, it seems, to say airspeed :) For a small-diameter tube one needs a massive airspeed to still push through some hundreds of CFM.

Note that the new generation of fans has really improved a lot. I'm very happy with that Aerocool Shark of 12 cm. It's 7 euro a piece in shops here (including 19% VAT, which soon becomes 21% here, by the way).

Happen to have a link for the type of fan you mean, one that fits in a small tube of around 10 cm diameter or so and which is centrifugal, big CFM and low noise? Will be interesting to toy with!

V-sign! (for the non-insiders: the 6th of June is D-Day)
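Pfenniger's scaling argument above can be turned into a back-of-envelope number. The sketch below combines the standard fan affinity law (volume flow scaling as rpm times diameter cubed) with the Lighthill-style assumption he cites (radiated acoustic power scaling roughly as the 8th power of the characteristic blade speed). Both are idealizations and the fan sizes are made up, so treat the output as an order-of-magnitude illustration, not a datasheet prediction.

import math

# Compare two fans delivering the SAME volume flow:
#   fan A: small and fast;  fan B: twice the diameter, slower.
# Fan affinity law: Q is proportional to n * D**3, so doubling D lets n
# drop by 8x for the same Q.  Tip speed v = pi * n * D then drops by 4x.
# Lighthill-style scaling (as cited above): acoustic power ~ v**8.

d_a, n_a = 0.092, 5000.0          # a 92 mm fan at 5000 rpm (illustrative)
d_b = 2 * d_a                     # a 184 mm fan
n_b = n_a * (d_a / d_b) ** 3      # same Q  ->  5000 / 8 = 625 rpm

v_a = math.pi * d_a * n_a / 60.0  # tip speeds in m/s
v_b = math.pi * d_b * n_b / 60.0

ratio = (v_a / v_b) ** 8          # relative acoustic power, A vs B
print(f"fan B runs at {n_b:.0f} rpm, tip speed {v_b:.1f} vs {v_a:.1f} m/s")
print(f"predicted noise reduction: {10 * math.log10(ratio):.0f} dB")

The ideal answer is about 48 dB quieter for the big slow fan. Real fans fall far short of this (motor whine, bearings, and installation effects dominate at low tip speeds, as Lux notes below), but the direction matches the advice in the thread: a big slow fan beats a small fast one.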
From ntmoore at gmail.com Wed Jun 6 13:00:15 2012
From: ntmoore at gmail.com (Nathan Moore)
Date: Wed, 6 Jun 2012 12:00:15 -0500
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: <31570B96-9227-4CF1-BB79-910547DC1F94@xs4all.nl>
References: <31570B96-9227-4CF1-BB79-910547DC1F94@xs4all.nl>
Message-ID:

> You sure this one is easy to replace?

Yes, very easy to replace. About 8 Phillips screws. Unfortunately, though, the shroud is fixed at 92mm or so, so a bigger, slower fan is not possible.

> It seems that it doesn't have a cooler at all for the CPU; as you say,
> it's some sort of cheapskate tubing that pumps liquid through the socket,
> with one fan seemingly doing the cooling for both the PSU and the CPU at
> once.

Sort of. I think the "heat pipe" is essentially 4-5 copper tubes that run to a fine-finned radiator. The fan vents the radiator. It is actually a fairly elegant, compact, and reliable design.
From diep at xs4all.nl Wed Jun 6 13:10:45 2012
From: diep at xs4all.nl (Vincent Diepeveen)
Date: Wed, 6 Jun 2012 19:10:45 +0200
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: References: <31570B96-9227-4CF1-BB79-910547DC1F94@xs4all.nl>
Message-ID: <6547CA52-944E-4A84-B8F6-B995E5E9F5EC@xs4all.nl>

How many RPM is that 9 cm fan? And how about that small tiny fan of the PSU, isn't that one very noisy? Cooling a PSU that has to deliver 220 watts or so sure needs lots of CFM, and the tiny fans I know from rackmounts that create 20+ CFM are all rated 50+ decibels; add some aluminium around them and it's 65 decibels...

From james.p.lux at jpl.nasa.gov Wed Jun 6 13:28:00 2012
From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C))
Date: Wed, 6 Jun 2012 17:28:00 +0000
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: <4FCF9460.10500@unige.ch>
References: <4FCF2510.80502@unige.ch> <53bed0488afa16de22a4e9a658ec1290.squirrel@th.physik.uni-frankfurt.de> <4FCF9460.10500@unige.ch>
Message-ID:

I'm not sure that the acoustic noise from fans comes from actual aerodynamic noise (i.e., not like a jet engine, or the pressure/shock waves from the blades). The blade tips are probably operating in a low-speed incompressible flow regime.

For the low-speed fans typical of this application, noise comes much more from incidental flow behavior and mechanical transmission (e.g., the airflow from the blade hitting a stationary object and creating a pulsed flow which then hits the package side and makes it vibrate). There's also surprisingly high noise in some fans from the DC brushless motor (a cheap controller uses square-edged pulses to the windings, so the torque has pulses, which are then mechanically transmitted to the housing: a nice "whine" source for a little 6000 RPM motor with a lot of poles).

Actually, not all fans are set up to suck air out of the box. Blowing in works better for heat transfer (you're pushing cold, dense air rather than sucking warm, less dense air). Most test equipment uses the "suck in through a filter and pressurize the box" design approach. I think PCs evolved the other way because the single fan was in the power supply, and you didn't want to blow hot air, preheated by the power supply, through the rest of the system. So it is set up as an "exhaust from PS box" fan.

And a lot of higher-performance PCs (like the Dell sitting on my desk) use centrifugal fans (with variable speed, to boot).

Jim Lux
From ntmoore at gmail.com Wed Jun 6 13:36:29 2012
From: ntmoore at gmail.com (Nathan Moore)
Date: Wed, 6 Jun 2012 12:36:29 -0500
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: References: <4FCF2510.80502@unige.ch> <53bed0488afa16de22a4e9a658ec1290.squirrel@th.physik.uni-frankfurt.de> <4FCF9460.10500@unige.ch>
Message-ID:

> Actually, not all fans are set up to suck air out of the box.

Ha! [...]
--
Nathan Moore
Winona, MN

From diep at xs4all.nl Wed Jun 6 13:45:45 2012
From: diep at xs4all.nl (Vincent Diepeveen)
Date: Wed, 6 Jun 2012 19:45:45 +0200
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: References: <4FCF2510.80502@unige.ch> <53bed0488afa16de22a4e9a658ec1290.squirrel@th.physik.uni-frankfurt.de> <4FCF9460.10500@unige.ch>
Message-ID: <451F3CB3-B159-47D2-BAF1-0E62264A72EB@xs4all.nl>

On Jun 6, 2012, at 7:28 PM, Lux, Jim (337C) wrote:
> So it is set up as an "exhaust from PS box" fan.

Exhausting is most effective for PCs with what you probably call 'low airspeed' fans, as I measured some years ago with a dual K7 machine. It was far more effective than blowing in air. The ballgame changes when you blow in at some massive, merciless CFM, as getting the lower temperature to the CPU sooner makes a difference then. That is not so interesting for computers, though. I blew in, at way-beyond-moped sound levels, using some Delta fans. Yet the machine is already cooled really well by then, so it's not such an interesting difference. At that huge blow-in rate it was very effective indeed, yet I could only measure that difference when totally overkilling the machine with those fans.
Actually, the machine's thin aluminium started to bend under that huge air pressure, but I figured that out only long after the experiment; that's for another time to discuss :)

> And a lot of higher-performance PCs (like the Dell sitting on my desk)
> use centrifugal fans (with variable speed, to boot).

When I googled centrifugal fans, I saw huge prices in the hundreds of dollars. That would mean the centrifugal fans are more expensive than the entire cluster, which seems a tad odd. So it's going to be the cheapskate cardboard solution with some duct tape, glue, and relatively cheap fans.
From james.p.lux at jpl.nasa.gov Wed Jun 6 16:07:10 2012
From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C))
Date: Wed, 6 Jun 2012 20:07:10 +0000
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: <451F3CB3-B159-47D2-BAF1-0E62264A72EB@xs4all.nl>
References: <4FCF2510.80502@unige.ch> <53bed0488afa16de22a4e9a658ec1290.squirrel@th.physik.uni-frankfurt.de> <4FCF9460.10500@unige.ch> <451F3CB3-B159-47D2-BAF1-0E62264A72EB@xs4all.nl>
Message-ID:

Vincent Diepeveen wrote:
> When I googled centrifugal fans, I saw huge prices in the hundreds of
> dollars. [...]

The fan in my Dell is plastic and cheap.. I've seen them surplus for under $5.

But what you want is often sold as a "squirrel cage blower". The advantages are:
a) good performance against backpressure
b) lots of very small blades, so the "blade repetition rate" noise is high frequency, low amplitude, and easily absorbed
c) decent performance at low rotation rates (500-1000 RPM)

They are the dominant device used in, for instance, heating and air conditioning.

A good cheap source is automotive scrap yards. The blower that pushes the air through the heater core and all the various ducts in a car is well suited to pushing a lot of air through a lot of loss. 12VDC, typically. Make sure you get the housing too, not just the squirrel cage and motor. This may require a bit of hacksawing on modern cars.

Ones from upscale cars are quieter than those from more downscale cars. So find that Mercedes scrap, not the stuff from the DDR (do Trabants even have heaters, or do you wear your good socialist overcoat?)

Here's a typical item on eBay:
http://www.ebay.com/itm/Squirrel-Cage-Shaded-Pole-Blower-Fan-220-CFM-Dayton-60-available-/190685633240?pt=LH_DefaultDomain_0&hash=item2c65bfdad8

Here's a 12VDC one:
http://www.ebay.com/itm/454-CFM-12-VOLT-DC-SPAL-007-A42-32D-3-SPEED-CAB-FAN-BLOWER-16-1406-/270917027943?_trksid=p4340.m1982&_trkparms=aid%3D555000%26algo%3DPW.CURRENT%26ao%3D1%26asc%3D10%26meid%3D8950398135996031222%26pid%3D100009%26prg%3D1005%26rk%3D1

I've also seen somewhat larger versions of this sold as an appliance:
plastic housing, designed to be set on the floor to blow air for cooling or to help dry recently mopped floors or wet carpets.

Here's one from a computer:
http://www.surpluscenter.com/item.asp?item=16-1151&catname=electric

I should point out that they make these in huge sizes (as in 1 million CFM) for applications like underground mine ventilation.

From prentice at ias.edu Wed Jun 6 16:18:42 2012
From: prentice at ias.edu (Prentice Bisbal)
Date: Wed, 06 Jun 2012 16:18:42 -0400
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: References: <4FCF2510.80502@unige.ch> <53bed0488afa16de22a4e9a658ec1290.squirrel@th.physik.uni-frankfurt.de> <4FCF9460.10500@unige.ch> <451F3CB3-B159-47D2-BAF1-0E62264A72EB@xs4all.nl>
Message-ID: <4FCFBB22.7050608@ias.edu>

On 06/06/2012 04:07 PM, Lux, Jim (337C) wrote:
> I've also seen somewhat larger versions of this sold as an appliance:
> plastic housing, designed to be set on the floor to blow air for cooling
> or to help dry recently mopped floors or wet carpets.

You should be able to get one of these plastic-housing ones from a janitorial supply company or an emergency-services supply company. Janitors use them to dry wet floors, and firefighters use them to vent a house when CO limits are too high or there's too much smoke in a house. You can get one from McMaster-Carr for $346.67 or $451.67. Click on "portable blowers" in the link below:

http://www.mcmaster.com/#standard-air-blowers/

--
Prentice

From diep at xs4all.nl Thu Jun 7 02:47:50 2012
From: diep at xs4all.nl (Vincent Diepeveen)
Date: Thu, 7 Jun 2012 08:47:50 +0200
Subject: [Beowulf] Desktop fan recommendation
In-Reply-To: References: <4FCF2510.80502@unige.ch> <53bed0488afa16de22a4e9a658ec1290.squirrel@th.physik.uni-frankfurt.de> <4FCF9460.10500@unige.ch> <451F3CB3-B159-47D2-BAF1-0E62264A72EB@xs4all.nl>
Message-ID: <586C6315-BB53-4599-8AE0-85044C5D4AC6@xs4all.nl>

This is all huge-decibel junk, man. Any centrifugal fan is out of the question if it produces that much noise right next to my desk :)

Generally speaking, I don't understand why so many people accept such huge sound levels from manufacturers. The fans I've got here, the 18 cm ones, are 700 RPM @ 19 decibels; you don't hear them. The 1500 RPM 12 cm Aerocools you do hear, but with their own rubber mounts and from 1.5 meters away it's acceptable noise, though it depends a lot on the heatsinks you've got, and I had to buy some floor-insulation material that absorbs a lot of decibels to quiet things down further.

Requirement 1 is: it must be low noise, of course. It would be very bad to have something of 60-100 decibels next to professional sound equipment.
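Since the thread keeps trading decibel figures, it is worth recalling that levels from several incoherent sources combine logarithmically, not additively. The sketch below is a generic illustration of that arithmetic; the fan levels are just the figures mentioned in this thread, treated (simplistically) as equal-distance, incoherent sources.

import math

def combine_db(levels):
    """Total level of incoherent sources: 10*log10(sum of 10^(L/10))."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

# Eight quiet 19 dB fans together are still far quieter than one 50 dB fan.
print(f"8 x 19 dB -> {combine_db([19] * 8):.1f} dB total")
print(f"1 x 50 dB -> {combine_db([50]):.1f} dB total")
print(f"both      -> {combine_db([19] * 8 + [50]):.1f} dB total")

Doubling the number of identical fans adds only about 3 dB, which is one reason many slow fans can beat one fast one.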
From bill at cse.ucdavis.edu Fri Jun 8 20:06:19 2012
From: bill at cse.ucdavis.edu (Bill Broadley)
Date: Fri, 08 Jun 2012 17:06:19 -0700
Subject: [Beowulf] Torrents for HPC
Message-ID: <4FD2937B.6010408@cse.ucdavis.edu>

I've built Myrinet, SDR, DDR, and QDR clusters (no FDR yet), but I still have users whose use cases and budgets only justify GigE.
I've set up a 160TB Hadoop cluster that is working well, but I haven't found justification for the complexity/cost related to Lustre. I have high hopes for Ceph, but it seems not quite ready yet; I'd be happy to hear otherwise.

A new user on one of my GigE clusters submits batches of 500 jobs that need to randomly read a 30-60GB dataset. They aren't the only user of said cluster, so each job will be waiting in the queue with a mix of others. As you might imagine, that hammers a central GigE-connected NFS server pretty hard. This cluster has 38 computes/304 cores/608 threads.

I thought torrents might be a good way to publish such a dataset to the compute nodes (thus avoiding the GigE bottleneck). So I wrote a small/simple BitTorrent client, made a 16GB example data set, and measured the performance pushing it to 38 compute nodes:

http://cse.ucdavis.edu/bill/btbench-2.png

The slow ramp-up is partially because I'm launching torrent clients with a crude for i in { ssh $i launch_torrent.sh }. I get approximately 2.5GB/sec sustained when writing to 38 compute nodes. So 38 nodes * 16GB = 608GB to distribute @ 2.5GB/sec = 240 seconds or so. The clients definitely see MUCH faster performance when accessing a local copy instead of a small share of the performance/bandwidth of a central file server.

Do you think it's worth bundling up for others to use?

This is how it works:
1) User runs publish <directory> before they start submitting jobs.
2) The publish command makes a torrent of that directory and starts seeding that torrent.
3) The user submits an arbitrary number of jobs that need that directory. Inside the job they run "$ subscribe <directory>"
4) The subscribe command launches one torrent client per node (not per job) and blocks until the directory is completely downloaded
5) /scratch/<directory> has the user's data

Not nearly as convenient as having a fast parallel filesystem, but it seems potentially useful for those who have large read-only datasets, GigE, and NFS.

Thoughts?

From jlb17 at duke.edu Mon Jun 11 13:49:23 2012
From: jlb17 at duke.edu (Joshua Baker-LePain)
Date: Mon, 11 Jun 2012 13:49:23 -0400 (EDT)
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu>
References: <4FD2937B.6010408@cse.ucdavis.edu>
Message-ID:

On Fri, 8 Jun 2012 at 5:06pm, Bill Broadley wrote
> Do you think it's worth bundling up for others to use? [...]

I would definitely be interested in a tool like this. Our situation is about as you describe: we don't have the budget or workload to justify any interconnect higher-end than GigE, but we have folks who pound our central storage to get at DBs stored there.

--
Joshua Baker-LePain
QB3 Shared Cluster Sysadmin
UCSF
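The serial ssh loop Bill mentions is an easy thing to parallelize, which would flatten the slow ramp-up in his plot. Below is a minimal fan-out launcher sketch; the node names and the launch_torrent.sh helper are hypothetical placeholders standing in for whatever his tool actually ships, so this shows the concurrency pattern, not his implementation.

#!/usr/bin/env python3
# Start a torrent client on every compute node at (nearly) the same time.
# Assumes passwordless ssh to each node and a launch_torrent.sh script on
# the nodes that subscribes to the published directory (both hypothetical).
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

NODES = [f"compute-{i:02d}" for i in range(1, 39)]  # 38 nodes, made-up names

def launch(node, directory):
    # ssh blocks until the remote command exits; BatchMode avoids hanging
    # on a password prompt if a node's keys are broken.
    return subprocess.run(
        ["ssh", "-o", "BatchMode=yes", node, "launch_torrent.sh", directory],
        capture_output=True, text=True,
    ).returncode

if __name__ == "__main__":
    directory = sys.argv[1]
    # One thread per node: the work is remote, so threads just wait on ssh.
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        codes = list(pool.map(lambda n: launch(n, directory), NODES))
    failed = [n for n, c in zip(NODES, codes) if c != 0]
    print(f"{len(NODES) - len(failed)} nodes started, failed: {failed}")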
From jlb17 at duke.edu Mon Jun 11 13:49:23 2012
From: jlb17 at duke.edu (Joshua Baker-LePain)
Date: Mon, 11 Jun 2012 13:49:23 -0400 (EDT)
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu>
References: <4FD2937B.6010408@cse.ucdavis.edu>
Message-ID:

On Fri, 8 Jun 2012 at 5:06pm, Bill Broadley wrote

> Do you think it's worth bundling up for others to use?
>
> This is how it works:
> 1) User runs publish <dir> before they start submitting jobs.
> 2) The publish command makes a torrent of that directory and starts seeding that torrent.
> 3) The user submits an arbitrary number of jobs that need that directory. Inside the job they run "$ subscribe <dir>".
> 4) The subscribe command launches one torrent client per node (not per job) and blocks until the directory is completely downloaded.
> 5) /scratch/<user>/<dir> has the user's data.
>
> Not nearly as convenient as having a fast parallel filesystem, but it seems potentially useful for those who have large read-only datasets, GigE, and NFS.
>
> Thoughts?

I would definitely be interested in a tool like this. Our situation is much as you describe -- we don't have the budget or workload to justify any interconnect higher-end than GigE, but we have folks who pound our central storage to get at DBs stored there.

--
Joshua Baker-LePain
QB3 Shared Cluster Sysadmin
UCSF

From beckerjes at mail.nih.gov Mon Jun 11 14:02:43 2012
From: beckerjes at mail.nih.gov (Jesse Becker)
Date: Mon, 11 Jun 2012 14:02:43 -0400
Subject: [Beowulf] Torrents for HPC
In-Reply-To:
References: <4FD2937B.6010408@cse.ucdavis.edu>
Message-ID: <20120611180243.GN38490@mail.nih.gov>

On Mon, Jun 11, 2012 at 01:49:23PM -0400, Joshua Baker-LePain wrote:
> On Fri, 8 Jun 2012 at 5:06pm, Bill Broadley wrote
>> Do you think it's worth bundling up for others to use?
>> [...]
>
> I would definitely be interested in a tool like this. Our situation is much as you describe -- we don't have the budget or workload to justify any interconnect higher-end than GigE, but we have folks who pound our central storage to get at DBs stored there.

I looked into doing something like this on a 50-node cluster to synchronize several hundred GB of semi-static data used in /scratch. I found that the time to build the torrent files--calculating checksums and such--was *far* more time consuming than the actual file distribution. This is on top of the rather severe IO hit on the "seed" box as well.

I fought with it for a while, but came to the conclusion that *for _this_ data*, and how quickly it changed, torrents weren't the way to go--largely because of the cost of creating the torrent in the first place.

However, I do think that similar systems could be very useful, if perhaps a bit less strict in their tests. The peer-to-peer model is useful, and (in some cases) a simple size/date check could be enough to determine when to (re)copy a file.

One thing torrents don't handle is file deletions, which opens up a few new problems.

Eventually, I moved to a distributed rsync tree, which worked for a while, but was slightly fragile. Eventually, we dropped the whole thing when we purchased a sufficiently fast storage system.
--
Jesse Becker
NHGRI Linux support (Digicon Contractor)

From landman at scalableinformatics.com Mon Jun 11 14:10:35 2012
From: landman at scalableinformatics.com (Joe Landman)
Date: Mon, 11 Jun 2012 14:10:35 -0400
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <20120611180243.GN38490@mail.nih.gov>
References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov>
Message-ID: <4FD6349B.6000604@scalableinformatics.com>

On 06/11/2012 02:02 PM, Jesse Becker wrote:
> I looked into doing something like this on a 50-node cluster to synchronize several hundred GB of semi-static data used in /scratch. I found that the time to build the torrent files--calculating checksums and such--was *far* more time consuming than the actual file distribution. This is on top of the rather severe IO hit on the "seed" box as well.

A long while ago, we developed 'xcp', which did data distribution from 1 machine to many machines and was quite fast (non-broadcast) -- specifically for moving some genomic/proteomic databases to remote nodes. Didn't see much interest in it, so we shelved it. It worked like this:

    xcp file remote_path [--nodes node1[,node2....]] [--all]

We were working on generalizing it for directories and other things as well, but as I noted, people were starting to talk (breathlessly at the time) about torrents for distribution, so we pushed it off and forgot about it.

> I fought with it for a while, but came to the conclusion that *for _this_ data*, and how quickly it changed, torrents weren't the way to go--largely because of the cost of creating the torrent in the first place.
>
> Eventually, I moved to a distributed rsync tree, which worked for a while, but was slightly fragile. Eventually, we dropped the whole thing when we purchased a sufficiently fast storage system.

This is one of the things that drove us to building fast storage systems. Data motion is hard, and a good fast storage unit with some serious data movement cannons and high power storage can solve the problem with greater ease/elegance.

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
From bernard at vanhpc.org Mon Jun 11 14:17:53 2012
From: bernard at vanhpc.org (Bernard Li)
Date: Mon, 11 Jun 2012 11:17:53 -0700
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD6349B.6000604@scalableinformatics.com>
References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov> <4FD6349B.6000604@scalableinformatics.com>
Message-ID:

Hi all:

I'd also like to point you guys to pcp:

http://www.theether.org/pcp/

It's a bit old, but should still build on modern systems. It would be nice if somebody picks up development after all these years (hint hint) :-)

Cheers,

Bernard

On Mon, Jun 11, 2012 at 11:10 AM, Joe Landman wrote:
> On 06/11/2012 02:02 PM, Jesse Becker wrote:
>> I looked into doing something like this on a 50-node cluster to synchronize several hundred GB of semi-static data used in /scratch. I found that the time to build the torrent files--calculating checksums and such--was *far* more time consuming than the actual file distribution. [...]
>
> A long while ago, we developed 'xcp', which did data distribution from 1 machine to many machines and was quite fast (non-broadcast). [...]
>
> This is one of the things that drove us to building fast storage systems. Data motion is hard, and a good fast storage unit with some serious data movement cannons and high power storage can solve the problem with greater ease/elegance.
From stewart at serissa.com Mon Jun 11 17:37:03 2012
From: stewart at serissa.com (Lawrence Stewart)
Date: Mon, 11 Jun 2012 17:37:03 -0400
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD6349B.6000604@scalableinformatics.com>
References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov> <4FD6349B.6000604@scalableinformatics.com>
Message-ID:

Another one of these file distribution tools is "sbcast" from the SLURM resource manager. It was amazingly fast at distributing a modest-size file to all 972 nodes of the large SiCortex machine. I didn't try it with large files.
From skylar.thompson at gmail.com Mon Jun 11 20:34:35 2012
From: skylar.thompson at gmail.com (Skylar Thompson)
Date: Mon, 11 Jun 2012 17:34:35 -0700
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu>
References: <4FD2937B.6010408@cse.ucdavis.edu>
Message-ID: <4FD68E9B.6010204@gmail.com>

On 6/8/2012 5:06 PM, Bill Broadley wrote:
> A new user on one of my GigE clusters submits batches of 500 jobs that need to randomly read a 30-60GB dataset. They aren't the only user of said cluster, so each job will be waiting in the queue with a mix of others.
>
> As you might imagine, that hammers a central GigE-connected NFS server pretty hard. This cluster has 38 computes/304 cores/608 threads.
>
> [...]
>
> Not nearly as convenient as having a fast parallel filesystem, but it seems potentially useful for those who have large read-only datasets, GigE, and NFS.
>
> Thoughts?

We've run into a similar need for a solution at $WORK. I work in a large genomics research department, and we have cluster users who want to copy large data files (20GB-500GB) to hundreds of cluster nodes at once. Since the people that need this tend to run MPI anyway, I wrote an MPI utility that copies a file once to every node in the job, taking care to make sure each node only gets one copy of the file and to copy the file only if its SHA1 hash changes.

Skylar
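Skylar's utility isn't shown in the thread, but the idea is straightforward to sketch with mpi4py. Everything below -- the chunk size, the one-rank-per-node launch, the file paths -- is an assumption for illustration, not his actual code:

    # Hedged sketch: broadcast a file to every node over MPI, skipping nodes
    # whose local copy already has the matching SHA1. Assumes the job is
    # launched with one rank per node (e.g. mpirun -np 38 -npernode 1 ...).
    import hashlib
    import os
    from mpi4py import MPI

    CHUNK = 64 * 1024 * 1024  # stream in 64MB pieces to bound memory use

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for block in iter(lambda: f.read(1 << 20), b''):
                h.update(block)
        return h.hexdigest()

    def bcast_file(src, dst, comm=MPI.COMM_WORLD):
        rank = comm.Get_rank()
        # Rank 0 announces size and hash; other ranks decide if they need a copy.
        meta = (os.path.getsize(src), sha1_of(src)) if rank == 0 else None
        size, digest = comm.bcast(meta, root=0)
        need = rank != 0 and not (os.path.exists(dst) and sha1_of(dst) == digest)
        fin = open(src, 'rb') if rank == 0 else None
        fout = open(dst, 'wb') if need else None
        sent = 0
        while sent < size:
            n = min(CHUNK, size - sent)
            buf = bytearray(n)
            if rank == 0:
                fin.readinto(buf)
            comm.Bcast([buf, MPI.BYTE], root=0)  # collective: all ranks join
            if fout:
                fout.write(buf)
            sent += n
        for f in (fin, fout):
            if f:
                f.close()

    if __name__ == '__main__':
        bcast_file('/master/copy/dataset.db', '/scratch/dataset.db')  # hypothetical paths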
From atp at piskorski.com Tue Jun 12 01:54:10 2012
From: atp at piskorski.com (Andrew Piskorski)
Date: Tue, 12 Jun 2012 01:54:10 -0400
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD6349B.6000604@scalableinformatics.com>
References: <4FD6349B.6000604@scalableinformatics.com>
Message-ID: <20120612055410.GA45268@piskorski.com>

On Mon, Jun 11, 2012 at 02:10:35PM -0400, Joe Landman wrote:
> A long while ago, we developed 'xcp', which did data distribution from 1 machine to many machines and was quite fast (non-broadcast).

Sounds very similar to nettee. Can you compare/contrast the two?

http://saf.bio.caltech.edu/nettee.html

> We were working on generalizing it for directories and other things as well,

Ah. Nettee can only handle that sort of thing by playing games with tar, which isn't terribly user friendly.

--
Andrew Piskorski
http://www.piskorski.com/

From atp at piskorski.com Tue Jun 12 02:37:35 2012
From: atp at piskorski.com (Andrew Piskorski)
Date: Tue, 12 Jun 2012 02:37:35 -0400
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <20120611180243.GN38490@mail.nih.gov>
References: <20120611180243.GN38490@mail.nih.gov>
Message-ID: <20120612063735.GB45268@piskorski.com>

On Mon, Jun 11, 2012 at 02:02:43PM -0400, Jesse Becker wrote:
> I found that the time to build the torrent files--calculating checksums and such--was *far* more time consuming than the actual file distribution. This is on top of the rather severe IO hit on the "seed" box as well.

Hm, I wonder if zsync does better: http://zsync.moria.org.uk/

Just now with zsync v0.6.1 (from 2009), running zsyncmake on a 696 MB *.iso file took 9.7 seconds on my (rather pedestrian) desktop. That was reading from and writing to the same SATA disk, and it used one CPU core at about 80% the whole time. When I ran two zsyncmakes at once, each one took twice as long and only used 40% CPU, so that 70 MB/s clearly seems to be limited by disk I/O on this machine, not CPU.

--
Andrew Piskorski
http://www.piskorski.com/

From dnlombar at ichips.intel.com Tue Jun 12 11:19:08 2012
From: dnlombar at ichips.intel.com (David N. Lombard)
Date: Tue, 12 Jun 2012 08:19:08 -0700
Subject: [Beowulf] Torrents for HPC
In-Reply-To:
References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov> <4FD6349B.6000604@scalableinformatics.com>
Message-ID: <20120612151908.GA18824@nlxcldnl2.cl.intel.com>

On Mon, Jun 11, 2012 at 11:17:53AM -0700, Bernard Li wrote:
> I'd also like to point you guys to pcp:
>
> http://www.theether.org/pcp/
>
> It's a bit old, but should still build on modern systems. It would be nice if somebody picks up development after all these years (hint hint) :-)

+1 for pcp

It's one of my /favorites/ from the past. As it did a pipelined file transfer over a tree, it was only a tad slower than a single point-to-point copy. Brent Chun (the author) also wrote a related, amazingly fast parallel execution utility, gexec.

--
David N. Lombard, Intel, Irvine, CA
I do not speak for Intel Corporation; all comments are strictly my own.
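The "only a tad slower than a single point-to-point copy" property comes from pipelining: each node forwards block k to the next node while it is still receiving block k+1, so n-node distribution costs roughly one copy time plus a short pipeline-fill delay. A rough sketch of that idea (a plain chain rather than pcp's actual tree, and not its real protocol), again using mpi4py:

    # Hedged sketch of a pipelined chain copy: rank 0 reads from disk; every
    # other rank receives from rank-1, forwards to rank+1, and writes locally.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    BLOCK = 4 * 1024 * 1024  # block size is an arbitrary choice

    def chain_copy(src_path, dst_path):
        src = open(src_path, 'rb') if rank == 0 else None
        dst = open(dst_path, 'wb') if rank > 0 else None
        pending = None  # outstanding forward to the next node, if any
        while True:
            if rank == 0:
                buf = src.read(BLOCK)           # b'' signals end of file
            else:
                n = comm.recv(source=rank - 1)  # length prefix from our parent
                buf = bytearray(n)
                if n:
                    comm.Recv([buf, MPI.BYTE], source=rank - 1)
            if pending is not None:
                pending.Wait()  # finish forwarding the previous block first
                pending = None
            if rank + 1 < size:
                comm.send(len(buf), dest=rank + 1)
                if len(buf):
                    # overlap: forward this block while fetching the next one
                    pending = comm.Isend([buf, MPI.BYTE], dest=rank + 1)
            if dst is not None and len(buf):
                dst.write(buf)
            if not len(buf):
                break
        if src:
            src.close()
        if dst:
            dst.close()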
From ellis at cse.psu.edu Tue Jun 12 13:56:22 2012
From: ellis at cse.psu.edu (Ellis H. Wilson III)
Date: Tue, 12 Jun 2012 13:56:22 -0400
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu>
References: <4FD2937B.6010408@cse.ucdavis.edu>
Message-ID: <4FD782C6.7050704@cse.psu.edu>

On 06/08/12 20:06, Bill Broadley wrote:
> A new user on one of my GigE clusters submits batches of 500 jobs that need to randomly read a 30-60GB dataset. They aren't the only user of said cluster, so each job will be waiting in the queue with a mix of others.

With a 160TB cluster and only a 30-60GB dataset, is there any reason why the user isn't simply storing their dataset in HDFS? Does the data change frequently via a non-MapReduce framework such that it needs to be pulled from NFS before every job? If the dataset is in a few dozen files and in HDFS in the cluster, there is no reason why MapReduce shouldn't spawn its tasks directly "on" the data, without the need (most of the time) to move all of the data to every node as you mention.

> The clients definitely see MUCH faster performance when accessing a local copy instead of a small share of the performance/bandwidth of a central file server.

This makes perfect sense, and is in fact exactly what Hadoop already attempts to do by trying to co-locate MapReduce tasks with pre-placed data in HDFS. Hadoop tries to move the computation to the data in this case, rather than what you are trying to do: move the data to the computation, which tends to be /way/ harder unless you've got killer storage.

All of this said, it is unclear from your email whether this user is using Hadoop, or if that was just a side note and they are operating in a totally different cluster with a different framework (MPI?).

Best,

ellis

From orion at cora.nwra.com Tue Jun 12 17:06:19 2012
From: orion at cora.nwra.com (Orion Poplawski)
Date: Tue, 12 Jun 2012 15:06:19 -0600
Subject: [Beowulf] Torrents for HPC
In-Reply-To:
References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov> <4FD6349B.6000604@scalableinformatics.com>
Message-ID: <4FD7AF4B.7030500@cora.nwra.com>

On 06/11/2012 12:17 PM, Bernard Li wrote:
> I'd also like to point you guys to pcp:
>
> http://www.theether.org/pcp/
>
> It's a bit old, but should still build on modern systems. It would be nice if somebody picks up development after all these years (hint hint) :-)

Hmm, the home page indicates it went into Ganglia, but it's not there now. Anyone know what happened?

--
Orion Poplawski
Technical Manager                     303-415-9701 x222
NWRA, Boulder Office                  FAX: 303-415-9702
3380 Mitchell Lane                    orion at nwra.com
Boulder, CO 80301                     http://www.nwra.com

From bernard at vanhpc.org Tue Jun 12 17:27:41 2012
From: bernard at vanhpc.org (Bernard Li)
Date: Tue, 12 Jun 2012 14:27:41 -0700
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD7AF4B.7030500@cora.nwra.com>
References: <4FD2937B.6010408@cse.ucdavis.edu> <20120611180243.GN38490@mail.nih.gov> <4FD6349B.6000604@scalableinformatics.com> <4FD7AF4B.7030500@cora.nwra.com>
Message-ID:

Hi Orion:

On Tue, Jun 12, 2012 at 2:06 PM, Orion Poplawski wrote:
> Hmm, the home page indicates it went into Ganglia, but it's not there now. Anyone know what happened?

The code is here:

http://ganglia.svn.sf.net/viewvc/ganglia/trunk/gexec/pcp/

Perhaps Brent could update the page with the direct link?

Thanks,

Bernard
From bill at cse.ucdavis.edu Tue Jun 12 18:42:47 2012
From: bill at cse.ucdavis.edu (Bill Broadley)
Date: Tue, 12 Jun 2012 15:42:47 -0700
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu>
References: <4FD2937B.6010408@cse.ucdavis.edu>
Message-ID: <4FD7C5E7.3020803@cse.ucdavis.edu>

Many thanks for the online and offline feedback. I've been reviewing the mentioned alternatives. From what I can tell, none of them allow nodes to join/leave at random. Our problem is that a user might submit 500-50,000 jobs that depend on a particular dataset and have a variable number of jobs/nodes running at any given time.

So ideally each node that a job lands on would do something like:
1) Is this node subscribed to this dataset? If not, start a client.
2) Is the dataset completely downloaded? If not, wait.

Because of the node churn we didn't want the send approach. We also wanted to handle multiple file transfers of multiple directories for multiple users at once. From what I can tell, most (all?) other approaches assume a mostly idle network and don't robustly handle cases where 1/3rd of the nodes have highly contended links. Because we are using the links for MPI, NFS, and torrents, we didn't want an approach that isn't robust with highly variable per-node bandwidth.

Any comments on how well the various alternatives work with a busy network? It seems like any tree-based approach would have problems.

As for the torrent creation process: my small 5-disk RAID manages 300-400MB/sec and sustains around 80% of that when creating torrents. The tool looks single-threaded but parallel-friendly and easy to parallelize. From what I can tell, though, torrent creation is I/O limited, at least for us. I already have some parallel checksumming code around for another project; I could likely tweak it to create torrents if people out there think this is a real bottleneck. I like the torrent behavior of guaranteed file integrity and self-healing files.

Using MPI does make quite a bit of sense for clusters with high speed interconnects, although I suspect that being network bound for IO is less of a problem there. I'd consider it though; I do have sdr/ddr/qdr clusters around, but so far (knock on wood) they're not IO limited. I've done a fair bit of MPI programming, but I'm not sure it's easy/possible to have nodes dynamically join/leave. Worst case, I guess you could launch a thread/process for each pair of peers that wanted to trade blocks and still use TCP for swapping metadata about which peers to connect to and which blocks to trade.

From skylar.thompson at gmail.com Tue Jun 12 18:47:14 2012
From: skylar.thompson at gmail.com (Skylar Thompson)
Date: Tue, 12 Jun 2012 15:47:14 -0700
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD7C5E7.3020803@cse.ucdavis.edu>
References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD7C5E7.3020803@cse.ucdavis.edu>
Message-ID: <4FD7C6F2.20003@gmail.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 06/12/2012 03:42 PM, Bill Broadley wrote:
> Using MPI does make quite a bit of sense for clusters with high speed interconnects. [...] I've done a fair bit of MPI programming, but I'm not sure it's easy/possible to have nodes dynamically join/leave.

We manage this by having users run the sync in the same Grid Engine parallel environment they run their job in. This means they're guaranteed to run the sync job on the same nodes their actual job runs on. The copied files change so slowly that even a 1GbE network is rarely a bottleneck, since we only transfer files that have changed.

Skylar
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk/XxvAACgkQsc4yyULgN4b6dACfb5KIcql9wAbcudIKiO+IMrHX
xS4An1caTjSp0MOCgb4Ach6h8ynQe7CF
=LE07
-----END PGP SIGNATURE-----
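Bill's parallel-checksumming idea maps naturally onto torrent creation, since each fixed-size piece is hashed independently: a .torrent's "pieces" field is just the concatenation of the per-piece SHA1 digests, in order. A hedged sketch of that step alone (single-file case; the piece size and worker count are arbitrary choices, and real .torrent creation still needs the bencoded metadata wrapped around these hashes):

    # Hash the pieces of one file in parallel.
    import hashlib
    import os
    from concurrent.futures import ThreadPoolExecutor

    PIECE = 4 * 1024 * 1024  # 4MB pieces, an arbitrary but common choice

    def hash_piece(path, index):
        with open(path, 'rb') as f:
            f.seek(index * PIECE)
            return hashlib.sha1(f.read(PIECE)).digest()

    def piece_hashes(path, workers=4):
        n = (os.path.getsize(path) + PIECE - 1) // PIECE
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # map() yields results in piece order, as the format requires
            digests = pool.map(lambda i: hash_piece(path, i), range(n))
        return b''.join(digests)

Whether multiple readers actually help on a spinning-disk RAID is exactly the open question Bill raises: the resulting seeks can easily make this slower than a single streaming pass, which fits his observation that creation is I/O limited.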
From bill at cse.ucdavis.edu Tue Jun 12 18:59:46 2012
From: bill at cse.ucdavis.edu (Bill Broadley)
Date: Tue, 12 Jun 2012 15:59:46 -0700
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD7C6F2.20003@gmail.com>
References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD7C5E7.3020803@cse.ucdavis.edu> <4FD7C6F2.20003@gmail.com>
Message-ID: <4FD7C9E2.4020506@cse.ucdavis.edu>

On 06/12/2012 03:47 PM, Skylar Thompson wrote:
> We manage this by having users run the sync in the same Grid Engine parallel environment they run their job in. This means they're guaranteed to run the sync job on the same nodes their actual job runs on. The copied files change so slowly that even a 1GbE network is rarely a bottleneck, since we only transfer files that have changed.

Our problem is that we have many users and don't want 50,000 30-minute jobs to turn into one giant job that defeats the priority system while running. With an array job, users can get 100% of the cluster if it's idle and quickly decay to their fair share when other higher-priority jobs run. That way we can keep the cluster 100% utilized, but new jobs (from users using less than their fair share) can get through the queue (which might well be months long) quickly.

From prentice at ias.edu Wed Jun 13 09:16:23 2012
From: prentice at ias.edu (Prentice Bisbal)
Date: Wed, 13 Jun 2012 09:16:23 -0400
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu>
References: <4FD2937B.6010408@cse.ucdavis.edu>
Message-ID: <4FD892A7.4010207@ias.edu>

Bill,

Thanks for sharing this. I've often wondered if using BitTorrent in this way would be useful for HPC. Thanks for answering that question!

Prentice

On 06/08/2012 08:06 PM, Bill Broadley wrote:
> I've built Myrinet, SDR, DDR, and QDR clusters (no FDR yet), but I still have users whose use cases and budgets only justify GigE.
>
> [...]
>
> Not nearly as convenient as having a fast parallel filesystem, but it seems potentially useful for those who have large read-only datasets, GigE, and NFS.
>
> Thoughts?
From bs_lists at aakef.fastmail.fm Wed Jun 13 09:40:09 2012
From: bs_lists at aakef.fastmail.fm (Bernd Schubert)
Date: Wed, 13 Jun 2012 15:40:09 +0200
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD2937B.6010408@cse.ucdavis.edu>
References: <4FD2937B.6010408@cse.ucdavis.edu>
Message-ID: <4FD89839.2040904@aakef.fastmail.fm>

On 06/09/2012 02:06 AM, Bill Broadley wrote:
> I've built Myrinet, SDR, DDR, and QDR clusters (no FDR yet), but I still have users whose use cases and budgets only justify GigE.
>
> I've set up a 160TB Hadoop cluster that is working well, but haven't found justification for the complexity/cost of Lustre. I have high hopes for Ceph, but it seems not quite ready yet. I'd be happy to hear otherwise.

What about an easy-to-set-up cluster file system such as FhGFS? As one of its developers I'm a bit biased of course, but then I'm also familiar with Lustre, and I think FhGFS is far easier to set up. We also do not have the problem of running clients and servers on the same node; some of our customers make heavy use of that and use their compute nodes as storage servers. That should provide the same or better throughput than your torrent system.

Cheers,
Bernd
From landman at scalableinformatics.com Wed Jun 13 09:55:39 2012
From: landman at scalableinformatics.com (Joe Landman)
Date: Wed, 13 Jun 2012 09:55:39 -0400
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD89839.2040904@aakef.fastmail.fm>
References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm>
Message-ID: <4FD89BDB.4050100@scalableinformatics.com>

On 06/13/2012 09:40 AM, Bernd Schubert wrote:
> What about an easy-to-set-up cluster file system such as FhGFS? As one of its developers I'm a bit biased of course, but then I'm also familiar with Lustre, and I think FhGFS is far easier to set up. We also do not have the problem of running clients and servers on the same node; some of our customers make heavy use of that and use their compute nodes as storage servers. That should provide the same or better throughput than your torrent system.

I'd like to chime in and note that we have customers re-implementing storage with FhGFS.

Ceph will be good. You can build a reasonable system today with xfs as the backing store. The RADOS device is an excellent basis for building reliable systems.

Generally speaking, none of the cluster file systems will solve the specific problem in the original post, though some of them (and various implementation details) will make it much less of a problem.

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615

From prentice at ias.edu Wed Jun 13 10:04:47 2012
From: prentice at ias.edu (Prentice Bisbal)
Date: Wed, 13 Jun 2012 10:04:47 -0400
Subject: [Beowulf] Status of beowulf.org?
Message-ID: <4FD89DFF.9020708@ias.edu>

I know this came up recently. I just wanted to see if any new information has surfaced.

Does anyone know what the status of beowulf.org is? I will be starting a new job in a few weeks, and I'm in the process of unsubscribing from all the mailing lists I subscribe to at my current job. Following the link to the beowulf.org mailman page to control my subscription results in

    The connection has timed out
    The server at www.beowulf.org is taking too long to respond.

Looks like I'll be unsubscribing through e-mail commands, but I'm worried about how difficult it will be to re-subscribe once I start the new job.
--
Prentice

From landman at scalableinformatics.com Wed Jun 13 10:11:08 2012
From: landman at scalableinformatics.com (Joe Landman)
Date: Wed, 13 Jun 2012 10:11:08 -0400
Subject: [Beowulf] Status of beowulf.org?
In-Reply-To: <4FD89DFF.9020708@ias.edu>
References: <4FD89DFF.9020708@ias.edu>
Message-ID: <4FD89F7C.2070302@scalableinformatics.com>

On 06/13/2012 10:04 AM, Prentice Bisbal wrote:
> I know this came up recently. I just wanted to see if any new information has surfaced.
>
> Does anyone know what the status of beowulf.org is? I will be starting a

This is part of Penguin Computing, and may have withered a bit since Don Becker left.

> new job in a few weeks, and I'm in the process of unsubscribing from all the mailing lists I subscribe to at my current job. [...] I'm worried about how difficult it will be to re-subscribe once I start the new job.

If Penguin doesn't want to handle hosting it anymore, please let us know (and feel free to contact me offline; we'd be happy to either host it or set it up on EC2 or sumthin).

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615

From eagles051387 at gmail.com Wed Jun 13 10:13:33 2012
From: eagles051387 at gmail.com (Jonathan Aquilina)
Date: Wed, 13 Jun 2012 16:13:33 +0200
Subject: [Beowulf] Status of beowulf.org?
In-Reply-To: <4FD89F7C.2070302@scalableinformatics.com>
References: <4FD89DFF.9020708@ias.edu> <4FD89F7C.2070302@scalableinformatics.com>
Message-ID: <72779032-28C9-447F-A852-E38D3529D9CF@gmail.com>

I too am willing to host it; granted, I am only a lurker, but clustering is something that still highly interests me.

Regards
Jonathan Aquilina

On 13 Jun 2012, at 16:11, Joe Landman wrote:
> On 06/13/2012 10:04 AM, Prentice Bisbal wrote:
>> Does anyone know what the status of beowulf.org is? [...]
> This is part of Penguin Computing, and may have withered a bit since Don Becker left.
>
> If Penguin doesn't want to handle hosting it anymore, please let us know (and feel free to contact me offline; we'd be happy to either host it or set it up on EC2 or sumthin).

From j.wender at science-computing.de Wed Jun 13 10:28:19 2012
From: j.wender at science-computing.de (Jan Wender)
Date: Wed, 13 Jun 2012 16:28:19 +0200
Subject: [Beowulf] Status of beowulf.org?
In-Reply-To: <72779032-28C9-447F-A852-E38D3529D9CF@gmail.com>
References: <4FD89DFF.9020708@ias.edu> <4FD89F7C.2070302@scalableinformatics.com> <72779032-28C9-447F-A852-E38D3529D9CF@gmail.com>
Message-ID: <4FD8A383.9030904@science-computing.de>

Hi all,

I tried again to reach Arend at Penguin, now using another email address. Will keep you posted.

Cheerio,
Jan

--
---- Company Information ----
Vorstandsvorsitzender: Gerd-Lothar Leonhart
Vorstand: Dr. Bernd Finkbeiner, Dr. Arno Steitz, Dr. Ingrid Zech
Vorsitzender des Aufsichtsrats: Philippe Miltin
Sitz: Tuebingen / Registergericht: Stuttgart / Registernummer: HRB 382196

From pc7 at sanger.ac.uk Wed Jun 13 10:59:58 2012
From: pc7 at sanger.ac.uk (Peter)
Date: Wed, 13 Jun 2012 15:59:58 +0100
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD782C6.7050704@cse.psu.edu>
References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD782C6.7050704@cse.psu.edu>
Message-ID: <4FD8AAEE.8060103@sanger.ac.uk>

On 12/06/12 18:56, Ellis H. Wilson III wrote:
> On 06/08/12 20:06, Bill Broadley wrote:
>> A new user on one of my GigE clusters submits batches of 500 jobs that need to randomly read a 30-60GB dataset. They aren't the only user of said cluster, so each job will be waiting in the queue with a mix of others.
>
> With a 160TB cluster and only a 30-60GB dataset, is there any reason why the user isn't simply storing their dataset in HDFS? Does the data change frequently via a non-MapReduce framework such that it needs to be pulled from NFS before every job? If the dataset is in a few dozen files and in HDFS in the cluster, there is no reason why MapReduce shouldn't spawn its tasks directly "on" the data, without the need (most of the time) to move all of the data to every node as you mention.
From experience this can have varied results and still requires careful management/thought. With HDFS, if the replication number is 3 (often the default) and the 30-node cluster has 500 jobs, then either an initial step is required to replicate the data to all the other cluster nodes before performing the analysis (this imposes the expected network/disk IO impact on top of the job start-up latency already in place), or, alternatively, you keep the replication at 3 (or some other defined number) and limit the number of jobs to the available resources where the data replicas pre-exist. The challenge is finding the sweet spot for the work in progress, and as always nothing is ever free.

So HDFS does not remove the replication process, although it helps to hide the processes involved.

The other joy encountered with HDFS is that we found it can be less than stable in a multi-user environment; this has been confirmed by various others, so as always care is required during testing.

There are alternatives to HDFS which can be used in conjunction with Hadoop, but I'm afraid I'm not able to recommend any in particular as it's been a while since I last kicked the tyres. Is this something that others have more recent experience with and can recommend an alternative?

Pete

From pc7 at sanger.ac.uk Wed Jun 13 11:13:01 2012
From: pc7 at sanger.ac.uk (Peter)
Date: Wed, 13 Jun 2012 16:13:01 +0100
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD89839.2040904@aakef.fastmail.fm>
References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm>
Message-ID: <4FD8ADFD.4070707@sanger.ac.uk>

> What about an easy-to-set-up cluster file system such as FhGFS? As one of its developers I'm a bit biased of course, but then I'm also familiar with Lustre, and I think FhGFS is far easier to set up. We also do not have the problem of running clients and servers on the same node; some of our customers make heavy use of that and use their compute nodes as storage servers. That should provide the same or better throughput than your torrent system.
>
> Cheers,
> Bernd

An interesting idea. There is at least one storage vendor which has more cores on its controllers than are required to provide access to the disk subsystems. They have made various inroads in placing a virtualisation layer over these and making them available for other tasks... compute, iRODS, etc. Adding this to something like the above or Stork (http://stork.cse.buffalo.edu/) could be interesting.

Pete
From ellis at cse.psu.edu Wed Jun 13 07:21:58 2012
From: ellis at cse.psu.edu (Ellis H. Wilson III)
Date: Wed, 13 Jun 2012 07:21:58 -0400
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD8AAEE.8060103@sanger.ac.uk>
References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD782C6.7050704@cse.psu.edu> <4FD8AAEE.8060103@sanger.ac.uk>
Message-ID: <4FD877D6.5030404@cse.psu.edu>

On 06/13/12 10:59, Peter wrote:
> From experience this can have varied results and still requires careful management/thought. With HDFS, if the replication number is 3 (often the default) and the 30-node cluster has 500 jobs, then either an initial step is required to replicate the data to all the other cluster nodes before performing the analysis (this imposes the expected network/disk IO impact on top of the job start-up latency already in place) [...]

It really shouldn't require much management, or initial data movement, at all. BTW, I understood the 500 jobs to be totally agnostic about each other, as if they were calculating different things using the same dataset. If these are 500 tasks within the same job, well, that's an entirely different matter. If they are just jobs, it really doesn't matter whether there are 5 or 500, as by default (with Hadoop 0.20 at least) jobs are executed in FIFO order. Further, if the user programmed his or her application to be configurable for the number of mappers and reducers, it is trivial to match the number of mappers to the slots in the system, and reducers similarly (though the number of reducers is often much lower, like 1 per node).

Assuming the 30GB dataset is in 30 1GB files, which shouldn't be hard to guarantee or achieve, each node will get 1 of these files. Therefore the user simply specifies that he or she wants (let's assume 2 map slots per node) 60 map tasks, and Hadoop will silently try to make sure each task ends up on one of the three nodes (assuming default triplication) that have a local data copy.

> Alternatively keep the replication at 3 (or some other defined number) and limit the number of jobs to the available resources where the data replicas pre-exist. The challenge is finding the sweet spot for the work in progress, and as always nothing is ever free.

With only 30 nodes and 30 to 60GB of data, I think it is safe to assume the data exists /everywhere/ in the cluster. Even if Hadoop were stupid and randomly selected a node, there would be a 1/10 chance the data was already there; and it's not stupid, so it will check all three of the nodes with replicas before spawning the task elsewhere. Now, if there are 1000 nodes and just 30GB of data, then Hadoop will make sure your tasks are prioritized onto the nodes that have your data, or at least into the same rack as the nodes that have it.
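The 1-in-10 figure is just replicas over nodes; a quick check under the thread's stated numbers, with a small simulation to confirm the reasoning:

    import random

    replicas, nodes = 3, 30
    print(replicas / float(nodes))  # analytic: 0.1

    # simulate: pick a random node, ask whether it holds one of the replicas
    holders = random.sample(range(nodes), replicas)
    trials = 100000
    hits = sum(random.randrange(nodes) in holders for _ in range(trials))
    print(hits / float(trials))     # ~0.1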
> So HDFS does not remove the replication process, although it helps to hide the processes involved.

As I've said, if you set things up properly, there shouldn't be much, if any, replication, and Hadoop doesn't just help to hide the replication -- it totally obscures the process. You have no hand in doing so.

> The other joy encountered with HDFS is that we found it can be less than stable in a multi-user environment; this has been confirmed by various others, so as always care is required during testing.

I'll concede that the original configuration can be tough, but I've assisted with the management of an HDFS instance that stored ~60TB of data and over 10 million files, both as scratch and for users' home dirs. It is certainly stable enough for day-to-day use.

> There are alternatives to HDFS which can be used in conjunction with Hadoop [...]

I'm working on an alternative to HDFS as we speak, which bypasses HDFS entirely and allows people using MapReduce to run directly against multiple NAS boxes as if they were a single federated storage system. I'll be sending something out to this list about the source when I release it.

Best,

ellis

From alscheinine at tuffmail.us Wed Jun 13 11:30:57 2012
From: alscheinine at tuffmail.us (Alan Louis Scheinine)
Date: Wed, 13 Jun 2012 10:30:57 -0500
Subject: [Beowulf] Status of beowulf.org?
In-Reply-To: <4FD89F7C.2070302@scalableinformatics.com>
References: <4FD89DFF.9020708@ias.edu> <4FD89F7C.2070302@scalableinformatics.com>
Message-ID: <4FD8B231.8010407@tuffmail.us>

The message archive at the web site would be valuable for those interested in Beowulf clusters. I've read almost every message for many years, but when a problem or question arises I need to go back to the archive to get details.

--
Alan Scheinine
200 Georgann Dr., Apt. E6
Vicksburg, MS  39180
Email: alscheinine at tuffmail.us
Mobile phone: 225 288 4176
http://www.flickr.com/photos/ascheinine

From eagles051387 at gmail.com Wed Jun 13 11:40:01 2012
From: eagles051387 at gmail.com (Jonathan Aquilina)
Date: Wed, 13 Jun 2012 17:40:01 +0200
Subject: [Beowulf] Easy clustering
Message-ID:

Is there something out there that is GUI-based and can be run from one's Linux, Mac, or Windows box to easily manage a Linux cluster?
From pc7 at sanger.ac.uk Wed Jun 13 11:43:53 2012
From: pc7 at sanger.ac.uk (Peter)
Date: Wed, 13 Jun 2012 16:43:53 +0100
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD877D6.5030404@cse.psu.edu>
References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD782C6.7050704@cse.psu.edu> <4FD8AAEE.8060103@sanger.ac.uk> <4FD877D6.5030404@cse.psu.edu>
Message-ID: <4FD8B539.4060007@sanger.ac.uk>

On 13/06/12 12:21, Ellis H. Wilson III wrote:
> On 06/13/12 10:59, Peter wrote:
>> From experience this can have varied results and still requires careful management/thought. [...]
>
> It really shouldn't require much management, or initial data movement, at all. [...] Hadoop will silently try to make sure each task ends up on one of the three nodes (assuming default triplication) that have a local data copy.
>
> With only 30 nodes and 30 to 60GB of data, I think it is safe to assume the data exists /everywhere/ in the cluster. [...]
>
> As I've said, if you set things up properly, there shouldn't be much, if any, replication [...]
>
> I'm working on an alternative to HDFS as we speak, which bypasses HDFS entirely and allows people using MapReduce to run directly against multiple NAS boxes as if they were a single federated storage system. [...]

Many thanks for your comments, Ellis. I read the initial Q as saying that the full data set may be required by any job, so an upgrade to my personal filters may be required :). If this were the case, then post-job-submission it becomes a wait until a node with the data becomes available, or alternatively a copy to a.n.other node needs to take place before it can be used for the task at hand. At this point it's a balance between how many nodes are available immediately for the task and how long you wish to wait, either for the FIFO tasks to complete on a subset of available nodes or for the copy to take place. Given that 30-60GB is small enough to copy everywhere, that sort of takes things full circle to the initial rsync options (and variants) previously discussed for local disk. Although I apologise if I'm misinterpreting the above.

The comment regarding obscuring the replication process was directed more towards the user experience: they don't need to know it automagically happens, BUT behind the scenes the copies are happening all the same, with the expected impact incurred on IO etc. So HDFS doesn't make the process impact-free.

If you are able to send more to the list regarding HDFS plan B that would be great, and it is certainly something I'd be interested in hearing more about. Do you have a blog or similar with references regarding any of the above? If so, that would be much appreciated.

Thanks again, and good luck with the multiple NAS option.

Pete
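Peter's wait-versus-copy balance is easy to put rough numbers on: staging the whole dataset to one extra node over GigE takes on the order of five minutes, so copying wins whenever the expected queue wait for a data-local node exceeds that. The usable-throughput figure below is an assumption for an otherwise quiet link:

    # Back-of-envelope: time to stage the dataset to one more node over GigE,
    # assuming ~100 MB/s of usable throughput.
    dataset_gb = 30.0
    link_mb_s = 100.0
    copy_s = dataset_gb * 1024 / link_mb_s
    print('%.0f seconds to stage %.0f GB' % (copy_s, dataset_gb))  # ~307 s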
Pete -- The Wellcome Trust Sanger Institute is operated by Genome Research Limited, a charity registered in England with number 1021457 and a company registered in England with number 2742969, whose registered office is 215 Euston Road, London, NW1 2BE. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From landman at scalableinformatics.com Wed Jun 13 11:54:32 2012 From: landman at scalableinformatics.com (Joe Landman) Date: Wed, 13 Jun 2012 11:54:32 -0400 Subject: [Beowulf] Easy clustering In-Reply-To: References: Message-ID: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> Bright computing product. Uses their own cluster tools. -- Sent from an android device. Please excuse brevity and typos Jonathan Aquilina wrote: Is there something out there that is gui based that can be run from ones linux mac or win box to easily manage a linux cluster? -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From ellis at cse.psu.edu Wed Jun 13 08:07:07 2012 From: ellis at cse.psu.edu (Ellis H. Wilson III) Date: Wed, 13 Jun 2012 08:07:07 -0400 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD8B539.4060007@sanger.ac.uk> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD782C6.7050704@cse.psu.edu> <4FD8AAEE.8060103@sanger.ac.uk> <4FD877D6.5030404@cse.psu.edu> <4FD8B539.4060007@sanger.ac.uk> Message-ID: <4FD8826B.3050903@cse.psu.edu> On 06/13/12 11:43, Peter wrote: > I read the initial Q that the full data set may be required by any job > so an upgrade to my personal filters may be required :). If this were No, you are correct about that, or at least, that's what I understood it to mean as well. So for instance, Job1 has Task1-30 and the 30GB DataSet has Chunk1-30, each 1GB in size, spread over the entire cluster. Hadoop just matches Task1 to the chunk it wants to work on. Yes, this means there at least must be parts of the process that are emb. parallel, but that's pretty much taken for granted with big data computation. The serial parts are typically handled by the shuffle and reduce phases at the end. > Given that 30-60Gb is small enough copy everywhere, that sort of takes I wouldn't expect much performance improvement going from 3 to all 30 chunks on a given node, unless you are incredibly unlucky or something is terribly misconfigured with your Hadoop instance. While 30GB isn't too bad to copy elsewhere, it's incredibly poor use of storage resources, having 30 copies of the data all over. > The comment regarding the obscuring the replication process was directed > more towards the user experience, they don't need to know it > automagically happens BUT behind the scenes the copies are happening all > the same, with the expected impact incurred on IO etc. So HDFS doesn't > make the process impact free. Making 30 copies of a 30GB dataset composed of 30 1GB files is quite different than 3 copies of each file, in size and work passed onto the user to manage. 
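The size difference Ellis points to is easy to put numbers on, using the thread's 30-node, 30GB example:

    echo "$((30 * 3)) GB"     # HDFS default triplication: 3 copies of 30GB = 90 GB cluster-wide
    echo "$((30 * 30)) GB"    # a full local copy on all 30 nodes = 900 GB cluster-wide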
Even if you get unlucky and one of your tasks does require remote data, Hadoop handles streaming it to the task while it needs it and cleans up afterwards. It's going to be far more considerate about storage resources than any human being will be. > If you are able to send more to the list regarding HDFS plan B that > would be great and certainly something I'd be interested in hearing more > about. Do you have a blog or similar with references regarding any of > the above ? If so that would be much appreciated. Not yet. Working on a website as well -- will let you know as soon as that completes. Best, ellis _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From Greg at Keller.net Wed Jun 13 12:25:12 2012 From: Greg at Keller.net (Greg Keller) Date: Wed, 13 Jun 2012 11:25:12 -0500 Subject: [Beowulf] Beowulf Digest, Vol 100, Issue 10 In-Reply-To: References: Message-ID: > What about an easy to setup cluster file system such as FhGFS? As one of > its developers I'm a bit biased of course, but then I'm also familiar > with Lustre, an I think FhGFS is far more easiy to setup. We also do not > have the problem to run clients and servers on the same node and so of > our customers make heavy use of that and use their compute nodes as > storage servers. That should a provide the same or better throughput as > your torrent system. > > Cheers, > Bernd We've been curious about FhGFS but the licensing did not leave us confident we would always have access to it if we integrated it into our business and made available to our users. Serious success could essentially cause an epic failure if the license made it expensive to us (as commercial users) suddenly. As a "cloud" based hpc provider I thought it was too risky and have been happy with Lustre and it's affiliates. Specifically this clause could be a problem: 3.2 LICENSEE may NOT: ... - rent or lease the LICENSED SOFTWARE and DOCUMENTATION to any third party ... Does anyone think the license was intended to block cloud providers making it available as part of a cloud based HPC solution? Am I mis-interpreting this? Not looking for a legal-ese battle but I am wondering if other licenses commonly used in cloud contexts have similar language. Anyone think the FS is fantastic enough that I should fight (spend money on lawyers and licenses) to put it in front of "Cloud" HPC users? Cheers! Greg -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From bs_lists at aakef.fastmail.fm Wed Jun 13 13:17:18 2012 From: bs_lists at aakef.fastmail.fm (Bernd Schubert) Date: Wed, 13 Jun 2012 19:17:18 +0200 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD89BDB.4050100@scalableinformatics.com> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> <4FD89BDB.4050100@scalableinformatics.com> Message-ID: <4FD8CB1E.2090103@aakef.fastmail.fm> On 06/13/2012 03:55 PM, Joe Landman wrote: > On 06/13/2012 09:40 AM, Bernd Schubert wrote: >> On 06/09/2012 02:06 AM, Bill Broadley wrote: >>> >>> I've built Myrinet, SDR, DDR, and QDR clusters ( no FDR yet), but I >>> still have users whose use cases and budgets still only justify GigE. >>> >>> I've setup a 160TB hadoop cluster is working well, but haven't found >>> justification for the complexity/cost related to lustre. I have high >>> hopes for Ceph, but it seems not quite ready yet. I'd happy to hear >>> otherwise. >>> >> >> What about an easy to setup cluster file system such as FhGFS? As one of >> its developers I'm a bit biased of course, but then I'm also familiar >> with Lustre, an I think FhGFS is far more easiy to setup. We also do not >> have the problem to run clients and servers on the same node and so of >> our customers make heavy use of that and use their compute nodes as >> storage servers. That should a provide the same or better throughput as >> your torrent system. Arg, so many mistakes, why do I never notice those before sending the mail? :( > > I'd like to chime in and note that we have customers re-implementing > storage with FhGFS. > > Ceph will be good. You can build a reasonable system today with xfs as > the backing store. The RADOS device is an excellent basis for building > reliable systems. While the op does not need IB, most cluster nowadays do have IB. I think Ceph still does not support that, does it? Cheers, Bernd _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From deadline at eadline.org Wed Jun 13 15:35:29 2012 From: deadline at eadline.org (Douglas Eadline) Date: Wed, 13 Jun 2012 15:35:29 -0400 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FD89DFF.9020708@ias.edu> References: <4FD89DFF.9020708@ias.edu> Message-ID: <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> This is a question that is going to need an answer sooner than later and some input from Penguin would be nice -- nudge Certainly there are others that can help with this effort if Penguin is too busy or do not have the resources. -- Doug > I know this came up recently. I just wanted to see if any new > information has surfaced. > > Does anyone know what the status of beowulf.org is? I will be starting a > new job in few weeks, and I'm in the process of unsubscribing from all > the mailing lists I subscribe to at my current job. Following the link > to the beowulf.org mailman page to control my subscription results in > > The connection has timed out > The server at www.beowulf.org is taking too long to respond. 
> Looks like I'll be unsubscribing through e-mail commands, but I'm worried about how difficult it will be to re-subscribe once I start the new job.
>
> --
> Prentice
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
>
> --
> Mailscanner: Clean

--
Doug

-- Mailscanner: Clean

_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

From bill at cse.ucdavis.edu Wed Jun 13 17:59:16 2012
From: bill at cse.ucdavis.edu (Bill Broadley)
Date: Wed, 13 Jun 2012 14:59:16 -0700
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD89839.2040904@aakef.fastmail.fm>
References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm>
Message-ID: <4FD90D34.5090501@cse.ucdavis.edu>

On 06/13/2012 06:40 AM, Bernd Schubert wrote:
> What about an easy to setup cluster file system such as FhGFS?

Great suggestion. I'm all for a generally useful parallel file system instead of a torrent solution with a very narrow use case.

> As one of its developers I'm a bit biased of course, but then I'm also familiar

I think this list is exactly the place where a developer should jump in and suggest/explain their solution as it relates to use in HPC clusters.

> with Lustre, an I think FhGFS is far more easiy to setup. We also do not have the problem to run clients and servers on the same node and so of our customers make heavy use of that and use their compute nodes as storage servers. That should a provide the same or better throughput as your torrent system.

I found the wiki, the "view flyer", the FAQ, and related pages.

I had a few questions; I found this link http://www.fhgfs.com/wiki/wikka.php?wakka=FAQ#ha_support but was not sure of the details.

What happens when a metadata server dies?

What happens when a storage server dies?

If either of the above means data loss/failure/unreadable files, is there a description of how to protect against this with drbd+heartbeat or equivalent?

Sounds like source is not available, and only binaries for CentOS?

Looks like it does need a kernel module; does that mean only old 2.6.X CentOS kernels are supported?

Does it work with mainline OFED on QLogic and Mellanox hardware?

From a sysadmin point of view I'm also interested in:
* Do blocks auto-balance across storage nodes?
* Is managing disk space, inodes (or equivalent) and related capacity planning complex? Or does df report useful/obvious numbers?
* Can storage nodes be added/removed easily by migrating on/off of hardware?
* Does FhGFS handle 100% of the distributed file system responsibilities, or does it layer on top of xfs/ext4 or related? (like Ceph)
* With large files, does performance scale reasonably with storage servers?
* With small files, does performance scale reasonably with metadata servers?

BTW, if anyone is current on any other parallel file system, I (and I suspect others on the list) would find an overview very valuable. I run a hadoop cluster, but I suspect there are others on the list that could provide a better answer than I. My Lustre knowledge is second-hand and dated.
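For reference, the drbd+heartbeat protection Bill asks about usually starts with a DRBD resource that synchronously mirrors the metadata partition between two servers; a minimal sketch with hypothetical hostnames, devices, and addresses (DRBD 8.x syntax):

    # hypothetical /etc/drbd.d/meta.res -- mirror the metadata partition
    # so a standby server can take over when the primary dies
    resource meta {
        protocol C;                  # synchronous replication
        on meta1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;     # the metadata partition
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on meta2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }

heartbeat (or pacemaker) then mounts /dev/drbd0 and starts the file system's metadata daemon on whichever node currently holds the DRBD primary role.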
_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From samuel at unimelb.edu.au Thu Jun 14 02:11:25 2012 From: samuel at unimelb.edu.au (Christopher Samuel) Date: Thu, 14 Jun 2012 16:11:25 +1000 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD8CB1E.2090103@aakef.fastmail.fm> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> <4FD89BDB.4050100@scalableinformatics.com> <4FD8CB1E.2090103@aakef.fastmail.fm> Message-ID: <4FD9808D.6010602@unimelb.edu.au> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 14/06/12 03:17, Bernd Schubert wrote: > While the op does not need IB, most cluster nowadays do have IB. I > think Ceph still does not support that, does it? Well, if it works over an IP network then it should work with IPoIB, even if it doesn't have native IB. - -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk/ZgI0ACgkQO2KABBYQAh8rsACeKtEMTjdR7Ldt8Us+vQd444lr SCcAoIGCpmh0sf7jhpwAVzCZ2hI2Bxq9 =PGcV -----END PGP SIGNATURE----- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eagles051387 at gmail.com Thu Jun 14 04:03:57 2012 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Thu, 14 Jun 2012 10:03:57 +0200 Subject: [Beowulf] Easy clustering In-Reply-To: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> References: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> Message-ID: <41A8BCA1-03EF-4825-A986-D608F6EC4268@gmail.com> What i was thinking is there an easy front end UI that one can install lets say on their normal mac pc to manage their cluster and all sorts of aspects of the cluster. Regards Jonathan Aquilina On 13 Jun 2012, at 17:54, Joe Landman wrote: > Bright computing product. Uses their own cluster tools. > -- > Sent from an android device. Please excuse brevity and typos > > Jonathan Aquilina wrote: > Is there something out there that is gui based that can be run from ones linux mac or win box to easily manage a linux cluster? > -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From reuti at staff.uni-marburg.de Thu Jun 14 06:26:40 2012 From: reuti at staff.uni-marburg.de (Reuti) Date: Thu, 14 Jun 2012 12:26:40 +0200 Subject: [Beowulf] Easy clustering In-Reply-To: <41A8BCA1-03EF-4825-A986-D608F6EC4268@gmail.com> References: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> <41A8BCA1-03EF-4825-A986-D608F6EC4268@gmail.com> Message-ID: <10695FDD-DF83-4AC9-B9EC-316028BD845B@staff.uni-marburg.de> Am 14.06.2012 um 10:03 schrieb Jonathan Aquilina: > What i was thinking is there an easy front end UI that one can install lets say on their normal mac pc to manage their cluster and all sorts of aspects of the cluster. How do you define "manage"? Remote KVM, installation by PXE, control by ipmitools, queue control,... -- Reuti > Regards > > Jonathan Aquilina > > > > On 13 Jun 2012, at 17:54, Joe Landman wrote: > >> Bright computing product. Uses their own cluster tools. >> -- >> Sent from an android device. Please excuse brevity and typos >> >> Jonathan Aquilina wrote: >> Is there something out there that is gui based that can be run from ones linux mac or win box to easily manage a linux cluster? >> > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eagles051387 at gmail.com Thu Jun 14 08:29:36 2012 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Thu, 14 Jun 2012 14:29:36 +0200 Subject: [Beowulf] Easy clustering In-Reply-To: <10695FDD-DF83-4AC9-B9EC-316028BD845B@staff.uni-marburg.de> References: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> <41A8BCA1-03EF-4825-A986-D608F6EC4268@gmail.com> <10695FDD-DF83-4AC9-B9EC-316028BD845B@staff.uni-marburg.de> Message-ID: Reuti, what i mean I have used webmin, but I hear mixed reviews about in terms of security vulnerabilities. I wonder how a python based web framework would work in this type of environment. Has anyone tried out in ubuntu 12.04 the Metal as a service (MAAS) stuff? Regards Jonathan Aquilina On 14 Jun 2012, at 12:26, Reuti wrote: > Am 14.06.2012 um 10:03 schrieb Jonathan Aquilina: > >> What i was thinking is there an easy front end UI that one can install lets say on their normal mac pc to manage their cluster and all sorts of aspects of the cluster. > > How do you define "manage"? Remote KVM, installation by PXE, control by ipmitools, queue control,... > > -- Reuti > > >> Regards >> >> Jonathan Aquilina >> >> >> >> On 13 Jun 2012, at 17:54, Joe Landman wrote: >> >>> Bright computing product. Uses their own cluster tools. >>> -- >>> Sent from an android device. Please excuse brevity and typos >>> >>> Jonathan Aquilina wrote: >>> Is there something out there that is gui based that can be run from ones linux mac or win box to easily manage a linux cluster? 
>>> >> >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf > -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From landman at scalableinformatics.com Thu Jun 14 11:24:18 2012 From: landman at scalableinformatics.com (Joe Landman) Date: Thu, 14 Jun 2012 11:24:18 -0400 Subject: [Beowulf] Easy clustering In-Reply-To: References: <30e82854-3e21-45e4-9e69-39ef3dbbdf7f@email.android.com> <41A8BCA1-03EF-4825-A986-D608F6EC4268@gmail.com> <10695FDD-DF83-4AC9-B9EC-316028BD845B@staff.uni-marburg.de> Message-ID: <4FDA0222.9050609@scalableinformatics.com> On 06/14/2012 08:29 AM, Jonathan Aquilina wrote: > Reuti, what i mean > > I have used webmin, but I hear mixed reviews about in terms of security > vulnerabilities. I wonder how a python based web framework would work in > this type of environment. Has anyone tried out in ubuntu 12.04 the Metal > as a service (MAAS) stuff? I think there are several different things being mixed in here. Clustering as in Beowulf clustering? Clustering as in building/managing a group of related machines, but not necessarily beowulf? Then you asked about Ubuntu. Ok ... I think we need clarification on what sort of cluster you are talking about ... but I can answer the ubuntu question. We are currently running 2x Ubuntu 12.04 servers in the amazon cloud to handle mail and web for us. Started right before our move to our new digs (c.f. http://scalableinformatics.com/location ), and we are continuing to run it there. Basically this is for seamless continuity more than anything else. Once we get our second network line into the facility, we'll probably "retire" one of these, and use the other as a smaller instance for mail forwarding. Since we are doing this as virtualized instances, we wouldn't do serious/significant resource intensive computing on it. Works great for a web/mail server though. If we were doing hard core computing, we'd go with one of the other instance types. I manage these through CLI. Certificate based ssh access. > > Regards > > Jonathan Aquilina > > > > On 14 Jun 2012, at 12:26, Reuti wrote: > >> Am 14.06.2012 um 10:03 schrieb Jonathan Aquilina: >> >>> What i was thinking is there an easy front end UI that one can >>> install lets say on their normal mac pc to manage their cluster and >>> all sorts of aspects of the cluster. >> >> How do you define "manage"? Remote KVM, installation by PXE, control >> by ipmitools, queue control,... >> >> -- Reuti >> >> >>> Regards >>> >>> Jonathan Aquilina >>> >>> >>> >>> On 13 Jun 2012, at 17:54, Joe Landman wrote: >>> >>>> Bright computing product. Uses their own cluster tools. >>>> -- >>>> Sent from an android device. Please excuse brevity and typos >>>> >>>> Jonathan Aquilina >>> > wrote: >>>> Is there something out there that is gui based that can be run from >>>> ones linux mac or win box to easily manage a linux cluster? 
>>>>
>>> _______________________________________________
>>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
>>> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
>>

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615

_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

-- Mailscanner: Clean

From bs_lists at aakef.fastmail.fm Thu Jun 14 12:30:45 2012
From: bs_lists at aakef.fastmail.fm (Bernd Schubert)
Date: Thu, 14 Jun 2012 18:30:45 +0200
Subject: [Beowulf] Torrents for HPC
In-Reply-To: <4FD89839.2040904@aakef.fastmail.fm>
References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm>
Message-ID: <4FDA11B5.8010403@aakef.fastmail.fm>

[I'm moving that from your digest answer to the general discussion thread]

On 06/13/2012 06:25 PM, Greg Keller wrote:
>> What about an easy to setup cluster file system such as FhGFS? As one of its developers I'm a bit biased of course, but then I'm also familiar with Lustre, an I think FhGFS is far more easiy to setup. We also do not have the problem to run clients and servers on the same node and so of our customers make heavy use of that and use their compute nodes as storage servers. That should a provide the same or better throughput as your torrent system.
>>
>> Cheers,
>> Bernd
>
> We've been curious about FhGFS but the licensing did not leave us confident we would always have access to it if we integrated it into our business and made available to our users. Serious success could essentially cause an epic failure if the license made it expensive to us (as commercial users) suddenly. As a "cloud" based hpc provider I thought it was too risky and have been happy with Lustre and it's affiliates.
>
> Specifically this clause could be a problem:
>
> 3.2 LICENSEE may NOT:
> ...
> - rent or lease the LICENSED SOFTWARE and DOCUMENTATION to any third party
> ...
>
> Does anyone think the license was intended to block cloud providers making it available as part of a cloud based HPC solution? Am I mis-interpreting this? Not looking for a legal-ese battle but I am wondering if other licenses commonly used in cloud contexts have similar language. Anyone think the FS is fantastic enough that I should fight (spend money on lawyers and licenses) to put it in front of "Cloud" HPC users?

Arg, such issues are exactly the reason why I don't like contracts and laws written by lawyers. Instead of writing with 'normal' words understandable by everyone, they have their own language, which nobody else can understand and which is entirely unclear. I'm not sure they understand themselves what they have written... Given the high number of useless lawsuits, probably not.

This clause is about charging for the licensed software (i.e. fhgfs), not about services around fhgfs.
So it actually protects users from paying for a software, which is in fact free to use for everyone, no matter if it's a commercial user or not. On the other hand, you are still free to charge customers for services around fhgfs, e.g. you might charge your cloud customers for installing fhgfs or maintaining it or something like that - if that's what you have in mind. Please let us know if this is sufficient for you to consider FhGFS in the future or if we again should work with our Fraunhofer lawyers to improve the license. Thanks, Bernd _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From bs_lists at aakef.fastmail.fm Thu Jun 14 12:14:27 2012 From: bs_lists at aakef.fastmail.fm (Bernd Schubert) Date: Thu, 14 Jun 2012 18:14:27 +0200 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FD90D34.5090501@cse.ucdavis.edu> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> <4FD90D34.5090501@cse.ucdavis.edu> Message-ID: <4FDA0DE3.9060803@aakef.fastmail.fm> On 06/13/2012 11:59 PM, Bill Broadley wrote: > On 06/13/2012 06:40 AM, Bernd Schubert wrote: >> What about an easy to setup cluster file system such as FhGFS? > > Great suggestion. I'm all for a generally useful parallel file systems > instead of torrent solution with a very narrow use case. > >> As one of >> its developers I'm a bit biased of course, but then I'm also familiar > > I think this list is exactly the place where a developer should jump in > and suggest/explain their solutions as it related to use in HPC clusters. > >> with Lustre, an I think FhGFS is far more easiy to setup. We also do not >> have the problem to run clients and servers on the same node and so of >> our customers make heavy use of that and use their compute nodes as >> storage servers. That should a provide the same or better throughput as >> your torrent system. > > I found the wiki, the "view flyer", FAQ, and related. > > I had a few questions, I found this link > http://www.fhgfs.com/wiki/wikka.php?wakka=FAQ#ha_support but was not > sure of the details. > > What happens when a metadata server dies? > > What happens when a storage server dies? Right, those two issues we are presently actively working on. So the current release relies on hardware raid. But later on this year there will be meta data mirroring. After that data mirroring will follow. > > If either above is data loss/failure/unreadable files is there a > description of how to improve against this with drbd+heartbeat or > equivalent? During the next weeks we will test fhgfs-ocf scripts for an HA (pacemaker) installation. As we are going to be paid for the installation, I do no know yet when we will make those scripts publically available. Generally drbd+heartbeat as mirroring solution is possible. > > Sounds like source is not available, and only binaries for CentOS? Well, RHEL5 / RHEL6 based, SLES10 / SLES11 and Debian. And sorry, the server daemons are not open source yet. I think the more people asking to open it, the faster this process will be. Especially if those people also are going to buy support contracts :) > > Looks like it does need a kernel module, does that mean only old 2.6.X > CentOS kernels are supported? Oh, on the contrary. We basically support any kernel beginning with 2.6.16 onwards. 
Even support for the most recent vanilla kernels is usually added within a few weeks of their release.

> Does it work with mainline OFED on QLogic and Mellanox hardware?

It definitely works with both, including RDMA (ibverbs) transfers. As QLogic has some problems with ibverbs, we had a cooperation with QLogic to improve performance on their hardware; recent QLogic OFED stacks do include the performance fixes. Please also see http://www.fhgfs.com/wiki/wikka.php?wakka=NativeInfinibandSupport for (QLogic) tuning advice.

> From a sysadmin point of view I'm also interested in:
> * Do blocks auto-balance across storage nodes?

Actually, files are balanced. The default file stripe count is 4, but it can be adjusted by the admin. So assuming you had only one target per server, a large file would be distributed over 4 nodes. The default chunk size is 512kB; for files smaller than that size there is no stripe overhead.

> * Is managing disk space, inodes (or equivalent) and related capacity planning complex? Or does df report useful/obvious numbers?

Hmm, right now (unix) "df -i" does not report inode usage for fhgfs. We will fix that in later releases. At least for traditional storage servers, we recommend using ext4 on metadata partitions for performance reasons. For storage partitions we usually recommend XFS, again for performance. Also, storage and metadata can be on the very same partition; you just need to configure the path where to find that data in the corresponding config files. If you are going to use all your client nodes as fhgfs servers and those already have XFS as the scratch partition, XFS is probably also fine. However, due to a severe XFS performance issue, you either need a kernel that has this issue fixed, or you should disable meta-data-as-xattr (in fhgfs-meta.conf: storeUseExtendedAttribs = false). Please also see here for a discussion and benchmarks: http://oss.sgi.com/archives/xfs/2011-08/msg00233.html Christoph Hellwig later fixed the unlink issue, and this patch should be in all recent linux-stable kernels; I have not checked RHEL5/RHEL6, though. Anyway, if you are going to use ext4 on your metadata partition, you need to make sure yourself that you have sufficient inodes available. Our wiki has recommendations for mkfs.ext4 options.

> * Can storage nodes be added/removed easily by migrating on/off of hardware?

Adding storage nodes on the fly works perfectly fine. Our fhgfs-ctl tool also has a mode to migrate files off a storage node. However, right now we really recommend not doing that while clients are writing to the file system. The reason is that we do not yet lock files in migration, so a client might write to unlinked files, which would result in silent data loss. We have on-the-fly data migration on our todo list, but I cannot yet say when that is going to come. If you are going to use your clients as storage nodes, you could specify that system as the preferred system to write files to. That would easily allow removing that node...

> * Does FhGFS handle 100% of the distributed file system responsibilities, or does it layer on top of xfs/ext4 or related? (like Ceph)

Like Ceph, it sits on top of other file systems, such as xfs or ext4.

> * With large files, does performance scale reasonably with storage servers?

Yes, and you may also adjust the stripe count to your needs. The default stripe count is 4, which approximately provides the performance of 4 storage targets.

> * With small files, does performance scale reasonably with metadata servers?
Striping over different meta data servers is done on a per-directory basis. As most users and applications work in different directories, meta data performance usually scales linearily with the number of metadata servers. Please note: Our wiki has tuning advices for meta data performance and with our next major release we also should see a greatly improved meta data performance. Hope it helps and please let me know if you have further questions! Cheers, Bernd PS: We have a GUI, which should help you to just try it out within a few minutes. Please see here: http://www.fhgfs.com/wiki/wikka.php?wakka=GUIbasedInstallation _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From Greg at Keller.net Thu Jun 14 19:17:17 2012 From: Greg at Keller.net (Greg Keller) Date: Thu, 14 Jun 2012 18:17:17 -0500 Subject: [Beowulf] Torrents for HPC In-Reply-To: <4FDA11B5.8010403@aakef.fastmail.fm> References: <4FD2937B.6010408@cse.ucdavis.edu> <4FD89839.2040904@aakef.fastmail.fm> <4FDA11B5.8010403@aakef.fastmail.fm> Message-ID: On Thu, Jun 14, 2012 at 11:30 AM, Bernd Schubert wrote: > [I'm moving that from your digest answer to the general discussion thread] > Doh, Thanks! > > On 06/13/2012 06:25 PM, Greg Keller wrote:> >> >>> What about an easy to setup cluster file system such as FhGFS? As one of >>> its developers I'm a bit biased of course, but then I'm also familiar >>> with Lustre, an I think FhGFS is far more easiy to setup. We also do not >>> have the problem to run clients and servers on the same node and so of >>> our customers make heavy use of that and use their compute nodes as >>> storage servers. That should a provide the same or better throughput as >>> your torrent system. >>> >>> Cheers, >>> Bernd >>> >> >> We've been curious about FhGFS but the licensing did not leave us >> confident we would always have access to it if we integrated it into our >> business and made available to our users. Serious success could >> essentially cause an epic failure if the license made it expensive to us >> (as commercial users) suddenly. As a "cloud" based hpc provider I >> thought it was too risky and have been happy with Lustre and it's >> affiliates. >> >> Specifically this clause could be a problem: >> >> 3.2 LICENSEE may NOT: >> >> ... >> >> - rent or lease the LICENSED SOFTWARE and DOCUMENTATION to any third party >> >> ... >> >> Does anyone think the license was intended to block cloud providers making >> it available as part of a cloud based HPC solution? Am I >> mis-interpreting this? >> Not looking for a legal-ese battle but I am wondering if other licenses >> commonly >> used in cloud contexts have similar language. Anyone think the FS is >> fantastic >> > > enough that I should fight (spend money on lawyers and licenses) to put > it in > > front of "Cloud" HPC users? > > Arg, such issues are exactly the reason why I don't like contracts and > laws written by lawyers. Instead of writing with 'normal' words > understandable by everyone, they have their own language, which nobody can > understand is entirely unclear. I'm not sure if they do understand > themselves what they have written... Given the high number of useless > lawsuits probably not. > > This clause is about charging for the licensed software (i.e. fhgfs), not > about services around fhgfs. 
Neither this clause nor any other clause in > the EULA is intended or prohibits that you provide fhgfs to cloud users. > That's good to hear. The license has evolved and simplified a lot since I first read it long ago. > > So this particular clause just says that you are not allowed to charge > money for allowing people to use fhgfs. So it actually protects users from > paying for a software, which is in fact free to use for everyone, no matter > if it's a commercial user or not. > > On the other hand, you are still free to charge customers for services > around fhgfs, e.g. you might charge your cloud customers for installing > fhgfs or maintaining it or something like that - if that's what you have in > mind. > We generally just charge for CPU hours, and bundle as much in as we can for that price (Network, Disk, etc). We hate "Gotcha" pricing models and our customers live comfortably in the "Best Effort" support we can offer on free software. If we ever have exotic requirements (100+TB) we work out something special. Any ISP or Software licensing is usually passed through or handled directly between the user and the IP owner, and we host whatever is required to keep the licensing people happy :) Our parallel file-system choices have been limited because our customers are usually not long term commited, so paying annual licenses or buying dedicated storage systems rarely makes sense financially. It's always scratch and backed up at their location, so we can skate on the edge without much risk. And if they really like it they may put it on their internal systems. > Please let us know if this is sufficient for you to consider FhGFS in the > future or if we again should work with our Fraunhofer lawyers to improve > the license. > We will do some initial testing as time permits and get back on the licensing piece if need be then. I appreciate the intent of the licensing line is difficult to communicate, and look forward to learning more. Cheers! Greg > > > -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From j.wender at science-computing.de Fri Jun 15 15:25:33 2012 From: j.wender at science-computing.de (Jan Wender) Date: Fri, 15 Jun 2012 21:25:33 +0200 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> Message-ID: <4FDB8C2D.9030707@science-computing.de> Hi all, Arend from Penguin replied and they are looking for the list. They would like to continue hosting the list, but would ask for some volunteers to administrate it. Cheerio, Jan -- ---- Company Information ---- Vorstandsvorsitzender: Gerd-Lothar Leonhart Vorstand: Dr. Bernd Finkbeiner, Dr. Arno Steitz, Dr. Ingrid Zech Vorsitzender des Aufsichtsrats: Philippe Miltin Sitz: Tuebingen Registergericht: Stuttgart Registernummer: HRB 382196 -- Mailscanner: Clean -------------- next part -------------- A non-text attachment was scrubbed... 
Name: j_wender.vcf Type: text/x-vcard Size: 340 bytes Desc: not available URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From bernard at vanhpc.org Fri Jun 15 15:30:39 2012 From: bernard at vanhpc.org (Bernard Li) Date: Fri, 15 Jun 2012 12:30:39 -0700 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDB8C2D.9030707@science-computing.de> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> Message-ID: Hi Jan: On Fri, Jun 15, 2012 at 12:25 PM, Jan Wender wrote: > Arend from Penguin replied and they are looking for the list. They would > like to continue hosting the list, but would ask for some volunteers to > administrate it. Do you think you can elaborate on what they need help with? Moderating emails? Thanks, Bernard _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eagles051387 at gmail.com Fri Jun 15 17:22:53 2012 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Fri, 15 Jun 2012 23:22:53 +0200 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> Message-ID: I would love to help moderate the list :) Regards Jonathan Aquilina On 15 Jun 2012, at 21:30, Bernard Li wrote: > Hi Jan: > > On Fri, Jun 15, 2012 at 12:25 PM, Jan Wender > wrote: > >> Arend from Penguin replied and they are looking for the list. They would >> like to continue hosting the list, but would ask for some volunteers to >> administrate it. > > Do you think you can elaborate on what they need help with? Moderating emails? > > Thanks, > > Bernard > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From samuel at unimelb.edu.au Fri Jun 15 18:28:20 2012 From: samuel at unimelb.edu.au (Chris Samuel) Date: Sat, 16 Jun 2012 08:28:20 +1000 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDB8C2D.9030707@science-computing.de> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> Message-ID: <201206160828.20892.samuel@unimelb.edu.au> On Saturday 16 June 2012 05:25:33 Jan Wender wrote: > Hi all, Hi Jan, > Arend from Penguin replied and they are looking for the list. They > would like to continue hosting the list, but would ask for some > volunteers to administrate it. I've been (and still am) the list owner/admin of various Mailman lists for many years, happy to help out if need be. 
Did they say anything about the beowulf.org website ? cheers, Chris -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From bill at cse.ucdavis.edu Fri Jun 15 18:49:27 2012 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Fri, 15 Jun 2012 15:49:27 -0700 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDB8C2D.9030707@science-computing.de> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> Message-ID: <4FDBBBF7.1020802@cse.ucdavis.edu> On 06/15/2012 12:25 PM, Jan Wender wrote: > Hi all, > > Arend from Penguin replied and they are looking for the list. They would > like to continue hosting the list, but would ask for some volunteers to > administrate it. Well if they are doing such a poor job and aren't willing to administrate it we should move it elsewhere. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From landman at scalableinformatics.com Fri Jun 15 19:10:26 2012 From: landman at scalableinformatics.com (Joe Landman) Date: Fri, 15 Jun 2012 19:10:26 -0400 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDBBBF7.1020802@cse.ucdavis.edu> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> <4FDBBBF7.1020802@cse.ucdavis.edu> Message-ID: <4FDBC0E2.9090706@scalableinformatics.com> On 06/15/2012 06:49 PM, Bill Broadley wrote: > On 06/15/2012 12:25 PM, Jan Wender wrote: >> Hi all, >> >> Arend from Penguin replied and they are looking for the list. They would >> like to continue hosting the list, but would ask for some volunteers to >> administrate it. > > Well if they are doing such a poor job and aren't willing to > administrate it we should move it elsewhere. Hmmm ... I pinged my contact within Penguin and was told they were working on it. This said, I seem to remember that beowulf.org was Scyld property before the acquisition by Penguin. Looking at the whois output somewhat confirms ownership. If this is the case, "we" can't move it "elsewhere" without the owners (Penguin's) permission. I think that part of why its fallen by the wayside at Penguin is due to Don taking up residence at Nvidia, and no one either stepping up to it or being assigned to it. All of this said, if a reasonable proposal is made to Penguin about helping to run/administer it, I think they might be willing to consider it. If, on the other hand, it is approached in a somewhat more brusque manner, I wouldn't hold a refusal to consider proposals against them. So far, we, Chris Samuel, Doug Eadline, Jon A, and a few others have indicated a willingness to help. I can't say I like mailman very much (set many up, royal PIA to deal with IMO), but Chris Samuel has good mailman-foo. Might make sense to enable admin by Chris and a small group of mailman-gurus. I've got (whether I like it or not) web-foo ... and mail server foo ... 
and would be happy to help there. Doug/Jon/... have foo of all sorts, and would certainly help out. If we needed distributed carbon-bots for moderation, this is doable (Chris might be able to comment on this). We would (my company) be happy to setup/donate a small server with storage to run this if Penguin wants to get completely out. We could host it as well at our site. Could also run it on EC2, though I can tell you that this is not nearly as cheap as Amazon might wish you to think. The cost benefit doesn't really work so well for this ... Lots of possibilities. Seems to me though, that one of the natural leaders of this would be Doug Eadline. Don't know where ClusterMonkey sits, but that is a well run site. Just sayin... -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics Inc. email: landman at scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From lbickley at bickleywest.com Fri Jun 15 19:22:24 2012 From: lbickley at bickleywest.com (Lyle Bickley) Date: Fri, 15 Jun 2012 16:22:24 -0700 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDBBBF7.1020802@cse.ucdavis.edu> References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> <4FDBBBF7.1020802@cse.ucdavis.edu> Message-ID: <20120615162224.3f0cd6ca@core2.bcwi.net> On Fri, 15 Jun 2012 15:49:27 -0700 Bill Broadley wrote: > On 06/15/2012 12:25 PM, Jan Wender wrote: > > Hi all, > > > > Arend from Penguin replied and they are looking for the list. They > > would like to continue hosting the list, but would ask for some > > volunteers to administrate it. > > Well if they are doing such a poor job and aren't willing to > administrate it we should move it elsewhere. I'll second that! Cheers, Lyle -- Lyle Bickley Bickley Consulting West Inc. http://bickleywest.com "Black holes are where God is dividing by zero" _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From samuel at unimelb.edu.au Fri Jun 15 20:08:53 2012 From: samuel at unimelb.edu.au (Chris Samuel) Date: Sat, 16 Jun 2012 10:08:53 +1000 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDBC0E2.9090706@scalableinformatics.com> References: <4FD89DFF.9020708@ias.edu> <4FDBBBF7.1020802@cse.ucdavis.edu> <4FDBC0E2.9090706@scalableinformatics.com> Message-ID: <201206161008.53854.samuel@unimelb.edu.au> On Saturday 16 June 2012 09:10:26 Joe Landman wrote: > If we needed distributed carbon-bots for moderation, this is doable > (Chris might be able to comment on this). Quite doable, you can list a number of people as moderators (or admins) of Mailman lists. Admins can attend to moderation requests too (so don't have to be listed twice). 
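For whoever ends up volunteering: in Mailman 2.1 multiple owners and moderators can be set in one go with the stock config_list tool; the addresses and list name below are placeholders:

    # admins.cfg (config_list input uses Python syntax)
    owner = ['admin1@example.org', 'admin2@example.org']
    moderator = ['mod1@example.org', 'mod2@example.org']

    $ bin/config_list --inputfile admins.cfg beowulf

Anyone listed under owner can also attend to held messages, which matches Chris's point that admins don't have to be listed twice.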
-- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From bernard at vanhpc.org Fri Jun 15 20:47:12 2012 From: bernard at vanhpc.org (Bernard Li) Date: Fri, 15 Jun 2012 17:47:12 -0700 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <201206161008.53854.samuel@unimelb.edu.au> References: <4FD89DFF.9020708@ias.edu> <4FDBBBF7.1020802@cse.ucdavis.edu> <4FDBC0E2.9090706@scalableinformatics.com> <201206161008.53854.samuel@unimelb.edu.au> Message-ID: Hi all: Before we get too deep in this discussion regarding moderation, I'd like to ask two questions: 1) Is this list moderated? And if so, for what specifically? 2) Is it still necessary to moderate the list, moving forward? Thanks, Bernard On Fri, Jun 15, 2012 at 5:08 PM, Chris Samuel wrote: > On Saturday 16 June 2012 09:10:26 Joe Landman wrote: > >> If we needed distributed carbon-bots for moderation, this is doable >> (Chris might be ?able to comment on this). > > Quite doable, you can list a number of people as moderators (or > admins) of Mailman lists. ?Admins can attend to moderation requests > too (so don't have to be listed twice). > > -- > ? Christopher Samuel - Senior Systems Administrator > ?VLSCI - Victorian Life Sciences Computation Initiative > ?Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 > ? ? ? ? http://www.vlsci.unimelb.edu.au/ > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From samuel at unimelb.edu.au Sat Jun 16 02:35:44 2012 From: samuel at unimelb.edu.au (Chris Samuel) Date: Sat, 16 Jun 2012 16:35:44 +1000 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: References: <4FD89DFF.9020708@ias.edu> <201206161008.53854.samuel@unimelb.edu.au> Message-ID: <201206161635.44453.samuel@unimelb.edu.au> On Saturday 16 June 2012 10:47:12 Bernard Li wrote: > Hi all: > > Before we get too deep in this discussion regarding moderation, I'd > like to ask two questions: > > 1) Is this list moderated? And if so, for what specifically? No idea, it used to be that new subscribers posts were delayed (as if moderated) and at some point that would magically disappear and your posts would go straight through. It's something that's very easy to do with Mailman. > 2) Is it still necessary to moderate the list, moving forward That's another question altogether.. 
:-) -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eugen at leitl.org Sat Jun 16 05:19:43 2012 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 16 Jun 2012 11:19:43 +0200 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: References: <4FD89DFF.9020708@ias.edu> <7a0f510c425b98deb2853d215a3784ed.squirrel@mail.eadline.org> <4FDB8C2D.9030707@science-computing.de> Message-ID: <20120616091943.GW17120@leitl.org> On Fri, Jun 15, 2012 at 11:22:53PM +0200, Jonathan Aquilina wrote: > I would love to help moderate the list :) As I'm already moderating a bunch of lists another one wouldn't be a problem for me. > Regards > > Jonathan Aquilina > > > > On 15 Jun 2012, at 21:30, Bernard Li wrote: > > > Hi Jan: > > > > On Fri, Jun 15, 2012 at 12:25 PM, Jan Wender > > wrote: > > > >> Arend from Penguin replied and they are looking for the list. They would > >> like to continue hosting the list, but would ask for some volunteers to > >> administrate it. > > > > Do you think you can elaborate on what they need help with? Moderating emails? > > > > Thanks, > > > > Bernard > > _______________________________________________ > > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From herbert.fruchtl at st-andrews.ac.uk Sat Jun 16 06:50:38 2012 From: herbert.fruchtl at st-andrews.ac.uk (Herbert Fruchtl) Date: Sat, 16 Jun 2012 11:50:38 +0100 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: References: Message-ID: <4FDC64FE.4040201@st-andrews.ac.uk> As a lurker of many years' standing, I am vehemently opposed to moderation. It slows down traffic (in the rare case that I do pose a question, it's because I'm desperate and want a response IMMEDIATELY!), is open to abuse, and it comes with a legal minefield (if let's say, a corporate lawyer at Intel/AMD/NVIDIA thinks they have been unfairly slagged off, they may go after the list owner). Having said that, the current situation is obscure to say the least. I know a colleague at a British university, who is on the list, but whose posts always bounce. His attempts at contacting any list owners via the mailman interface were never answered. Back to lurking in my cave... 
Herbert On 16/06/12 01:47, beowulf-request at beowulf.org wrote: > Send Beowulf mailing list submissions to > beowulf at beowulf.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://www.beowulf.org/mailman/listinfo/beowulf > or, via email, send a message with subject or body 'help' to > beowulf-request at beowulf.org > > You can reach the person managing the list at > beowulf-owner at beowulf.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of Beowulf digest..." > > > Today's Topics: > > 1. Re: Status of beowulf.org? (Jan Wender) > 2. Re: Status of beowulf.org? (Bernard Li) > 3. Re: Status of beowulf.org? (Jonathan Aquilina) > 4. Re: Status of beowulf.org? (Chris Samuel) > 5. Re: Status of beowulf.org? (Bill Broadley) > 6. Re: Status of beowulf.org? (Joe Landman) > 7. Re: Status of beowulf.org? (Lyle Bickley) > 8. Re: Status of beowulf.org? (Chris Samuel) > 9. Re: Status of beowulf.org? (Bernard Li) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 15 Jun 2012 21:25:33 +0200 > From: Jan Wender > Subject: Re: [Beowulf] Status of beowulf.org? > To: Beowulf Mailing List > Message-ID:<4FDB8C2D.9030707 at science-computing.de> > Content-Type: text/plain; charset="iso-8859-1" > > Hi all, > > Arend from Penguin replied and they are looking for the list. They would > like to continue hosting the list, but would ask for some volunteers to > administrate it. > > Cheerio, Jan -- Herbert Fruchtl Senior Scientific Computing Officer School of Chemistry, School of Mathematics and Statistics University of St Andrews -- The University of St Andrews is a charity registered in Scotland: No SC013532 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Mailscanner: Clean From eugen at leitl.org Sat Jun 16 07:26:59 2012 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 16 Jun 2012 13:26:59 +0200 Subject: [Beowulf] Status of beowulf.org? In-Reply-To: <4FDC64FE.4040201@st-andrews.ac.uk> References: <4FDC64FE.4040201@st-andrews.ac.uk> Message-ID: <20120616112659.GG17120@leitl.org> On Sat, Jun 16, 2012 at 11:50:38AM +0100, Herbert Fruchtl wrote: > As a lurker of many years' standing, I am vehemently opposed to moderation. It > slows down traffic (in the rare case that I do pose a question, it's because I'm > desperate and want a response IMMEDIATELY!), is open to abuse, and it comes with > a legal minefield (if let's say, a corporate lawyer at Intel/AMD/NVIDIA thinks > they have been unfairly slagged off, they may go after the list owner). Moderation for Mailman typically means that new members are moderated by default, and unmoderated after the first post which is not spam. Only chronical offenders are typically put back on moderation. So there is no delay for list traffic, but for new subscribers. Typically, this takes a day or two. > Having said that, the current situation is obscure to say the least. I know a > colleague at a British university, who is on the list, but whose posts always > bounce. His attempts at contacting any list owners via the mailman interface > were never answered. This is not how this is supposed to work. > Back to lurking in my cave... 
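The policy Eugen describes maps onto two stock Mailman 2.1 list settings, again loadable with config_list; the list name is a placeholder:

    # newmembers.cfg (config_list input uses Python syntax)
    default_member_moderation = True    # newcomers start out moderated
    member_moderation_action = 0        # 0 = Hold, 1 = Reject, 2 = Discard

    $ bin/config_list --inputfile newmembers.cfg beowulf

After a member's first legitimate post, a moderator clears that member's "mod" flag on the membership admin page, and their later posts go straight through.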
From landman at scalableinformatics.com  Sat Jun 16 12:33:22 2012
From: landman at scalableinformatics.com (Joe Landman)
Date: Sat, 16 Jun 2012 12:33:22 -0400
Subject: [Beowulf] Status of beowulf.org?
In-Reply-To: <4FDC64FE.4040201@st-andrews.ac.uk>
References: <4FDC64FE.4040201@st-andrews.ac.uk>
Message-ID: <4FDCB552.1000506@scalableinformatics.com>

On 06/16/2012 06:50 AM, Herbert Fruchtl wrote:
> As a lurker of many years' standing, I am vehemently opposed to
> moderation. It slows down traffic (in the rare case that I do pose a
> question, it's because I'm desperate and want a response IMMEDIATELY!),
> it is open to abuse, and it comes with a legal minefield (if, say, a
> corporate lawyer at Intel/AMD/NVIDIA thinks they have been unfairly
> slagged off, they may go after the list owner).

so ... moderation stops this (going after the list owner) ... how?

I am not generally a huge fan of moderation. However, I've seen cases where
some list participants contribute nothing substantive to discussions, and
serve only to annoy and inflame the regular participants. If these people
cannot respect the list and its participants, they can choose either to
leave or to be moderated.

This said, in the past, on another list, I've been personally threatened
with moderation, and had it enforced. The list owners (wrongly, IMO) felt I
had done them a grievous insult, and enforced moderation on me. To call
their reaction silly would be kind in my book (and no, I won't say who they
were/are, so don't ask ... though others in that same situation with that
list and those owners contacted me later to commiserate). It was their
list, and they had the right to take any action they wished, which they
did, no matter how right- or wrong-headed it was/is.

A well-moderated list (i.e., a very light touch) will have a rich variety
of users and be mostly spam- and idiot-free. A poorly moderated list will
turn into a sycophantic echo chamber. One of the side effects of a
well-moderated list is a stable or growing population of participants.
Conversely, a poorly moderated list tends to lose many of the voices one
needs for a diverse exchange of views (as it tends towards echo-chamber
mode).

> Having said that, the current situation is obscure to say the least. I
> know a colleague at a British university who is on the list but whose
> posts always bounce. His attempts at contacting any list owners via the
> mailman interface were never answered.

I've heard this from a number of folks. Some simply cannot post to the list
for whatever reason. Likely RBL/DUL blocking on email servers. We build
email annotation pipelines that do a much better job than DUL/RBL lists.
DUL/RBL are daisy cutters*, annotation pipelines are scalpels. Chances are
these people are the collateral damage associated with using RBL/DUL.

* A "daisy cutter" is a euphemism for a very large explosive device whose
shockwave, traversing a large field, could remove flowers from their
stalks. Have a look at YouTube
(http://www.youtube.com/watch?v=_upy14pesi4) for an example.

> Back to lurking in my cave...
>
> Herbert
>
> On 16/06/12 01:47, beowulf-request at beowulf.org wrote:
>> Hi all,
>>
>> Arend from Penguin replied and they are looking for the list. They would
>> like to continue hosting the list, but would ask for some volunteers to
>> administrate it.
>>
>> Cheerio, Jan

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com

From peter.st.john at gmail.com  Sat Jun 16 15:21:13 2012
From: peter.st.john at gmail.com (Peter St. John)
Date: Sat, 16 Jun 2012 15:21:13 -0400
Subject: [Beowulf] list moderation

In the old days, we used to have pairs of lists: one un-moderated
(world-writable), the other moderated (world-readable). You subscribe to
either or both; the standards for off-topic witticisms would be more
relaxed (depending on taste) at the former, but there'd be more spam. When
an interesting and informative post appears on the open list, someone who
subscribes to both forwards it to the moderated list.

It's extra work to get a persistent troll banned from the open list, and
it's extra work to get a new person approved for the moderated list; both
require admin attention.

Peter

From herbert.fruchtl at st-andrews.ac.uk  Sun Jun 17 12:22:04 2012
From: herbert.fruchtl at st-andrews.ac.uk (Herbert Fruchtl)
Date: Sun, 17 Jun 2012 17:22:04 +0100
Subject: [Beowulf] Status of beowulf.org?
Message-ID: <4FDE042C.3070707@st-andrews.ac.uk>

Joe Landman wrote:
>> As a lurker of many years' standing, I am vehemently opposed to
>> moderation. It slows down traffic (in the rare case that I do pose a
>> question, it's because I'm desperate and want a response IMMEDIATELY!),
>> it is open to abuse, and it comes with a legal minefield (if, say, a
>> corporate lawyer at Intel/AMD/NVIDIA thinks they have been unfairly
>> slagged off, they may go after the list owner).
>
> so ... moderation stops this (going after the list owner) ... how?

No. It's the lack of moderation that should at least provide some
safeguards. The legal argument (occasionally challenged, and depending on
your jurisdiction, but broadly accepted) is that if you moderate, you take
responsibility for the content. If you don't, you are equivalent to a phone
provider or the post office, who are not responsible for the content they
deliver.

Herbert

From samuel at unimelb.edu.au  Mon Jun 18 03:02:06 2012
From: samuel at unimelb.edu.au (Christopher Samuel)
Date: Mon, 18 Jun 2012 17:02:06 +1000
Subject: [Beowulf] Status of beowulf.org?
In-Reply-To: <20120616112659.GG17120@leitl.org>
References: <4FDC64FE.4040201@st-andrews.ac.uk>
	<20120616112659.GG17120@leitl.org>
Message-ID: <4FDED26E.1090508@unimelb.edu.au>

On 16/06/12 21:26, Eugen Leitl wrote:

> Moderation for Mailman typically means that new members are
> moderated by default, and unmoderated after their first post which is
> not spam.

This is certainly how the list seemed to operate, although with a much
longer window between your first post and your posts going through
unapproved.

> Only chronic offenders are typically put back on moderation. So
> there is no delay for list traffic, only for new subscribers.
> Typically, this takes a day or two.

There is also Mailman's "emergency moderation" switch for a list, but it's
something of a last resort. (It's useful for announcement-only lists where,
even with carefully chosen rules about who is allowed to send, you still
want a backstop so you can check one last time before a message goes out.)
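That switch is a single list attribute in Mailman 2.x, togglable from the
admin GUI or the command line. A sketch using config_list, assuming the
stock /usr/lib/mailman layout and the attribute name "emergency" (both
worth verifying against your install):

    # Hold all posts for moderator approval until further notice.
    cd /usr/lib/mailman
    bin/config_list -o /tmp/beowulf-before.cfg beowulf   # keep a backup
    echo "emergency = 1" > /tmp/emergency.cfg            # 0 turns it off
    bin/config_list -i /tmp/emergency.cfg beowulf

While it is set, every post lands in the moderation queue until a list
admin releases it.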
cheers,
Chris

-- 
Christopher Samuel - Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: samuel at unimelb.edu.au   Phone: +61 (0)3 903 55545
http://www.vlsci.unimelb.edu.au/

From john.hearns at mclaren.com  Mon Jun 18 10:31:15 2012
From: john.hearns at mclaren.com (Hearns, John)
Date: Mon, 18 Jun 2012 15:31:15 +0100
Subject: [Beowulf] Caption Competition
Message-ID: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com>

The Register has a good shot of Sequoia under construction:

http://regmedia.co.uk/2012/06/17/ibm_sequoia_llnl.jpg

There must be some funny caption for this!

As an aside, are those very deep false floors?

John Hearns | CFD Hardware Specialist | McLaren Racing Limited

From james.p.lux at jpl.nasa.gov  Mon Jun 18 10:48:41 2012
From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C))
Date: Mon, 18 Jun 2012 14:48:41 +0000
Subject: [Beowulf] Caption Competition
In-Reply-To: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com>

I've worked in places with everything from 12"-18" under the floor to ones
where you could stand up underneath the floor. The latter are more pleasant
to work in, although it really needs two people then, unless you want to
get very good at climbing up and down the ladder.

For the former, you just pull up all the tiles except the ones the
equipment is standing on, and step from hole to hole. A lot more bending
down and threading stuff between holes.

On Mon, 18 Jun 2012, Hearns, John wrote:

> The Register has a good shot of Sequoia under construction:
>
> http://regmedia.co.uk/2012/06/17/ibm_sequoia_llnl.jpg
>
> There must be some funny caption for this!
>
> As an aside, are those very deep false floors?
From hahn at mcmaster.ca  Mon Jun 18 11:39:12 2012
From: hahn at mcmaster.ca (Mark Hahn)
Date: Mon, 18 Jun 2012 11:39:12 -0400 (EDT)
Subject: [Beowulf] Caption Competition
In-Reply-To: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com>
References: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com>

> http://regmedia.co.uk/2012/06/17/ibm_sequoia_llnl.jpg
>
> There must be some funny caption for this!

"do these cables clash with my safety vest?"
"as you can see, your DC TCO will improve once you hire cabling gnomes."
"down here is where we store the pron."
"since this cluster will melt the polar icecaps, it's built on stilts!"

> As an aside, are those very deep false floors?

we have a location with a raised floor of ~4 ft. I'm not sure how that was
chosen, but I also can't think of any reason why not. I mean, in general,
raised floors are a chilled-air plenum, so it's clearly good to avoid
narrow ones. (the DC I sit next to has about 16", and 2-8" of that is
consumed by cables.)

in general, I would advocate engineering DCs with airflow as unobstructed
as possible on both the hot and cold sides, and trying hard to keep cables
out of the way of either. I'd love to see some CFD simulations of
alternative DC layouts. for instance, is it a good design to have no raised
floor, but sealed H/C aisles fed by separate sets of ducts? how about a
"linear" DC, where there is just a row of chillers aligned with a single
row of racks?

From rigved.sharma123 at gmail.com  Mon Jun 18 14:58:36 2012
From: rigved.sharma123 at gmail.com (rigved sharma)
Date: Tue, 19 Jun 2012 00:28:36 +0530
Subject: [Beowulf] qsub flag for reservation

Hi,

We are using Torque and Maui. We have three dedicated reservations for user
john (john.0, john.1, john.2) on different nodes. We want john to submit
his jobs with the ADVRES flag. We know the flag syntax for a single
reservation ID, but not for multiple reservation IDs. How do we specify
that?
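For what it's worth, the documented Maui route is the ADVRES
resource-manager extension passed through qsub. A sketch from memory of
the Maui admin guide, so verify before relying on it; as far as I know
ADVRES does not take a comma-separated list of reservation IDs:

    # Tie a job to one specific standing reservation:
    qsub -W x=FLAGS:ADVRES:john.0 job.sh

    # Omitting the reservation ID requires the job to run inside *some*
    # reservation it has access to; if the ACLs on john.0, john.1 and
    # john.2 all grant john access, this effectively covers all three:
    qsub -W x=FLAGS:ADVRES job.sh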
From deadline at eadline.org  Mon Jun 18 18:01:42 2012
From: deadline at eadline.org (Douglas Eadline)
Date: Mon, 18 Jun 2012 18:01:42 -0400
Subject: [Beowulf] Some history and my theory (was Status of beowulf.org)
In-Reply-To: <4FDC64FE.4040201@st-andrews.ac.uk>
References: <4FDC64FE.4040201@st-andrews.ac.uk>
Message-ID: <06fb57baefa1dc6f4b69db7600a6674f.squirrel@mail.eadline.org>

All,

When Don was running the list, moderation was there to eliminate spam;
ever notice how clean this list has been? That is, there was a list of
white hats who could always post (old timers, mostly); everything else was
moderated to check for spam. I assume the list is now running on autopilot
(actually with no pilot), where over-moderation is the rule to catch spam
and no one has assumed Don's role of releasing the few true Beowulf
messages in the sea of spam (see below).

You may find this helpful. (From my list archives, Wed, February 8, 2006
3:54 pm)

-----

After having a near-perfect record of keeping out spam and virus email,
one slipped through yesterday.

It's a good example of why mailing lists can't be auto-moderated. The
current elaborate system requires heavy human moderation, and this message
still slid past everything and was automatically approved.

The message appeared to come from a subscribed user, so it passed the
first check. (This is actually common: spammers and viruses use pairs of
addresses from the same source, so evil mail is likely to come from
someone you have heard of.)

The message passed both ClamAV and SpamAssassin (although a compressed zip
file should have triggered something). It didn't have any of the keywords
that are configured in Mailman's "hold" rules. And finally, that user was
approved for auto-post for messages that passed all of the previous rules.

Please keep this event in mind before you complain that your message was
held for moderation. 95-99% (depending on the day) of inbound mail to the
mailing lists is immediately discarded as obvious viruses and spam. Only
very low-scoring mail from approved subscribers is eligible for
auto-approval. The rest is held for manual moderation. Only about 2% of
those held messages are valid postings. That means about 50 messages
manually discarded for each manually approved posting. And except for a
few weeks scattered over the history of the list, I've been the sole or
primary moderator.

The bottom line is that we are considering a message board format to
replace the mailing list. It would require logins to post, and retroactive
moderation to delete advertising and trolls. Any opinions?
--
Donald Becker

--
Doug

From samuel at unimelb.edu.au  Mon Jun 18 21:14:45 2012
From: samuel at unimelb.edu.au (Christopher Samuel)
Date: Tue, 19 Jun 2012 11:14:45 +1000
Subject: [Beowulf] Caption Competition
In-Reply-To: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com>
References: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com>
Message-ID: <4FDFD285.4030801@unimelb.edu.au>

On 19/06/12 00:31, Hearns, John wrote:

> There must be some funny caption for this!

"This isn't what I was expecting when they said I'd be a support engineer!"

> As an aside, are those very deep false floors?

They are, a fair bit deeper than what we have for our BG/Q, but then they
have 24x more racks than us and so need a lot more plumbing and cabling.

cheers!
Chris

-- 
Christopher Samuel - Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: samuel at unimelb.edu.au   Phone: +61 (0)3 903 55545
http://www.vlsci.unimelb.edu.au/

From samuel at unimelb.edu.au  Mon Jun 18 21:19:25 2012
From: samuel at unimelb.edu.au (Christopher Samuel)
Date: Tue, 19 Jun 2012 11:19:25 +1000
Subject: [Beowulf] Status of beowulf.org?
In-Reply-To: <3FBA67A9A790594FA9F74DD91A869A7812A695@046-CH1MPN1-081.046d.mgd.msft.net>
References: <4FD89DFF.9020708@ias.edu>
	<4FDBBBF7.1020802@cse.ucdavis.edu>
	<4FDBC0E2.9090706@scalableinformatics.com>
	<201206161008.53854.samuel@unimelb.edu.au>
	<3FBA67A9A790594FA9F74DD91A869A7812A695@046-CH1MPN1-081.046d.mgd.msft.net>
Message-ID: <4FDFD39D.1080002@unimelb.edu.au>

On 18/06/12 22:47, Pierce, Thomas H (H) wrote:

> Hi All,

Hiya,

> As a "forced" lurker for the last few years, I have seen posts
> "moderated" and lost. I agree with Joe L. that moderation "kills" more
> discussions than it "rescues" diversions.

I think this is true for a moderated list with no active moderator (as we
have now), but with a (very) light hand and a rapid transition from
moderated to unmoderated for new users it may not be too bad.

That said, I've never had to set moderation for new users on any lists
I've run before, but then they weren't as public as this, and Doug has
already presented evidence that spammers have got stuff through to the
list before now.

> Alas, USENET seems to be gone and "free" newsgroups and mailing lists
> are less popular.

Very sad (though inevitable for USENET, I think). I *really* dislike
web-based forums.

> If one can vote in a colleague, I would like to see Doug Eadline or Joe
> L. host the mailing list.
The evidence is that the hosting itself is OK (in that the list is
continuing to work despite the website being inaccessible); the problems
are the lack of list admins and the website.

cheers,
Chris

-- 
Christopher Samuel - Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: samuel at unimelb.edu.au   Phone: +61 (0)3 903 55545
http://www.vlsci.unimelb.edu.au/

From lindahl at pbm.com  Tue Jun 19 03:18:09 2012
From: lindahl at pbm.com (Greg Lindahl)
Date: Tue, 19 Jun 2012 00:18:09 -0700
Subject: [Beowulf] Caption Competition
Message-ID: <20120619071809.GC23616@bx9.net>

On Mon, Jun 18, 2012 at 11:39:12AM -0400, Mark Hahn wrote:

> we have a location with a raised floor of ~4 ft. I'm not sure how
> that was chosen, but I also can't think of any reason why not.
> I mean, in general, raised floors are a chilled-air plenum,
> so it's clearly good to avoid narrow ones. (the DC I sit next to
> has about 16", and 2-8" of that is consumed by cables.)

Beats me how people design these things, but yeah, deep floors aren't that
unusual, although I suppose I've heard of them mostly in places where data
cabling is under the floor. The standard in the Silicon Valley these days
is to run data cables above the racks, and power cables under the floor,
if possible.

As for DCs which don't have raised floors at all, they are common in the
Silicon Valley. They tell me that they use momentum to get the cold air
from the duct in the ceiling down to the floor. As far as I can tell, that
works fine: in the one data center where I've had nodes with inadequate
cooling and no raised floor, the coolest nodes were still at the bottom.

-- greg

p.s. we should grab the archives and move the list to a server that "we"
control. Just sayin'

From john.hearns at mclaren.com  Tue Jun 19 04:44:19 2012
From: john.hearns at mclaren.com (Hearns, John)
Date: Tue, 19 Jun 2012 09:44:19 +0100
Subject: [Beowulf] Caption Competition
References: <207BB2F60743C34496BE41039233A8090EAF47A6@MRL-PWEXCHMB02.mil.tagmclarengroup.com>
Message-ID: <207BB2F60743C34496BE41039233A8090EAF50F0@MRL-PWEXCHMB02.mil.tagmclarengroup.com>

"Dang, Earl, was that there knit one, purl one or knit two, purl one?"
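On Greg's postscript about grabbing the archives: Pipermail keeps gzipped
monthly mbox files alongside the HTML archive, so a mirror is one
recursive fetch. A sketch assuming the stock Mailman URL layout; the
/pipermail/beowulf/ path is a guess, not verified:

    # Mirror the monthly mbox archives (named like 2012-June.txt.gz).
    wget -r -np -nH -A '*.txt.gz' http://www.beowulf.org/pipermail/beowulf/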
From eugen at leitl.org  Tue Jun 19 08:34:46 2012
From: eugen at leitl.org (Eugen Leitl)
Date: Tue, 19 Jun 2012 14:34:46 +0200
Subject: [Beowulf] building a 96-core Ubuntu ARM solar-powered cluster
Message-ID: <20120619123446.GE17120@leitl.org>

(ob caveat phoronix)

http://www.phoronix.com/scan.php?page=article&item=mit_cluster_build&num=1

Building A 96-Core Ubuntu ARM Solar-Powered Cluster
Published on June 19, 2012
Written by Michael Larabel

Last week I shared results from the Phoronix 12-core ARM Linux mini cluster
that was constructed out of six PandaBoard ES development boards. Over the
weekend, a 96-core ARM cluster succeeded this build. While it packs nearly
100 cores and runs Ubuntu Linux, its power consumption was just a bit more
than 200 Watts. This array of nearly 100 processor cores was even powered
up by a solar panel.

This past weekend I was out at the Massachusetts Institute of Technology
(MIT) where this build took place. A massive ARM build-out has been in the
plans for a few months, with the aim of even running it off a solar panel.
The build was a success, and by Sunday the goals were realized.

Due to my past ARM Linux benchmarking on Phoronix that they have followed,
their use of the Phoronix Test Suite, and my experience with Linux
benchmarking and performance testing in general, I was invited over to MIT
to help with this 96-core ARM build after having collaborated with them for
a few months.

This cluster / super-computer was built around 48 PandaBoards. The bulk of
the PandaBoards were not the ES model (I brought my collection of
PandaBoard ES models as back-ups for the PandaBoard nodes that failed), but
just the vanilla model. The non-ES model packs a Texas Instruments OMAP4430
with a dual-core 1.0GHz Cortex-A9 processor. The GPU and CPU of the
PandaBoard ES with its OMAP4460 run at higher clock speeds, but aside from
that it is very similar to the OMAP4430 model.

For maximum density and to make it easier to transport, the PandaBoards
ended up being stacked vertically. The enclosure for the 48 PandaBoards was
an industrial trashcan. Rather than using AC adapters, the PandaBoards were
running off a USB power source. The power consumption of the original
PandaBoard is similar to that of the PandaBoard ES, or perhaps slightly
lower when using the more efficient USB power source. My PandaBoard ES
testing usually indicates about a 3 Watt idle per board, 5 Watts under
load, or 6 Watts under extreme load. This MIT 96-core cluster would idle at
just under 170 Watts, and for the loads we hit it with over the weekend it
usually went just a bit above 200 Watts. Overall, it was a fairly
interesting weekend project!

On the software side was a stock Ubuntu 12.04 ARM OMAP4 installation across
all 48 PandaBoards on the SD cards. As far as benchmark results go, MIT
sent in some numbers for the Green500, and some other performance tests are
still being worked out. The benchmarks I ran on the hardware differed a bit
from my expectations based upon what I was achieving with my 12-core
PandaBoard ES cluster, so until all the kinks in the new build are worked
out I will refrain from sharing any numbers. Such many-core ARM clusters,
though, are showing great potential in performance-per-Watt scenarios.
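The quoted wattages are internally consistent with the per-board figures; a
quick sanity check on the arithmetic (the numbers come from the article,
the script itself is just illustrative):

    # Per-board and per-core power from the article's figures.
    boards = 48
    cores = boards * 2             # dual-core OMAP4430 per board
    idle_w, load_w = 170.0, 200.0  # whole-cluster readings

    print("idle per board: %.2f W" % (idle_w / boards))  # ~3.5 W
    print("load per board: %.2f W" % (load_w / boards))  # ~4.2 W
    print("load per core:  %.2f W" % (load_w / cores))   # ~2.1 W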
For now, see my 12-core ARM cluster results. I will also have more numbers
on the way shortly from the Phoronix build. Over the weekend there also was
not much time for performance tuning; Ubuntu 12.10 presents some very
impressive performance gains, as Phoronix results from earlier this month
have indicated. MIT will be putting out a video, a couple of papers, and
some other information on this 96-core / 48-PandaBoard cluster, so stay
tuned for much more information.

From alscheinine at tuffmail.us  Tue Jun 19 11:23:49 2012
From: alscheinine at tuffmail.us (Alan Louis Scheinine)
Date: Tue, 19 Jun 2012 10:23:49 -0500
Subject: [Beowulf] Some history and my theory (was Status of beowulf.org)
In-Reply-To: <06fb57baefa1dc6f4b69db7600a6674f.squirrel@mail.eadline.org>
References: <4FDC64FE.4040201@st-andrews.ac.uk>
	<06fb57baefa1dc6f4b69db7600a6674f.squirrel@mail.eadline.org>
Message-ID: <4FE09985.2030402@tuffmail.us>

I remember that post from Donald Becker but did not have a copy. Thank you
very much for reminding us, and in particular potential volunteers, how
much time is involved. The end result has been of high quality over the
years, thanks to Don.

Regards,
Alan

Douglas Eadline wrote:
> When Don was running the list, moderation was there to eliminate spam;
> ever notice how clean this list has been? That is, there was a list of
> white hats who could always post (old timers, mostly); everything else
> was moderated to check for spam. I assume the list is now running on
> autopilot (actually with no pilot), where over-moderation is the rule to
> catch spam and no one has assumed Don's role of releasing the few true
> Beowulf messages in the sea of spam.
-- 
Alan Scheinine
Email: alscheinine at tuffmail.us

From hahn at mcmaster.ca  Fri Jun 29 15:50:03 2012
From: hahn at mcmaster.ca (Mark Hahn)
Date: Fri, 29 Jun 2012 15:50:03 -0400 (EDT)
Subject: [Beowulf] water cooling

Hi all,

I'm involved in some planning that tries to evaluate large HPC datacenter
designs for a few years out. One really fundamental issue that seems
unclear is whether direct water cooling will be fairly prevalent by then.

One train of thought is that power densities will increase to 30 kW/rack or
so, necessitating water. But will that mean rack-back radiators (far less
efficient, but fairly routine today), or will designs obtain much higher
efficiency by skipping the air-cooling step (like Aquasar, SuperMUC, the K
machine, etc.)?

so, how commodity will direct water cooling be? for extra points, what
kW/rack density are you planning?

(by "commodity", I mean "available from vendors like HP/Dell/IBM as well as
parts vendors like Supermicro".)

thanks, mark.
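As a back-of-envelope illustration of why 30 kW racks push designs toward
water: the required coolant flow follows the heat balance Q = m_dot * c_p *
dT, and water carries roughly 3,500x more heat per unit volume than air. A
sketch with an assumed 10 K coolant temperature rise:

    # Coolant flow needed to remove 30 kW at a 10 K temperature rise.
    q_w = 30e3    # heat load per rack, W
    dt = 10.0     # coolant temperature rise, K

    air_kgs = q_w / (1005.0 * dt)     # c_p of air ~ 1005 J/(kg K)
    water_kgs = q_w / (4186.0 * dt)   # c_p of water ~ 4186 J/(kg K)

    air_m3s = air_kgs / 1.2           # air density ~ 1.2 kg/m^3
    print("air:   %.2f m^3/s (~%d CFM)" % (air_m3s, air_m3s * 2119))
    print("water: %.2f L/s (~%.1f US GPM)" % (water_kgs, water_kgs * 15.85))

That works out to roughly 2.5 m^3/s (~5,300 CFM) of air versus about 0.7
L/s (~11 GPM) of water per rack, which is why plenum depth matters so much
for air and why direct water loops start to look attractive at these
densities.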