From diep at xs4all.nl Fri Dec 2 02:10:27 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Fri, 2 Dec 2011 08:10:27 +0100 Subject: [Beowulf] Dutch Airco Message-ID: Hey all, I live in The Netherlands, so no surprise, solving the cooling in a barn for a small cluster i intend to solve by blowing some air to outside. Now the small cluster will be a few kilowatt only, so i wonder how much air i need to blow in and out of the room to outside. As this is a cluster for my chessprogram and it maybe 1 day a year reaches 30C outside, we don't have to worry about outside temperature too much, as i can switch off the cluster when necessary that single lucky day a year. If it's uptime 99% of the time this cluster i'm more than happy. Majority of the year it's underneath 18C. Maybe a day or 60 a year it might be above 18C and maybe 7 days it is above 25C outside. I wouldn't have the cash to buy a real airconditioning for the cluster anyway, as that would increase power usage too much, so intend to solve it Dutch style. Interesting is to have a function or table that plots outside temperature and number of kilowatts used, starting with 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5 ,5, 5.5 , 6 kilowatts. For sure cluster won't be above 6 kilowatt. First few weeks 1 kilowatt then it will be 2 kilowatt and i doubt it'll reach 4 kilowatt. Which CFM do i need to have to blow outside hot air and suck inside cold air, to get to what i want? Thanks in advance anyone answerring the question. Kind Regards, Vincent _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From diep at xs4all.nl Fri Dec 2 02:40:15 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Fri, 2 Dec 2011 08:40:15 +0100 Subject: [Beowulf] Intel unveils 1 teraflop chip with 50-plus cores In-Reply-To: <20111116095238.GX31847@leitl.org> References: <20111116095238.GX31847@leitl.org> Message-ID: hi, Someone anonymous posted in another forum this Knights corner chip is actually 64 cores, which makes sense, that it should be < 300 watts, though we'll have to see whether that's the case, as the AMD HD Radeon 6990 as well as the Nvidia GTX590 are rated by manufacturer nearly 400 watt, and 512 bits vectors. So seems it's larrabee. The weird mix of a cache coherent chip with big vectors. Sort of 'in between cpu and manycore' hybrid. Which i'd expect to not have a very long life, as it'll be only interesting to matrix calculations, because it's tougher to program than a GPU with those huge vectors for about any other sort of calculation, and it's gonna deliver less Tflops of course for matrix calculations than the next generation gpu's can. So it might have some sales opportunity until the next generation of gpu's gets released. On Nov 16, 2011, at 10:52 AM, Eugen Leitl wrote: > > http://seattletimes.nwsource.com/html/technologybrierdudleysblog/ > 2016775145_wow_intel_unveils_1_teraflop_c.html > > Wow: Intel unveils 1 teraflop chip with 50-plus cores > > Posted by Brier Dudley > > I thought the prospect of quad-core tablet computers was exciting. > > Then I saw Intel's latest -- a 1 teraflop chip, with more than 50 > cores, that > Intel unveiled today, running it on a test machine at the SC11 > supercomputing > conference in Seattle. 
> > That means my kids may take a teraflop laptop to college -- if > their grades > don't suffer too much having access to 50-core video game consoles. > > It wasn't that long ago that Intel was boasting about the first > supercomputer > with sustained 1 teraflop performance. That was in 1997, on a > system with > 9,298 Pentium II chips that filled 72 computing cabinets. > > Now Intel has squeezed that much performance onto a matchbook-sized > chip, > dubbed "Knights Ferry," based on its new "Many Integrated Core" > architecture, > or MIC. > > It was designed largely in the Portland area and has just started > manufacturing. > > "In 15 years that's what we've been able to do. That is stupendous. > You're > witnessing the 1 teraflop barrier busting," Rajeeb Hazra, general > manager of > Intel's technical computing group, said at an unveiling ceremony. > (He holds > up the chip here) > > A single teraflop is capable of a trillion floating point > operations per > second. > > On hand for the event -- in the cellar of the Ruth's Chris Steak > House in > Seattle -- were the directors of the National Center for Computational > Sciences at Oak Ridge Laboratory and the Application Acceleration > Center of > Excellence. > > Also speaking was the chief science officer of the GENCI > supercomputing > organization in France, which has used its Intel-based system for > molecular > simulations of Alzheimer's, looking at issues such as plaque > formation that's > a hallmark of the disease. > > "The hardware is hardly exciting. ... The exciting part is doing the > science," said Jeff Nichols, acting director of the computational > center at > Oak Ridge. > > The hardware was pretty cool, though. > > George Chrysos, the chief architect of Knights Ferry, came up from the > Portland area with a test system running the new chip, which was > connected to > a speed meter on a laptop to show that it was running around 1 > teraflop. > > Intel had the test system set up behind closed doors -- on a coffee > table in > a hotel suite at the Grand Hyatt, and wouldn't allow reporters to take > pictures of the setup. > > Nor would the company specify how many cores the chip has -- just > more than > 50 -- or its power requirement. > > If you're building a new system and want to future-proof it, the > Knights > Ferry chip uses a double PCI Express slot. Chrysos said the systems > are also > likely to run alongside a few Xeon processors. > > This means that Intel could be producing teraflop chips for personal > computers within a few years, although there's lots of work to be > done on the > software side before you'd want one. > > Another question is whether you'd want a processor that powerful on > a laptop, > for instance, where you may prefer to have a system optimized for > longer > battery life, Hazra said. > > More important, Knights Ferry chips may help engineers build the next > generation of supercomputing systems, which Intel and its partners > hope to > delivery by 2018. > > Power efficiency was a highlight of another big announcement this > week at > SC11. On Monday night, IBM announced its "next generation > supercomputing > project," the Blue Gene/Q system that's heading to Lawrence Livermore > National Laboratory next year. > > Dubbed Sequoia, the system should run at 20 petaflops peak > performance. IBM > expects it to be the world's most power-efficient computer, > processing 2 > gigaflops per watt. > > The first 96 racks of the system could be delivered in December. 
The > Department of Energy's National Nuclear Security Administration > uses the > systems to work on nuclear weapons, energy reseach and climate > change, among > other things. > > Sequoia complements another Blue Gene/Q system, a 10-petaflop setup > called > "Mira," which was previously announced by Argonne National Laboratory. > > A few images from the conference, which runs through Friday at the > Washington > State Convention & Trade Center, starting with perusal of Intel > boards: > > > Take home a Cray today! > > IBM was sporting Blue Genes, and it wasn't even casual Friday: > > A 94 teraflop rack: > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From james.p.lux at jpl.nasa.gov Fri Dec 2 08:48:14 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Fri, 2 Dec 2011 05:48:14 -0800 Subject: [Beowulf] Dutch Airco In-Reply-To: Message-ID: On 12/1/11 11:10 PM, "Vincent Diepeveen" wrote: >Hey all, I live in The Netherlands, so no surprise, solving the >cooling in a barn for a small cluster i intend to solve by blowing >some air to outside. > >Now the small cluster will be a few kilowatt only, so i wonder how >much air i need to blow in and out of the room to outside. > >As this is a cluster for my chessprogram and it maybe 1 day a year >reaches 30C outside, we don't have to worry about outside temperature >too much, >as i can switch off the cluster when necessary that single lucky day >a year. If it's uptime 99% of the time this cluster i'm more than happy. > >Majority of the year it's underneath 18C. Maybe a day or 60 a year it >might be above 18C and maybe 7 days it is above 25C outside. > >I wouldn't have the cash to buy a real airconditioning for the >cluster anyway, as that would increase power usage too much, so >intend to solve it Dutch style. > >Interesting is to have a function or table that plots outside >temperature and number of kilowatts used, starting with 1, 1.5, 2, >2.5, 3, 3.5, 4, 4.5 ,5, 5.5 , 6 >kilowatts. For sure cluster won't be above 6 kilowatt. > >First few weeks 1 kilowatt then it will be 2 kilowatt and i doubt >it'll reach 4 kilowatt. > >Which CFM do i need to have to blow outside hot air and suck inside >cold air, to get to what i want? What you didn't say is what temperature you want your computers to be at (or, more properly, what temperature rise you want in the air going through). It's all about the specific heat of the air, which is in units of joules/(kg K)... That is it tells you how many joules it takes to raise one kilogram of air one degree. For gases, there's two different numbers, one for constant pressure and one for constant temperature, and for real gases those vary with temperature, pressure, etc. 
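The estimate worked out by hand just below can also be tabulated in a few lines of code, which is essentially the kilowatt-versus-airflow table Vincent asked for. The C sketch below uses the same round numbers as the hand calculation that follows (cp of about 1012 J/(kg*K), air density of about 1.2 kg/m3); the 10 C allowed rise and the 1-6 kW range are illustrative assumptions, not recommendations.

/* Rough airflow needed to carry a given heat load away with outside air.
 * Assumed round numbers, not measurements from this thread:
 *   cp  ~ 1012 J/(kg*K)   specific heat of air at constant pressure
 *   rho ~ 1.2  kg/m^3     density of air around 15-20 C
 * 1 m^3/s is about 2119 cubic feet per minute (CFM).
 */
#include <stdio.h>

int main(void)
{
    const double cp  = 1012.0;   /* J/(kg*K) */
    const double rho = 1.2;      /* kg/m^3   */
    const double dT  = 10.0;     /* allowed temperature rise, K */

    printf("  kW    kg/s   m^3/s     CFM\n");
    for (double kw = 1.0; kw <= 6.01; kw += 0.5) {
        double q    = kw * 1000.0;      /* heat load, J/s            */
        double mdot = q / (cp * dT);    /* required mass flow, kg/s  */
        double vdot = mdot / rho;       /* required volume flow      */
        double cfm  = vdot * 2118.9;    /* same, in ft^3 per minute  */
        printf("%4.1f  %6.3f  %6.3f  %6.0f\n", kw, mdot, vdot, cfm);
    }
    return 0;
}

For 1 kW and a 10 degree rise this prints roughly 170 CFM, matching the hand calculation below; halving the allowed rise doubles the airflow, and the caveats below about bypass air and mixing still apply.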
Q = cp * m * deltaT

Or rearranging:

M = Q/(cp*deltaT)

But for now use Cp (constant pressure), which for air at typical room temperature is 1.012 J/(g*K).

You want to dump a kilowatt in (1000 Joules/sec), and let's assume a 10 degree rise (bring the air in at 10C, exhaust it at 20C):

M = 1000/(1.012*10) = about 99 g/sec, i.e. roughly 0.1 kg/sec

If the heat load is 5 times as large, then you need 5 times the air. If you want half the temp rise, then twice the air, etc.

How many CFM is 0.1 kg/sec? At 15 C, the density is 1.225 kg/m3, so you need about 0.08 m3/sec (as a practical matter, when doing back-of-the-envelope estimates, I figure air is about 35 cubic feet per cubic meter... so 0.08 * 35 = about 2.8 cubic feet per second, times 60...)

About 170 cubic feet per minute per kilowatt for a 10 degree rise.

Be aware that life is actually much more complicated and you need more air. For one thing, the heat from your box is not spread evenly over ALL the air: some air doesn't go through the box at all, so what happens is you have, say, 200 cfm through the box with a 10 degree rise and 200 cfm around the box with zero rise, and the net rise is 5 degrees. Also, the thermodynamics of gases is substantially more complex than my simple "non-compressible constant density" approximation. Since Tin and Tout are close here (280K and 290K) the errors are small, but when you start talking about rises of, say, 20-30C, it starts to make a difference.

_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.

From mathog at caltech.edu Fri Dec 2 15:29:00 2011 From: mathog at caltech.edu (mathog) Date: Fri, 02 Dec 2011 12:29:00 -0800 Subject: [Beowulf] Dutch Airco In-Reply-To: References: Message-ID: <909dc911ec2ef79358e51241965baeaf@saf.bio.caltech.edu>

Heat transfer isn't the only issue to consider. How far is this from the ocean? Salty air is pretty corrosive and you might have a rust problem if you blow that through the cases. What about moisture? If you live in a humid or foggy area there may be condensation problems.

Regards,

David Mathog mathog at caltech.edu Manager, Sequence Analysis Facility, Biology Division, Caltech

_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.

From raysonlogin at gmail.com Fri Dec 2 16:01:57 2011 From: raysonlogin at gmail.com (Rayson Ho) Date: Fri, 2 Dec 2011 16:01:57 -0500 Subject: [Beowulf] HPC on the cloud Message-ID:

On Tue, Oct 4, 2011 at 3:29 PM, Chris Dagdigian wrote: > Here is a cliche example: Amazon S3 > > Before the S3 object storage service will even *acknowledge* a > successful PUT request, your file is already at rest in at least three > amazon facilities. > > So to "really" compare S3 against what you can do locally you at least > have to factor in the cost of your organization being able to provide 3x > multi-facility replication for whatever object store you choose to deploy...

Agreed. Users who need less reliable storage can use Reduced Redundancy Storage (RRS) instead.
RRS only creates 2 copies instead of 3, and the price is only 2/3 the price of S3: http://aws.amazon.com/s3/#pricing And Amazon recently introduced the "Heavy Utilization Reserved Instances" and "Light Utilization Reserved Instances", which bring the cost down quite a bit as well: http://aws.typepad.com/aws/2011/12/reserved-instance-options-for-amazon-ec2.html With VFIO, the latency difference between 10Gb Ethernet and Infiniband should be narrowing quite a bit as well: http://blogs.cisco.com/performance/open-mpi-over-linux-vfio/ Finally, Amazon Cloud Supercomputer ranks #42 on the most recent TOP500 list: http://i.top500.org/system/177457 I still think that a lot of companies will keep on buying their own servers for compute farms & HPC clusters. But for those who don't want to own their servers, or want to have a cluster quickly (less than 30 mins to build a basic HPC cluster[1] - of course StarCluster or CycleCloud can do most of the heavy lifting faster), or don't have the expertise, then remote HPC clusters (whether it be Amazon EC2 Cluster Compute Instances or Gridcore/Gompute[2]) are getting very attractive. [1]: http://www.youtube.com/watch?v=5zBxl6HUFA4 [2]: https://www.gompute.com/web/guest/how-it-works Rayson ================================= Grid Engine / Open Grid Scheduler http://gridscheduler.sourceforge.net/ Scalable Grid Engine Support Program http://www.scalablelogic.com/ > I don't want to be seen as a shill so I'll stop with that example. The > results really are surprising once you start down the "true cost of IT > services..." road. > > > As for industry trends with HPC and IaaS ... > > I can assure you that in the super practical & cynical world of biotech > and pharma there is already an HPC migration to IaaS platforms that is > years old already. It's a lot easier to see where and how your money is > being spent inside a biotech startup or pharma and that is (and has) > shunted a decent amount of spending towards cloud platforms. > > The easy stuff is moving to IaaS platforms. The hard stuff, the custom > stuff, the tightly bound stuff and the data/IO-bound stuff is staying > local of course - but that still means lots of stuff is moving externally. > > The article that prompted this thread is a great example of this. The > client company had a boatload of one-off molecular dynamics simulations > to run. So much, in fact, that the problem was computationally > infeasable to even consider doing inhouse. > > So they did it on AWS. > > 30,000 CPU cores. For ~$9,000 dollars. > > Amazing. > > It's a fun time to be in HPC actually. And getting my head around "IaaS" > platforms turned me onto things (like opscode chef) that we are now > bringing inhouse and integrating into our legacy clusters and grids. > > > Sorry for rambling but I think there are 2 main drivers behind what I > see moving HPC users and applications into IaaS cloud platforms ... > > > (1) The economies of scale are real. IaaS providers can run better, > bigger and cheaper than we can and they can still make a profit. This is > real, not hype or sales BS. (as long as you are honest about your actual > costs...) > > > (2) The benefits of "scriptable everything" or "everything has an API". > I'm so freaking sick of companies installing VMWare and excreting a > press release calling themselves a "cloud provider". Virtual servers and > virtual block storage on demand are boring, basic and pedestrian. That > was clever in 2004. 
I need far more "glue" to build useful stuff in a > virtual world and IaaS platforms deliver more products/services and > "glue" options than anyone else out there. The "scriptable everything" > nature of IaaS is enabling a lot of cool system and workflow building, > much of which would be hard or almost impossible to do in-house with > local resources. > > > > My $.02 > > -Chris > > (corporate hat: chris at bioteam.net) > > > > > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Rayson ================================================== Open Grid Scheduler - The Official Open Source Grid Engine http://gridscheduler.sourceforge.net/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From Greg at Keller.net Mon Dec 5 13:35:57 2011 From: Greg at Keller.net (Greg Keller) Date: Mon, 5 Dec 2011 12:35:57 -0600 Subject: [Beowulf] SMB + RDMA? Message-ID: Hi, I'm curious if anyone on the list has seen this "SMB over RDMA in "The Wild" yet: http://www.mellanox.com/content/pages.php?pg=press_release_item&rec_id=642 If so, any initial feedback on it's usefulness? Any hint on where to find more info short of a Mellanox Rep? We run a bunch of WinHPC and have issues with overwhelming SMB2.0 over 10GbE, so I'm curious if this path is likely to help or hurt us. Also curious if it requires the new ConnectX v3 cards, or if we can use our ConnectX v1 and v2 cards. Cheers! Greg -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From Shainer at Mellanox.com Mon Dec 5 13:39:40 2011 From: Shainer at Mellanox.com (Gilad Shainer) Date: Mon, 5 Dec 2011 18:39:40 +0000 Subject: [Beowulf] SMB + RDMA? In-Reply-To: References: Message-ID: Greg, Feel free to contact me directly. It is part of Windows Server 8 and Microsoft has done several demonstrations already. It works with any ConnectX card. Gilad From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Greg Keller Sent: Monday, December 05, 2011 10:38 AM To: beowulf at beowulf.org Subject: [Beowulf] SMB + RDMA? Hi, I'm curious if anyone on the list has seen this "SMB over RDMA in "The Wild" yet: http://www.mellanox.com/content/pages.php?pg=press_release_item&rec_id=642 If so, any initial feedback on it's usefulness? Any hint on where to find more info short of a Mellanox Rep? We run a bunch of WinHPC and have issues with overwhelming SMB2.0 over 10GbE, so I'm curious if this path is likely to help or hurt us. Also curious if it requires the new ConnectX v3 cards, or if we can use our ConnectX v1 and v2 cards. Cheers! Greg -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From amjad11 at gmail.com Sat Dec 10 15:21:11 2011 From: amjad11 at gmail.com (amjad ali) Date: Sat, 10 Dec 2011 15:21:11 -0500 Subject: [Beowulf] How to justify the use MPI codes on multicore systems/PCs? Message-ID: Hello All, I developed my MPI based parallel code for clusters, but now I use it on multicore/manycore computers (PCs) as well. How to justify (in some thesis/publication) the use of a distributed memory code (in MPI) on a shared memory (multicore) machine. I guess to explain two reasons: (1) Plan is to use several hunderds processes in future. So MPI like stuff is necessary. To maintain code uniformity and save cost/time for developing shared memory solution (using OpenMP, pthreads etc), I use the same MPI code on shared memory systems (like multicore PCs). MPI based codes give reasonable performance on multicore PCs, if not the best. (2) The latest MPI implementations are intelligent enough that they use some efficient mechanism while executing MPI based codes on shared memory (multicore) machines. (please tell me any reference to quote this fact). Please help me in formally justifying this and comment/modify above two justifications. Better if I you can suggent me to quote some reference of any suitable publication in this regard. best regards, Amjad Ali -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From sabujp at gmail.com Sat Dec 10 15:48:51 2011 From: sabujp at gmail.com (Sabuj Pattanayek) Date: Sat, 10 Dec 2011 14:48:51 -0600 Subject: [Beowulf] How to justify the use MPI codes on multicore systems/PCs? In-Reply-To: References: Message-ID: Mallon, et. al., (2009) Performance Evaluation of MPI, UPC and OpenMP on Multicore Architectures : http://gac.udc.es/~gltaboada/papers/mallon_pvmmpi09.pdf newer paper here, says to use a hybrid approach with openmp + mpi : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.190.6479 HTH, Sabuj On Sat, Dec 10, 2011 at 2:21 PM, amjad ali wrote: > Hello All, > > I developed my MPI based parallel code for clusters, but now I use it on > multicore/manycore computers (PCs)?as well. How to justify (in some > thesis/publication) the use of a distributed memory code (in MPI)?on a > shared memory (multicore) machine. I guess to explain two reasons: > > (1) Plan is to use several hunderds processes in future. So MPI like stuff > is necessary. To maintain code uniformity and?save cost/time for developing > shared memory solution (using OpenMP, pthreads etc), I use the same MPI code > on?shared memory systems (like multicore?PCs).?MPI based codes?give > reasonable performance on multicore PCs, if not the best. 
> > (2) The latest MPI implementations are intelligent enough that they use some > efficient mechanism while executing?MPI based codes on shared memory > (multicore) machines.? (please tell me any reference to quote this fact). > > > Please help me in formally justifying this and comment/modify above two > justifications. Better if I you can suggent me to quote?some reference of > any suitable publication in this regard. > > best regards, > Amjad Ali > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From deadline at eadline.org Sat Dec 10 17:04:43 2011 From: deadline at eadline.org (Douglas Eadline) Date: Sat, 10 Dec 2011 17:04:43 -0500 (EST) Subject: [Beowulf] How to justify the use MPI codes on multicore systems/PCs? In-Reply-To: References: Message-ID: <55445.192.168.93.213.1323554683.squirrel@mail.eadline.org> Your question seems based on the assumption that shared memory is always better than message passing on shared memory systems. Though this seems like a safe assumption, it may not be true in all cases: http://www.linux-mag.com/id/7884/ of course it all depends on the compiler, the application, the hardware, .... -- Doug Eadline > Hello All, > > I developed my MPI based parallel code for clusters, but now I use it on > multicore/manycore computers (PCs) as well. How to justify (in some > thesis/publication) the use of a distributed memory code (in MPI) on a > shared memory (multicore) machine. I guess to explain two reasons: > > (1) Plan is to use several hunderds processes in future. So MPI like stuff > is necessary. To maintain code uniformity and save cost/time for > developing > shared memory solution (using OpenMP, pthreads etc), I use the same MPI > code on shared memory systems (like multicore PCs). MPI based codes give > reasonable performance on multicore PCs, if not the best. > > (2) The latest MPI implementations are intelligent enough that they use > some efficient mechanism while executing MPI based codes on shared memory > (multicore) machines. (please tell me any reference to quote this fact). > > > Please help me in formally justifying this and comment/modify above two > justifications. Better if I you can suggent me to quote some reference of > any suitable publication in this regard. > > best regards, > Amjad Ali > > -- > This message has been scanned for viruses and > dangerous content by MailScanner, and is > believed to be clean. > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > -- Doug -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. 
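As a concrete illustration of amjad's point (2) above: current MPI libraries route messages between ranks on the same node through shared memory rather than the network stack, and with Open MPI you can even force that choice on the command line to convince yourself. The program below is a minimal sketch; the --mca flags are Open MPI-specific (in 2011-era releases the shared-memory component is called sm), and MPICH2-derived libraries do the equivalent by default through their Nemesis channel.

/* Minimal MPI program: each rank reports the host it runs on.  Launched
 * with all ranks on one multicore box, an MPI library will normally use a
 * shared-memory transport for the on-node traffic instead of TCP or IB. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);
    printf("rank %d of %d on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}

Compile and run, restricting Open MPI to loopback plus shared memory so nothing silently falls back to TCP:

mpicc mpi_hello.c -o mpi_hello
mpirun -np 8 --mca btl self,sm ./mpi_hello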
_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From raysonlogin at gmail.com Mon Dec 12 11:00:23 2011 From: raysonlogin at gmail.com (Rayson Ho) Date: Mon, 12 Dec 2011 11:00:23 -0500 Subject: [Beowulf] How to justify the use MPI codes on multicore systems/PCs? In-Reply-To: References: Message-ID: On Sat, Dec 10, 2011 at 3:21 PM, amjad ali wrote: > (2) The latest MPI implementations are intelligent enough that they use some > efficient mechanism while executing?MPI based codes on shared memory > (multicore) machines.? (please tell me any reference to quote this fact). Not an academic paper, but from a real MPI library developer/architect: http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport/ http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport-part-2/ Open MPI is used by Japan's K computer (current #1 TOP 500 computer) and LANL's RoadRunner (#1 Jun 08 ? Nov 09), and "10^16 Flops Can't Be Wrong" and "10^15 Flops Can't Be Wrong": http://www.open-mpi.org/papers/sc-2008/jsquyres-cisco-booth-talk-2up.pdf Rayson ================================= Grid Engine / Open Grid Scheduler http://gridscheduler.sourceforge.net/ Scalable Grid Engine Support Program http://www.scalablelogic.com/ > > > Please help me in formally justifying this and comment/modify above two > justifications. Better if I you can suggent me to quote?some reference of > any suitable publication in this regard. > > best regards, > Amjad Ali > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > -- Rayson ================================================== Open Grid Scheduler - The Official Open Source Grid Engine http://gridscheduler.sourceforge.net/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From raysonlogin at gmail.com Wed Dec 14 18:33:19 2011 From: raysonlogin at gmail.com (Rayson Ho) Date: Wed, 14 Dec 2011 18:33:19 -0500 Subject: [Beowulf] How to justify the use MPI codes on multicore systems/PCs? In-Reply-To: References: Message-ID: There is a project called "MVAPICH2-GPU", which is developed by D. K. Panda's research group at Ohio State University. You will find lots of references on Google... and I just briefly gone through the slides of "MVAPICH2-?GPU: Optimized GPU to GPU Communication for InfiniBand Clusters"": http://nowlab.cse.ohio-state.edu/publications/conf-presentations/2011/hao-isc11-slides.pdf It takes advantage of CUDA 4.0's Unified Virtual Addressing (UVA) to pipeline & optimize cudaMemcpyAsync() & RMDA transfers. (MVAPICH 1.8a1p1 also supports Device-Device, Device-Host, Host-Device transfers.) Open MPI also supports similar functionality, but as OpenMPI is not an academic project, there are less academic papers documenting the internals of the latest developments (not saying that it's bad - many products are not academic in nature and thus have less published papers...) 
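To make the programming-model point concrete: the whole value of MVAPICH2-GPU / CUDA-aware MPI is that a device pointer can be handed straight to MPI and the library does the pipelined staging internally. The sketch below assumes a CUDA-aware build (MVAPICH2 1.8a or a recent Open MPI) and exactly two ranks; the CUDA_AWARE_MPI switch is just a label for this illustration, not a real predefined macro.

/* Sketch of what CUDA-aware MPI buys you: the send/receive buffer lives in
 * GPU memory.  Run with two ranks.  With a CUDA-aware library the device
 * pointer goes straight to MPI_Send/MPI_Recv and the copies are pipelined
 * internally; with a plain MPI library you stage through the host yourself. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

#define N (1 << 20)   /* number of doubles, ~8 MB */

int main(int argc, char **argv)
{
    int rank;
    double *d_buf;                             /* device buffer       */
    double *h_buf = malloc(N * sizeof *h_buf); /* host staging buffer */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaMalloc((void **)&d_buf, N * sizeof *d_buf);

#ifdef CUDA_AWARE_MPI
    /* CUDA-aware path: pass the device pointer directly. */
    if (rank == 0)
        MPI_Send(d_buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
#else
    /* Plain MPI path: explicit device<->host staging. */
    if (rank == 0) {
        cudaMemcpy(h_buf, d_buf, N * sizeof *h_buf, cudaMemcpyDeviceToHost);
        MPI_Send(h_buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(h_buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        cudaMemcpy(d_buf, h_buf, N * sizeof *h_buf, cudaMemcpyHostToDevice);
    }
#endif

    cudaFree(d_buf);
    free(h_buf);
    MPI_Finalize();
    return 0;
}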
Rayson ================================= Grid Engine / Open Grid Scheduler http://gridscheduler.sourceforge.net/ Scalable Grid Engine Support Program http://www.scalablelogic.com/ On Mon, Dec 12, 2011 at 11:40 AM, Durga Choudhury wrote: > I think this is a *great* topic for discussion, so let me throw some > fuel to the fire: the mechanism described in the blog (that makes > perfect sense) is fine for (N)UMA shared memory architectures. But > will it work for asymmetric architectures such as the Cell BE or > discrete GPUs where the data between the compute nodes have to be > explicitly DMA'd in? Is there a middleware layer that makes it > transparent to the upper layer software? > > Best regards > Durga > > On Mon, Dec 12, 2011 at 11:00 AM, Rayson Ho wrote: >> On Sat, Dec 10, 2011 at 3:21 PM, amjad ali wrote: >>> (2) The latest MPI implementations are intelligent enough that they use some >>> efficient mechanism while executing?MPI based codes on shared memory >>> (multicore) machines.? (please tell me any reference to quote this fact). >> >> Not an academic paper, but from a real MPI library developer/architect: >> >> http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport/ >> http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport-part-2/ >> >> Open MPI is used by Japan's K computer (current #1 TOP 500 computer) >> and LANL's RoadRunner (#1 Jun 08 ? Nov 09), and "10^16 Flops Can't Be >> Wrong" and "10^15 Flops Can't Be Wrong": >> >> http://www.open-mpi.org/papers/sc-2008/jsquyres-cisco-booth-talk-2up.pdf >> >> Rayson >> >> ================================= >> Grid Engine / Open Grid Scheduler >> http://gridscheduler.sourceforge.net/ >> >> Scalable Grid Engine Support Program >> http://www.scalablelogic.com/ >> >> >>> >>> >>> Please help me in formally justifying this and comment/modify above two >>> justifications. Better if I you can suggent me to quote?some reference of >>> any suitable publication in this regard. >>> >>> best regards, >>> Amjad Ali >>> >>> _______________________________________________ >>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >>> To change your subscription (digest mode or unsubscribe) visit >>> http://www.beowulf.org/mailman/listinfo/beowulf >>> >> >> >> >> -- >> Rayson >> >> ================================================== >> Open Grid Scheduler - The Official Open Source Grid Engine >> http://gridscheduler.sourceforge.net/ >> >> _______________________________________________ >> users mailing list >> users at open-mpi.org >> http://www.open-mpi.org/mailman/listinfo.cgi/users > > _______________________________________________ > users mailing list > users at open-mpi.org > http://www.open-mpi.org/mailman/listinfo.cgi/users -- Rayson ================================================== Open Grid Scheduler - The Official Open Source Grid Engine http://gridscheduler.sourceforge.net/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From trainor at presciencetrust.org Sun Dec 18 12:42:43 2011 From: trainor at presciencetrust.org (Douglas J. 
Trainor) Date: Sun, 18 Dec 2011 12:42:43 -0500 Subject: [Beowulf] Nvidia ditches homegrown C/C++ compiler for LLVM Message-ID: "Nobody wants to read the manual," says Gupta with a laugh. And so this expert system has a redesigned visual code profiler that shows bottlenecks in the code, offers hints on how to fix them, and automagically finds the right portions of the CUDA manual to help fix the problem. For instance, the code profiler can show coders how to better use the memory hierarchy in CPU-GPU hybrids, which is a tricky bit of programming. http://www.theregister.co.uk/2011/12/16/nvidia_llvm_cuda_app_dev/print.html _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eugen at leitl.org Thu Dec 22 04:50:40 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 22 Dec 2011 10:50:40 +0100 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR Message-ID: <20111222095040.GK31847@leitl.org> 4312711873 transistors, 28 nm, 2048 cores. 925 MHz, 3 TByte GDDR5 (ECC optional), 384 bit bus. http://www.heise.de/newsticker/meldung/Radeon-HD-7970-Mit-2048-Kernen-an-die-Leistungsspitze-1399905.html _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eugen at leitl.org Thu Dec 22 09:57:44 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 22 Dec 2011 15:57:44 +0100 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <4EF3422B.2090302@ias.edu> References: <20111222095040.GK31847@leitl.org> <4EF3422B.2090302@ias.edu> Message-ID: <20111222145744.GZ31847@leitl.org> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: > Or if your German is rusty: > > http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-launched-benchmarked-fastest-single-gpu-board-available/7204 Wonder what kind of response will be forthcoming from nVidia, given developments like http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ It does seem that x86 is dead, despite good Bulldozer performance in Interlagos http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulldozer-Architektur-legen-los-1378230.html (engage dekrautizer of your choice). 
_______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From prentice at ias.edu Thu Dec 22 10:42:35 2011 From: prentice at ias.edu (Prentice Bisbal) Date: Thu, 22 Dec 2011 10:42:35 -0500 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <20111222145744.GZ31847@leitl.org> References: <20111222095040.GK31847@leitl.org> <4EF3422B.2090302@ias.edu> <20111222145744.GZ31847@leitl.org> Message-ID: <4EF34FEB.8030903@ias.edu> On 12/22/2011 09:57 AM, Eugen Leitl wrote: > On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: > >> Or if your German is rusty: >> >> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-launched-benchmarked-fastest-single-gpu-board-available/7204 > Wonder what kind of response will be forthcoming from nVidia, > given developments like http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ > > It does seem that x86 is dead, despite good Bulldozer performance > in Interlagos > > http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulldozer-Architektur-legen-los-1378230.html > > (engage dekrautizer of your choice). > At SC11, it was clear that everyone was looking for ways around the power wall. I saw 5 or 6 different booths touting the use of FPGAs for improved performance/efficiency. I don't remember there being a single FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, Intem MIC, or something else, I think it's clear that the future of HPC architecture is going to change radically in the next couple years, unless some major breakthrough occurs for commodity processors. I think DE Shaw Research's Anton computer, which uses FPGAs and custom processors, is an excellent example of what the future of HPC might look like. -- Prentice _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Thu Dec 22 11:04:09 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Thu, 22 Dec 2011 17:04:09 +0100 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <20111222145744.GZ31847@leitl.org> References: <20111222095040.GK31847@leitl.org> <4EF3422B.2090302@ias.edu> <20111222145744.GZ31847@leitl.org> Message-ID: <3A177202-E4AF-488A-9AD1-18642883E1CA@xs4all.nl> As for HPC, do they need to do that - did AMD already release a driver for example for OpenCL for the HD 6990 that's using BOTH gpu's? I had back then bought directly a HD 6970 card. Once the driver for the HD 6970 was there for linux, we were months further and the price of the HD 6970 had dropped considerable again at the shops. Multiplying 32 x 32 bits is slow at AMD gpu's, as it needs all 4 procesing elements for that. Nvidia wins it bigtime there. Fast at AMD seemingly is 24 x 24 bits, yet of course you also need the top 16 bits of such multiplication. Then after a while i figured out that OpenCL has no function call for the crucial top 16 bits. Initially there was a poster on the forum saying that this top 16 bits was casted onto the 32 x 32 bits anyway, so would be slow anyway. 
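For readers who want to see the built-ins involved: core OpenCL C does provide mul24() for the fast low 32 bits of a 24x24 product and mul_hi() for the high half of a full 32x32 product, but (at least in the 1.x core spec) nothing portable for the high bits of the 24-bit path, which is the gap described here. A tiny kernel, in OpenCL C and purely illustrative:

/* mul24() uses the fast 24-bit multiplier but returns only the low 32 bits
 * of the product; mul_hi() returns high bits but goes through the full
 * 32x32 path, which is the slow one on this hardware.  A portable
 * "high bits of a 24x24 product" built-in is what is missing. */
__kernel void mul_demo(__global const uint *a,
                       __global const uint *b,
                       __global uint *lo24,
                       __global uint *hi32)
{
    size_t i = get_global_id(0);
    lo24[i] = mul24(a[i], b[i]);   /* low 32 bits of the 24x24 product  */
    hi32[i] = mul_hi(a[i], b[i]);  /* high 32 bits of the 32x32 product */
}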
Raising a ticket at AMD then, we speak again about months later, revealed that the hardware instruction i found in their manual that's doing the top16 bits of a 24x24 bits integer multiplication, total crucial for factorisation work, that this indeed runs at full throttle. Some AMD engineer offered to include it, i gladly accepted that, of course we were months later by then. We are 1 year further nearly now and it's still not there. This HD6970 so far was a massive waste of my money. Can i ask my money back? You sure this will go better with HD7970 not to mention the soon to be released HD7990? From HPC viewpoint AMD has a major software support problem so far... ...also i noticed that the problem was not so much being 'busy', as i saw relative few tickets got raised for their gpgpu team. Regards, Vincent On Dec 22, 2011, at 3:57 PM, Eugen Leitl wrote: > On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: > >> Or if your German is rusty: >> >> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics- >> card-launched-benchmarked-fastest-single-gpu-board-available/7204 > > Wonder what kind of response will be forthcoming from nVidia, > given developments like http://www.theregister.co.uk/2011/11/14/ > arm_gpu_nvidia_supercomputer/ > > It does seem that x86 is dead, despite good Bulldozer performance > in Interlagos > > http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit- > Bulldozer-Architektur-legen-los-1378230.html > > (engage dekrautizer of your choice). > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Thu Dec 22 11:06:43 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Thu, 22 Dec 2011 17:06:43 +0100 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <4EF34FEB.8030903@ias.edu> References: <20111222095040.GK31847@leitl.org> <4EF3422B.2090302@ias.edu> <20111222145744.GZ31847@leitl.org> <4EF34FEB.8030903@ias.edu> Message-ID: <8A51A20D-31D3-4977-944F-EC371EACFE84@xs4all.nl> On Dec 22, 2011, at 4:42 PM, Prentice Bisbal wrote: > On 12/22/2011 09:57 AM, Eugen Leitl wrote: >> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >> >>> Or if your German is rusty: >>> >>> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics- >>> card-launched-benchmarked-fastest-single-gpu-board-available/7204 >> Wonder what kind of response will be forthcoming from nVidia, >> given developments like http://www.theregister.co.uk/2011/11/14/ >> arm_gpu_nvidia_supercomputer/ >> >> It does seem that x86 is dead, despite good Bulldozer performance >> in Interlagos >> >> http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit- >> Bulldozer-Architektur-legen-los-1378230.html >> >> (engage dekrautizer of your choice). >> > > At SC11, it was clear that everyone was looking for ways around the > power wall. The obvious answer to that is clustering machines of course! > I saw 5 or 6 different booths touting the use of FPGAs for > improved performance/efficiency. I don't remember there being a single > FPGA booth in the past. 
Whether the accelerator is GPU, FPGA, GRAPE, > Intem MIC, or something else, I think it's clear that the future > of HPC > architecture is going to change radically in the next couple years, > unless some major breakthrough occurs for commodity processors. > > I think DE Shaw Research's Anton computer, which uses FPGAs and custom > processors, is an excellent example of what the future of HPC might > look > like. Not unless when they sell dozens of millions of them. To quote Linus: "The tiny processors have won". Because they get massively produced which keeps price cheap. It's about clustering them and then produce software that gets the maximum performance out of it. The software is always a lot behind the hardware! > > -- > Prentice > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Thu Dec 22 11:30:15 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Thu, 22 Dec 2011 17:30:15 +0100 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <4EF34FEB.8030903@ias.edu> References: <20111222095040.GK31847@leitl.org> <4EF3422B.2090302@ias.edu> <20111222145744.GZ31847@leitl.org> <4EF34FEB.8030903@ias.edu> Message-ID: <0247F017-B0D1-497C-8CBF-E91BB8CB177E@xs4all.nl> On Dec 22, 2011, at 4:42 PM, Prentice Bisbal wrote: > On 12/22/2011 09:57 AM, Eugen Leitl wrote: >> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >> >>> Or if your German is rusty: >>> >>> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics- >>> card-launched-benchmarked-fastest-single-gpu-board-available/7204 >> Wonder what kind of response will be forthcoming from nVidia, >> given developments like http://www.theregister.co.uk/2011/11/14/ >> arm_gpu_nvidia_supercomputer/ >> >> It does seem that x86 is dead, despite good Bulldozer performance >> in Interlagos >> >> http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit- >> Bulldozer-Architektur-legen-los-1378230.html >> >> (engage dekrautizer of your choice). >> > > At SC11, it was clear that everyone was looking for ways around the > power wall. I saw 5 or 6 different booths touting the use of FPGAs for > improved performance/efficiency. If you have 1 specific problem other than multiplying massively, then FPGA's can be fast. They can parallellize a number of sequential actions bigtime. However majority on this list is busy with HPC and majority of HPC codes need the mutliplication unit bigtime. You're not gonna beat optimized GPU's with a fpga card when all what you need is some multiplications of low number of bits. Sure some hidden NSA team might have cooked a math processor low power that's kick butt and can handle big numbers. But what's price of development of that team? Can you afford such team? In such case a FPGA isn't soon gonna beat pricewise a combination of a good node with good processor cores with good GPU in the PCI-E 3.0 and with a network card. What's price of such node? 
Your guess is as good as mine, but it's always going to be cheaper than a FPGA card, as so far history has told us those get sold real expensive when they can do something useful. Furthermore the cpu and gpu node can run other codes as well and are cheap to scale in a cluster. That eats more power, sure, but we all must face that performance brings more power usage with it nowadays. At home this might be difficult to solve, but factories get the power 20x cheaper, especially Nuclear power. Now this is not a good forum to start an energy debate (again), with me having the advantage having sut in an energy commission and then you might be confronted with numbers a tad different than what you find on google; yet regrettably it's a fact that average person on this planet eat s more and more power for each person. As for HPC, not too many on this planet are busy with HPC, so you have to ask yourself, if a simple plastic factory making a few plastic boards and plastic knifes and plastic forks and plastic spoons; if a tiny compnay doing that already eats 7.5 megawatt (actually that's a factory around the corner here), is it realistic to eat less with HPC? 7.5 megawatt, depending upon what place you try to get the power, is doing around 0.4 cents per kilowatt hour. With prices like that. using 7.5 megawatt a year, price of energy is around 0.004 * 7.5 * 1000 = 30 euro an hour A year that is: 365 * 24 * 30 = 262800 euro a year. Now what eats 7.5 megawatt if we speak about a cluster. Let's assume an intel 2 cpu Xeon Sandy Bridge 8 core node and say FDR network, with a gpu eating 1000 watt a node. That's 7500 nodes. What will price be of such node. Say 6000 euro? So a machine that has a cost of 7500 * 6k = 7.5k * 6k = 45 million euro, has an energy price of 262800 euro a year. What are we talking about? Vincent > I don't remember there being a single > FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, > Intem MIC, or something else, I think it's clear that the future > of HPC > architecture is going to change radically in the next couple years, > unless some major breakthrough occurs for commodity processors. > > I think DE Shaw Research's Anton computer, which uses FPGAs and custom > processors, is an excellent example of what the future of HPC might > look > like. > > -- > Prentice > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From deadline at eadline.org Thu Dec 22 11:51:17 2011 From: deadline at eadline.org (Douglas Eadline) Date: Thu, 22 Dec 2011 11:51:17 -0500 (EST) Subject: [Beowulf] personal HPC Message-ID: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> For those that don't know, I have been working on a commodity "desk side" cluster for a while. I have been writing about the progress at: http://limulus.basement-supercomputing.com/ Recently I was able to get 200 GFLOPS using Intel i5-2400S processors connected by GigE (58% of peak). Of course these are CPU FLOPS not GPU FLOPS and the design has a power/heat/performance/noise envelope that makes it suitable for true desk side computing. 
(for things like software development, education, small production work, and cloud staging) You can find the raw HPC numbers and specifications here: http://limulus.basement-supercomputing.com/wiki/CommercialLimulus BTW, if click the "Nexlink Limulus" link, you can take a survey for a chance to win one of these systems. Happy holidays -- Doug -- MailScanner: clean _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From prentice at ias.edu Thu Dec 22 11:53:39 2011 From: prentice at ias.edu (Prentice Bisbal) Date: Thu, 22 Dec 2011 11:53:39 -0500 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: References: Message-ID: <4EF36093.9040807@ias.edu> Just for the record - I'm only the messenger. I noticed a not-insignificant number of booths touting FPGAs at SC11 this year, so I reported on it. I also mentioned other forms of accelerators, like GPUs and Intel's MIC architecture. The Anton computer architecture isn't just a FPGA - it also has custom-designed processors (ASICS). The ASICs handle the parts of the molecular dynamics (MD) algorithms that are well-understood, and unlikely to change, and the FPGAs handle the parts of the algorithms that may change or might have room for further optimization. As far as I know, only 8 or 9 Antons have been built. One is at the Pittsburgh Supercomputing Center (PSC), the rest are for internal use at DE Shaw. A single Anton consists of 512 cores, and takes up 6 or 8 racks. Despite it's small size, it's orders of magnitude faster at doing MD calculations than even super computers like Jaguar and Roadrunner with hundreds of thousands of processors. So overall, Anton is several orders of magnitudes faster than an general-purpose processor based supercomputer. And sI'm sure it uses a LOT less power. I don't think the Anton's are clustered together, so I'm pretty sure the published performance on MD simulations is for a single Anton with 512 cores Keep in mind that Anton was designed to do only 1 thing: MD, so it probably can't even run LinPack, and if it did, I'm sure it's score would be awful. Also, the designers cut corners where they knew the safely could, like using fixed-precision (or is it fixed-point?) math, so the hardware design is only half the story in this example. Prentice On 12/22/2011 11:27 AM, Lux, Jim (337C) wrote: > The problem with FPGAs (and I use a fair number of them) is that you're > never going to get the same picojoules/bit transition kind of power > consumption that you do with a purpose designed processor. The extra > logic needed to get it "reconfigurable", and the physical junction sizes > as well, make it so. > > What you will find is that on certain kinds of problems, you can implement > a more efficient algorithm in FPGA than you can in a conventional > processor or GPU. So, for that class of problem, the FPGA is a winner > (things lending themselves to fixed point systolic array type processes > are a good candidate). > > Bear in mind also that while an FPGA may have, say, 10-million gate > equivalent, any given practical design is going to use a small fraction of > those gates. 
Fortunately, most of those unused gates aren't toggling, so > they don't consume clock related power, but they do consume leakage > current, so the whole clock rate vs core voltage trade winds up a bit > different for FPGAs. > > The biggest problem with FPGAs is that they are difficult to write high > performance software for. With FORTRAN on conventional and vectorized and > pipelined processors, we've got 50 years of compiler writing expertise, > and real high performance libraries. And, literally millions of people > who know how to code in FORTRAN or C or something, so if you're looking > for the highest performance coders, even at the 4 sigma level, you've got > a fair number to choose from. For numerical computation in FPGAs, not so > many. I'd guess that a large fraction of FPGA developers are doing one of > two things: 1) digital signal processing, flow through kinds of stuff > (error correcting codes, compression/decompression, crypto; 2) bus > interface and data handling (PCI bus, disk drive controls, etc.). > > Interestingly, even with the relative scarcity of FPGA developers versus > conventional CPU software, the average salaries aren't that far apart. > The distribution on "generic coders" is wider (particularly on the low > end.. Barriers to entry are lower for C,Java,whathaveyou code monkeys), > but there are very, very few people making more than, say, 150-200k/yr > doing either. (except in a few anomalous industries, where compensation > is higher than normal in general). (also leaving out "equity > participation" type deals) > > > > On 12/22/11 7:42 AM, "Prentice Bisbal" wrote: > >> On 12/22/2011 09:57 AM, Eugen Leitl wrote: >>> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >>> >>>> Or if your German is rusty: >>>> >>>> >>>> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-lau >>>> nched-benchmarked-fastest-single-gpu-board-available/7204 >>> Wonder what kind of response will be forthcoming from nVidia, >>> given developments like >>> http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ >>> >>> It does seem that x86 is dead, despite good Bulldozer performance >>> in Interlagos >>> >>> >>> http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulldoz >>> er-Architektur-legen-los-1378230.html >>> >>> (engage dekrautizer of your choice). >>> >> At SC11, it was clear that everyone was looking for ways around the >> power wall. I saw 5 or 6 different booths touting the use of FPGAs for >> improved performance/efficiency. I don't remember there being a single >> FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, >> Intem MIC, or something else, I think it's clear that the future of HPC >> architecture is going to change radically in the next couple years, >> unless some major breakthrough occurs for commodity processors. >> >> I think DE Shaw Research's Anton computer, which uses FPGAs and custom >> processors, is an excellent example of what the future of HPC might look >> like. 
>> >> -- >> Prentice >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit >> http://www.beowulf.org/mailman/listinfo/beowulf > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Thu Dec 22 11:27:46 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Thu, 22 Dec 2011 08:27:46 -0800 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <4EF34FEB.8030903@ias.edu> Message-ID: The problem with FPGAs (and I use a fair number of them) is that you're never going to get the same picojoules/bit transition kind of power consumption that you do with a purpose designed processor. The extra logic needed to get it "reconfigurable", and the physical junction sizes as well, make it so. What you will find is that on certain kinds of problems, you can implement a more efficient algorithm in FPGA than you can in a conventional processor or GPU. So, for that class of problem, the FPGA is a winner (things lending themselves to fixed point systolic array type processes are a good candidate). Bear in mind also that while an FPGA may have, say, 10-million gate equivalent, any given practical design is going to use a small fraction of those gates. Fortunately, most of those unused gates aren't toggling, so they don't consume clock related power, but they do consume leakage current, so the whole clock rate vs core voltage trade winds up a bit different for FPGAs. The biggest problem with FPGAs is that they are difficult to write high performance software for. With FORTRAN on conventional and vectorized and pipelined processors, we've got 50 years of compiler writing expertise, and real high performance libraries. And, literally millions of people who know how to code in FORTRAN or C or something, so if you're looking for the highest performance coders, even at the 4 sigma level, you've got a fair number to choose from. For numerical computation in FPGAs, not so many. I'd guess that a large fraction of FPGA developers are doing one of two things: 1) digital signal processing, flow through kinds of stuff (error correcting codes, compression/decompression, crypto; 2) bus interface and data handling (PCI bus, disk drive controls, etc.). Interestingly, even with the relative scarcity of FPGA developers versus conventional CPU software, the average salaries aren't that far apart. The distribution on "generic coders" is wider (particularly on the low end.. Barriers to entry are lower for C,Java,whathaveyou code monkeys), but there are very, very few people making more than, say, 150-200k/yr doing either. (except in a few anomalous industries, where compensation is higher than normal in general). 
(also leaving out "equity participation" type deals) On 12/22/11 7:42 AM, "Prentice Bisbal" wrote: >On 12/22/2011 09:57 AM, Eugen Leitl wrote: >> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >> >>> Or if your German is rusty: >>> >>> >>>http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-lau >>>nched-benchmarked-fastest-single-gpu-board-available/7204 >> Wonder what kind of response will be forthcoming from nVidia, >> given developments like >>http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ >> >> It does seem that x86 is dead, despite good Bulldozer performance >> in Interlagos >> >> >>http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulldoz >>er-Architektur-legen-los-1378230.html >> >> (engage dekrautizer of your choice). >> > >At SC11, it was clear that everyone was looking for ways around the >power wall. I saw 5 or 6 different booths touting the use of FPGAs for >improved performance/efficiency. I don't remember there being a single >FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, >Intem MIC, or something else, I think it's clear that the future of HPC >architecture is going to change radically in the next couple years, >unless some major breakthrough occurs for commodity processors. > >I think DE Shaw Research's Anton computer, which uses FPGAs and custom >processors, is an excellent example of what the future of HPC might look >like. > >-- >Prentice >_______________________________________________ >Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >To change your subscription (digest mode or unsubscribe) visit >http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Thu Dec 22 12:33:37 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Thu, 22 Dec 2011 09:33:37 -0800 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <4EF36093.9040807@ias.edu> Message-ID: That's an interesting approach of combining ASICs with FPGAs. ASICs will blow the doors off anything else in a FLOP/Joule contest or a FLOPS/kg or FLOPS/dollar.. For tasks for which the ASIC is designed. FPGAs to handle the routing/sequencing/variable parts of the problem and ASICs to do the crunching is a great idea. Sort of the same idea as including DSP or PowerPC cores on a Xilinx FPGA, at a more macro scale. (and of interest in the HPC world, since early 2nd generation Hypercubes from Intel used Xilinx FPGAs as their routing fabric) The challenge with this kind of hardware design is PWB design. Sure, you have 1100+ pins coming out of that FPGA.. Now you have to route them somewhere. And do it in a manufacturable board: I've worked recently with a board that had 22 layers, and we were at the ragged edge of tolerances with the close pitch column grid array parts we had to use. I would expect the clever folks at DE Shaw did an integrated design with their ASIC.. Make the ASIC pinouts such that they line up with the FPGAs, and make the routing problem simpler. On 12/22/11 8:53 AM, "Prentice Bisbal" wrote: >Just for the record - I'm only the messenger. I noticed a >not-insignificant number of booths touting FPGAs at SC11 this year, so I >reported on it. 
I also mentioned other forms of accelerators, like GPUs >and Intel's MIC architecture. > >The Anton computer architecture isn't just a FPGA - it also has >custom-designed processors (ASICS). The ASICs handle the parts of the >molecular dynamics (MD) algorithms that are well-understood, and >unlikely to change, and the FPGAs handle the parts of the algorithms >that may change or might have room for further optimization. > >As far as I know, only 8 or 9 Antons have been built. One is at the >Pittsburgh Supercomputing Center (PSC), the rest are for internal use at >DE Shaw. A single Anton consists of 512 cores, and takes up 6 or 8 >racks. Despite it's small size, it's orders of magnitude faster at >doing MD calculations than even super computers like Jaguar and >Roadrunner with hundreds of thousands of processors. So overall, Anton >is several orders of magnitudes faster than an general-purpose processor >based supercomputer. And sI'm sure it uses a LOT less power. I don't >think the Anton's are clustered together, so I'm pretty sure the >published performance on MD simulations is for a single Anton with 512 >cores > >Keep in mind that Anton was designed to do only 1 thing: MD, so it >probably can't even run LinPack, and if it did, I'm sure it's score >would be awful. Also, the designers cut corners where they knew the >safely could, like using fixed-precision (or is it fixed-point?) math, >so the hardware design is only half the story in this example. > >Prentice > > > >On 12/22/2011 11:27 AM, Lux, Jim (337C) wrote: >> The problem with FPGAs (and I use a fair number of them) is that you're >> never going to get the same picojoules/bit transition kind of power >> consumption that you do with a purpose designed processor. The extra >> logic needed to get it "reconfigurable", and the physical junction sizes >> as well, make it so. >> >> What you will find is that on certain kinds of problems, you can >>implement >> a more efficient algorithm in FPGA than you can in a conventional >> processor or GPU. So, for that class of problem, the FPGA is a winner >> (things lending themselves to fixed point systolic array type processes >> are a good candidate). >> >> Bear in mind also that while an FPGA may have, say, 10-million gate >> equivalent, any given practical design is going to use a small fraction >>of >> those gates. Fortunately, most of those unused gates aren't toggling, >>so >> they don't consume clock related power, but they do consume leakage >> current, so the whole clock rate vs core voltage trade winds up a bit >> different for FPGAs. >> >> The biggest problem with FPGAs is that they are difficult to write high >> performance software for. With FORTRAN on conventional and vectorized >>and >> pipelined processors, we've got 50 years of compiler writing expertise, >> and real high performance libraries. And, literally millions of people >> who know how to code in FORTRAN or C or something, so if you're looking >> for the highest performance coders, even at the 4 sigma level, you've >>got >> a fair number to choose from. For numerical computation in FPGAs, not >>so >> many. I'd guess that a large fraction of FPGA developers are doing one >>of >> two things: 1) digital signal processing, flow through kinds of stuff >> (error correcting codes, compression/decompression, crypto; 2) bus >> interface and data handling (PCI bus, disk drive controls, etc.). 
>> >> Interestingly, even with the relative scarcity of FPGA developers versus >> conventional CPU software, the average salaries aren't that far apart. >> The distribution on "generic coders" is wider (particularly on the low >> end.. Barriers to entry are lower for C,Java,whathaveyou code monkeys), >> but there are very, very few people making more than, say, 150-200k/yr >> doing either. (except in a few anomalous industries, where compensation >> is higher than normal in general). (also leaving out "equity >> participation" type deals) >> >> >> >> On 12/22/11 7:42 AM, "Prentice Bisbal" wrote: >> >>> On 12/22/2011 09:57 AM, Eugen Leitl wrote: >>>> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >>>> >>>>> Or if your German is rusty: >>>>> >>>>> >>>>> >>>>>http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-l >>>>>au >>>>> nched-benchmarked-fastest-single-gpu-board-available/7204 >>>> Wonder what kind of response will be forthcoming from nVidia, >>>> given developments like >>>> http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ >>>> >>>> It does seem that x86 is dead, despite good Bulldozer performance >>>> in Interlagos >>>> >>>> >>>> >>>>http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulld >>>>oz >>>> er-Architektur-legen-los-1378230.html >>>> >>>> (engage dekrautizer of your choice). >>>> >>> At SC11, it was clear that everyone was looking for ways around the >>> power wall. I saw 5 or 6 different booths touting the use of FPGAs for >>> improved performance/efficiency. I don't remember there being a single >>> FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, >>> Intem MIC, or something else, I think it's clear that the future of >>>HPC >>> architecture is going to change radically in the next couple years, >>> unless some major breakthrough occurs for commodity processors. >>> >>> I think DE Shaw Research's Anton computer, which uses FPGAs and custom >>> processors, is an excellent example of what the future of HPC might >>>look >>> like. >>> >>> -- >>> Prentice >>> _______________________________________________ >>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin >>>Computing >>> To change your subscription (digest mode or unsubscribe) visit >>> http://www.beowulf.org/mailman/listinfo/beowulf >> >_______________________________________________ >Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >To change your subscription (digest mode or unsubscribe) visit >http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From prentice at ias.edu Thu Dec 22 14:49:15 2011 From: prentice at ias.edu (Prentice Bisbal) Date: Thu, 22 Dec 2011 14:49:15 -0500 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: References: Message-ID: <4EF389BB.5070608@ias.edu> Jim, If you or anyone else on this are interested in learning more about the anton architecture, there a bunch of links here: http://www.deshawresearch.com/publications.html There's a couple that give good descriptions of the anton architecture. I read most of the computer-related ones over the summer. Yes, that's my idea of light summer reading! 
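On the fixed-point arithmetic mentioned earlier in the thread, here is a minimal, generic sketch of what fixed-point coordinates look like in software terms. The scale factor, helper names, and values are invented for illustration and are not Anton's actual number format.

# Generic fixed-point illustration (not Anton's actual format): represent
# a coordinate as a scaled integer with 2^20 fractional steps per unit.

SCALE_BITS = 20                 # assumed number of fractional bits
SCALE = 1 << SCALE_BITS

def to_fixed(x: float) -> int:
    """Encode a coordinate as a fixed-point integer."""
    return int(round(x * SCALE))

def from_fixed(xf: int) -> float:
    """Decode back to a float for inspection."""
    return xf / SCALE

def fx_add(a: int, b: int) -> int:
    return a + b                          # plain integer add

def fx_mul(a: int, b: int) -> int:
    return (a * b) >> SCALE_BITS          # rescale one factor of SCALE

a = to_fixed(1.25)
b = to_fixed(0.5)
print(from_fixed(fx_add(a, b)))   # 1.75
print(from_fixed(fx_mul(a, b)))   # 0.625

One commonly cited attraction of fixed point in MD hardware is that integer addition is associative (barring overflow), so accumulated forces come out bit-identical regardless of the order in which the hardware sums them.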
Prentice On 12/22/2011 12:33 PM, Lux, Jim (337C) wrote: > That's an interesting approach of combining ASICs with FPGAs. ASICs will > blow the doors off anything else in a FLOP/Joule contest or a FLOPS/kg or > FLOPS/dollar.. For tasks for which the ASIC is designed. FPGAs to handle > the routing/sequencing/variable parts of the problem and ASICs to do the > crunching is a great idea. Sort of the same idea as including DSP or > PowerPC cores on a Xilinx FPGA, at a more macro scale. > (and of interest in the HPC world, since early 2nd generation Hypercubes > from Intel used Xilinx FPGAs as their routing fabric) > > The challenge with this kind of hardware design is PWB design. Sure, you > have 1100+ pins coming out of that FPGA.. Now you have to route them > somewhere. And do it in a manufacturable board: I've worked recently with > a board that had 22 layers, and we were at the ragged edge of tolerances > with the close pitch column grid array parts we had to use. > > I would expect the clever folks at DE Shaw did an integrated design with > their ASIC.. Make the ASIC pinouts such that they line up with the FPGAs, > and make the routing problem simpler. > > > > > On 12/22/11 8:53 AM, "Prentice Bisbal" wrote: > >> Just for the record - I'm only the messenger. I noticed a >> not-insignificant number of booths touting FPGAs at SC11 this year, so I >> reported on it. I also mentioned other forms of accelerators, like GPUs >> and Intel's MIC architecture. >> >> The Anton computer architecture isn't just a FPGA - it also has >> custom-designed processors (ASICS). The ASICs handle the parts of the >> molecular dynamics (MD) algorithms that are well-understood, and >> unlikely to change, and the FPGAs handle the parts of the algorithms >> that may change or might have room for further optimization. >> >> As far as I know, only 8 or 9 Antons have been built. One is at the >> Pittsburgh Supercomputing Center (PSC), the rest are for internal use at >> DE Shaw. A single Anton consists of 512 cores, and takes up 6 or 8 >> racks. Despite it's small size, it's orders of magnitude faster at >> doing MD calculations than even super computers like Jaguar and >> Roadrunner with hundreds of thousands of processors. So overall, Anton >> is several orders of magnitudes faster than an general-purpose processor >> based supercomputer. And sI'm sure it uses a LOT less power. I don't >> think the Anton's are clustered together, so I'm pretty sure the >> published performance on MD simulations is for a single Anton with 512 >> cores >> >> Keep in mind that Anton was designed to do only 1 thing: MD, so it >> probably can't even run LinPack, and if it did, I'm sure it's score >> would be awful. Also, the designers cut corners where they knew the >> safely could, like using fixed-precision (or is it fixed-point?) math, >> so the hardware design is only half the story in this example. >> >> Prentice >> >> >> >> On 12/22/2011 11:27 AM, Lux, Jim (337C) wrote: >>> The problem with FPGAs (and I use a fair number of them) is that you're >>> never going to get the same picojoules/bit transition kind of power >>> consumption that you do with a purpose designed processor. The extra >>> logic needed to get it "reconfigurable", and the physical junction sizes >>> as well, make it so. >>> >>> What you will find is that on certain kinds of problems, you can >>> implement >>> a more efficient algorithm in FPGA than you can in a conventional >>> processor or GPU. 
So, for that class of problem, the FPGA is a winner >>> (things lending themselves to fixed point systolic array type processes >>> are a good candidate). >>> >>> Bear in mind also that while an FPGA may have, say, 10-million gate >>> equivalent, any given practical design is going to use a small fraction >>> of >>> those gates. Fortunately, most of those unused gates aren't toggling, >>> so >>> they don't consume clock related power, but they do consume leakage >>> current, so the whole clock rate vs core voltage trade winds up a bit >>> different for FPGAs. >>> >>> The biggest problem with FPGAs is that they are difficult to write high >>> performance software for. With FORTRAN on conventional and vectorized >>> and >>> pipelined processors, we've got 50 years of compiler writing expertise, >>> and real high performance libraries. And, literally millions of people >>> who know how to code in FORTRAN or C or something, so if you're looking >>> for the highest performance coders, even at the 4 sigma level, you've >>> got >>> a fair number to choose from. For numerical computation in FPGAs, not >>> so >>> many. I'd guess that a large fraction of FPGA developers are doing one >>> of >>> two things: 1) digital signal processing, flow through kinds of stuff >>> (error correcting codes, compression/decompression, crypto; 2) bus >>> interface and data handling (PCI bus, disk drive controls, etc.). >>> >>> Interestingly, even with the relative scarcity of FPGA developers versus >>> conventional CPU software, the average salaries aren't that far apart. >>> The distribution on "generic coders" is wider (particularly on the low >>> end.. Barriers to entry are lower for C,Java,whathaveyou code monkeys), >>> but there are very, very few people making more than, say, 150-200k/yr >>> doing either. (except in a few anomalous industries, where compensation >>> is higher than normal in general). (also leaving out "equity >>> participation" type deals) >>> >>> >>> >>> On 12/22/11 7:42 AM, "Prentice Bisbal" wrote: >>> >>>> On 12/22/2011 09:57 AM, Eugen Leitl wrote: >>>>> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >>>>> >>>>>> Or if your German is rusty: >>>>>> >>>>>> >>>>>> >>>>>> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-l >>>>>> au >>>>>> nched-benchmarked-fastest-single-gpu-board-available/7204 >>>>> Wonder what kind of response will be forthcoming from nVidia, >>>>> given developments like >>>>> http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ >>>>> >>>>> It does seem that x86 is dead, despite good Bulldozer performance >>>>> in Interlagos >>>>> >>>>> >>>>> >>>>> http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulld >>>>> oz >>>>> er-Architektur-legen-los-1378230.html >>>>> >>>>> (engage dekrautizer of your choice). >>>>> >>>> At SC11, it was clear that everyone was looking for ways around the >>>> power wall. I saw 5 or 6 different booths touting the use of FPGAs for >>>> improved performance/efficiency. I don't remember there being a single >>>> FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, >>>> Intem MIC, or something else, I think it's clear that the future of >>>> HPC >>>> architecture is going to change radically in the next couple years, >>>> unless some major breakthrough occurs for commodity processors. >>>> >>>> I think DE Shaw Research's Anton computer, which uses FPGAs and custom >>>> processors, is an excellent example of what the future of HPC might >>>> look >>>> like. 
>>>> >>>> -- >>>> Prentice >>>> _______________________________________________ >>>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin >>>> Computing >>>> To change your subscription (digest mode or unsubscribe) visit >>>> http://www.beowulf.org/mailman/listinfo/beowulf >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit >> http://www.beowulf.org/mailman/listinfo/beowulf > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From tegner at renget.se Fri Dec 23 10:54:22 2011 From: tegner at renget.se (Jon Tegner) Date: Fri, 23 Dec 2011 16:54:22 +0100 Subject: [Beowulf] personal HPC In-Reply-To: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> References: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> Message-ID: <4EF4A42E.1030208@renget.se> Cool! Impressive to have taken it this far! What are the dimensions of the system? And the mainbord for the compute nodes, are you using mini-itx there? Regards, /jon On 12/22/2011 05:51 PM, Douglas Eadline wrote: > For those that don't know, I have been working > on a commodity "desk side" cluster for a while. > I have been writing about the progress at: > > http://limulus.basement-supercomputing.com/ > > Recently I was able to get 200 GFLOPS using Intel > i5-2400S processors connected by GigE (58% of peak). > Of course these are CPU FLOPS not GPU FLOPS and the > design has a power/heat/performance/noise envelope > that makes it suitable for true desk side computing. > (for things like software development, education, > small production work, and cloud staging) > > You can find the raw HPC numbers and specifications here: > > http://limulus.basement-supercomputing.com/wiki/CommercialLimulus > > BTW, if click the "Nexlink Limulus" link, you can take a survey > for a chance to win one of these systems. > > Happy holidays > > -- > Doug > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From deadline at eadline.org Fri Dec 23 12:31:09 2011 From: deadline at eadline.org (Douglas Eadline) Date: Fri, 23 Dec 2011 12:31:09 -0500 (EST) Subject: [Beowulf] personal HPC In-Reply-To: <4EF4A42E.1030208@renget.se> References: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> <4EF4A42E.1030208@renget.se> Message-ID: <49637.192.168.93.213.1324661469.squirrel@mail.eadline.org> > Cool! Impressive to have taken it this far! > > What are the dimensions of the system? And the mainbord > for the compute nodes, are you using mini-itx there? Hey Jon, It is a standard Antec 1200 case, the approximate size is 20x22x8.5 inches or 51x56x22 cm. It uses micro-ATX boards. BTW, there is no case modification needed, it all slides and screws in. The FAQ may have can provide more info: http://limulus.basement-supercomputing.com/wiki/LimulusFAQ -- Doug > > Regards, > > /jon > > On 12/22/2011 05:51 PM, Douglas Eadline wrote: >> For those that don't know, I have been working >> on a commodity "desk side" cluster for a while. 
>> I have been writing about the progress at: >> >> http://limulus.basement-supercomputing.com/ >> >> Recently I was able to get 200 GFLOPS using Intel >> i5-2400S processors connected by GigE (58% of peak). >> Of course these are CPU FLOPS not GPU FLOPS and the >> design has a power/heat/performance/noise envelope >> that makes it suitable for true desk side computing. >> (for things like software development, education, >> small production work, and cloud staging) >> >> You can find the raw HPC numbers and specifications here: >> >> http://limulus.basement-supercomputing.com/wiki/CommercialLimulus >> >> BTW, if click the "Nexlink Limulus" link, you can take a survey >> for a chance to win one of these systems. >> >> Happy holidays >> >> -- >> Doug >> > > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > > -- > MailScanner: clean > -- Doug -- MailScanner: clean _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From eagles051387 at gmail.com Fri Dec 23 14:32:23 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Fri, 23 Dec 2011 13:32:23 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. Message-ID: <4EF4D747.3080200@gmail.com> I am just curious as to everyones take on this http://www.youtube.com/watch?v=PtufuXLvOok Being able to over clock the systems how much more performance gains can one get out of them _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From samuel at unimelb.edu.au Tue Dec 27 07:04:02 2011 From: samuel at unimelb.edu.au (Chris Samuel) Date: Tue, 27 Dec 2011 23:04:02 +1100 Subject: [Beowulf] =?iso-8859-1?q?3=2E79_TFlops_sp=2C_0=2E95_TFlops_dp=2C_?= =?iso-8859-1?q?264_TByte/s=2C_3=09GByte_=2C_198_W_=40_500_EUR?= In-Reply-To: <4EF34FEB.8030903@ias.edu> References: <20111222095040.GK31847@leitl.org> <20111222145744.GZ31847@leitl.org> <4EF34FEB.8030903@ias.edu> Message-ID: <201112272304.02320.samuel@unimelb.edu.au> On Fri, 23 Dec 2011 02:42:35 AM Prentice Bisbal wrote: > At SC11, it was clear that everyone was looking for ways around the > power wall. I saw 5 or 6 different booths touting the use of FPGAs > for improved performance/efficiency. I don't remember there being > a single FPGA booth in the past. I couldn't be at SC'11 due to family health issues, but I'm sure I remember a number of FPGA booths at previous SC's. I remember one at SC'07 or so that had FPGA's that would go into an AMD Opteron CPU socket for instance. Ah yes, I even took a photo of it (the FPGA in the socket, not the booth I'm afraid) at SC'07: http://www.flickr.com/photos/chrissamuel/2267611323/in/set-72157603919719911 Looks like an Altera FPGA. cheers! 
Chris
--
Christopher Samuel - Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: samuel at unimelb.edu.au  Phone: +61 (0)3 903 55545
http://www.vlsci.unimelb.edu.au/

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf

--
MailScanner: clean

From prentice at ias.edu  Wed Dec 28 10:40:57 2011
From: prentice at ias.edu (Prentice Bisbal)
Date: Wed, 28 Dec 2011 10:40:57 -0500
Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil.
In-Reply-To: <4EF4D747.3080200@gmail.com>
References: <4EF4D747.3080200@gmail.com>
Message-ID: <4EFB3889.6090401@ias.edu>

There has been a company at the SC conferences for the past 3 years
trying to sell exactly that (server cooling by submersion in mineral
oil). In my opinion, it suffers from a few major problems:

1. It's messy. If you ever have to take hardware out of the oil to
repair or replace it, it's messy. The oil could drip all over, creating
safety hazards. And if you need to remove a hardware component from a
server, good luck! Now that everything is oily and slippery, there
definitely will be a problem with that hard drive once it flies out of
your hands, even if there wasn't a problem with it before!

2. The weight of the mineral oil. Despite the density of current 1U and
blade systems, I still think that air makes up a not-insignificant
percentage of the volume of a full rack. Fill that space with a liquid
like mineral oil, and I'm sure you double, triple, or maybe even
quadruple the weight load on your datacenter's raised floor.

--
Prentice

http://msds.farnam.com/m000712.htm

On 12/23/2011 2:32 PM, Jonathan Aquilina wrote:
>
> I am just curious as to everyones take on this
>
> http://www.youtube.com/watch?v=PtufuXLvOok
>
> Being able to over clock the systems how much more performance gains can
> one get out of them
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
>

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf

--
MailScanner: clean

From landman at scalableinformatics.com  Wed Dec 28 10:49:00 2011
From: landman at scalableinformatics.com (Joe Landman)
Date: Wed, 28 Dec 2011 10:49:00 -0500
Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil.
In-Reply-To: <4EFB3889.6090401@ias.edu>
References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu>
Message-ID: <4EFB3A6C.8040608@scalableinformatics.com>

On 12/28/2011 10:40 AM, Prentice Bisbal wrote:
> There has been a company at the SC conferences for the past 3 years
> trying to sell exactly that (server cooling by submersion in mineral
> oil) for the past 3 years.
>
> In my opinion it, suffers from a few major problems:

[...]

Those are the costs in the cost-benefit analysis. Not really complete,
as you need to include filtering, mineral oil supply/monitoring, etc.

The benefits are that it cools really ... really well.

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Wed Dec 28 11:05:08 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Wed, 28 Dec 2011 10:05:08 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB3A6C.8040608@scalableinformatics.com> References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu> <4EFB3A6C.8040608@scalableinformatics.com> Message-ID: <4EFB3E34.1070006@gmail.com> On 12/28/2011 9:49 AM, Joe Landman wrote: > On 12/28/2011 10:40 AM, Prentice Bisbal wrote: >> There has been a company at the SC conferences for the past 3 years >> trying to sell exactly that (server cooling by submersion in mineral >> oil) for the past 3 years. >> >> In my opinion it, suffers from a few major problems: > [...] > > Those are the costs in the cost-benefit analysis. Not really complete, > as you need to include filtering, mineral oil supply/monitoring, etc. > > The benefits are that it cools really ... really well. > Im curious though to see a cluster like that how much one can actually overclock a given system or cluster of these systems. is overclocking used any more in current day clusters? _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From deadline at eadline.org Wed Dec 28 11:11:53 2011 From: deadline at eadline.org (Douglas Eadline) Date: Wed, 28 Dec 2011 11:11:53 -0500 (EST) Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB3889.6090401@ias.edu> References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu> Message-ID: <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> However, if you really overclock, you can make french fries -- Doug > There has been a company at the SC conferences for the past 3 years > trying to sell exactly that (server cooling by submersion in mineral > oil) for the past 3 years. > > In my opinion it, suffers from a few major problems: > > 1. It's messy. If you every have to take hardware out of the oil to > repair/replace, it's messy. The oil could drip all over, creating safety > hazards. And if you need to remove a hardware component from a server, > good luck! Now that everything is oily and slippery, there definitely > will be a problem with that hard drive once it flies out of your hands, > even if there wasn't a problem with it before! > > 2. The weight of the mineral oil. Despite the density of current 1-U and > blade systems, I still think that air makes up a not-significant > percentage of volume of the full rack. Fill that space with a liquid > like mineral oil, and I'm sure you double, triple, or maybe even > quadruple the weight load on your datacenter's raised floor. 
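A rough number behind the floor-loading point quoted above. The enclosure volumes, free-space fractions, and oil density are assumptions; mineral insulating oils typically run around 0.85-0.89 kg per litre.

# Rough oil-mass estimate for the weight point quoted above.
# Volumes, free-space fractions, and density are assumed values.

OIL_DENSITY_KG_PER_L = 0.87   # typical of a light mineral/transformer oil

def oil_mass_kg(gross_volume_l, free_fraction):
    """Oil needed to fill the air space of an enclosure."""
    return gross_volume_l * free_fraction * OIL_DENSITY_KG_PER_L

# Full rack, roughly 0.6 m x 1.0 m x 2.0 m gross:
rack_l = 0.6 * 1.0 * 2.0 * 1000          # 1200 litres
print(f"rack:    ~{oil_mass_kg(rack_l, 0.6):.0f} kg of oil added")    # ~625 kg

# Single 4U chassis, roughly 0.43 m x 0.65 m x 0.18 m gross:
chassis_l = 0.43 * 0.65 * 0.18 * 1000    # ~50 litres
print(f"chassis: ~{oil_mass_kg(chassis_l, 0.8):.0f} kg of oil added") # ~35 kg

Half a tonne or more of added mass per rack is the scale at which raised-floor point loading, not just cooling, starts to drive the design.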
> > -- > Prentice > > > > http://msds.farnam.com/m000712.htm > > On 12/23/2011 2:32 PM, Jonathan Aquilina wrote: >> >> I am just curious as to everyones take on this >> >> http://www.youtube.com/watch?v=PtufuXLvOok >> >> Being able to over clock the systems how much more performance gains can >> one get out of them >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit >> http://www.beowulf.org/mailman/listinfo/beowulf >> > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > > -- > MailScanner: clean > -- Doug -- MailScanner: clean _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From cbergstrom at pathscale.com Wed Dec 28 11:15:27 2011 From: cbergstrom at pathscale.com (=?ISO-8859-1?Q?=22C=2E_Bergstr=F6m=22?=) Date: Wed, 28 Dec 2011 23:15:27 +0700 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu> <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> Message-ID: <4EFB409F.20302@pathscale.com> On 12/28/11 11:11 PM, Douglas Eadline wrote: > However, if you really overclock, you can make french fries I think smores from the oncoming grease fire would be more fun ;) _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Wed Dec 28 11:18:33 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Wed, 28 Dec 2011 10:18:33 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu> <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> Message-ID: <4EFB4159.1040206@gmail.com> Was thinking that after i sent the email. I think the solution to part one of your answer Prentice is the following. You would have spare machines on hand that you would swap out with a faulty machine allowing you the necessary time to replace parts as needed with out the risk of spilling the oil on the floor and creating any hazards in the workplace. On 12/28/2011 10:11 AM, Douglas Eadline wrote: > However, if you really overclock, you can make french fries > > -- > Doug > > >> There has been a company at the SC conferences for the past 3 years >> trying to sell exactly that (server cooling by submersion in mineral >> oil) for the past 3 years. >> >> In my opinion it, suffers from a few major problems: >> >> 1. It's messy. If you every have to take hardware out of the oil to >> repair/replace, it's messy. The oil could drip all over, creating safety >> hazards. And if you need to remove a hardware component from a server, >> good luck! 
Now that everything is oily and slippery, there definitely >> will be a problem with that hard drive once it flies out of your hands, >> even if there wasn't a problem with it before! >> >> 2. The weight of the mineral oil. Despite the density of current 1-U and >> blade systems, I still think that air makes up a not-significant >> percentage of volume of the full rack. Fill that space with a liquid >> like mineral oil, and I'm sure you double, triple, or maybe even >> quadruple the weight load on your datacenter's raised floor. >> >> -- >> Prentice >> >> >> >> http://msds.farnam.com/m000712.htm >> >> On 12/23/2011 2:32 PM, Jonathan Aquilina wrote: >>> I am just curious as to everyones take on this >>> >>> http://www.youtube.com/watch?v=PtufuXLvOok >>> >>> Being able to over clock the systems how much more performance gains can >>> one get out of them >>> _______________________________________________ >>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >>> To change your subscription (digest mode or unsubscribe) visit >>> http://www.beowulf.org/mailman/listinfo/beowulf >>> >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit >> http://www.beowulf.org/mailman/listinfo/beowulf >> >> -- >> MailScanner: clean >> > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From landman at scalableinformatics.com Wed Dec 28 11:31:12 2011 From: landman at scalableinformatics.com (Joe Landman) Date: Wed, 28 Dec 2011 11:31:12 -0500 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu> <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> Message-ID: <4EFB4450.4020607@scalableinformatics.com> On 12/28/2011 11:11 AM, Douglas Eadline wrote: > > However, if you really overclock, you can make french fries Mmmmm server fries .... tasty ! -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics Inc. email: landman at scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 12:17:12 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 09:17:12 -0800 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB3889.6090401@ias.edu> Message-ID: On 12/28/11 7:40 AM, "Prentice Bisbal" wrote: >There has been a company at the SC conferences for the past 3 years >trying to sell exactly that (server cooling by submersion in mineral >oil) for the past 3 years. > >In my opinion it, suffers from a few major problems: > >1. It's messy. If you every have to take hardware out of the oil to >repair/replace, it's messy. The oil could drip all over, creating safety >hazards. 
And if you need to remove a hardware component from a server, >good luck! Now that everything is oily and slippery, there definitely >will be a problem with that hard drive once it flies out of your hands, >even if there wasn't a problem with it before! > >2. The weight of the mineral oil. Despite the density of current 1-U and >blade systems, I still think that air makes up a not-significant >percentage of volume of the full rack. Fill that space with a liquid >like mineral oil, and I'm sure you double, triple, or maybe even >quadruple the weight load on your datacenter's raised floor. > > I've worked quite a lot with oil insulation in the high voltage world. Prentice's comments (particularly #1) are spot on. ALL oil filled equipment that is designed for servicing leaks. ALL. Maybe it's just a fine oil film on the outside, maybe it's a puddle on the floor, but it all leaks. (Exception.. Things that are welded closed with oil inside, but that's not serviceable) When you do remove the equipment from the tank, yes, it drips, and it's a mess. Slipperyness isn't as big a problem.. You lift the stuff out of the tank, and let is sit for a long while while it drips back into the tank. Pick a real low viscosity oil (good for other reasons) and it's not too bad. The problem is that there is some nook or cranny that retains oil because of its orientation or capillary effects, and that oil comes oozing/spilling out later. Fluorinert is a different story (albeit hideously more expensive than oil). It's very low viscosity, has low capillary attraction, etc. and will (if chosen properly) evaporate. Equipment that cools by ebullient (boiling) Fluorinert cleans up very nicely, because the boiling point is chosen to be quite low. I'm not sure I'd be plunging a disk drive into oil. Most drive cases I've seen have a vent plug. Maybe the holes are small enough so that the oil molecules don't make it through, but air does, but temperature cycling is going to force oil into the case eventually. > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 12:20:47 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 09:20:47 -0800 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB409F.20302@pathscale.com> Message-ID: And this is why PCBs were used instead of oil. No burning, much more chemically inert. Too bad there's inevitable manufacturing contaminants which are carcinogenic in very low quantities, and because they are so persistent, cause problems for a long time. Oil does spoil, after all. Slowly, for good insulating mineral oil (they put anti-oxidants like BHT, BHA, or alpha-tocopherol in it), but it does degrade. Silicones are essentially inert and don't really spoil, but are a LOT more expensive, and have other disadvantages (real hard to remove with a solvent, for instance) On 12/28/11 8:15 AM, "C. 
Bergstr?m" wrote: >On 12/28/11 11:11 PM, Douglas Eadline wrote: >> However, if you really overclock, you can make french fries >I think smores from the oncoming grease fire would be more fun ;) >_______________________________________________ >Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >To change your subscription (digest mode or unsubscribe) visit >http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 12:30:15 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 09:30:15 -0800 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB4159.1040206@gmail.com> Message-ID: On 12/28/11 8:18 AM, "Jonathan Aquilina" wrote: >Was thinking that after i sent the email. > >I think the solution to part one of your answer Prentice is the following. > >You would have spare machines on hand that you would swap out with a >faulty machine allowing you the necessary time to replace parts as >needed with out the risk of spilling the oil on the floor and creating >any hazards in the workplace. And you'll have your oily floor "service depot" somewhere else... (and you'll still have oily floors under your racks.. Oil WILL move through the wires by capillary attraction and/or thermal/atmospheric pumping. Home experiment: Get a piece of stranded wire about 30 cm long. Fill a cup or glass with oil to within a couple cm of the top. Drape the wire over the edge of the cup with one end in the oil and the other end on a piece of paper on the surface of the table. (do all this within a raised edge pan or cookie sheet). Wait a day or two. Observe. Clean up. Bear in mind that a 4 U case full of oil is going to be pretty heavy. Oil has a specific gravity/density of around .7 kg/liter. It's gonna be right around the OSHA 1 person lift limit of 55 lb, and I wouldn't want to be the guy standing under the chassis as you pull it out of the top slot of the rack. So you're going to need a rolling cart with a suitable lifting mechanism or maybe a chain hoist on a rail down between your server aisles, sort of like in a slaughter house or metal plating plant? > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Wed Dec 28 13:04:57 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Wed, 28 Dec 2011 12:04:57 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: References: Message-ID: <4EFB5A49.20706@gmail.com> On 12/28/2011 11:17 AM, Lux, Jim (337C) wrote: > > On 12/28/11 7:40 AM, "Prentice Bisbal" wrote: > >> There has been a company at the SC conferences for the past 3 years >> trying to sell exactly that (server cooling by submersion in mineral >> oil) for the past 3 years. >> >> In my opinion it, suffers from a few major problems: >> >> 1. It's messy. If you every have to take hardware out of the oil to >> repair/replace, it's messy. The oil could drip all over, creating safety >> hazards. 
And if you need to remove a hardware component from a server, >> good luck! Now that everything is oily and slippery, there definitely >> will be a problem with that hard drive once it flies out of your hands, >> even if there wasn't a problem with it before! >> >> 2. The weight of the mineral oil. Despite the density of current 1-U and >> blade systems, I still think that air makes up a not-significant >> percentage of volume of the full rack. Fill that space with a liquid >> like mineral oil, and I'm sure you double, triple, or maybe even >> quadruple the weight load on your datacenter's raised floor. >> >> > I've worked quite a lot with oil insulation in the high voltage world. > Prentice's comments (particularly #1) are spot on. > > ALL oil filled equipment that is designed for servicing leaks. ALL. > Maybe it's just a fine oil film on the outside, maybe it's a puddle on the > floor, but it all leaks. (Exception.. Things that are welded closed with > oil inside, but that's not serviceable) > > When you do remove the equipment from the tank, yes, it drips, and it's a > mess. Slipperyness isn't as big a problem.. You lift the stuff out of > the tank, and let is sit for a long while while it drips back into the > tank. Pick a real low viscosity oil (good for other reasons) and it's > not too bad. The problem is that there is some nook or cranny that retains > oil because of its orientation or capillary effects, and that oil comes > oozing/spilling out later. > > Fluorinert is a different story (albeit hideously more expensive than > oil). It's very low viscosity, has low capillary attraction, etc. and > will (if chosen properly) evaporate. Equipment that cools by ebullient > (boiling) Fluorinert cleans up very nicely, because the boiling point is > chosen to be quite low. > > > I'm not sure I'd be plunging a disk drive into oil. Most drive cases I've > seen have a vent plug. Maybe the holes are small enough so that the oil > molecules don't make it through, but air does, but temperature cycling is > going to force oil into the case eventually. Jim would you plunge an SSD in there? So you wouldnt advise using mineral oil like the video shows? > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Wed Dec 28 13:06:38 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Wed, 28 Dec 2011 12:06:38 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: References: Message-ID: <4EFB5AAE.3030900@gmail.com> On 12/28/2011 11:30 AM, Lux, Jim (337C) wrote: > > On 12/28/11 8:18 AM, "Jonathan Aquilina" wrote: > >> Was thinking that after i sent the email. >> >> I think the solution to part one of your answer Prentice is the following. >> >> You would have spare machines on hand that you would swap out with a >> faulty machine allowing you the necessary time to replace parts as >> needed with out the risk of spilling the oil on the floor and creating >> any hazards in the workplace. > > And you'll have your oily floor "service depot" somewhere else... 
(and > you'll still have oily floors under your racks.. Oil WILL move through the > wires by capillary attraction and/or thermal/atmospheric pumping. Home > experiment: Get a piece of stranded wire about 30 cm long. Fill a cup or > glass with oil to within a couple cm of the top. Drape the wire over the > edge of the cup with one end in the oil and the other end on a piece of > paper on the surface of the table. (do all this within a raised edge pan > or cookie sheet). Wait a day or two. Observe. Clean up. > > Bear in mind that a 4 U case full of oil is going to be pretty heavy. Oil > has a specific gravity/density of around .7 kg/liter. It's gonna be right > around the OSHA 1 person lift limit of 55 lb, and I wouldn't want to be > the guy standing under the chassis as you pull it out of the top slot of > the rack. So you're going to need a rolling cart with a suitable lifting > mechanism or maybe a chain hoist on a rail down between your server > aisles, sort of like in a slaughter house or metal plating plant? > Wait a min guys maybe i wasnt clear, im not saying using standard server cases here. I am talking about actually using fish tanks instead. would you still have that leaking issue? _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Wed Dec 28 13:43:50 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Wed, 28 Dec 2011 19:43:50 +0100 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB5AAE.3030900@gmail.com> References: <4EFB5AAE.3030900@gmail.com> Message-ID: <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> On Dec 28, 2011, at 7:06 PM, Jonathan Aquilina wrote: > On 12/28/2011 11:30 AM, Lux, Jim (337C) wrote: >> >> On 12/28/11 8:18 AM, "Jonathan Aquilina" >> wrote: >> >>> Was thinking that after i sent the email. >>> >>> I think the solution to part one of your answer Prentice is the >>> following. >>> >>> You would have spare machines on hand that you would swap out with a >>> faulty machine allowing you the necessary time to replace parts as >>> needed with out the risk of spilling the oil on the floor and >>> creating >>> any hazards in the workplace. >> >> And you'll have your oily floor "service depot" somewhere else... >> (and >> you'll still have oily floors under your racks.. Oil WILL move >> through the >> wires by capillary attraction and/or thermal/atmospheric >> pumping. Home >> experiment: Get a piece of stranded wire about 30 cm long. Fill >> a cup or >> glass with oil to within a couple cm of the top. Drape the wire >> over the >> edge of the cup with one end in the oil and the other end on a >> piece of >> paper on the surface of the table. (do all this within a raised >> edge pan >> or cookie sheet). Wait a day or two. Observe. Clean up. >> >> Bear in mind that a 4 U case full of oil is going to be pretty >> heavy. Oil >> has a specific gravity/density of around .7 kg/liter. It's gonna >> be right >> around the OSHA 1 person lift limit of 55 lb, and I wouldn't want >> to be >> the guy standing under the chassis as you pull it out of the top >> slot of >> the rack. So you're going to need a rolling cart with a suitable >> lifting >> mechanism or maybe a chain hoist on a rail down between your server >> aisles, sort of like in a slaughter house or metal plating plant? 
>> > Wait a min guys maybe i wasnt clear, im not saying using standard > server > cases here. That's because i guess Jim had already given his sysadmin a few flippers as a Christmas gift to service the rackmounts. > I am talking about actually using fish tanks instead. would > you still have that leaking issue? And after a few days it'll get really hot inside that fish tank. You'll remember then the bubbles which do a great cooling job and considering the huge temperature difference it'll remove quite some watts - yet it'll keep heating up if you use a box with 4 cores or more as those consume more than double the watts than what the shown systems used. But as you had explained to me you only have some old junk there anyway so it's worth a try, especially interesting to know is how much watts the fishing tank removes by itself. Maybe you can measure that for us. It's interesting to know how much a few bubbles remove, as that should be very efficient way to remove heat once it approaches a 100C+ isn't it? Jonathan, maybe you can get air from outside, i see now at the weather report that it's 13C in Malta, is that correct or is that only during nights? Maybe Jim wants to explain the huge temperature difference that the high voltage power cables cause default and the huge active cooling that gets used for the small parts that are underground. Even then they can't really put in the ground such solutions for high voltages over too long of a distance, that's technical not possible yet. Above me is 2 * 450 megawatt, which is tough to put underground for more than a kilometer or so, besides that they need the trajectory to be 8 meters wide as well as a minimum. Not sure you want that high temperature in your aquarium, the components might not withstand it for too long :) Anyway, I found it a very entertaining "pimp your computer" youtube video from 2007 that aquarium and i had a good laugh! Vincent > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 13:42:56 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 10:42:56 -0800 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB5A49.20706@gmail.com> Message-ID: An SSD wouldn't be a problem. No spinning disks in air, etc. And, in general, I'd work real hard to find a better cooling solution than oil immersion. It's a mess. About 5-10 years ago on this list we had some discussions on this - I was thinking about a portable cluster for use in the field by archaeologists, so it had to be cheap.. No Defense Department weapons system scale budgets in the social sciences. And it also had to be rugged and work in wide temperatures My "use case" was processing electrical resistance tomography or ground penetrating radar (generically, iterative inversion) in Central American jungle or Middle Eastern deserts. (Where Indiana Jones goes, so goes the Lux Field'wulf) If I were building something that had to be sealed, and needed to get the heat out to the outer surface (e.g. 
A minicluster in a box for a dusty field environment) and I wanted to use inexpensive commodity components, what I would think about is some scheme where you have a pump that sprays an inert cooling liquid (one of the inexpensive Freons, I think.. Not necessarily Fluorinert) over the boards. Sort of like a "dry sump" lubrication system in a racing engine. But it would take some serious engineering.. And one might wonder whether it would be easier and cheaper just to design for conduction cooling with things like wedgelocks to hold the cards in (and provide a thermal path. Or do something like package a small airconditioner with the cluster (although my notional package is "checkable as luggage/carryable on back of pack animal or backseat of car, so full sized rack is out of the question) As a production item, I think the wedgelock/conduction cooled scheme might be better (and I'd spend some time with some mobos looking at their thermal properties. A suitable "clamp" scheme for the edges might be enough, along with existing heatpipe type technologies. On 12/28/11 10:04 AM, "Jonathan Aquilina" wrote: >On 12/28/2011 11:17 AM, Lux, Jim (337C) wrote: >> >> I'm not sure I'd be plunging a disk drive into oil. Most drive cases >>I've >> seen have a vent plug. Maybe the holes are small enough so that the oil >> molecules don't make it through, but air does, but temperature cycling >>is >> going to force oil into the case eventually. > >Jim would you plunge an SSD in there? So you wouldnt advise using >mineral oil like the video shows? _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 13:51:03 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 10:51:03 -0800 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB5AAE.3030900@gmail.com> Message-ID: On 12/28/11 10:06 AM, "Jonathan Aquilina" wrote: >On 12/28/2011 11:30 AM, Lux, Jim (337C) wrote: >> >> And you'll have your oily floor "service depot" somewhere else... (and >> you'll still have oily floors under your racks.. Oil WILL move through >>the >> wires by capillary attraction and/or thermal/atmospheric pumping. >>Home >> experiment: Get a piece of stranded wire about 30 cm long. Fill a cup >>or >> glass with oil to within a couple cm of the top. Drape the wire over >>the >> edge of the cup with one end in the oil and the other end on a piece of >> paper on the surface of the table. (do all this within a raised edge pan >> or cookie sheet). Wait a day or two. Observe. Clean up. >> >> >Wait a min guys maybe i wasnt clear, im not saying using standard server >cases here. I am talking about actually using fish tanks instead. would >you still have that leaking issue? Almost certainly. Unless you arrange for all the wires to end up higher than the surface of the oil, the tube formed by the insulation serves as a nice siphon, started by capillary effects, to drain your tank on to the floor. (Faraday mentioned this effect with the shaving towel over the edge of the basin). 
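Before anyone fills a tank, it is worth putting numbers on the weight problem raised earlier in the thread. The C sketch below is a back-of-the-envelope estimate only: the tank dimensions and fill fraction are hypothetical, and the 0.85 kg/L density is a commonly published figure for mineral oil rather than a value taken from the discussion (the 0.7 kg/L quoted above is on the low side).

/* Back-of-the-envelope mass estimate for an oil-filled aquarium.
 * The 60 x 45 x 40 cm tank, the 0.9 fill fraction and the 0.85 kg/L
 * density are illustrative assumptions, not figures from the thread. */
#include <stdio.h>

int main(void)
{
    double w_cm = 60.0, d_cm = 45.0, h_cm = 40.0;   /* hypothetical tank size */
    double fill = 0.9;                              /* leave some headroom    */
    double density_kg_per_l = 0.85;                 /* assumed oil density    */

    double litres = (w_cm * d_cm * h_cm / 1000.0) * fill;  /* cm^3 -> litres  */
    double oil_kg = litres * density_kg_per_l;

    printf("%.0f litres of oil, about %.0f kg (%.0f lb) before adding hardware\n",
           litres, oil_kg, oil_kg * 2.20462);
    return 0;
}

Even a modest aquarium lands well past anything one person should be lifting, which is the same point as the 4U / 55 lb estimate above, only worse.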
And "open container of oil" (your fishtank) works for the short run, but you have to figure out how to keep it clean, while still vented, and keep moisture out (you're not insulating for HV, so that's not a big problem, but moisture in the oil will wind up in places where it might cause corrosion.. Running a bit warm helps "boil off" the water. Works great as a demo, not so hot for the long term. Try the experiment with the wire and glass of oil (use cheap cooking oil or motor oil...). Or to be fancy, how about a cluster of arduinos? BUT, if you do go oil.. Shell Diala AX is probably what you want (or the Univolt 65 equivalent). Runs about $5-10/gallon in a 5 gallon pail, cheaper in drums or truckload lots ($2-3/gallon, like most other non-exotic industrial liquids) You might find gallons of USP mineral oil at a feed store (used as a laxative for farm animals) at a competitive price, and for this application, the water content isn't as important, and it probably won't spoil too fast. > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Wed Dec 28 14:00:03 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Wed, 28 Dec 2011 20:00:03 +0100 Subject: [Beowulf] watercooling In-Reply-To: References: Message-ID: <3081ACC9-051A-4875-B615-3B5C8A1E530A@xs4all.nl> Yeah Jim good comments, I was thinking for my cluster to overclock, which is why i guess some posted the overclocking sentences, and wanted to do it a bit more cheapskate. Latest idea now was to save costs by using for say a node or 16, to order 16 cpu blocks and 16 small pumps and 2 cheap big reservoirs: Coldreservoir ==> 16 minipumps ==> 16 cpu blocks ==> Hotreservoir Now have a big pump from the hotreservoir to outside, or maybe even 2, and put on the roof a big car radiatior, dirt cheap in fact, and a big fan which works on 24 volts. Maybe even 2. Then pump it back into the coldreservoir (gravity). Guessing i can get at most nodes around a 4.5Ghz or so @ 6 cores gulftown maybe (gulftown is fastest cpu for Diep of course sandy bridge with 6 cores or more as well when at same Ghz, in fact sandy bridge has 4 channels so is a tad faster than the 3 channel gulftown but that's peanuts). Not sure this setup works as i fear pressure differences if the huge pump doesn't pump at the same speed like the 16 small pumps. Anyone? Vincent On Dec 28, 2011, at 7:42 PM, Lux, Jim (337C) wrote: > An SSD wouldn't be a problem. No spinning disks in air, etc. > > And, in general, I'd work real hard to find a better cooling > solution than > oil immersion. It's a mess. > About 5-10 years ago on this list we had some discussions on this > - I was > thinking about a portable cluster for use in the field by > archaeologists, > so it had to be cheap.. No Defense Department weapons system scale > budgets > in the social sciences. And it also had to be rugged and work in wide > temperatures My "use case" was processing electrical resistance > tomography > or ground penetrating radar (generically, iterative inversion) in > Central > American jungle or Middle Eastern deserts. (Where Indiana Jones > goes, so > goes the Lux Field'wulf) > > > If I were building something that had to be sealed, and needed to > get the > heat out to the outer surface (e.g. 
A minicluster in a box for a dusty > field environment) and I wanted to use inexpensive commodity > components, > what I would think about is some scheme where you have a pump that > sprays > an inert cooling liquid (one of the inexpensive Freons, I think.. Not > necessarily Fluorinert) over the boards. Sort of like a "dry sump" > lubrication system in a racing engine. > > But it would take some serious engineering.. And one might wonder > whether > it would be easier and cheaper just to design for conduction > cooling with > things like wedgelocks to hold the cards in (and provide a thermal > path. > Or do something like package a small airconditioner with the cluster > (although my notional package is "checkable as luggage/carryable on > back > of pack animal or backseat of car, so full sized rack is out of the > question) > > As a production item, I think the wedgelock/conduction cooled > scheme might > be better (and I'd spend some time with some mobos looking at their > thermal properties. A suitable "clamp" scheme for the edges might be > enough, along with existing heatpipe type technologies. > > > On 12/28/11 10:04 AM, "Jonathan Aquilina" > wrote: > >> On 12/28/2011 11:17 AM, Lux, Jim (337C) wrote: >>> >>> I'm not sure I'd be plunging a disk drive into oil. Most drive >>> cases >>> I've >>> seen have a vent plug. Maybe the holes are small enough so that >>> the oil >>> molecules don't make it through, but air does, but temperature >>> cycling >>> is >>> going to force oil into the case eventually. >> >> Jim would you plunge an SSD in there? So you wouldnt advise using >> mineral oil like the video shows? > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 14:17:11 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 11:17:11 -0800 Subject: [Beowulf] watercooling In-Reply-To: <3081ACC9-051A-4875-B615-3B5C8A1E530A@xs4all.nl> Message-ID: On 12/28/11 11:00 AM, "Vincent Diepeveen" wrote: >Yeah Jim good comments, > >I was thinking for my cluster to overclock, which is why i guess some >posted the overclocking sentences, >and wanted to do it a bit more cheapskate. > >Latest idea now was to save costs by using for say a node or 16, >to order 16 cpu blocks and 16 small pumps and 2 cheap big reservoirs: > >Coldreservoir ==> 16 minipumps ==> 16 cpu blocks ==> Hotreservoir Hmm.. Over the past few years I've been trying different schemes to keep a bunch (a cluster?) of glass bottles full of 750ml of an 12-15% alcohol solution in water at a reasonable temperature (15C or thereabouts), and I've gone through a wide variety of improvised schemes. (aside from buying a purpose built refrigerator.. Where's the fun in that?) Unless you need small size with high power density, very quiet operation, or sealed cases, BY FAR the easiest way is a conventional air conditioner blowing cold air through the system. 
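For completeness, the water side of the 16-block loop sketched above can be sized with the same specific-heat bookkeeping used elsewhere in the thread for air. The C sketch below is a rough sizing aid only; the 150 W per overclocked CPU and the 5 C temperature rise across each block are assumptions for illustration, not figures anyone posted.

/* Rough water-flow sizing for a 16-block cooling loop.
 * m_dot = Q / (cp * dT); for liquid water, 1 kg is roughly 1 litre. */
#include <stdio.h>

int main(void)
{
    const double cp_water = 4186.0;   /* J/(kg*K) for liquid water          */
    double watts_per_block = 150.0;   /* assumed heat load per CPU block    */
    double delta_t = 5.0;             /* assumed water rise across a block  */
    int blocks = 16;

    double kg_per_s      = watts_per_block / (cp_water * delta_t);
    double lpm_per_block = kg_per_s * 60.0;          /* litres per minute   */
    double lpm_total     = lpm_per_block * blocks;

    printf("per block: %.2f L/min, whole loop: %.1f L/min\n",
           lpm_per_block, lpm_total);
    return 0;
}

The total is also the flow the single return pump to the roof radiator has to match on average; if it moves more or less than the sum of the 16 small pumps, one reservoir slowly drains into the other, which is exactly the imbalance worried about above.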
Schemes with pumps and radiators and heat exchangers of one kind or another have maintenance and unexpected problems (stuff grows in almost any liquid, metals corrode, pumps fail, plastics degrade). A very inexpensive window airconditioner (US$99, 8000 BTU/hr = 2400 Watts) draws about 500-800 Watts (depending on mfr etc). The Coefficient of Performance (COP) of these things is terrible, but still, you ARE pumping more heat out than electricity you're putting in. A "split system" would put the noisy part outside and the cold part inside. The other strategy... Get a surplus laboratory chiller. Put THAT outside and run your insulated cold water tubes down to a radiator/heat exchanger in your computer box. At least the lab chiller already has the pumps and packaging put together. Run a suitable mix of commercial antifreeze and water (which will include various corrosion inhibitors, etc.) But really, cold air cooling is by far and away the easiest, most trouble free way to do things, unless it just won't work for some other reason. > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Wed Dec 28 14:46:10 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Wed, 28 Dec 2011 20:46:10 +0100 Subject: [Beowulf] watercooling In-Reply-To: References: Message-ID: <46131205-E9D0-4A0A-A201-31A2C289DCF6@xs4all.nl> On Dec 28, 2011, at 8:17 PM, Lux, Jim (337C) wrote: > > > On 12/28/11 11:00 AM, "Vincent Diepeveen" wrote: > >> Yeah Jim good comments, >> >> I was thinking for my cluster to overclock, which is why i guess some >> posted the overclocking sentences, >> and wanted to do it a bit more cheapskate. >> >> Latest idea now was to save costs by using for say a node or 16, >> to order 16 cpu blocks and 16 small pumps and 2 cheap big reservoirs: >> >> Coldreservoir ==> 16 minipumps ==> 16 cpu blocks ==> Hotreservoir > > > > Hmm.. Over the past few years I've been trying different schemes to > keep a > bunch (a cluster?) of glass bottles full of 750ml of an 12-15% alcohol > solution in water at a reasonable temperature (15C or thereabouts), > and > I've gone through a wide variety of improvised schemes. (aside from > buying a purpose built refrigerator.. Where's the fun in that?) > > > Unless you need small size with high power density, very quiet > operation, > or sealed cases, BY FAR the easiest way is a conventional air > conditioner > blowing cold air through the system. > > Schemes with pumps and radiators and heat exchangers of one kind or > another have maintenance and unexpected problems (stuff grows in > almost > any liquid, metals corrode, pumps fail, plastics degrade). > > A very inexpensive window airconditioner (US$99, 8000 BTU/hr = 2400 > Watts) > draws about 500-800 Watts (depending on mfr etc). The Coefficient of > Performance (COP) of these things is terrible, but still, you ARE > pumping > more heat out than electricity you're putting in. > > > A "split system" would put the noisy part outside and the cold part > inside. > > > The other strategy... Get a surplus laboratory chiller. Put THAT > outside > and run your insulated cold water tubes down to a radiator/heat > exchanger > in your computer box. At least the lab chiller already has the > pumps and > packaging put together. 
Run a suitable mix of commercial > antifreeze and > water (which will include various corrosion inhibitors, etc.) > > But really, cold air cooling is by far and away the easiest, most > trouble > free way to do things, unless it just won't work for some other > reason. > How about 2 feet thick reinforced concrete walls? Nah.... From ease viewpoint we totally agree. yet that won't get even close to that 4.4-4.6Ghz overclock. For that overclock you really need stable watercooling with low temperatures. So those cooling kits are there anyway. Just i can choose how many radiators i put inside the room. Good radiators that use the same tube system are expensive. Just a single big huge car radiator that you put on the roof is of course cheaper than 16 huge ones with each 3 to 4 fans. Realize that for home built clusters so much heat inside a room and burning that much watts is a physical office limit. Like you can burn a watt or 2000 without too much of a problem, above that it gets really problematic. This office has 3 fuses available. Each 16 amps. Practical it's over 230 volt. In itself one fuse can't be used as the washing machine is on it. So 2 left. Now on paper it would be possible to get 4 kilowatt from those 2. Yet that's paper. All the airco's also consume from that. With the 16 radiators and 3 to 4 fans a radiator we speak of a lousy 48-64 huge fans just for cooling 16 cpu's. Also eats space. The airco here is rated using a 1440 watt maximum and uses practical a 770 watt or so when i measured. The noise is ear deafening. Now for the switch i can build a case that removes a lot of sound from it, also because switch isn't eating much, yet it's a different story for the machines. So removal of noise sure is an important issue as well, as i sit the next room. As for the nodes themselves, realize idea is mainboards with underneath say 0.8 cm of space, and 16 PSU's. Next posting i'll try to do an email with a photo of current setup using an existing mainboard. You'll see the constraints then :) > >> > > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Wed Dec 28 15:02:41 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Wed, 28 Dec 2011 21:02:41 +0100 Subject: [Beowulf] watercooling Message-ID: Photos i put on my facebook: http://www.facebook.com/media/set/?set=a. 2906369387734.146499.1515523963&type=1#!/photo.php? fbid=2906377587939&set=a.2906369387734.146499.1515523963&type=3&theater _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Thu Dec 29 10:53:39 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Thu, 29 Dec 2011 09:53:39 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. 
In-Reply-To: <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> References: <4EFB5AAE.3030900@gmail.com> <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> Message-ID: <4EFC8D03.4020406@gmail.com> On 12/28/2011 12:43 PM, Vincent Diepeveen wrote: > > On Dec 28, 2011, at 7:06 PM, Jonathan Aquilina wrote: > >> On 12/28/2011 11:30 AM, Lux, Jim (337C) wrote: >>> >>> On 12/28/11 8:18 AM, "Jonathan Aquilina" >>> wrote: >>> >>>> Was thinking that after i sent the email. >>>> >>>> I think the solution to part one of your answer Prentice is the >>>> following. >>>> >>>> You would have spare machines on hand that you would swap out with a >>>> faulty machine allowing you the necessary time to replace parts as >>>> needed with out the risk of spilling the oil on the floor and creating >>>> any hazards in the workplace. >>> >>> And you'll have your oily floor "service depot" somewhere else... (and >>> you'll still have oily floors under your racks.. Oil WILL move >>> through the >>> wires by capillary attraction and/or thermal/atmospheric pumping. >>> Home >>> experiment: Get a piece of stranded wire about 30 cm long. Fill a >>> cup or >>> glass with oil to within a couple cm of the top. Drape the wire >>> over the >>> edge of the cup with one end in the oil and the other end on a piece of >>> paper on the surface of the table. (do all this within a raised edge >>> pan >>> or cookie sheet). Wait a day or two. Observe. Clean up. >>> >>> Bear in mind that a 4 U case full of oil is going to be pretty >>> heavy. Oil >>> has a specific gravity/density of around .7 kg/liter. It's gonna be >>> right >>> around the OSHA 1 person lift limit of 55 lb, and I wouldn't want to be >>> the guy standing under the chassis as you pull it out of the top >>> slot of >>> the rack. So you're going to need a rolling cart with a suitable >>> lifting >>> mechanism or maybe a chain hoist on a rail down between your server >>> aisles, sort of like in a slaughter house or metal plating plant? >>> >> Wait a min guys maybe i wasnt clear, im not saying using standard server >> cases here. > > That's because i guess Jim had already given his sysadmin a few > flippers as a Christmas gift to service the rackmounts. > >> I am talking about actually using fish tanks instead. would >> you still have that leaking issue? > > And after a few days it'll get really hot inside that fish tank. > > You'll remember then the bubbles which do a great cooling job > and considering the huge temperature difference it'll remove quite > some watts - yet it'll keep heating up if you use a box with 4 cores > or more > as those consume more than double the watts than what the shown systems > used. > > But as you had explained to me you only have some old junk there anyway > so it's worth a try, especially interesting to know is how much watts > the fishing > tank removes by itself. Maybe you can measure that for us. > > It's interesting to know how much a few bubbles remove, as that > should be very efficient > way to remove heat once it approaches a 100C+ isn't it? > > Jonathan, maybe you can get air from outside, i see now at the weather > report that it's 13C in Malta, is that correct > or is that only during nights? > Honestly not sure as I am back state side till next tuesday, but it is possible that that is at night or during the day. As of right now I am not sure. 
> Maybe Jim wants to explain the huge temperature difference that the > high voltage power cables > cause default and the huge active cooling that gets used for the small > parts that are underground. > Even then they can't really put in the ground such solutions for high > voltages over too long of a distance, > that's technical not possible yet. > > Above me is 2 * 450 megawatt, which is tough to put underground for > more than a kilometer or so, besides that > they need the trajectory to be 8 meters wide as well as a minimum. > > Not sure you want that high temperature in your aquarium, the > components might not withstand it for too long :) > > Anyway, I found it a very entertaining "pimp your computer" youtube > video from 2007 that aquarium and i had a good laugh! > > Vincent > >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit >> http://www.beowulf.org/mailman/listinfo/beowulf > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Thu Dec 29 11:24:58 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Thu, 29 Dec 2011 17:24:58 +0100 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFC8D03.4020406@gmail.com> References: <4EFB5AAE.3030900@gmail.com> <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> <4EFC8D03.4020406@gmail.com> Message-ID: <5AF52A05-28AA-4EE5-A081-EA60BD1E9B32@xs4all.nl> On Dec 29, 2011, at 4:53 PM, Jonathan Aquilina wrote: >> >> Jonathan, maybe you can get air from outside, i see now at the >> weather report that it's 13C in Malta, is that correct >> or is that only during nights? >> > > Honestly not sure as I am back state side till next tuesday, but it > is possible that that is at night or during the day. As of right > now I am not sure. > Jonathan, You're basically saying you lied to me on MSN that you live in Malta and have a job there and use a few old junk computers (P3 and such) to build a cluster, like you posted about 1 or 2 days (some years ago) after we chatted onto this mailing list? Vincent _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Thu Dec 29 11:28:48 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Thu, 29 Dec 2011 10:28:48 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <5AF52A05-28AA-4EE5-A081-EA60BD1E9B32@xs4all.nl> References: <4EFB5AAE.3030900@gmail.com> <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> <4EFC8D03.4020406@gmail.com> <5AF52A05-28AA-4EE5-A081-EA60BD1E9B32@xs4all.nl> Message-ID: <4EFC9540.5010906@gmail.com> On 12/29/2011 10:24 AM, Vincent Diepeveen wrote: > On Dec 29, 2011, at 4:53 PM, Jonathan Aquilina wrote: >>> Jonathan, maybe you can get air from outside, i see now at the >>> weather report that it's 13C in Malta, is that correct >>> or is that only during nights? >>> >> Honestly not sure as I am back state side till next tuesday, but it >> is possible that that is at night or during the day. 
As of right >> now I am not sure. >> > Jonathan, > > You're basically saying you lied to me on MSN that you live in Malta > and have a job there and use a few old junk computers (P3 and such) > to build a cluster, > like you posted about 1 or 2 days (some years ago) after we chatted > onto this mailing list? I have not lied. I do live there. I have my dad who still travels between texas and malta as he still works, and he couldnt take time off to come to malta for the holidays I am here till next tuesday visiting him. > Vincent > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From hahn at mcmaster.ca Thu Dec 29 14:49:37 2011 From: hahn at mcmaster.ca (Mark Hahn) Date: Thu, 29 Dec 2011 14:49:37 -0500 (EST) Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFC9540.5010906@gmail.com> References: <4EFB5AAE.3030900@gmail.com> <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> <4EFC8D03.4020406@gmail.com> <5AF52A05-28AA-4EE5-A081-EA60BD1E9B32@xs4all.nl> <4EFC9540.5010906@gmail.com> Message-ID: guys, this isn't a dating site. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Thu Dec 29 19:50:45 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Fri, 30 Dec 2011 01:50:45 +0100 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: References: <4EFB5AAE.3030900@gmail.com> <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> <4EFC8D03.4020406@gmail.com> <5AF52A05-28AA-4EE5-A081-EA60BD1E9B32@xs4all.nl> <4EFC9540.5010906@gmail.com> Message-ID: it's very useful Mark, as we know now he works for the company and also for which nation. Vincent On Dec 29, 2011, at 8:49 PM, Mark Hahn wrote: > guys, this isn't a dating site. > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From samuel at unimelb.edu.au Fri Dec 30 21:09:32 2011 From: samuel at unimelb.edu.au (Chris Samuel) Date: Sat, 31 Dec 2011 13:09:32 +1100 Subject: [Beowulf] personal HPC In-Reply-To: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> References: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> Message-ID: <201112311309.32578.samuel@unimelb.edu.au> On Fri, 23 Dec 2011 03:51:17 AM Douglas Eadline wrote: > BTW, if click the "Nexlink Limulus" link, you can take a survey > for a chance to win one of these systems. That survey requires you to pick a US state, which isn't really an option for those of us outside the USA.. 
is there any chance of getting that fixed up? Or should I just pick the Federated States of Micronesia? It's about the closest geographically to me I'd guess! :-) cheers! Chris -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From deadline at eadline.org Sat Dec 31 09:16:35 2011 From: deadline at eadline.org (Douglas Eadline) Date: Sat, 31 Dec 2011 09:16:35 -0500 (EST) Subject: [Beowulf] personal HPC In-Reply-To: <201112311309.32578.samuel@unimelb.edu.au> References: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> <201112311309.32578.samuel@unimelb.edu.au> Message-ID: <55819.192.168.93.213.1325340995.squirrel@mail.eadline.org> Oh, sorry the contest is only open to US residents. There should be some rules posted somewhere, let me look in to it. -- Doug > On Fri, 23 Dec 2011 03:51:17 AM Douglas Eadline wrote: > >> BTW, if click the "Nexlink Limulus" link, you can take a survey >> for a chance to win one of these systems. > > That survey requires you to pick a US state, which isn't really an > option for those of us outside the USA.. is there any chance of > getting that fixed up? > > Or should I just pick the Federated States of Micronesia? It's about > the closest geographically to me I'd guess! :-) > > cheers! > Chris > -- > Christopher Samuel - Senior Systems Administrator > VLSCI - Victorian Life Sciences Computation Initiative > Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 > http://www.vlsci.unimelb.edu.au/ > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > > -- > MailScanner: clean > -- Doug -- MailScanner: clean _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From diep at xs4all.nl Fri Dec 2 02:10:27 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Fri, 2 Dec 2011 08:10:27 +0100 Subject: [Beowulf] Dutch Airco Message-ID: Hey all, I live in The Netherlands, so no surprise, solving the cooling in a barn for a small cluster i intend to solve by blowing some air to outside. Now the small cluster will be a few kilowatt only, so i wonder how much air i need to blow in and out of the room to outside. As this is a cluster for my chessprogram and it maybe 1 day a year reaches 30C outside, we don't have to worry about outside temperature too much, as i can switch off the cluster when necessary that single lucky day a year. If it's uptime 99% of the time this cluster i'm more than happy. Majority of the year it's underneath 18C. Maybe a day or 60 a year it might be above 18C and maybe 7 days it is above 25C outside. I wouldn't have the cash to buy a real airconditioning for the cluster anyway, as that would increase power usage too much, so intend to solve it Dutch style. 
Interesting is to have a function or table that plots outside temperature and number of kilowatts used, starting with 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5 ,5, 5.5 , 6 kilowatts. For sure cluster won't be above 6 kilowatt. First few weeks 1 kilowatt then it will be 2 kilowatt and i doubt it'll reach 4 kilowatt. Which CFM do i need to have to blow outside hot air and suck inside cold air, to get to what i want? Thanks in advance anyone answerring the question. Kind Regards, Vincent _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From diep at xs4all.nl Fri Dec 2 02:40:15 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Fri, 2 Dec 2011 08:40:15 +0100 Subject: [Beowulf] Intel unveils 1 teraflop chip with 50-plus cores In-Reply-To: <20111116095238.GX31847@leitl.org> References: <20111116095238.GX31847@leitl.org> Message-ID: hi, Someone anonymous posted in another forum this Knights corner chip is actually 64 cores, which makes sense, that it should be < 300 watts, though we'll have to see whether that's the case, as the AMD HD Radeon 6990 as well as the Nvidia GTX590 are rated by manufacturer nearly 400 watt, and 512 bits vectors. So seems it's larrabee. The weird mix of a cache coherent chip with big vectors. Sort of 'in between cpu and manycore' hybrid. Which i'd expect to not have a very long life, as it'll be only interesting to matrix calculations, because it's tougher to program than a GPU with those huge vectors for about any other sort of calculation, and it's gonna deliver less Tflops of course for matrix calculations than the next generation gpu's can. So it might have some sales opportunity until the next generation of gpu's gets released. On Nov 16, 2011, at 10:52 AM, Eugen Leitl wrote: > > http://seattletimes.nwsource.com/html/technologybrierdudleysblog/ > 2016775145_wow_intel_unveils_1_teraflop_c.html > > Wow: Intel unveils 1 teraflop chip with 50-plus cores > > Posted by Brier Dudley > > I thought the prospect of quad-core tablet computers was exciting. > > Then I saw Intel's latest -- a 1 teraflop chip, with more than 50 > cores, that > Intel unveiled today, running it on a test machine at the SC11 > supercomputing > conference in Seattle. > > That means my kids may take a teraflop laptop to college -- if > their grades > don't suffer too much having access to 50-core video game consoles. > > It wasn't that long ago that Intel was boasting about the first > supercomputer > with sustained 1 teraflop performance. That was in 1997, on a > system with > 9,298 Pentium II chips that filled 72 computing cabinets. > > Now Intel has squeezed that much performance onto a matchbook-sized > chip, > dubbed "Knights Ferry," based on its new "Many Integrated Core" > architecture, > or MIC. > > It was designed largely in the Portland area and has just started > manufacturing. > > "In 15 years that's what we've been able to do. That is stupendous. > You're > witnessing the 1 teraflop barrier busting," Rajeeb Hazra, general > manager of > Intel's technical computing group, said at an unveiling ceremony. > (He holds > up the chip here) > > A single teraflop is capable of a trillion floating point > operations per > second. 
> > On hand for the event -- in the cellar of the Ruth's Chris Steak > House in > Seattle -- were the directors of the National Center for Computational > Sciences at Oak Ridge Laboratory and the Application Acceleration > Center of > Excellence. > > Also speaking was the chief science officer of the GENCI > supercomputing > organization in France, which has used its Intel-based system for > molecular > simulations of Alzheimer's, looking at issues such as plaque > formation that's > a hallmark of the disease. > > "The hardware is hardly exciting. ... The exciting part is doing the > science," said Jeff Nichols, acting director of the computational > center at > Oak Ridge. > > The hardware was pretty cool, though. > > George Chrysos, the chief architect of Knights Ferry, came up from the > Portland area with a test system running the new chip, which was > connected to > a speed meter on a laptop to show that it was running around 1 > teraflop. > > Intel had the test system set up behind closed doors -- on a coffee > table in > a hotel suite at the Grand Hyatt, and wouldn't allow reporters to take > pictures of the setup. > > Nor would the company specify how many cores the chip has -- just > more than > 50 -- or its power requirement. > > If you're building a new system and want to future-proof it, the > Knights > Ferry chip uses a double PCI Express slot. Chrysos said the systems > are also > likely to run alongside a few Xeon processors. > > This means that Intel could be producing teraflop chips for personal > computers within a few years, although there's lots of work to be > done on the > software side before you'd want one. > > Another question is whether you'd want a processor that powerful on > a laptop, > for instance, where you may prefer to have a system optimized for > longer > battery life, Hazra said. > > More important, Knights Ferry chips may help engineers build the next > generation of supercomputing systems, which Intel and its partners > hope to > delivery by 2018. > > Power efficiency was a highlight of another big announcement this > week at > SC11. On Monday night, IBM announced its "next generation > supercomputing > project," the Blue Gene/Q system that's heading to Lawrence Livermore > National Laboratory next year. > > Dubbed Sequoia, the system should run at 20 petaflops peak > performance. IBM > expects it to be the world's most power-efficient computer, > processing 2 > gigaflops per watt. > > The first 96 racks of the system could be delivered in December. The > Department of Energy's National Nuclear Security Administration > uses the > systems to work on nuclear weapons, energy reseach and climate > change, among > other things. > > Sequoia complements another Blue Gene/Q system, a 10-petaflop setup > called > "Mira," which was previously announced by Argonne National Laboratory. > > A few images from the conference, which runs through Friday at the > Washington > State Convention & Trade Center, starting with perusal of Intel > boards: > > > Take home a Cray today! 
> > IBM was sporting Blue Genes, and it wasn't even casual Friday: > > A 94 teraflop rack: > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From james.p.lux at jpl.nasa.gov Fri Dec 2 08:48:14 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Fri, 2 Dec 2011 05:48:14 -0800 Subject: [Beowulf] Dutch Airco In-Reply-To: Message-ID: On 12/1/11 11:10 PM, "Vincent Diepeveen" wrote: >Hey all, I live in The Netherlands, so no surprise, solving the >cooling in a barn for a small cluster i intend to solve by blowing >some air to outside. > >Now the small cluster will be a few kilowatt only, so i wonder how >much air i need to blow in and out of the room to outside. > >As this is a cluster for my chessprogram and it maybe 1 day a year >reaches 30C outside, we don't have to worry about outside temperature >too much, >as i can switch off the cluster when necessary that single lucky day >a year. If it's uptime 99% of the time this cluster i'm more than happy. > >Majority of the year it's underneath 18C. Maybe a day or 60 a year it >might be above 18C and maybe 7 days it is above 25C outside. > >I wouldn't have the cash to buy a real airconditioning for the >cluster anyway, as that would increase power usage too much, so >intend to solve it Dutch style. > >Interesting is to have a function or table that plots outside >temperature and number of kilowatts used, starting with 1, 1.5, 2, >2.5, 3, 3.5, 4, 4.5 ,5, 5.5 , 6 >kilowatts. For sure cluster won't be above 6 kilowatt. > >First few weeks 1 kilowatt then it will be 2 kilowatt and i doubt >it'll reach 4 kilowatt. > >Which CFM do i need to have to blow outside hot air and suck inside >cold air, to get to what i want? What you didn't say is what temperature you want your computers to be at (or, more properly, what temperature rise you want in the air going through). It's all about the specific heat of the air, which is in units of joules/(kg K)... That is it tells you how many joules it takes to raise one kilogram of air one degree. For gases, there's two different numbers, one for constant pressure and one for constant temperature, and for real gases those vary with temperature, pressure, etc. Q = cp * m * deltaT Or rearranging M = Q/(cp*deltaT) But for now use Cp (constant pressure) which for air at typical room temp is 1.012 J/(g*K) You want to dump a kilowatt in (1000 Joules/sec), and lets assume a 10 degree rise (bring the air in at 10C, exhaust it at 20C) M = 1000/(1.012E-3*10) = about 0.1 kg/sec If the heat load is 5 times, then you need 5 times the air. If you want half the temp rise, then twice the air, etc. How many CFM is 0.1 kg/sec? At 15 C, the density is 1.225 kg/m3, so you need 0.08 m3/sec (as a practical matter, when doing back of the envelopes, I figure air is about 35 cubic feet/cubic meter... So 0.08*35...) About 170 cubic feet per minute per kilowatt for a 10 degree rise Be aware that life is actually much more complicated and you need more air. 
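The estimate above drops straight into a few lines of C. The sketch below simply mechanizes m = Q/(cp*deltaT) for the 1 to 6 kW range asked about, using cp of about 1012 J/(kg*K) (i.e. the 1.012 J/(g*K) figure) and air at roughly 15 C; the 10 C rise is an assumption, and the caveats around it about bypass air and compressibility still apply, so treat the numbers as a floor rather than a target.

/* Back-of-the-envelope fan sizing: required airflow for a given heat
 * load and allowed inlet-to-exhaust temperature rise, printed for the
 * 1 to 6 kW range. Assumes dry air at ~15 C and perfect mixing. */
#include <stdio.h>

int main(void)
{
    const double cp_air   = 1012.0;   /* J/(kg*K), constant pressure      */
    const double rho_air  = 1.225;    /* kg/m^3 at roughly 15 C           */
    const double m3_to_cf = 35.315;   /* cubic feet per cubic metre       */
    double delta_t = 10.0;            /* assumed rise, inlet to exhaust   */

    printf("kW    kg/s   m^3/s    CFM  (for a %.0f C rise)\n", delta_t);
    for (double kw = 1.0; kw <= 6.0; kw += 0.5) {
        double kg_s = kw * 1000.0 / (cp_air * delta_t);  /* m = Q/(cp*dT) */
        double m3_s = kg_s / rho_air;
        double cfm  = m3_s * m3_to_cf * 60.0;
        printf("%3.1f  %5.3f  %6.3f  %5.0f\n", kw, kg_s, m3_s, cfm);
    }
    return 0;
}

For 1 kW and a 10 C rise this lands on the roughly 170 CFM quoted above, scaling linearly with load and inversely with the allowed rise.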
For one thing the heat from your box is evenly transmitted to ALL the air.. Some doesn't go through the box, so what happens is you have, say, 200 cfm through the box with a 10degree rise and 200 cfm around the box with zero rise, so the net rise is 5 degrees. Also, the thermodynamics of gases is substantially more complex than my simple "non-compressible constant density" approximation. Since Tin and Tout are close here (280K and 290K) the errors are small, but when you start talking about rises of, say, 20-30C, it starts to make a difference. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From mathog at caltech.edu Fri Dec 2 15:29:00 2011 From: mathog at caltech.edu (mathog) Date: Fri, 02 Dec 2011 12:29:00 -0800 Subject: [Beowulf] Dutch Airco In-Reply-To: References: Message-ID: <909dc911ec2ef79358e51241965baeaf@saf.bio.caltech.edu> Heat transfer isn't the only issue to consider. How far is this from the ocean? Salty air is pretty corrosive and you might have a rust problem if you blow that through the cases. What about moisture? If you live in a humid or foggy area there may be condensation problems. Regards, David Mathog mathog at caltech.edu Manager, Sequence Analysis Facility, Biology Division, Caltech _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From raysonlogin at gmail.com Fri Dec 2 16:01:57 2011 From: raysonlogin at gmail.com (Rayson Ho) Date: Fri, 2 Dec 2011 16:01:57 -0500 Subject: [Beowulf] HPC on the cloud Message-ID: On Tue, Oct 4, 2011 at 3:29 PM, Chris Dagdigian wrote: > Here is a cliche example: Amazon S3 > > Before the S3 object storage service will even *acknowledge* a > successful PUT request, your file is already at rest in at least three > amazon facilities. > > So to "really" compare S3 against what you can do locally you at least > have to factor in the cost of your organization being able to provide 3x > multi-facility replication for whatever object store you choose to deploy... Agreed. Users who need less reliable storage can use Reduced Redundancy Storage (RRS) instead. RRS only creates 2 copies instead of 3, and the price is only 2/3 the price of S3: http://aws.amazon.com/s3/#pricing And Amazon recently introduced the "Heavy Utilization Reserved Instances" and "Light Utilization Reserved Instances", which bring the cost down quite a bit as well: http://aws.typepad.com/aws/2011/12/reserved-instance-options-for-amazon-ec2.html With VFIO, the latency difference between 10Gb Ethernet and Infiniband should be narrowing quite a bit as well: http://blogs.cisco.com/performance/open-mpi-over-linux-vfio/ Finally, Amazon Cloud Supercomputer ranks #42 on the most recent TOP500 list: http://i.top500.org/system/177457 I still think that a lot of companies will keep on buying their own servers for compute farms & HPC clusters. 
But for those who don't want to own their servers, or want to have a cluster quickly (less than 30 mins to build a basic HPC cluster[1] - of course StarCluster or CycleCloud can do most of the heavy lifting faster), or don't have the expertise, then remote HPC clusters (whether it be Amazon EC2 Cluster Compute Instances or Gridcore/Gompute[2]) are getting very attractive. [1]: http://www.youtube.com/watch?v=5zBxl6HUFA4 [2]: https://www.gompute.com/web/guest/how-it-works Rayson ================================= Grid Engine / Open Grid Scheduler http://gridscheduler.sourceforge.net/ Scalable Grid Engine Support Program http://www.scalablelogic.com/ > I don't want to be seen as a shill so I'll stop with that example. The > results really are surprising once you start down the "true cost of IT > services..." road. > > > As for industry trends with HPC and IaaS ... > > I can assure you that in the super practical & cynical world of biotech > and pharma there is already an HPC migration to IaaS platforms that is > years old already. It's a lot easier to see where and how your money is > being spent inside a biotech startup or pharma and that is (and has) > shunted a decent amount of spending towards cloud platforms. > > The easy stuff is moving to IaaS platforms. The hard stuff, the custom > stuff, the tightly bound stuff and the data/IO-bound stuff is staying > local of course - but that still means lots of stuff is moving externally. > > The article that prompted this thread is a great example of this. The > client company had a boatload of one-off molecular dynamics simulations > to run. So much, in fact, that the problem was computationally > infeasable to even consider doing inhouse. > > So they did it on AWS. > > 30,000 CPU cores. For ~$9,000 dollars. > > Amazing. > > It's a fun time to be in HPC actually. And getting my head around "IaaS" > platforms turned me onto things (like opscode chef) that we are now > bringing inhouse and integrating into our legacy clusters and grids. > > > Sorry for rambling but I think there are 2 main drivers behind what I > see moving HPC users and applications into IaaS cloud platforms ... > > > (1) The economies of scale are real. IaaS providers can run better, > bigger and cheaper than we can and they can still make a profit. This is > real, not hype or sales BS. (as long as you are honest about your actual > costs...) > > > (2) The benefits of "scriptable everything" or "everything has an API". > I'm so freaking sick of companies installing VMWare and excreting a > press release calling themselves a "cloud provider". Virtual servers and > virtual block storage on demand are boring, basic and pedestrian. That > was clever in 2004. I need far more "glue" to build useful stuff in a > virtual world and IaaS platforms deliver more products/services and > "glue" options than anyone else out there. The "scriptable everything" > nature of IaaS is enabling a lot of cool system and workflow building, > much of which would be hard or almost impossible to do in-house with > local resources. 
> > > > My $.02 > > -Chris > > (corporate hat: chris at bioteam.net) > > > > > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- Rayson ================================================== Open Grid Scheduler - The Official Open Source Grid Engine http://gridscheduler.sourceforge.net/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From Greg at Keller.net Mon Dec 5 13:35:57 2011 From: Greg at Keller.net (Greg Keller) Date: Mon, 5 Dec 2011 12:35:57 -0600 Subject: [Beowulf] SMB + RDMA? Message-ID: Hi, I'm curious if anyone on the list has seen this "SMB over RDMA in "The Wild" yet: http://www.mellanox.com/content/pages.php?pg=press_release_item&rec_id=642 If so, any initial feedback on it's usefulness? Any hint on where to find more info short of a Mellanox Rep? We run a bunch of WinHPC and have issues with overwhelming SMB2.0 over 10GbE, so I'm curious if this path is likely to help or hurt us. Also curious if it requires the new ConnectX v3 cards, or if we can use our ConnectX v1 and v2 cards. Cheers! Greg -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From Shainer at Mellanox.com Mon Dec 5 13:39:40 2011 From: Shainer at Mellanox.com (Gilad Shainer) Date: Mon, 5 Dec 2011 18:39:40 +0000 Subject: [Beowulf] SMB + RDMA? In-Reply-To: References: Message-ID: Greg, Feel free to contact me directly. It is part of Windows Server 8 and Microsoft has done several demonstrations already. It works with any ConnectX card. Gilad From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Greg Keller Sent: Monday, December 05, 2011 10:38 AM To: beowulf at beowulf.org Subject: [Beowulf] SMB + RDMA? Hi, I'm curious if anyone on the list has seen this "SMB over RDMA in "The Wild" yet: http://www.mellanox.com/content/pages.php?pg=press_release_item&rec_id=642 If so, any initial feedback on it's usefulness? Any hint on where to find more info short of a Mellanox Rep? We run a bunch of WinHPC and have issues with overwhelming SMB2.0 over 10GbE, so I'm curious if this path is likely to help or hurt us. Also curious if it requires the new ConnectX v3 cards, or if we can use our ConnectX v1 and v2 cards. Cheers! Greg -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From amjad11 at gmail.com Sat Dec 10 15:21:11 2011 From: amjad11 at gmail.com (amjad ali) Date: Sat, 10 Dec 2011 15:21:11 -0500 Subject: [Beowulf] How to justify the use MPI codes on multicore systems/PCs? Message-ID: Hello All, I developed my MPI based parallel code for clusters, but now I use it on multicore/manycore computers (PCs) as well. How to justify (in some thesis/publication) the use of a distributed memory code (in MPI) on a shared memory (multicore) machine. I guess to explain two reasons: (1) Plan is to use several hunderds processes in future. So MPI like stuff is necessary. To maintain code uniformity and save cost/time for developing shared memory solution (using OpenMP, pthreads etc), I use the same MPI code on shared memory systems (like multicore PCs). MPI based codes give reasonable performance on multicore PCs, if not the best. (2) The latest MPI implementations are intelligent enough that they use some efficient mechanism while executing MPI based codes on shared memory (multicore) machines. (please tell me any reference to quote this fact). Please help me in formally justifying this and comment/modify above two justifications. Better if I you can suggent me to quote some reference of any suitable publication in this regard. best regards, Amjad Ali -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From sabujp at gmail.com Sat Dec 10 15:48:51 2011 From: sabujp at gmail.com (Sabuj Pattanayek) Date: Sat, 10 Dec 2011 14:48:51 -0600 Subject: [Beowulf] How to justify the use MPI codes on multicore systems/PCs? In-Reply-To: References: Message-ID: Mallon, et. al., (2009) Performance Evaluation of MPI, UPC and OpenMP on Multicore Architectures : http://gac.udc.es/~gltaboada/papers/mallon_pvmmpi09.pdf newer paper here, says to use a hybrid approach with openmp + mpi : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.190.6479 HTH, Sabuj On Sat, Dec 10, 2011 at 2:21 PM, amjad ali wrote: > Hello All, > > I developed my MPI based parallel code for clusters, but now I use it on > multicore/manycore computers (PCs)?as well. How to justify (in some > thesis/publication) the use of a distributed memory code (in MPI)?on a > shared memory (multicore) machine. I guess to explain two reasons: > > (1) Plan is to use several hunderds processes in future. So MPI like stuff > is necessary. To maintain code uniformity and?save cost/time for developing > shared memory solution (using OpenMP, pthreads etc), I use the same MPI code > on?shared memory systems (like multicore?PCs).?MPI based codes?give > reasonable performance on multicore PCs, if not the best. > > (2) The latest MPI implementations are intelligent enough that they use some > efficient mechanism while executing?MPI based codes on shared memory > (multicore) machines.? (please tell me any reference to quote this fact). 
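Point (2) above is easy to demonstrate rather than just cite. The C sketch below is a minimal MPI ring exchange: the same source runs unchanged with the ranks packed onto one multicore PC, where implementations such as Open MPI and MPICH route the messages through an on-node shared-memory path, or spread across cluster nodes over the interconnect. The program is generic and not tied to any particular paper or library version.

/* Minimal MPI ring exchange. The same code runs whether the ranks sit
 * on one multicore box (messages move through shared memory) or on
 * separate cluster nodes (messages move over the network). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    /* pass each rank's id one step around the ring */
    MPI_Sendrecv(&rank, 1, MPI_INT, right, 0,
                 &token, 1, MPI_INT, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d of %d got token %d from rank %d\n",
           rank, size, token, left);

    MPI_Finalize();
    return 0;
}

Something like mpicc ring.c -o ring && mpirun -np 8 ./ring exercises the shared-memory path on an 8-core box; adding a hostfile moves the same binary onto a cluster, which is essentially justification (1), while the on-node fast path inside the library is justification (2).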
> > > Please help me in formally justifying this and comment/modify above two > justifications. Better if I you can suggent me to quote?some reference of > any suitable publication in this regard. > > best regards, > Amjad Ali > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From deadline at eadline.org Sat Dec 10 17:04:43 2011 From: deadline at eadline.org (Douglas Eadline) Date: Sat, 10 Dec 2011 17:04:43 -0500 (EST) Subject: [Beowulf] How to justify the use MPI codes on multicore systems/PCs? In-Reply-To: References: Message-ID: <55445.192.168.93.213.1323554683.squirrel@mail.eadline.org> Your question seems based on the assumption that shared memory is always better than message passing on shared memory systems. Though this seems like a safe assumption, it may not be true in all cases: http://www.linux-mag.com/id/7884/ of course it all depends on the compiler, the application, the hardware, .... -- Doug Eadline > Hello All, > > I developed my MPI based parallel code for clusters, but now I use it on > multicore/manycore computers (PCs) as well. How to justify (in some > thesis/publication) the use of a distributed memory code (in MPI) on a > shared memory (multicore) machine. I guess to explain two reasons: > > (1) Plan is to use several hunderds processes in future. So MPI like stuff > is necessary. To maintain code uniformity and save cost/time for > developing > shared memory solution (using OpenMP, pthreads etc), I use the same MPI > code on shared memory systems (like multicore PCs). MPI based codes give > reasonable performance on multicore PCs, if not the best. > > (2) The latest MPI implementations are intelligent enough that they use > some efficient mechanism while executing MPI based codes on shared memory > (multicore) machines. (please tell me any reference to quote this fact). > > > Please help me in formally justifying this and comment/modify above two > justifications. Better if I you can suggent me to quote some reference of > any suitable publication in this regard. > > best regards, > Amjad Ali > > -- > This message has been scanned for viruses and > dangerous content by MailScanner, and is > believed to be clean. > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > -- Doug -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From raysonlogin at gmail.com Mon Dec 12 11:00:23 2011 From: raysonlogin at gmail.com (Rayson Ho) Date: Mon, 12 Dec 2011 11:00:23 -0500 Subject: [Beowulf] How to justify the use MPI codes on multicore systems/PCs? 
In-Reply-To: References: Message-ID: On Sat, Dec 10, 2011 at 3:21 PM, amjad ali wrote: > (2) The latest MPI implementations are intelligent enough that they use some > efficient mechanism while executing?MPI based codes on shared memory > (multicore) machines.? (please tell me any reference to quote this fact). Not an academic paper, but from a real MPI library developer/architect: http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport/ http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport-part-2/ Open MPI is used by Japan's K computer (current #1 TOP 500 computer) and LANL's RoadRunner (#1 Jun 08 ? Nov 09), and "10^16 Flops Can't Be Wrong" and "10^15 Flops Can't Be Wrong": http://www.open-mpi.org/papers/sc-2008/jsquyres-cisco-booth-talk-2up.pdf Rayson ================================= Grid Engine / Open Grid Scheduler http://gridscheduler.sourceforge.net/ Scalable Grid Engine Support Program http://www.scalablelogic.com/ > > > Please help me in formally justifying this and comment/modify above two > justifications. Better if I you can suggent me to quote?some reference of > any suitable publication in this regard. > > best regards, > Amjad Ali > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > -- Rayson ================================================== Open Grid Scheduler - The Official Open Source Grid Engine http://gridscheduler.sourceforge.net/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From raysonlogin at gmail.com Wed Dec 14 18:33:19 2011 From: raysonlogin at gmail.com (Rayson Ho) Date: Wed, 14 Dec 2011 18:33:19 -0500 Subject: [Beowulf] How to justify the use MPI codes on multicore systems/PCs? In-Reply-To: References: Message-ID: There is a project called "MVAPICH2-GPU", which is developed by D. K. Panda's research group at Ohio State University. You will find lots of references on Google... and I just briefly gone through the slides of "MVAPICH2-?GPU: Optimized GPU to GPU Communication for InfiniBand Clusters"": http://nowlab.cse.ohio-state.edu/publications/conf-presentations/2011/hao-isc11-slides.pdf It takes advantage of CUDA 4.0's Unified Virtual Addressing (UVA) to pipeline & optimize cudaMemcpyAsync() & RMDA transfers. (MVAPICH 1.8a1p1 also supports Device-Device, Device-Host, Host-Device transfers.) Open MPI also supports similar functionality, but as OpenMPI is not an academic project, there are less academic papers documenting the internals of the latest developments (not saying that it's bad - many products are not academic in nature and thus have less published papers...) Rayson ================================= Grid Engine / Open Grid Scheduler http://gridscheduler.sourceforge.net/ Scalable Grid Engine Support Program http://www.scalablelogic.com/ On Mon, Dec 12, 2011 at 11:40 AM, Durga Choudhury wrote: > I think this is a *great* topic for discussion, so let me throw some > fuel to the fire: the mechanism described in the blog (that makes > perfect sense) is fine for (N)UMA shared memory architectures. 
But > will it work for asymmetric architectures such as the Cell BE or > discrete GPUs where the data between the compute nodes have to be > explicitly DMA'd in? Is there a middleware layer that makes it > transparent to the upper layer software? > > Best regards > Durga > > On Mon, Dec 12, 2011 at 11:00 AM, Rayson Ho wrote: >> On Sat, Dec 10, 2011 at 3:21 PM, amjad ali wrote: >>> (2) The latest MPI implementations are intelligent enough that they use some >>> efficient mechanism while executing?MPI based codes on shared memory >>> (multicore) machines.? (please tell me any reference to quote this fact). >> >> Not an academic paper, but from a real MPI library developer/architect: >> >> http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport/ >> http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport-part-2/ >> >> Open MPI is used by Japan's K computer (current #1 TOP 500 computer) >> and LANL's RoadRunner (#1 Jun 08 ? Nov 09), and "10^16 Flops Can't Be >> Wrong" and "10^15 Flops Can't Be Wrong": >> >> http://www.open-mpi.org/papers/sc-2008/jsquyres-cisco-booth-talk-2up.pdf >> >> Rayson >> >> ================================= >> Grid Engine / Open Grid Scheduler >> http://gridscheduler.sourceforge.net/ >> >> Scalable Grid Engine Support Program >> http://www.scalablelogic.com/ >> >> >>> >>> >>> Please help me in formally justifying this and comment/modify above two >>> justifications. Better if I you can suggent me to quote?some reference of >>> any suitable publication in this regard. >>> >>> best regards, >>> Amjad Ali >>> >>> _______________________________________________ >>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >>> To change your subscription (digest mode or unsubscribe) visit >>> http://www.beowulf.org/mailman/listinfo/beowulf >>> >> >> >> >> -- >> Rayson >> >> ================================================== >> Open Grid Scheduler - The Official Open Source Grid Engine >> http://gridscheduler.sourceforge.net/ >> >> _______________________________________________ >> users mailing list >> users at open-mpi.org >> http://www.open-mpi.org/mailman/listinfo.cgi/users > > _______________________________________________ > users mailing list > users at open-mpi.org > http://www.open-mpi.org/mailman/listinfo.cgi/users -- Rayson ================================================== Open Grid Scheduler - The Official Open Source Grid Engine http://gridscheduler.sourceforge.net/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From trainor at presciencetrust.org Sun Dec 18 12:42:43 2011 From: trainor at presciencetrust.org (Douglas J. Trainor) Date: Sun, 18 Dec 2011 12:42:43 -0500 Subject: [Beowulf] Nvidia ditches homegrown C/C++ compiler for LLVM Message-ID: "Nobody wants to read the manual," says Gupta with a laugh. And so this expert system has a redesigned visual code profiler that shows bottlenecks in the code, offers hints on how to fix them, and automagically finds the right portions of the CUDA manual to help fix the problem. For instance, the code profiler can show coders how to better use the memory hierarchy in CPU-GPU hybrids, which is a tricky bit of programming. 
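For what it's worth, the "memory hierarchy in CPU-GPU hybrids" hint usually boils down to pinned host buffers plus cudaMemcpyAsync() on a couple of streams, so transfers overlap with whatever compute is queued behind them -- the same kind of staging GPU-aware MPI layers pipeline internally. A rough sketch in plain C against the CUDA runtime (sizes and names made up, error checking and the actual kernel omitted):

/* overlap.c -- illustrative only; nvcc overlap.c -o overlap, or link libcudart */
#include <cuda_runtime.h>
#include <stdio.h>

#define CHUNK   (1 << 20)        /* 1 Mi floats per chunk  */
#define NCHUNKS 8

int main(void)
{
    float *h_buf, *d_buf[2];
    cudaStream_t stream[2];

    /* pinned host memory is what lets cudaMemcpyAsync really overlap */
    cudaMallocHost((void **)&h_buf, (size_t)NCHUNKS * CHUNK * sizeof(float));
    for (int s = 0; s < 2; s++) {
        cudaMalloc((void **)&d_buf[s], CHUNK * sizeof(float));
        cudaStreamCreate(&stream[s]);
    }

    /* alternate chunks over two streams so copy i+1 can overlap with
     * the work queued behind copy i                                   */
    for (int i = 0; i < NCHUNKS; i++) {
        int s = i % 2;
        cudaMemcpyAsync(d_buf[s], h_buf + (size_t)i * CHUNK,
                        CHUNK * sizeof(float),
                        cudaMemcpyHostToDevice, stream[s]);
        /* a kernel launched on stream[s] would go here */
    }
    cudaDeviceSynchronize();

    for (int s = 0; s < 2; s++) {
        cudaStreamDestroy(stream[s]);
        cudaFree(d_buf[s]);
    }
    cudaFreeHost(h_buf);
    printf("done\n");
    return 0;
}

Nothing clever, but that pinned-memory-plus-streams pattern is the part the profiler hints are trying to steer people towards.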
http://www.theregister.co.uk/2011/12/16/nvidia_llvm_cuda_app_dev/print.html _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eugen at leitl.org Thu Dec 22 04:50:40 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 22 Dec 2011 10:50:40 +0100 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR Message-ID: <20111222095040.GK31847@leitl.org> 4312711873 transistors, 28 nm, 2048 cores. 925 MHz, 3 TByte GDDR5 (ECC optional), 384 bit bus. http://www.heise.de/newsticker/meldung/Radeon-HD-7970-Mit-2048-Kernen-an-die-Leistungsspitze-1399905.html _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eugen at leitl.org Thu Dec 22 09:57:44 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 22 Dec 2011 15:57:44 +0100 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <4EF3422B.2090302@ias.edu> References: <20111222095040.GK31847@leitl.org> <4EF3422B.2090302@ias.edu> Message-ID: <20111222145744.GZ31847@leitl.org> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: > Or if your German is rusty: > > http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-launched-benchmarked-fastest-single-gpu-board-available/7204 Wonder what kind of response will be forthcoming from nVidia, given developments like http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ It does seem that x86 is dead, despite good Bulldozer performance in Interlagos http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulldozer-Architektur-legen-los-1378230.html (engage dekrautizer of your choice). _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From prentice at ias.edu Thu Dec 22 10:42:35 2011 From: prentice at ias.edu (Prentice Bisbal) Date: Thu, 22 Dec 2011 10:42:35 -0500 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <20111222145744.GZ31847@leitl.org> References: <20111222095040.GK31847@leitl.org> <4EF3422B.2090302@ias.edu> <20111222145744.GZ31847@leitl.org> Message-ID: <4EF34FEB.8030903@ias.edu> On 12/22/2011 09:57 AM, Eugen Leitl wrote: > On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: > >> Or if your German is rusty: >> >> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-launched-benchmarked-fastest-single-gpu-board-available/7204 > Wonder what kind of response will be forthcoming from nVidia, > given developments like http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ > > It does seem that x86 is dead, despite good Bulldozer performance > in Interlagos > > http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulldozer-Architektur-legen-los-1378230.html > > (engage dekrautizer of your choice). > At SC11, it was clear that everyone was looking for ways around the power wall. 
I saw 5 or 6 different booths touting the use of FPGAs for improved performance/efficiency. I don't remember there being a single FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, Intem MIC, or something else, I think it's clear that the future of HPC architecture is going to change radically in the next couple years, unless some major breakthrough occurs for commodity processors. I think DE Shaw Research's Anton computer, which uses FPGAs and custom processors, is an excellent example of what the future of HPC might look like. -- Prentice _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Thu Dec 22 11:04:09 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Thu, 22 Dec 2011 17:04:09 +0100 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <20111222145744.GZ31847@leitl.org> References: <20111222095040.GK31847@leitl.org> <4EF3422B.2090302@ias.edu> <20111222145744.GZ31847@leitl.org> Message-ID: <3A177202-E4AF-488A-9AD1-18642883E1CA@xs4all.nl> As for HPC, do they need to do that - did AMD already release a driver for example for OpenCL for the HD 6990 that's using BOTH gpu's? I had back then bought directly a HD 6970 card. Once the driver for the HD 6970 was there for linux, we were months further and the price of the HD 6970 had dropped considerable again at the shops. Multiplying 32 x 32 bits is slow at AMD gpu's, as it needs all 4 procesing elements for that. Nvidia wins it bigtime there. Fast at AMD seemingly is 24 x 24 bits, yet of course you also need the top 16 bits of such multiplication. Then after a while i figured out that OpenCL has no function call for the crucial top 16 bits. Initially there was a poster on the forum saying that this top 16 bits was casted onto the 32 x 32 bits anyway, so would be slow anyway. Raising a ticket at AMD then, we speak again about months later, revealed that the hardware instruction i found in their manual that's doing the top16 bits of a 24x24 bits integer multiplication, total crucial for factorisation work, that this indeed runs at full throttle. Some AMD engineer offered to include it, i gladly accepted that, of course we were months later by then. We are 1 year further nearly now and it's still not there. This HD6970 so far was a massive waste of my money. Can i ask my money back? You sure this will go better with HD7970 not to mention the soon to be released HD7990? From HPC viewpoint AMD has a major software support problem so far... ...also i noticed that the problem was not so much being 'busy', as i saw relative few tickets got raised for their gpgpu team. 
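To make the gap concrete: standard OpenCL C gives you mul24() for the low 32 bits of a 24x24 product and mul_hi() for the high half of a full 32x32 multiply, but there is no mul24_hi(), so the portable route to the top 16 bits of the 48-bit product is to widen to 64 bit -- which is the slow multiply all over again. Rough sketch only (the fast path would be the vendor-specific instruction from the ISA manual, which standard OpenCL can't express):

/* mul24_demo.cl -- illustrative only */
__kernel void mul24_demo(__global const uint *a,
                         __global const uint *b,
                         __global uint *lo,
                         __global uint *hi)
{
    size_t i = get_global_id(0);
    uint x = a[i] & 0x00FFFFFFu;   /* keep the operands to 24 bits */
    uint y = b[i] & 0x00FFFFFFu;

    lo[i] = mul24(x, y);           /* fast path: low 32 bits of the 24x24 product */

    /* no mul24_hi() in the standard: the portable fallback for the top
     * 16 bits widens to 64 bit, i.e. exactly the slow path again */
    ulong p = (ulong)x * (ulong)y;
    hi[i] = (uint)(p >> 32);
}

That second half is what the ticket was about; until something like a mul24_hi builtin shows up, the choice is between the portable-but-slow version above and vendor IL.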
Regards, Vincent On Dec 22, 2011, at 3:57 PM, Eugen Leitl wrote: > On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: > >> Or if your German is rusty: >> >> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics- >> card-launched-benchmarked-fastest-single-gpu-board-available/7204 > > Wonder what kind of response will be forthcoming from nVidia, > given developments like http://www.theregister.co.uk/2011/11/14/ > arm_gpu_nvidia_supercomputer/ > > It does seem that x86 is dead, despite good Bulldozer performance > in Interlagos > > http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit- > Bulldozer-Architektur-legen-los-1378230.html > > (engage dekrautizer of your choice). > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Thu Dec 22 11:06:43 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Thu, 22 Dec 2011 17:06:43 +0100 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <4EF34FEB.8030903@ias.edu> References: <20111222095040.GK31847@leitl.org> <4EF3422B.2090302@ias.edu> <20111222145744.GZ31847@leitl.org> <4EF34FEB.8030903@ias.edu> Message-ID: <8A51A20D-31D3-4977-944F-EC371EACFE84@xs4all.nl> On Dec 22, 2011, at 4:42 PM, Prentice Bisbal wrote: > On 12/22/2011 09:57 AM, Eugen Leitl wrote: >> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >> >>> Or if your German is rusty: >>> >>> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics- >>> card-launched-benchmarked-fastest-single-gpu-board-available/7204 >> Wonder what kind of response will be forthcoming from nVidia, >> given developments like http://www.theregister.co.uk/2011/11/14/ >> arm_gpu_nvidia_supercomputer/ >> >> It does seem that x86 is dead, despite good Bulldozer performance >> in Interlagos >> >> http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit- >> Bulldozer-Architektur-legen-los-1378230.html >> >> (engage dekrautizer of your choice). >> > > At SC11, it was clear that everyone was looking for ways around the > power wall. The obvious answer to that is clustering machines of course! > I saw 5 or 6 different booths touting the use of FPGAs for > improved performance/efficiency. I don't remember there being a single > FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, > Intem MIC, or something else, I think it's clear that the future > of HPC > architecture is going to change radically in the next couple years, > unless some major breakthrough occurs for commodity processors. > > I think DE Shaw Research's Anton computer, which uses FPGAs and custom > processors, is an excellent example of what the future of HPC might > look > like. Not unless when they sell dozens of millions of them. To quote Linus: "The tiny processors have won". Because they get massively produced which keeps price cheap. It's about clustering them and then produce software that gets the maximum performance out of it. The software is always a lot behind the hardware! 
> > -- > Prentice > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Thu Dec 22 11:30:15 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Thu, 22 Dec 2011 17:30:15 +0100 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <4EF34FEB.8030903@ias.edu> References: <20111222095040.GK31847@leitl.org> <4EF3422B.2090302@ias.edu> <20111222145744.GZ31847@leitl.org> <4EF34FEB.8030903@ias.edu> Message-ID: <0247F017-B0D1-497C-8CBF-E91BB8CB177E@xs4all.nl> On Dec 22, 2011, at 4:42 PM, Prentice Bisbal wrote: > On 12/22/2011 09:57 AM, Eugen Leitl wrote: >> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >> >>> Or if your German is rusty: >>> >>> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics- >>> card-launched-benchmarked-fastest-single-gpu-board-available/7204 >> Wonder what kind of response will be forthcoming from nVidia, >> given developments like http://www.theregister.co.uk/2011/11/14/ >> arm_gpu_nvidia_supercomputer/ >> >> It does seem that x86 is dead, despite good Bulldozer performance >> in Interlagos >> >> http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit- >> Bulldozer-Architektur-legen-los-1378230.html >> >> (engage dekrautizer of your choice). >> > > At SC11, it was clear that everyone was looking for ways around the > power wall. I saw 5 or 6 different booths touting the use of FPGAs for > improved performance/efficiency. If you have 1 specific problem other than multiplying massively, then FPGA's can be fast. They can parallellize a number of sequential actions bigtime. However majority on this list is busy with HPC and majority of HPC codes need the mutliplication unit bigtime. You're not gonna beat optimized GPU's with a fpga card when all what you need is some multiplications of low number of bits. Sure some hidden NSA team might have cooked a math processor low power that's kick butt and can handle big numbers. But what's price of development of that team? Can you afford such team? In such case a FPGA isn't soon gonna beat pricewise a combination of a good node with good processor cores with good GPU in the PCI-E 3.0 and with a network card. What's price of such node? Your guess is as good as mine, but it's always going to be cheaper than a FPGA card, as so far history has told us those get sold real expensive when they can do something useful. Furthermore the cpu and gpu node can run other codes as well and are cheap to scale in a cluster. That eats more power, sure, but we all must face that performance brings more power usage with it nowadays. At home this might be difficult to solve, but factories get the power 20x cheaper, especially Nuclear power. Now this is not a good forum to start an energy debate (again), with me having the advantage having sut in an energy commission and then you might be confronted with numbers a tad different than what you find on google; yet regrettably it's a fact that average person on this planet eat s more and more power for each person. 
As for HPC, not too many on this planet are busy with HPC, so you have to ask yourself, if a simple plastic factory making a few plastic boards and plastic knifes and plastic forks and plastic spoons; if a tiny compnay doing that already eats 7.5 megawatt (actually that's a factory around the corner here), is it realistic to eat less with HPC? 7.5 megawatt, depending upon what place you try to get the power, is doing around 0.4 cents per kilowatt hour. With prices like that. using 7.5 megawatt a year, price of energy is around 0.004 * 7.5 * 1000 = 30 euro an hour A year that is: 365 * 24 * 30 = 262800 euro a year. Now what eats 7.5 megawatt if we speak about a cluster. Let's assume an intel 2 cpu Xeon Sandy Bridge 8 core node and say FDR network, with a gpu eating 1000 watt a node. That's 7500 nodes. What will price be of such node. Say 6000 euro? So a machine that has a cost of 7500 * 6k = 7.5k * 6k = 45 million euro, has an energy price of 262800 euro a year. What are we talking about? Vincent > I don't remember there being a single > FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, > Intem MIC, or something else, I think it's clear that the future > of HPC > architecture is going to change radically in the next couple years, > unless some major breakthrough occurs for commodity processors. > > I think DE Shaw Research's Anton computer, which uses FPGAs and custom > processors, is an excellent example of what the future of HPC might > look > like. > > -- > Prentice > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From deadline at eadline.org Thu Dec 22 11:51:17 2011 From: deadline at eadline.org (Douglas Eadline) Date: Thu, 22 Dec 2011 11:51:17 -0500 (EST) Subject: [Beowulf] personal HPC Message-ID: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> For those that don't know, I have been working on a commodity "desk side" cluster for a while. I have been writing about the progress at: http://limulus.basement-supercomputing.com/ Recently I was able to get 200 GFLOPS using Intel i5-2400S processors connected by GigE (58% of peak). Of course these are CPU FLOPS not GPU FLOPS and the design has a power/heat/performance/noise envelope that makes it suitable for true desk side computing. (for things like software development, education, small production work, and cloud staging) You can find the raw HPC numbers and specifications here: http://limulus.basement-supercomputing.com/wiki/CommercialLimulus BTW, if click the "Nexlink Limulus" link, you can take a survey for a chance to win one of these systems. 
Happy holidays -- Doug -- MailScanner: clean _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From prentice at ias.edu Thu Dec 22 11:53:39 2011 From: prentice at ias.edu (Prentice Bisbal) Date: Thu, 22 Dec 2011 11:53:39 -0500 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: References: Message-ID: <4EF36093.9040807@ias.edu> Just for the record - I'm only the messenger. I noticed a not-insignificant number of booths touting FPGAs at SC11 this year, so I reported on it. I also mentioned other forms of accelerators, like GPUs and Intel's MIC architecture. The Anton computer architecture isn't just a FPGA - it also has custom-designed processors (ASICS). The ASICs handle the parts of the molecular dynamics (MD) algorithms that are well-understood, and unlikely to change, and the FPGAs handle the parts of the algorithms that may change or might have room for further optimization. As far as I know, only 8 or 9 Antons have been built. One is at the Pittsburgh Supercomputing Center (PSC), the rest are for internal use at DE Shaw. A single Anton consists of 512 cores, and takes up 6 or 8 racks. Despite it's small size, it's orders of magnitude faster at doing MD calculations than even super computers like Jaguar and Roadrunner with hundreds of thousands of processors. So overall, Anton is several orders of magnitudes faster than an general-purpose processor based supercomputer. And sI'm sure it uses a LOT less power. I don't think the Anton's are clustered together, so I'm pretty sure the published performance on MD simulations is for a single Anton with 512 cores Keep in mind that Anton was designed to do only 1 thing: MD, so it probably can't even run LinPack, and if it did, I'm sure it's score would be awful. Also, the designers cut corners where they knew the safely could, like using fixed-precision (or is it fixed-point?) math, so the hardware design is only half the story in this example. Prentice On 12/22/2011 11:27 AM, Lux, Jim (337C) wrote: > The problem with FPGAs (and I use a fair number of them) is that you're > never going to get the same picojoules/bit transition kind of power > consumption that you do with a purpose designed processor. The extra > logic needed to get it "reconfigurable", and the physical junction sizes > as well, make it so. > > What you will find is that on certain kinds of problems, you can implement > a more efficient algorithm in FPGA than you can in a conventional > processor or GPU. So, for that class of problem, the FPGA is a winner > (things lending themselves to fixed point systolic array type processes > are a good candidate). > > Bear in mind also that while an FPGA may have, say, 10-million gate > equivalent, any given practical design is going to use a small fraction of > those gates. Fortunately, most of those unused gates aren't toggling, so > they don't consume clock related power, but they do consume leakage > current, so the whole clock rate vs core voltage trade winds up a bit > different for FPGAs. > > The biggest problem with FPGAs is that they are difficult to write high > performance software for. With FORTRAN on conventional and vectorized and > pipelined processors, we've got 50 years of compiler writing expertise, > and real high performance libraries. 
And, literally millions of people > who know how to code in FORTRAN or C or something, so if you're looking > for the highest performance coders, even at the 4 sigma level, you've got > a fair number to choose from. For numerical computation in FPGAs, not so > many. I'd guess that a large fraction of FPGA developers are doing one of > two things: 1) digital signal processing, flow through kinds of stuff > (error correcting codes, compression/decompression, crypto; 2) bus > interface and data handling (PCI bus, disk drive controls, etc.). > > Interestingly, even with the relative scarcity of FPGA developers versus > conventional CPU software, the average salaries aren't that far apart. > The distribution on "generic coders" is wider (particularly on the low > end.. Barriers to entry are lower for C,Java,whathaveyou code monkeys), > but there are very, very few people making more than, say, 150-200k/yr > doing either. (except in a few anomalous industries, where compensation > is higher than normal in general). (also leaving out "equity > participation" type deals) > > > > On 12/22/11 7:42 AM, "Prentice Bisbal" wrote: > >> On 12/22/2011 09:57 AM, Eugen Leitl wrote: >>> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >>> >>>> Or if your German is rusty: >>>> >>>> >>>> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-lau >>>> nched-benchmarked-fastest-single-gpu-board-available/7204 >>> Wonder what kind of response will be forthcoming from nVidia, >>> given developments like >>> http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ >>> >>> It does seem that x86 is dead, despite good Bulldozer performance >>> in Interlagos >>> >>> >>> http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulldoz >>> er-Architektur-legen-los-1378230.html >>> >>> (engage dekrautizer of your choice). >>> >> At SC11, it was clear that everyone was looking for ways around the >> power wall. I saw 5 or 6 different booths touting the use of FPGAs for >> improved performance/efficiency. I don't remember there being a single >> FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, >> Intem MIC, or something else, I think it's clear that the future of HPC >> architecture is going to change radically in the next couple years, >> unless some major breakthrough occurs for commodity processors. >> >> I think DE Shaw Research's Anton computer, which uses FPGAs and custom >> processors, is an excellent example of what the future of HPC might look >> like. 
>> >> -- >> Prentice >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit >> http://www.beowulf.org/mailman/listinfo/beowulf > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Thu Dec 22 11:27:46 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Thu, 22 Dec 2011 08:27:46 -0800 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <4EF34FEB.8030903@ias.edu> Message-ID: The problem with FPGAs (and I use a fair number of them) is that you're never going to get the same picojoules/bit transition kind of power consumption that you do with a purpose designed processor. The extra logic needed to get it "reconfigurable", and the physical junction sizes as well, make it so. What you will find is that on certain kinds of problems, you can implement a more efficient algorithm in FPGA than you can in a conventional processor or GPU. So, for that class of problem, the FPGA is a winner (things lending themselves to fixed point systolic array type processes are a good candidate). Bear in mind also that while an FPGA may have, say, 10-million gate equivalent, any given practical design is going to use a small fraction of those gates. Fortunately, most of those unused gates aren't toggling, so they don't consume clock related power, but they do consume leakage current, so the whole clock rate vs core voltage trade winds up a bit different for FPGAs. The biggest problem with FPGAs is that they are difficult to write high performance software for. With FORTRAN on conventional and vectorized and pipelined processors, we've got 50 years of compiler writing expertise, and real high performance libraries. And, literally millions of people who know how to code in FORTRAN or C or something, so if you're looking for the highest performance coders, even at the 4 sigma level, you've got a fair number to choose from. For numerical computation in FPGAs, not so many. I'd guess that a large fraction of FPGA developers are doing one of two things: 1) digital signal processing, flow through kinds of stuff (error correcting codes, compression/decompression, crypto; 2) bus interface and data handling (PCI bus, disk drive controls, etc.). Interestingly, even with the relative scarcity of FPGA developers versus conventional CPU software, the average salaries aren't that far apart. The distribution on "generic coders" is wider (particularly on the low end.. Barriers to entry are lower for C,Java,whathaveyou code monkeys), but there are very, very few people making more than, say, 150-200k/yr doing either. (except in a few anomalous industries, where compensation is higher than normal in general). 
(also leaving out "equity participation" type deals) On 12/22/11 7:42 AM, "Prentice Bisbal" wrote: >On 12/22/2011 09:57 AM, Eugen Leitl wrote: >> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >> >>> Or if your German is rusty: >>> >>> >>>http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-lau >>>nched-benchmarked-fastest-single-gpu-board-available/7204 >> Wonder what kind of response will be forthcoming from nVidia, >> given developments like >>http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ >> >> It does seem that x86 is dead, despite good Bulldozer performance >> in Interlagos >> >> >>http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulldoz >>er-Architektur-legen-los-1378230.html >> >> (engage dekrautizer of your choice). >> > >At SC11, it was clear that everyone was looking for ways around the >power wall. I saw 5 or 6 different booths touting the use of FPGAs for >improved performance/efficiency. I don't remember there being a single >FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, >Intem MIC, or something else, I think it's clear that the future of HPC >architecture is going to change radically in the next couple years, >unless some major breakthrough occurs for commodity processors. > >I think DE Shaw Research's Anton computer, which uses FPGAs and custom >processors, is an excellent example of what the future of HPC might look >like. > >-- >Prentice >_______________________________________________ >Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >To change your subscription (digest mode or unsubscribe) visit >http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Thu Dec 22 12:33:37 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Thu, 22 Dec 2011 09:33:37 -0800 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: <4EF36093.9040807@ias.edu> Message-ID: That's an interesting approach of combining ASICs with FPGAs. ASICs will blow the doors off anything else in a FLOP/Joule contest or a FLOPS/kg or FLOPS/dollar.. For tasks for which the ASIC is designed. FPGAs to handle the routing/sequencing/variable parts of the problem and ASICs to do the crunching is a great idea. Sort of the same idea as including DSP or PowerPC cores on a Xilinx FPGA, at a more macro scale. (and of interest in the HPC world, since early 2nd generation Hypercubes from Intel used Xilinx FPGAs as their routing fabric) The challenge with this kind of hardware design is PWB design. Sure, you have 1100+ pins coming out of that FPGA.. Now you have to route them somewhere. And do it in a manufacturable board: I've worked recently with a board that had 22 layers, and we were at the ragged edge of tolerances with the close pitch column grid array parts we had to use. I would expect the clever folks at DE Shaw did an integrated design with their ASIC.. Make the ASIC pinouts such that they line up with the FPGAs, and make the routing problem simpler. On 12/22/11 8:53 AM, "Prentice Bisbal" wrote: >Just for the record - I'm only the messenger. I noticed a >not-insignificant number of booths touting FPGAs at SC11 this year, so I >reported on it. 
I also mentioned other forms of accelerators, like GPUs >and Intel's MIC architecture. > >The Anton computer architecture isn't just a FPGA - it also has >custom-designed processors (ASICS). The ASICs handle the parts of the >molecular dynamics (MD) algorithms that are well-understood, and >unlikely to change, and the FPGAs handle the parts of the algorithms >that may change or might have room for further optimization. > >As far as I know, only 8 or 9 Antons have been built. One is at the >Pittsburgh Supercomputing Center (PSC), the rest are for internal use at >DE Shaw. A single Anton consists of 512 cores, and takes up 6 or 8 >racks. Despite it's small size, it's orders of magnitude faster at >doing MD calculations than even super computers like Jaguar and >Roadrunner with hundreds of thousands of processors. So overall, Anton >is several orders of magnitudes faster than an general-purpose processor >based supercomputer. And sI'm sure it uses a LOT less power. I don't >think the Anton's are clustered together, so I'm pretty sure the >published performance on MD simulations is for a single Anton with 512 >cores > >Keep in mind that Anton was designed to do only 1 thing: MD, so it >probably can't even run LinPack, and if it did, I'm sure it's score >would be awful. Also, the designers cut corners where they knew the >safely could, like using fixed-precision (or is it fixed-point?) math, >so the hardware design is only half the story in this example. > >Prentice > > > >On 12/22/2011 11:27 AM, Lux, Jim (337C) wrote: >> The problem with FPGAs (and I use a fair number of them) is that you're >> never going to get the same picojoules/bit transition kind of power >> consumption that you do with a purpose designed processor. The extra >> logic needed to get it "reconfigurable", and the physical junction sizes >> as well, make it so. >> >> What you will find is that on certain kinds of problems, you can >>implement >> a more efficient algorithm in FPGA than you can in a conventional >> processor or GPU. So, for that class of problem, the FPGA is a winner >> (things lending themselves to fixed point systolic array type processes >> are a good candidate). >> >> Bear in mind also that while an FPGA may have, say, 10-million gate >> equivalent, any given practical design is going to use a small fraction >>of >> those gates. Fortunately, most of those unused gates aren't toggling, >>so >> they don't consume clock related power, but they do consume leakage >> current, so the whole clock rate vs core voltage trade winds up a bit >> different for FPGAs. >> >> The biggest problem with FPGAs is that they are difficult to write high >> performance software for. With FORTRAN on conventional and vectorized >>and >> pipelined processors, we've got 50 years of compiler writing expertise, >> and real high performance libraries. And, literally millions of people >> who know how to code in FORTRAN or C or something, so if you're looking >> for the highest performance coders, even at the 4 sigma level, you've >>got >> a fair number to choose from. For numerical computation in FPGAs, not >>so >> many. I'd guess that a large fraction of FPGA developers are doing one >>of >> two things: 1) digital signal processing, flow through kinds of stuff >> (error correcting codes, compression/decompression, crypto; 2) bus >> interface and data handling (PCI bus, disk drive controls, etc.). 
>> >> Interestingly, even with the relative scarcity of FPGA developers versus >> conventional CPU software, the average salaries aren't that far apart. >> The distribution on "generic coders" is wider (particularly on the low >> end.. Barriers to entry are lower for C,Java,whathaveyou code monkeys), >> but there are very, very few people making more than, say, 150-200k/yr >> doing either. (except in a few anomalous industries, where compensation >> is higher than normal in general). (also leaving out "equity >> participation" type deals) >> >> >> >> On 12/22/11 7:42 AM, "Prentice Bisbal" wrote: >> >>> On 12/22/2011 09:57 AM, Eugen Leitl wrote: >>>> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >>>> >>>>> Or if your German is rusty: >>>>> >>>>> >>>>> >>>>>http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-l >>>>>au >>>>> nched-benchmarked-fastest-single-gpu-board-available/7204 >>>> Wonder what kind of response will be forthcoming from nVidia, >>>> given developments like >>>> http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ >>>> >>>> It does seem that x86 is dead, despite good Bulldozer performance >>>> in Interlagos >>>> >>>> >>>> >>>>http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulld >>>>oz >>>> er-Architektur-legen-los-1378230.html >>>> >>>> (engage dekrautizer of your choice). >>>> >>> At SC11, it was clear that everyone was looking for ways around the >>> power wall. I saw 5 or 6 different booths touting the use of FPGAs for >>> improved performance/efficiency. I don't remember there being a single >>> FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, >>> Intem MIC, or something else, I think it's clear that the future of >>>HPC >>> architecture is going to change radically in the next couple years, >>> unless some major breakthrough occurs for commodity processors. >>> >>> I think DE Shaw Research's Anton computer, which uses FPGAs and custom >>> processors, is an excellent example of what the future of HPC might >>>look >>> like. >>> >>> -- >>> Prentice >>> _______________________________________________ >>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin >>>Computing >>> To change your subscription (digest mode or unsubscribe) visit >>> http://www.beowulf.org/mailman/listinfo/beowulf >> >_______________________________________________ >Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >To change your subscription (digest mode or unsubscribe) visit >http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From prentice at ias.edu Thu Dec 22 14:49:15 2011 From: prentice at ias.edu (Prentice Bisbal) Date: Thu, 22 Dec 2011 14:49:15 -0500 Subject: [Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR In-Reply-To: References: Message-ID: <4EF389BB.5070608@ias.edu> Jim, If you or anyone else on this are interested in learning more about the anton architecture, there a bunch of links here: http://www.deshawresearch.com/publications.html There's a couple that give good descriptions of the anton architecture. I read most of the computer-related ones over the summer. Yes, that's my idea of light summer reading! 
Prentice On 12/22/2011 12:33 PM, Lux, Jim (337C) wrote: > That's an interesting approach of combining ASICs with FPGAs. ASICs will > blow the doors off anything else in a FLOP/Joule contest or a FLOPS/kg or > FLOPS/dollar.. For tasks for which the ASIC is designed. FPGAs to handle > the routing/sequencing/variable parts of the problem and ASICs to do the > crunching is a great idea. Sort of the same idea as including DSP or > PowerPC cores on a Xilinx FPGA, at a more macro scale. > (and of interest in the HPC world, since early 2nd generation Hypercubes > from Intel used Xilinx FPGAs as their routing fabric) > > The challenge with this kind of hardware design is PWB design. Sure, you > have 1100+ pins coming out of that FPGA.. Now you have to route them > somewhere. And do it in a manufacturable board: I've worked recently with > a board that had 22 layers, and we were at the ragged edge of tolerances > with the close pitch column grid array parts we had to use. > > I would expect the clever folks at DE Shaw did an integrated design with > their ASIC.. Make the ASIC pinouts such that they line up with the FPGAs, > and make the routing problem simpler. > > > > > On 12/22/11 8:53 AM, "Prentice Bisbal" wrote: > >> Just for the record - I'm only the messenger. I noticed a >> not-insignificant number of booths touting FPGAs at SC11 this year, so I >> reported on it. I also mentioned other forms of accelerators, like GPUs >> and Intel's MIC architecture. >> >> The Anton computer architecture isn't just a FPGA - it also has >> custom-designed processors (ASICS). The ASICs handle the parts of the >> molecular dynamics (MD) algorithms that are well-understood, and >> unlikely to change, and the FPGAs handle the parts of the algorithms >> that may change or might have room for further optimization. >> >> As far as I know, only 8 or 9 Antons have been built. One is at the >> Pittsburgh Supercomputing Center (PSC), the rest are for internal use at >> DE Shaw. A single Anton consists of 512 cores, and takes up 6 or 8 >> racks. Despite it's small size, it's orders of magnitude faster at >> doing MD calculations than even super computers like Jaguar and >> Roadrunner with hundreds of thousands of processors. So overall, Anton >> is several orders of magnitudes faster than an general-purpose processor >> based supercomputer. And sI'm sure it uses a LOT less power. I don't >> think the Anton's are clustered together, so I'm pretty sure the >> published performance on MD simulations is for a single Anton with 512 >> cores >> >> Keep in mind that Anton was designed to do only 1 thing: MD, so it >> probably can't even run LinPack, and if it did, I'm sure it's score >> would be awful. Also, the designers cut corners where they knew the >> safely could, like using fixed-precision (or is it fixed-point?) math, >> so the hardware design is only half the story in this example. >> >> Prentice >> >> >> >> On 12/22/2011 11:27 AM, Lux, Jim (337C) wrote: >>> The problem with FPGAs (and I use a fair number of them) is that you're >>> never going to get the same picojoules/bit transition kind of power >>> consumption that you do with a purpose designed processor. The extra >>> logic needed to get it "reconfigurable", and the physical junction sizes >>> as well, make it so. >>> >>> What you will find is that on certain kinds of problems, you can >>> implement >>> a more efficient algorithm in FPGA than you can in a conventional >>> processor or GPU. 
So, for that class of problem, the FPGA is a winner >>> (things lending themselves to fixed point systolic array type processes >>> are a good candidate). >>> >>> Bear in mind also that while an FPGA may have, say, 10-million gate >>> equivalent, any given practical design is going to use a small fraction >>> of >>> those gates. Fortunately, most of those unused gates aren't toggling, >>> so >>> they don't consume clock related power, but they do consume leakage >>> current, so the whole clock rate vs core voltage trade winds up a bit >>> different for FPGAs. >>> >>> The biggest problem with FPGAs is that they are difficult to write high >>> performance software for. With FORTRAN on conventional and vectorized >>> and >>> pipelined processors, we've got 50 years of compiler writing expertise, >>> and real high performance libraries. And, literally millions of people >>> who know how to code in FORTRAN or C or something, so if you're looking >>> for the highest performance coders, even at the 4 sigma level, you've >>> got >>> a fair number to choose from. For numerical computation in FPGAs, not >>> so >>> many. I'd guess that a large fraction of FPGA developers are doing one >>> of >>> two things: 1) digital signal processing, flow through kinds of stuff >>> (error correcting codes, compression/decompression, crypto; 2) bus >>> interface and data handling (PCI bus, disk drive controls, etc.). >>> >>> Interestingly, even with the relative scarcity of FPGA developers versus >>> conventional CPU software, the average salaries aren't that far apart. >>> The distribution on "generic coders" is wider (particularly on the low >>> end.. Barriers to entry are lower for C,Java,whathaveyou code monkeys), >>> but there are very, very few people making more than, say, 150-200k/yr >>> doing either. (except in a few anomalous industries, where compensation >>> is higher than normal in general). (also leaving out "equity >>> participation" type deals) >>> >>> >>> >>> On 12/22/11 7:42 AM, "Prentice Bisbal" wrote: >>> >>>> On 12/22/2011 09:57 AM, Eugen Leitl wrote: >>>>> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote: >>>>> >>>>>> Or if your German is rusty: >>>>>> >>>>>> >>>>>> >>>>>> http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-l >>>>>> au >>>>>> nched-benchmarked-fastest-single-gpu-board-available/7204 >>>>> Wonder what kind of response will be forthcoming from nVidia, >>>>> given developments like >>>>> http://www.theregister.co.uk/2011/11/14/arm_gpu_nvidia_supercomputer/ >>>>> >>>>> It does seem that x86 is dead, despite good Bulldozer performance >>>>> in Interlagos >>>>> >>>>> >>>>> >>>>> http://www.heise.de/newsticker/meldung/AMDs-Serverprozessoren-mit-Bulld >>>>> oz >>>>> er-Architektur-legen-los-1378230.html >>>>> >>>>> (engage dekrautizer of your choice). >>>>> >>>> At SC11, it was clear that everyone was looking for ways around the >>>> power wall. I saw 5 or 6 different booths touting the use of FPGAs for >>>> improved performance/efficiency. I don't remember there being a single >>>> FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE, >>>> Intem MIC, or something else, I think it's clear that the future of >>>> HPC >>>> architecture is going to change radically in the next couple years, >>>> unless some major breakthrough occurs for commodity processors. >>>> >>>> I think DE Shaw Research's Anton computer, which uses FPGAs and custom >>>> processors, is an excellent example of what the future of HPC might >>>> look >>>> like. 
>>>> >>>> -- >>>> Prentice >>>> _______________________________________________ >>>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin >>>> Computing >>>> To change your subscription (digest mode or unsubscribe) visit >>>> http://www.beowulf.org/mailman/listinfo/beowulf >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit >> http://www.beowulf.org/mailman/listinfo/beowulf > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From tegner at renget.se Fri Dec 23 10:54:22 2011 From: tegner at renget.se (Jon Tegner) Date: Fri, 23 Dec 2011 16:54:22 +0100 Subject: [Beowulf] personal HPC In-Reply-To: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> References: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> Message-ID: <4EF4A42E.1030208@renget.se> Cool! Impressive to have taken it this far! What are the dimensions of the system? And the mainbord for the compute nodes, are you using mini-itx there? Regards, /jon On 12/22/2011 05:51 PM, Douglas Eadline wrote: > For those that don't know, I have been working > on a commodity "desk side" cluster for a while. > I have been writing about the progress at: > > http://limulus.basement-supercomputing.com/ > > Recently I was able to get 200 GFLOPS using Intel > i5-2400S processors connected by GigE (58% of peak). > Of course these are CPU FLOPS not GPU FLOPS and the > design has a power/heat/performance/noise envelope > that makes it suitable for true desk side computing. > (for things like software development, education, > small production work, and cloud staging) > > You can find the raw HPC numbers and specifications here: > > http://limulus.basement-supercomputing.com/wiki/CommercialLimulus > > BTW, if click the "Nexlink Limulus" link, you can take a survey > for a chance to win one of these systems. > > Happy holidays > > -- > Doug > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From deadline at eadline.org Fri Dec 23 12:31:09 2011 From: deadline at eadline.org (Douglas Eadline) Date: Fri, 23 Dec 2011 12:31:09 -0500 (EST) Subject: [Beowulf] personal HPC In-Reply-To: <4EF4A42E.1030208@renget.se> References: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> <4EF4A42E.1030208@renget.se> Message-ID: <49637.192.168.93.213.1324661469.squirrel@mail.eadline.org> > Cool! Impressive to have taken it this far! > > What are the dimensions of the system? And the mainbord > for the compute nodes, are you using mini-itx there? Hey Jon, It is a standard Antec 1200 case, the approximate size is 20x22x8.5 inches or 51x56x22 cm. It uses micro-ATX boards. BTW, there is no case modification needed, it all slides and screws in. The FAQ may have can provide more info: http://limulus.basement-supercomputing.com/wiki/LimulusFAQ -- Doug > > Regards, > > /jon > > On 12/22/2011 05:51 PM, Douglas Eadline wrote: >> For those that don't know, I have been working >> on a commodity "desk side" cluster for a while. 
>> I have been writing about the progress at: >> >> http://limulus.basement-supercomputing.com/ >> >> Recently I was able to get 200 GFLOPS using Intel >> i5-2400S processors connected by GigE (58% of peak). >> Of course these are CPU FLOPS not GPU FLOPS and the >> design has a power/heat/performance/noise envelope >> that makes it suitable for true desk side computing. >> (for things like software development, education, >> small production work, and cloud staging) >> >> You can find the raw HPC numbers and specifications here: >> >> http://limulus.basement-supercomputing.com/wiki/CommercialLimulus >> >> BTW, if click the "Nexlink Limulus" link, you can take a survey >> for a chance to win one of these systems. >> >> Happy holidays >> >> -- >> Doug >> > > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > > -- > MailScanner: clean > -- Doug -- MailScanner: clean _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From eagles051387 at gmail.com Fri Dec 23 14:32:23 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Fri, 23 Dec 2011 13:32:23 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. Message-ID: <4EF4D747.3080200@gmail.com> I am just curious as to everyones take on this http://www.youtube.com/watch?v=PtufuXLvOok Being able to over clock the systems how much more performance gains can one get out of them _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From samuel at unimelb.edu.au Tue Dec 27 07:04:02 2011 From: samuel at unimelb.edu.au (Chris Samuel) Date: Tue, 27 Dec 2011 23:04:02 +1100 Subject: [Beowulf] =?iso-8859-1?q?3=2E79_TFlops_sp=2C_0=2E95_TFlops_dp=2C_?= =?iso-8859-1?q?264_TByte/s=2C_3=09GByte_=2C_198_W_=40_500_EUR?= In-Reply-To: <4EF34FEB.8030903@ias.edu> References: <20111222095040.GK31847@leitl.org> <20111222145744.GZ31847@leitl.org> <4EF34FEB.8030903@ias.edu> Message-ID: <201112272304.02320.samuel@unimelb.edu.au> On Fri, 23 Dec 2011 02:42:35 AM Prentice Bisbal wrote: > At SC11, it was clear that everyone was looking for ways around the > power wall. I saw 5 or 6 different booths touting the use of FPGAs > for improved performance/efficiency. I don't remember there being > a single FPGA booth in the past. I couldn't be at SC'11 due to family health issues, but I'm sure I remember a number of FPGA booths at previous SC's. I remember one at SC'07 or so that had FPGA's that would go into an AMD Opteron CPU socket for instance. Ah yes, I even took a photo of it (the FPGA in the socket, not the booth I'm afraid) at SC'07: http://www.flickr.com/photos/chrissamuel/2267611323/in/set-72157603919719911 Looks like an Altera FPGA. cheers! 
Chris -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From prentice at ias.edu Wed Dec 28 10:40:57 2011 From: prentice at ias.edu (Prentice Bisbal) Date: Wed, 28 Dec 2011 10:40:57 -0500 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EF4D747.3080200@gmail.com> References: <4EF4D747.3080200@gmail.com> Message-ID: <4EFB3889.6090401@ias.edu> There has been a company at the SC conferences for the past 3 years trying to sell exactly that (server cooling by submersion in mineral oil) for the past 3 years. In my opinion it, suffers from a few major problems: 1. It's messy. If you every have to take hardware out of the oil to repair/replace, it's messy. The oil could drip all over, creating safety hazards. And if you need to remove a hardware component from a server, good luck! Now that everything is oily and slippery, there definitely will be a problem with that hard drive once it flies out of your hands, even if there wasn't a problem with it before! 2. The weight of the mineral oil. Despite the density of current 1-U and blade systems, I still think that air makes up a not-significant percentage of volume of the full rack. Fill that space with a liquid like mineral oil, and I'm sure you double, triple, or maybe even quadruple the weight load on your datacenter's raised floor. -- Prentice http://msds.farnam.com/m000712.htm On 12/23/2011 2:32 PM, Jonathan Aquilina wrote: > > I am just curious as to everyones take on this > > http://www.youtube.com/watch?v=PtufuXLvOok > > Being able to over clock the systems how much more performance gains can > one get out of them > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From landman at scalableinformatics.com Wed Dec 28 10:49:00 2011 From: landman at scalableinformatics.com (Joe Landman) Date: Wed, 28 Dec 2011 10:49:00 -0500 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB3889.6090401@ias.edu> References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu> Message-ID: <4EFB3A6C.8040608@scalableinformatics.com> On 12/28/2011 10:40 AM, Prentice Bisbal wrote: > There has been a company at the SC conferences for the past 3 years > trying to sell exactly that (server cooling by submersion in mineral > oil) for the past 3 years. > > In my opinion it, suffers from a few major problems: [...] Those are the costs in the cost-benefit analysis. Not really complete, as you need to include filtering, mineral oil supply/monitoring, etc. The benefits are that it cools really ... really well. -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics Inc. 
email: landman at scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Wed Dec 28 11:05:08 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Wed, 28 Dec 2011 10:05:08 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB3A6C.8040608@scalableinformatics.com> References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu> <4EFB3A6C.8040608@scalableinformatics.com> Message-ID: <4EFB3E34.1070006@gmail.com> On 12/28/2011 9:49 AM, Joe Landman wrote: > On 12/28/2011 10:40 AM, Prentice Bisbal wrote: >> There has been a company at the SC conferences for the past 3 years >> trying to sell exactly that (server cooling by submersion in mineral >> oil) for the past 3 years. >> >> In my opinion it, suffers from a few major problems: > [...] > > Those are the costs in the cost-benefit analysis. Not really complete, > as you need to include filtering, mineral oil supply/monitoring, etc. > > The benefits are that it cools really ... really well. > Im curious though to see a cluster like that how much one can actually overclock a given system or cluster of these systems. is overclocking used any more in current day clusters? _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From deadline at eadline.org Wed Dec 28 11:11:53 2011 From: deadline at eadline.org (Douglas Eadline) Date: Wed, 28 Dec 2011 11:11:53 -0500 (EST) Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB3889.6090401@ias.edu> References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu> Message-ID: <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> However, if you really overclock, you can make french fries -- Doug > There has been a company at the SC conferences for the past 3 years > trying to sell exactly that (server cooling by submersion in mineral > oil) for the past 3 years. > > In my opinion it, suffers from a few major problems: > > 1. It's messy. If you every have to take hardware out of the oil to > repair/replace, it's messy. The oil could drip all over, creating safety > hazards. And if you need to remove a hardware component from a server, > good luck! Now that everything is oily and slippery, there definitely > will be a problem with that hard drive once it flies out of your hands, > even if there wasn't a problem with it before! > > 2. The weight of the mineral oil. Despite the density of current 1-U and > blade systems, I still think that air makes up a not-significant > percentage of volume of the full rack. Fill that space with a liquid > like mineral oil, and I'm sure you double, triple, or maybe even > quadruple the weight load on your datacenter's raised floor. 
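To put a very rough number on that weight point, here is a minimal back-of-envelope sketch in Python; the rack dimensions, the fraction of the enclosure that is just air, and the oil density are all assumed round figures, not anything measured:

# Rough extra floor load if the free air space in one rack were filled
# with mineral oil. Every input below is an assumed round number.

rack_volume_l = 0.6 * 1.0 * 2.0 * 1000   # ~42U rack, 0.6 m x 1.0 m x 2.0 m outer dimensions, in litres
air_fraction  = 0.7                      # guess: share of the enclosure that is just air
oil_density   = 0.85                     # kg per litre, typical for mineral oil

oil_litres = rack_volume_l * air_fraction
oil_kg     = oil_litres * oil_density

print(f"oil needed : {oil_litres:.0f} litres")
print(f"added mass : {oil_kg:.0f} kg (~{oil_kg * 2.2:.0f} lb) on top of the dry rack weight")

Even if those guesses are off by a factor of two, the answer lands in the hundreds of kilograms per rack, which is exactly the raised-floor concern.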
> > -- > Prentice > > > > http://msds.farnam.com/m000712.htm > > On 12/23/2011 2:32 PM, Jonathan Aquilina wrote: >> >> I am just curious as to everyones take on this >> >> http://www.youtube.com/watch?v=PtufuXLvOok >> >> Being able to over clock the systems how much more performance gains can >> one get out of them >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit >> http://www.beowulf.org/mailman/listinfo/beowulf >> > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > > -- > MailScanner: clean > -- Doug -- MailScanner: clean _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf From cbergstrom at pathscale.com Wed Dec 28 11:15:27 2011 From: cbergstrom at pathscale.com (=?ISO-8859-1?Q?=22C=2E_Bergstr=F6m=22?=) Date: Wed, 28 Dec 2011 23:15:27 +0700 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu> <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> Message-ID: <4EFB409F.20302@pathscale.com> On 12/28/11 11:11 PM, Douglas Eadline wrote: > However, if you really overclock, you can make french fries I think smores from the oncoming grease fire would be more fun ;) _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Wed Dec 28 11:18:33 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Wed, 28 Dec 2011 10:18:33 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu> <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> Message-ID: <4EFB4159.1040206@gmail.com> Was thinking that after i sent the email. I think the solution to part one of your answer Prentice is the following. You would have spare machines on hand that you would swap out with a faulty machine allowing you the necessary time to replace parts as needed with out the risk of spilling the oil on the floor and creating any hazards in the workplace. On 12/28/2011 10:11 AM, Douglas Eadline wrote: > However, if you really overclock, you can make french fries > > -- > Doug > > >> There has been a company at the SC conferences for the past 3 years >> trying to sell exactly that (server cooling by submersion in mineral >> oil) for the past 3 years. >> >> In my opinion it, suffers from a few major problems: >> >> 1. It's messy. If you every have to take hardware out of the oil to >> repair/replace, it's messy. The oil could drip all over, creating safety >> hazards. And if you need to remove a hardware component from a server, >> good luck! 
Now that everything is oily and slippery, there definitely >> will be a problem with that hard drive once it flies out of your hands, >> even if there wasn't a problem with it before! >> >> 2. The weight of the mineral oil. Despite the density of current 1-U and >> blade systems, I still think that air makes up a not-significant >> percentage of volume of the full rack. Fill that space with a liquid >> like mineral oil, and I'm sure you double, triple, or maybe even >> quadruple the weight load on your datacenter's raised floor. >> >> -- >> Prentice >> >> >> >> http://msds.farnam.com/m000712.htm >> >> On 12/23/2011 2:32 PM, Jonathan Aquilina wrote: >>> I am just curious as to everyones take on this >>> >>> http://www.youtube.com/watch?v=PtufuXLvOok >>> >>> Being able to over clock the systems how much more performance gains can >>> one get out of them >>> _______________________________________________ >>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >>> To change your subscription (digest mode or unsubscribe) visit >>> http://www.beowulf.org/mailman/listinfo/beowulf >>> >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit >> http://www.beowulf.org/mailman/listinfo/beowulf >> >> -- >> MailScanner: clean >> > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From landman at scalableinformatics.com Wed Dec 28 11:31:12 2011 From: landman at scalableinformatics.com (Joe Landman) Date: Wed, 28 Dec 2011 11:31:12 -0500 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> References: <4EF4D747.3080200@gmail.com> <4EFB3889.6090401@ias.edu> <23763.130.219.8.226.1325088713.squirrel@mail.eadline.org> Message-ID: <4EFB4450.4020607@scalableinformatics.com> On 12/28/2011 11:11 AM, Douglas Eadline wrote: > > However, if you really overclock, you can make french fries Mmmmm server fries .... tasty ! -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics Inc. email: landman at scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 12:17:12 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 09:17:12 -0800 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB3889.6090401@ias.edu> Message-ID: On 12/28/11 7:40 AM, "Prentice Bisbal" wrote: >There has been a company at the SC conferences for the past 3 years >trying to sell exactly that (server cooling by submersion in mineral >oil) for the past 3 years. > >In my opinion it, suffers from a few major problems: > >1. It's messy. If you every have to take hardware out of the oil to >repair/replace, it's messy. The oil could drip all over, creating safety >hazards. 
And if you need to remove a hardware component from a server, >good luck! Now that everything is oily and slippery, there definitely >will be a problem with that hard drive once it flies out of your hands, >even if there wasn't a problem with it before! > >2. The weight of the mineral oil. Despite the density of current 1-U and >blade systems, I still think that air makes up a not-significant >percentage of volume of the full rack. Fill that space with a liquid >like mineral oil, and I'm sure you double, triple, or maybe even >quadruple the weight load on your datacenter's raised floor. > > I've worked quite a lot with oil insulation in the high voltage world. Prentice's comments (particularly #1) are spot on. ALL oil filled equipment that is designed for servicing leaks. ALL. Maybe it's just a fine oil film on the outside, maybe it's a puddle on the floor, but it all leaks. (Exception.. Things that are welded closed with oil inside, but that's not serviceable) When you do remove the equipment from the tank, yes, it drips, and it's a mess. Slipperyness isn't as big a problem.. You lift the stuff out of the tank, and let is sit for a long while while it drips back into the tank. Pick a real low viscosity oil (good for other reasons) and it's not too bad. The problem is that there is some nook or cranny that retains oil because of its orientation or capillary effects, and that oil comes oozing/spilling out later. Fluorinert is a different story (albeit hideously more expensive than oil). It's very low viscosity, has low capillary attraction, etc. and will (if chosen properly) evaporate. Equipment that cools by ebullient (boiling) Fluorinert cleans up very nicely, because the boiling point is chosen to be quite low. I'm not sure I'd be plunging a disk drive into oil. Most drive cases I've seen have a vent plug. Maybe the holes are small enough so that the oil molecules don't make it through, but air does, but temperature cycling is going to force oil into the case eventually. > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 12:20:47 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 09:20:47 -0800 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB409F.20302@pathscale.com> Message-ID: And this is why PCBs were used instead of oil. No burning, much more chemically inert. Too bad there's inevitable manufacturing contaminants which are carcinogenic in very low quantities, and because they are so persistent, cause problems for a long time. Oil does spoil, after all. Slowly, for good insulating mineral oil (they put anti-oxidants like BHT, BHA, or alpha-tocopherol in it), but it does degrade. Silicones are essentially inert and don't really spoil, but are a LOT more expensive, and have other disadvantages (real hard to remove with a solvent, for instance) On 12/28/11 8:15 AM, "C. 
Bergstr?m" wrote: >On 12/28/11 11:11 PM, Douglas Eadline wrote: >> However, if you really overclock, you can make french fries >I think smores from the oncoming grease fire would be more fun ;) >_______________________________________________ >Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >To change your subscription (digest mode or unsubscribe) visit >http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 12:30:15 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 09:30:15 -0800 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB4159.1040206@gmail.com> Message-ID: On 12/28/11 8:18 AM, "Jonathan Aquilina" wrote: >Was thinking that after i sent the email. > >I think the solution to part one of your answer Prentice is the following. > >You would have spare machines on hand that you would swap out with a >faulty machine allowing you the necessary time to replace parts as >needed with out the risk of spilling the oil on the floor and creating >any hazards in the workplace. And you'll have your oily floor "service depot" somewhere else... (and you'll still have oily floors under your racks.. Oil WILL move through the wires by capillary attraction and/or thermal/atmospheric pumping. Home experiment: Get a piece of stranded wire about 30 cm long. Fill a cup or glass with oil to within a couple cm of the top. Drape the wire over the edge of the cup with one end in the oil and the other end on a piece of paper on the surface of the table. (do all this within a raised edge pan or cookie sheet). Wait a day or two. Observe. Clean up. Bear in mind that a 4 U case full of oil is going to be pretty heavy. Oil has a specific gravity/density of around .7 kg/liter. It's gonna be right around the OSHA 1 person lift limit of 55 lb, and I wouldn't want to be the guy standing under the chassis as you pull it out of the top slot of the rack. So you're going to need a rolling cart with a suitable lifting mechanism or maybe a chain hoist on a rail down between your server aisles, sort of like in a slaughter house or metal plating plant? > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Wed Dec 28 13:04:57 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Wed, 28 Dec 2011 12:04:57 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: References: Message-ID: <4EFB5A49.20706@gmail.com> On 12/28/2011 11:17 AM, Lux, Jim (337C) wrote: > > On 12/28/11 7:40 AM, "Prentice Bisbal" wrote: > >> There has been a company at the SC conferences for the past 3 years >> trying to sell exactly that (server cooling by submersion in mineral >> oil) for the past 3 years. >> >> In my opinion it, suffers from a few major problems: >> >> 1. It's messy. If you every have to take hardware out of the oil to >> repair/replace, it's messy. The oil could drip all over, creating safety >> hazards. 
And if you need to remove a hardware component from a server, >> good luck! Now that everything is oily and slippery, there definitely >> will be a problem with that hard drive once it flies out of your hands, >> even if there wasn't a problem with it before! >> >> 2. The weight of the mineral oil. Despite the density of current 1-U and >> blade systems, I still think that air makes up a not-significant >> percentage of volume of the full rack. Fill that space with a liquid >> like mineral oil, and I'm sure you double, triple, or maybe even >> quadruple the weight load on your datacenter's raised floor. >> >> > I've worked quite a lot with oil insulation in the high voltage world. > Prentice's comments (particularly #1) are spot on. > > ALL oil filled equipment that is designed for servicing leaks. ALL. > Maybe it's just a fine oil film on the outside, maybe it's a puddle on the > floor, but it all leaks. (Exception.. Things that are welded closed with > oil inside, but that's not serviceable) > > When you do remove the equipment from the tank, yes, it drips, and it's a > mess. Slipperyness isn't as big a problem.. You lift the stuff out of > the tank, and let is sit for a long while while it drips back into the > tank. Pick a real low viscosity oil (good for other reasons) and it's > not too bad. The problem is that there is some nook or cranny that retains > oil because of its orientation or capillary effects, and that oil comes > oozing/spilling out later. > > Fluorinert is a different story (albeit hideously more expensive than > oil). It's very low viscosity, has low capillary attraction, etc. and > will (if chosen properly) evaporate. Equipment that cools by ebullient > (boiling) Fluorinert cleans up very nicely, because the boiling point is > chosen to be quite low. > > > I'm not sure I'd be plunging a disk drive into oil. Most drive cases I've > seen have a vent plug. Maybe the holes are small enough so that the oil > molecules don't make it through, but air does, but temperature cycling is > going to force oil into the case eventually. Jim would you plunge an SSD in there? So you wouldnt advise using mineral oil like the video shows? > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Wed Dec 28 13:06:38 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Wed, 28 Dec 2011 12:06:38 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: References: Message-ID: <4EFB5AAE.3030900@gmail.com> On 12/28/2011 11:30 AM, Lux, Jim (337C) wrote: > > On 12/28/11 8:18 AM, "Jonathan Aquilina" wrote: > >> Was thinking that after i sent the email. >> >> I think the solution to part one of your answer Prentice is the following. >> >> You would have spare machines on hand that you would swap out with a >> faulty machine allowing you the necessary time to replace parts as >> needed with out the risk of spilling the oil on the floor and creating >> any hazards in the workplace. > > And you'll have your oily floor "service depot" somewhere else... 
(and > you'll still have oily floors under your racks.. Oil WILL move through the > wires by capillary attraction and/or thermal/atmospheric pumping. Home > experiment: Get a piece of stranded wire about 30 cm long. Fill a cup or > glass with oil to within a couple cm of the top. Drape the wire over the > edge of the cup with one end in the oil and the other end on a piece of > paper on the surface of the table. (do all this within a raised edge pan > or cookie sheet). Wait a day or two. Observe. Clean up. > > Bear in mind that a 4 U case full of oil is going to be pretty heavy. Oil > has a specific gravity/density of around .7 kg/liter. It's gonna be right > around the OSHA 1 person lift limit of 55 lb, and I wouldn't want to be > the guy standing under the chassis as you pull it out of the top slot of > the rack. So you're going to need a rolling cart with a suitable lifting > mechanism or maybe a chain hoist on a rail down between your server > aisles, sort of like in a slaughter house or metal plating plant? > Wait a min guys maybe i wasnt clear, im not saying using standard server cases here. I am talking about actually using fish tanks instead. would you still have that leaking issue? _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Wed Dec 28 13:43:50 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Wed, 28 Dec 2011 19:43:50 +0100 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB5AAE.3030900@gmail.com> References: <4EFB5AAE.3030900@gmail.com> Message-ID: <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> On Dec 28, 2011, at 7:06 PM, Jonathan Aquilina wrote: > On 12/28/2011 11:30 AM, Lux, Jim (337C) wrote: >> >> On 12/28/11 8:18 AM, "Jonathan Aquilina" >> wrote: >> >>> Was thinking that after i sent the email. >>> >>> I think the solution to part one of your answer Prentice is the >>> following. >>> >>> You would have spare machines on hand that you would swap out with a >>> faulty machine allowing you the necessary time to replace parts as >>> needed with out the risk of spilling the oil on the floor and >>> creating >>> any hazards in the workplace. >> >> And you'll have your oily floor "service depot" somewhere else... >> (and >> you'll still have oily floors under your racks.. Oil WILL move >> through the >> wires by capillary attraction and/or thermal/atmospheric >> pumping. Home >> experiment: Get a piece of stranded wire about 30 cm long. Fill >> a cup or >> glass with oil to within a couple cm of the top. Drape the wire >> over the >> edge of the cup with one end in the oil and the other end on a >> piece of >> paper on the surface of the table. (do all this within a raised >> edge pan >> or cookie sheet). Wait a day or two. Observe. Clean up. >> >> Bear in mind that a 4 U case full of oil is going to be pretty >> heavy. Oil >> has a specific gravity/density of around .7 kg/liter. It's gonna >> be right >> around the OSHA 1 person lift limit of 55 lb, and I wouldn't want >> to be >> the guy standing under the chassis as you pull it out of the top >> slot of >> the rack. So you're going to need a rolling cart with a suitable >> lifting >> mechanism or maybe a chain hoist on a rail down between your server >> aisles, sort of like in a slaughter house or metal plating plant? 
>> > Wait a min guys maybe i wasnt clear, im not saying using standard > server > cases here. That's because i guess Jim had already given his sysadmin a few flippers as a Christmas gift to service the rackmounts. > I am talking about actually using fish tanks instead. would > you still have that leaking issue? And after a few days it'll get really hot inside that fish tank. You'll remember then the bubbles which do a great cooling job and considering the huge temperature difference it'll remove quite some watts - yet it'll keep heating up if you use a box with 4 cores or more as those consume more than double the watts than what the shown systems used. But as you had explained to me you only have some old junk there anyway so it's worth a try, especially interesting to know is how much watts the fishing tank removes by itself. Maybe you can measure that for us. It's interesting to know how much a few bubbles remove, as that should be very efficient way to remove heat once it approaches a 100C+ isn't it? Jonathan, maybe you can get air from outside, i see now at the weather report that it's 13C in Malta, is that correct or is that only during nights? Maybe Jim wants to explain the huge temperature difference that the high voltage power cables cause default and the huge active cooling that gets used for the small parts that are underground. Even then they can't really put in the ground such solutions for high voltages over too long of a distance, that's technical not possible yet. Above me is 2 * 450 megawatt, which is tough to put underground for more than a kilometer or so, besides that they need the trajectory to be 8 meters wide as well as a minimum. Not sure you want that high temperature in your aquarium, the components might not withstand it for too long :) Anyway, I found it a very entertaining "pimp your computer" youtube video from 2007 that aquarium and i had a good laugh! Vincent > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 13:42:56 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 10:42:56 -0800 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB5A49.20706@gmail.com> Message-ID: An SSD wouldn't be a problem. No spinning disks in air, etc. And, in general, I'd work real hard to find a better cooling solution than oil immersion. It's a mess. About 5-10 years ago on this list we had some discussions on this - I was thinking about a portable cluster for use in the field by archaeologists, so it had to be cheap.. No Defense Department weapons system scale budgets in the social sciences. And it also had to be rugged and work in wide temperatures My "use case" was processing electrical resistance tomography or ground penetrating radar (generically, iterative inversion) in Central American jungle or Middle Eastern deserts. (Where Indiana Jones goes, so goes the Lux Field'wulf) If I were building something that had to be sealed, and needed to get the heat out to the outer surface (e.g. 
A minicluster in a box for a dusty field environment) and I wanted to use inexpensive commodity components, what I would think about is some scheme where you have a pump that sprays an inert cooling liquid (one of the inexpensive Freons, I think.. Not necessarily Fluorinert) over the boards. Sort of like a "dry sump" lubrication system in a racing engine. But it would take some serious engineering.. And one might wonder whether it would be easier and cheaper just to design for conduction cooling with things like wedgelocks to hold the cards in (and provide a thermal path. Or do something like package a small airconditioner with the cluster (although my notional package is "checkable as luggage/carryable on back of pack animal or backseat of car, so full sized rack is out of the question) As a production item, I think the wedgelock/conduction cooled scheme might be better (and I'd spend some time with some mobos looking at their thermal properties. A suitable "clamp" scheme for the edges might be enough, along with existing heatpipe type technologies. On 12/28/11 10:04 AM, "Jonathan Aquilina" wrote: >On 12/28/2011 11:17 AM, Lux, Jim (337C) wrote: >> >> I'm not sure I'd be plunging a disk drive into oil. Most drive cases >>I've >> seen have a vent plug. Maybe the holes are small enough so that the oil >> molecules don't make it through, but air does, but temperature cycling >>is >> going to force oil into the case eventually. > >Jim would you plunge an SSD in there? So you wouldnt advise using >mineral oil like the video shows? _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 13:51:03 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 10:51:03 -0800 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFB5AAE.3030900@gmail.com> Message-ID: On 12/28/11 10:06 AM, "Jonathan Aquilina" wrote: >On 12/28/2011 11:30 AM, Lux, Jim (337C) wrote: >> >> And you'll have your oily floor "service depot" somewhere else... (and >> you'll still have oily floors under your racks.. Oil WILL move through >>the >> wires by capillary attraction and/or thermal/atmospheric pumping. >>Home >> experiment: Get a piece of stranded wire about 30 cm long. Fill a cup >>or >> glass with oil to within a couple cm of the top. Drape the wire over >>the >> edge of the cup with one end in the oil and the other end on a piece of >> paper on the surface of the table. (do all this within a raised edge pan >> or cookie sheet). Wait a day or two. Observe. Clean up. >> >> >Wait a min guys maybe i wasnt clear, im not saying using standard server >cases here. I am talking about actually using fish tanks instead. would >you still have that leaking issue? Almost certainly. Unless you arrange for all the wires to end up higher than the surface of the oil, the tube formed by the insulation serves as a nice siphon, started by capillary effects, to drain your tank on to the floor. (Faraday mentioned this effect with the shaving towel over the edge of the basin). 
And "open container of oil" (your fishtank) works for the short run, but you have to figure out how to keep it clean, while still vented, and keep moisture out (you're not insulating for HV, so that's not a big problem, but moisture in the oil will wind up in places where it might cause corrosion.. Running a bit warm helps "boil off" the water. Works great as a demo, not so hot for the long term. Try the experiment with the wire and glass of oil (use cheap cooking oil or motor oil...). Or to be fancy, how about a cluster of arduinos? BUT, if you do go oil.. Shell Diala AX is probably what you want (or the Univolt 65 equivalent). Runs about $5-10/gallon in a 5 gallon pail, cheaper in drums or truckload lots ($2-3/gallon, like most other non-exotic industrial liquids) You might find gallons of USP mineral oil at a feed store (used as a laxative for farm animals) at a competitive price, and for this application, the water content isn't as important, and it probably won't spoil too fast. > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Wed Dec 28 14:00:03 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Wed, 28 Dec 2011 20:00:03 +0100 Subject: [Beowulf] watercooling In-Reply-To: References: Message-ID: <3081ACC9-051A-4875-B615-3B5C8A1E530A@xs4all.nl> Yeah Jim good comments, I was thinking for my cluster to overclock, which is why i guess some posted the overclocking sentences, and wanted to do it a bit more cheapskate. Latest idea now was to save costs by using for say a node or 16, to order 16 cpu blocks and 16 small pumps and 2 cheap big reservoirs: Coldreservoir ==> 16 minipumps ==> 16 cpu blocks ==> Hotreservoir Now have a big pump from the hotreservoir to outside, or maybe even 2, and put on the roof a big car radiatior, dirt cheap in fact, and a big fan which works on 24 volts. Maybe even 2. Then pump it back into the coldreservoir (gravity). Guessing i can get at most nodes around a 4.5Ghz or so @ 6 cores gulftown maybe (gulftown is fastest cpu for Diep of course sandy bridge with 6 cores or more as well when at same Ghz, in fact sandy bridge has 4 channels so is a tad faster than the 3 channel gulftown but that's peanuts). Not sure this setup works as i fear pressure differences if the huge pump doesn't pump at the same speed like the 16 small pumps. Anyone? Vincent On Dec 28, 2011, at 7:42 PM, Lux, Jim (337C) wrote: > An SSD wouldn't be a problem. No spinning disks in air, etc. > > And, in general, I'd work real hard to find a better cooling > solution than > oil immersion. It's a mess. > About 5-10 years ago on this list we had some discussions on this > - I was > thinking about a portable cluster for use in the field by > archaeologists, > so it had to be cheap.. No Defense Department weapons system scale > budgets > in the social sciences. And it also had to be rugged and work in wide > temperatures My "use case" was processing electrical resistance > tomography > or ground penetrating radar (generically, iterative inversion) in > Central > American jungle or Middle Eastern deserts. (Where Indiana Jones > goes, so > goes the Lux Field'wulf) > > > If I were building something that had to be sealed, and needed to > get the > heat out to the outer surface (e.g. 
A minicluster in a box for a dusty > field environment) and I wanted to use inexpensive commodity > components, > what I would think about is some scheme where you have a pump that > sprays > an inert cooling liquid (one of the inexpensive Freons, I think.. Not > necessarily Fluorinert) over the boards. Sort of like a "dry sump" > lubrication system in a racing engine. > > But it would take some serious engineering.. And one might wonder > whether > it would be easier and cheaper just to design for conduction > cooling with > things like wedgelocks to hold the cards in (and provide a thermal > path. > Or do something like package a small airconditioner with the cluster > (although my notional package is "checkable as luggage/carryable on > back > of pack animal or backseat of car, so full sized rack is out of the > question) > > As a production item, I think the wedgelock/conduction cooled > scheme might > be better (and I'd spend some time with some mobos looking at their > thermal properties. A suitable "clamp" scheme for the edges might be > enough, along with existing heatpipe type technologies. > > > On 12/28/11 10:04 AM, "Jonathan Aquilina" > wrote: > >> On 12/28/2011 11:17 AM, Lux, Jim (337C) wrote: >>> >>> I'm not sure I'd be plunging a disk drive into oil. Most drive >>> cases >>> I've >>> seen have a vent plug. Maybe the holes are small enough so that >>> the oil >>> molecules don't make it through, but air does, but temperature >>> cycling >>> is >>> going to force oil into the case eventually. >> >> Jim would you plunge an SSD in there? So you wouldnt advise using >> mineral oil like the video shows? > > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From james.p.lux at jpl.nasa.gov Wed Dec 28 14:17:11 2011 From: james.p.lux at jpl.nasa.gov (Lux, Jim (337C)) Date: Wed, 28 Dec 2011 11:17:11 -0800 Subject: [Beowulf] watercooling In-Reply-To: <3081ACC9-051A-4875-B615-3B5C8A1E530A@xs4all.nl> Message-ID: On 12/28/11 11:00 AM, "Vincent Diepeveen" wrote: >Yeah Jim good comments, > >I was thinking for my cluster to overclock, which is why i guess some >posted the overclocking sentences, >and wanted to do it a bit more cheapskate. > >Latest idea now was to save costs by using for say a node or 16, >to order 16 cpu blocks and 16 small pumps and 2 cheap big reservoirs: > >Coldreservoir ==> 16 minipumps ==> 16 cpu blocks ==> Hotreservoir Hmm.. Over the past few years I've been trying different schemes to keep a bunch (a cluster?) of glass bottles full of 750ml of an 12-15% alcohol solution in water at a reasonable temperature (15C or thereabouts), and I've gone through a wide variety of improvised schemes. (aside from buying a purpose built refrigerator.. Where's the fun in that?) Unless you need small size with high power density, very quiet operation, or sealed cases, BY FAR the easiest way is a conventional air conditioner blowing cold air through the system. 
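To put rough numbers on that, here is a minimal Python sketch; the heat load, the allowed air temperature rise, and the air conditioner's COP are assumed illustration values, not measurements from anyone's room:

# Rough sizing of air cooling for a small cluster.
# All inputs are assumed values -- change them to match your own setup.

heat_load_w = 3000.0   # cluster heat load in watts (assumed)
delta_t_c   = 10.0     # allowed air temperature rise across the machines, in C (assumed)
cop         = 3.0      # assumed coefficient of performance of the air conditioner

rho_air = 1.2          # kg/m^3, air density near sea level
cp_air  = 1005.0       # J/(kg*K), specific heat of air

# Airflow needed to carry the heat away at the chosen temperature rise:
mass_flow   = heat_load_w / (cp_air * delta_t_c)   # kg/s
volume_flow = mass_flow / rho_air                  # m^3/s
cfm         = volume_flow * 2118.88                # 1 m^3/s ~= 2118.88 CFM

# Electrical draw of an air conditioner removing the same load:
ac_input_w = heat_load_w / cop

print(f"airflow needed : {volume_flow * 3600:.0f} m^3/h (~{cfm:.0f} CFM)")
print(f"AC input power : {ac_input_w:.0f} W at an assumed COP of {cop}")

Halving the allowed temperature rise doubles the required airflow, so the fan and duct side grows quickly if you want the exhaust air to stay close to the intake temperature.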
Schemes with pumps and radiators and heat exchangers of one kind or another have maintenance and unexpected problems (stuff grows in almost any liquid, metals corrode, pumps fail, plastics degrade). A very inexpensive window airconditioner (US$99, 8000 BTU/hr = 2400 Watts) draws about 500-800 Watts (depending on mfr etc). The Coefficient of Performance (COP) of these things is terrible, but still, you ARE pumping more heat out than electricity you're putting in. A "split system" would put the noisy part outside and the cold part inside. The other strategy... Get a surplus laboratory chiller. Put THAT outside and run your insulated cold water tubes down to a radiator/heat exchanger in your computer box. At least the lab chiller already has the pumps and packaging put together. Run a suitable mix of commercial antifreeze and water (which will include various corrosion inhibitors, etc.) But really, cold air cooling is by far and away the easiest, most trouble free way to do things, unless it just won't work for some other reason. > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Wed Dec 28 14:46:10 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Wed, 28 Dec 2011 20:46:10 +0100 Subject: [Beowulf] watercooling In-Reply-To: References: Message-ID: <46131205-E9D0-4A0A-A201-31A2C289DCF6@xs4all.nl> On Dec 28, 2011, at 8:17 PM, Lux, Jim (337C) wrote: > > > On 12/28/11 11:00 AM, "Vincent Diepeveen" wrote: > >> Yeah Jim good comments, >> >> I was thinking for my cluster to overclock, which is why i guess some >> posted the overclocking sentences, >> and wanted to do it a bit more cheapskate. >> >> Latest idea now was to save costs by using for say a node or 16, >> to order 16 cpu blocks and 16 small pumps and 2 cheap big reservoirs: >> >> Coldreservoir ==> 16 minipumps ==> 16 cpu blocks ==> Hotreservoir > > > > Hmm.. Over the past few years I've been trying different schemes to > keep a > bunch (a cluster?) of glass bottles full of 750ml of an 12-15% alcohol > solution in water at a reasonable temperature (15C or thereabouts), > and > I've gone through a wide variety of improvised schemes. (aside from > buying a purpose built refrigerator.. Where's the fun in that?) > > > Unless you need small size with high power density, very quiet > operation, > or sealed cases, BY FAR the easiest way is a conventional air > conditioner > blowing cold air through the system. > > Schemes with pumps and radiators and heat exchangers of one kind or > another have maintenance and unexpected problems (stuff grows in > almost > any liquid, metals corrode, pumps fail, plastics degrade). > > A very inexpensive window airconditioner (US$99, 8000 BTU/hr = 2400 > Watts) > draws about 500-800 Watts (depending on mfr etc). The Coefficient of > Performance (COP) of these things is terrible, but still, you ARE > pumping > more heat out than electricity you're putting in. > > > A "split system" would put the noisy part outside and the cold part > inside. > > > The other strategy... Get a surplus laboratory chiller. Put THAT > outside > and run your insulated cold water tubes down to a radiator/heat > exchanger > in your computer box. At least the lab chiller already has the > pumps and > packaging put together. 
Run a suitable mix of commercial > antifreeze and > water (which will include various corrosion inhibitors, etc.) > > But really, cold air cooling is by far and away the easiest, most > trouble > free way to do things, unless it just won't work for some other > reason. > How about 2 feet thick reinforced concrete walls? Nah.... From ease viewpoint we totally agree. yet that won't get even close to that 4.4-4.6Ghz overclock. For that overclock you really need stable watercooling with low temperatures. So those cooling kits are there anyway. Just i can choose how many radiators i put inside the room. Good radiators that use the same tube system are expensive. Just a single big huge car radiator that you put on the roof is of course cheaper than 16 huge ones with each 3 to 4 fans. Realize that for home built clusters so much heat inside a room and burning that much watts is a physical office limit. Like you can burn a watt or 2000 without too much of a problem, above that it gets really problematic. This office has 3 fuses available. Each 16 amps. Practical it's over 230 volt. In itself one fuse can't be used as the washing machine is on it. So 2 left. Now on paper it would be possible to get 4 kilowatt from those 2. Yet that's paper. All the airco's also consume from that. With the 16 radiators and 3 to 4 fans a radiator we speak of a lousy 48-64 huge fans just for cooling 16 cpu's. Also eats space. The airco here is rated using a 1440 watt maximum and uses practical a 770 watt or so when i measured. The noise is ear deafening. Now for the switch i can build a case that removes a lot of sound from it, also because switch isn't eating much, yet it's a different story for the machines. So removal of noise sure is an important issue as well, as i sit the next room. As for the nodes themselves, realize idea is mainboards with underneath say 0.8 cm of space, and 16 PSU's. Next posting i'll try to do an email with a photo of current setup using an existing mainboard. You'll see the constraints then :) > >> > > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Wed Dec 28 15:02:41 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Wed, 28 Dec 2011 21:02:41 +0100 Subject: [Beowulf] watercooling Message-ID: Photos i put on my facebook: http://www.facebook.com/media/set/?set=a. 2906369387734.146499.1515523963&type=1#!/photo.php? fbid=2906377587939&set=a.2906369387734.146499.1515523963&type=3&theater _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Thu Dec 29 10:53:39 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Thu, 29 Dec 2011 09:53:39 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. 
In-Reply-To: <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> References: <4EFB5AAE.3030900@gmail.com> <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> Message-ID: <4EFC8D03.4020406@gmail.com> On 12/28/2011 12:43 PM, Vincent Diepeveen wrote: > > On Dec 28, 2011, at 7:06 PM, Jonathan Aquilina wrote: > >> On 12/28/2011 11:30 AM, Lux, Jim (337C) wrote: >>> >>> On 12/28/11 8:18 AM, "Jonathan Aquilina" >>> wrote: >>> >>>> Was thinking that after i sent the email. >>>> >>>> I think the solution to part one of your answer Prentice is the >>>> following. >>>> >>>> You would have spare machines on hand that you would swap out with a >>>> faulty machine allowing you the necessary time to replace parts as >>>> needed with out the risk of spilling the oil on the floor and creating >>>> any hazards in the workplace. >>> >>> And you'll have your oily floor "service depot" somewhere else... (and >>> you'll still have oily floors under your racks.. Oil WILL move >>> through the >>> wires by capillary attraction and/or thermal/atmospheric pumping. >>> Home >>> experiment: Get a piece of stranded wire about 30 cm long. Fill a >>> cup or >>> glass with oil to within a couple cm of the top. Drape the wire >>> over the >>> edge of the cup with one end in the oil and the other end on a piece of >>> paper on the surface of the table. (do all this within a raised edge >>> pan >>> or cookie sheet). Wait a day or two. Observe. Clean up. >>> >>> Bear in mind that a 4 U case full of oil is going to be pretty >>> heavy. Oil >>> has a specific gravity/density of around .7 kg/liter. It's gonna be >>> right >>> around the OSHA 1 person lift limit of 55 lb, and I wouldn't want to be >>> the guy standing under the chassis as you pull it out of the top >>> slot of >>> the rack. So you're going to need a rolling cart with a suitable >>> lifting >>> mechanism or maybe a chain hoist on a rail down between your server >>> aisles, sort of like in a slaughter house or metal plating plant? >>> >> Wait a min guys maybe i wasnt clear, im not saying using standard server >> cases here. > > That's because i guess Jim had already given his sysadmin a few > flippers as a Christmas gift to service the rackmounts. > >> I am talking about actually using fish tanks instead. would >> you still have that leaking issue? > > And after a few days it'll get really hot inside that fish tank. > > You'll remember then the bubbles which do a great cooling job > and considering the huge temperature difference it'll remove quite > some watts - yet it'll keep heating up if you use a box with 4 cores > or more > as those consume more than double the watts than what the shown systems > used. > > But as you had explained to me you only have some old junk there anyway > so it's worth a try, especially interesting to know is how much watts > the fishing > tank removes by itself. Maybe you can measure that for us. > > It's interesting to know how much a few bubbles remove, as that > should be very efficient > way to remove heat once it approaches a 100C+ isn't it? > > Jonathan, maybe you can get air from outside, i see now at the weather > report that it's 13C in Malta, is that correct > or is that only during nights? > Honestly not sure as I am back state side till next tuesday, but it is possible that that is at night or during the day. As of right now I am not sure. 
> Maybe Jim wants to explain the huge temperature difference that the > high voltage power cables > cause default and the huge active cooling that gets used for the small > parts that are underground. > Even then they can't really put in the ground such solutions for high > voltages over too long of a distance, > that's technical not possible yet. > > Above me is 2 * 450 megawatt, which is tough to put underground for > more than a kilometer or so, besides that > they need the trajectory to be 8 meters wide as well as a minimum. > > Not sure you want that high temperature in your aquarium, the > components might not withstand it for too long :) > > Anyway, I found it a very entertaining "pimp your computer" youtube > video from 2007 that aquarium and i had a good laugh! > > Vincent > >> _______________________________________________ >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing >> To change your subscription (digest mode or unsubscribe) visit >> http://www.beowulf.org/mailman/listinfo/beowulf > _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Thu Dec 29 11:24:58 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Thu, 29 Dec 2011 17:24:58 +0100 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFC8D03.4020406@gmail.com> References: <4EFB5AAE.3030900@gmail.com> <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> <4EFC8D03.4020406@gmail.com> Message-ID: <5AF52A05-28AA-4EE5-A081-EA60BD1E9B32@xs4all.nl> On Dec 29, 2011, at 4:53 PM, Jonathan Aquilina wrote: >> >> Jonathan, maybe you can get air from outside, i see now at the >> weather report that it's 13C in Malta, is that correct >> or is that only during nights? >> > > Honestly not sure as I am back state side till next tuesday, but it > is possible that that is at night or during the day. As of right > now I am not sure. > Jonathan, You're basically saying you lied to me on MSN that you live in Malta and have a job there and use a few old junk computers (P3 and such) to build a cluster, like you posted about 1 or 2 days (some years ago) after we chatted onto this mailing list? Vincent _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From eagles051387 at gmail.com Thu Dec 29 11:28:48 2011 From: eagles051387 at gmail.com (Jonathan Aquilina) Date: Thu, 29 Dec 2011 10:28:48 -0600 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <5AF52A05-28AA-4EE5-A081-EA60BD1E9B32@xs4all.nl> References: <4EFB5AAE.3030900@gmail.com> <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> <4EFC8D03.4020406@gmail.com> <5AF52A05-28AA-4EE5-A081-EA60BD1E9B32@xs4all.nl> Message-ID: <4EFC9540.5010906@gmail.com> On 12/29/2011 10:24 AM, Vincent Diepeveen wrote: > On Dec 29, 2011, at 4:53 PM, Jonathan Aquilina wrote: >>> Jonathan, maybe you can get air from outside, i see now at the >>> weather report that it's 13C in Malta, is that correct >>> or is that only during nights? >>> >> Honestly not sure as I am back state side till next tuesday, but it >> is possible that that is at night or during the day. 
As of right >> now I am not sure. >> > Jonathan, > > You're basically saying you lied to me on MSN that you live in Malta > and have a job there and use a few old junk computers (P3 and such) > to build a cluster, > like you posted about 1 or 2 days (some years ago) after we chatted > onto this mailing list? I have not lied. I do live there. I have my dad who still travels between texas and malta as he still works, and he couldnt take time off to come to malta for the holidays I am here till next tuesday visiting him. > Vincent > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From hahn at mcmaster.ca Thu Dec 29 14:49:37 2011 From: hahn at mcmaster.ca (Mark Hahn) Date: Thu, 29 Dec 2011 14:49:37 -0500 (EST) Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: <4EFC9540.5010906@gmail.com> References: <4EFB5AAE.3030900@gmail.com> <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> <4EFC8D03.4020406@gmail.com> <5AF52A05-28AA-4EE5-A081-EA60BD1E9B32@xs4all.nl> <4EFC9540.5010906@gmail.com> Message-ID: guys, this isn't a dating site. _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From diep at xs4all.nl Thu Dec 29 19:50:45 2011 From: diep at xs4all.nl (Vincent Diepeveen) Date: Fri, 30 Dec 2011 01:50:45 +0100 Subject: [Beowulf] clustering using off the shelf systems in a fish tank full of oil. In-Reply-To: References: <4EFB5AAE.3030900@gmail.com> <715C5657-461B-41E7-9591-5DF89F3CC285@xs4all.nl> <4EFC8D03.4020406@gmail.com> <5AF52A05-28AA-4EE5-A081-EA60BD1E9B32@xs4all.nl> <4EFC9540.5010906@gmail.com> Message-ID: it's very useful Mark, as we know now he works for the company and also for which nation. Vincent On Dec 29, 2011, at 8:49 PM, Mark Hahn wrote: > guys, this isn't a dating site. > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin > Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From samuel at unimelb.edu.au Fri Dec 30 21:09:32 2011 From: samuel at unimelb.edu.au (Chris Samuel) Date: Sat, 31 Dec 2011 13:09:32 +1100 Subject: [Beowulf] personal HPC In-Reply-To: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> References: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> Message-ID: <201112311309.32578.samuel@unimelb.edu.au> On Fri, 23 Dec 2011 03:51:17 AM Douglas Eadline wrote: > BTW, if click the "Nexlink Limulus" link, you can take a survey > for a chance to win one of these systems. That survey requires you to pick a US state, which isn't really an option for those of us outside the USA.. 
is there any chance of getting that fixed up? Or should I just pick the Federated States of Micronesia? It's about the closest geographically to me I'd guess! :-) cheers! Chris -- Christopher Samuel - Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.unimelb.edu.au/ _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf -- MailScanner: clean From deadline at eadline.org Sat Dec 31 09:16:35 2011 From: deadline at eadline.org (Douglas Eadline) Date: Sat, 31 Dec 2011 09:16:35 -0500 (EST) Subject: [Beowulf] personal HPC In-Reply-To: <201112311309.32578.samuel@unimelb.edu.au> References: <34813.192.168.93.213.1324572677.squirrel@mail.eadline.org> <201112311309.32578.samuel@unimelb.edu.au> Message-ID: <55819.192.168.93.213.1325340995.squirrel@mail.eadline.org> Oh, sorry the contest is only open to US residents. There should be some rules posted somewhere, let me look in to it. -- Doug > On Fri, 23 Dec 2011 03:51:17 AM Douglas Eadline wrote: > >> BTW, if click the "Nexlink Limulus" link, you can take a survey >> for a chance to win one of these systems. > > That survey requires you to pick a US state, which isn't really an > option for those of us outside the USA.. is there any chance of > getting that fixed up? > > Or should I just pick the Federated States of Micronesia? It's about > the closest geographically to me I'd guess! :-) > > cheers! > Chris > -- > Christopher Samuel - Senior Systems Administrator > VLSCI - Victorian Life Sciences Computation Initiative > Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545 > http://www.vlsci.unimelb.edu.au/ > _______________________________________________ > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing > To change your subscription (digest mode or unsubscribe) visit > http://www.beowulf.org/mailman/listinfo/beowulf > > -- > MailScanner: clean > -- Doug -- MailScanner: clean _______________________________________________ Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf