Opinions

Opinions are like clusters: everybody should have one. Well, we have both clusters and opinions in this category. We also welcome your feedback; registered users can comment on articles, including opinions.

Back in 2009, I was frustrated. Worshiping the Top500 list was all the rage, and I just did not understand what all the fuss was about. I certainly appreciated the goal of the Top500 benchmark and the valuable historical data it has collected over the years. However, using it as a metric for real-world HPC performance was, in my mind, a "high-tech pissing contest." In my opinion, things have gotten a little better, but not much. The focus on a single data point is great for marketing types, but scientists and engineers know better.

Hadoop has been growing clusters in datacenters at a rapid pace. Is Hadoop the new corporate HPC?

Apache Hadoop has been generating a lot of headlines lately. For those who are not aware, Hadoop is an open source project that provides a distributed file system and a MapReduce framework for processing massive amounts of data. The primary hardware platform for Hadoop is a cluster of commodity servers. Data sets can easily reach the petabyte range, and jobs can easily spread across hundreds or thousands of compute servers.
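To make the MapReduce idea concrete, here is a minimal word-count sketch written for Hadoop Streaming (my choice of interface for illustration; native Hadoop jobs are usually written in Java). The mapper and reducer are plain scripts that read stdin and write tab-separated key/value pairs, and Hadoop handles distributing them across the nodes that hold blocks of the input file.

```python
#!/usr/bin/env python
# mapper.py -- emit "word<TAB>1" for every word read from stdin
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t%d" % (word.lower(), 1))
```

```python
#!/usr/bin/env python
# reducer.py -- sum the counts for each word (input arrives sorted by key)
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
```

The framework shuffles and sorts the intermediate keys between the map and reduce phases, so the reducer only has to handle one run of identical words at a time; a job is launched with something like `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input books -output counts` (paths and jar name will vary by installation).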

Open the #pod bay doors, @HAL

Despite overpromising the likes of HAL 9000, the Artificial Intelligence (AI) community has been making steady progress. Indeed, the famous Watson Jeopardy experiment was a great demonstration of the coming era of "smart systems." Other examples are Apple's Siri and smarter search engines (including Google, which seems to get smarter about its search results each year).

All of these efforts have several things in common: AI-based software, piles of data, and racks of commodity hardware. Popular conversations include terms like business intelligence, knowledge discovery, Big Data, Hadoop, and other new buzzwords. Is this yet another fad being oversold by the marketing types, or is this a game-changing set of technologies that will shape how we interact with almost everything we touch?

Does the speed of light limit how big a cluster we can build?

Recently, I started to think about physical limits and how they would affect the size of clusters. I did some back-of-the-envelope math to estimate how c (the speed of light) can limit cluster size. As clusters continue to grow and the push toward exascale performance continues, the following analysis may become more important in designing HPC systems. I want to preface this discussion, however, with a disclaimer that I thought about this for all of 20 minutes. I welcome variations or refinements on my ciphering.
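As a starting point for that kind of ciphering, the sketch below (my own framing, not the article's analysis) computes how far a signal can travel within a given latency budget, assuming signals in copper or fiber move at roughly two thirds of c.

```python
# Back-of-the-envelope: how far can a signal travel during a latency budget?
# Assumption (not from the original article): signals in copper or fiber
# propagate at roughly 2/3 the speed of light in vacuum.

C_VACUUM = 299_792_458.0        # speed of light in vacuum, meters/second
PROPAGATION_FACTOR = 2.0 / 3.0  # typical fraction of c in cable or fiber

def max_one_way_distance(latency_seconds):
    """Farthest apart two nodes can be if the whole budget is spent in flight."""
    return C_VACUUM * PROPAGATION_FACTOR * latency_seconds

for budget_ns in (1, 10, 100, 1000):
    meters = max_one_way_distance(budget_ns * 1e-9)
    print(f"{budget_ns:5d} ns budget -> at most {meters:8.1f} m of cable")
```

The numbers work out to about 0.2 meters of cable per nanosecond, so even a generous one-microsecond budget caps node separation at a couple hundred meters, which is one reason machine-room layout starts to matter as systems approach exascale.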

Much has changed in the supercomputing arena. Even you can get in the game!

Recently, Sebastian Anthony wrote an article for ExtremeTech entitled What Can You Do With A Supercomputer? His conclusion was "not much," and for many people he is largely correct. However, a deeper understanding may change the answer to "plenty."

He was mostly right when talking about the world's largest supercomputers. Indeed, one very workable past definition of a supercomputer was "any computer with at least a six-digit price tag." In the past, that was largely true, and it created a rather daunting barrier to entry for those who needed to crunch numbers. The cost was due to an architectural wall between supercomputers and the rest of computing. These systems were designed to perform math very quickly using vector processors. It all worked rather well until the cost of fabrication made building your own vector CPU prohibitively expensive.




©2005-2023 Copyright Seagrove LLC, some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. The Cluster Monkey Logo and Monkey Character are trademarks of Seagrove LLC.