- Published on Tuesday, 06 October 2015 12:27
- Written by Douglas Eadline
- Hits: 142
A recent article on Phys.org announces a breakthrough in quantum computing. The article, Crucial hurdle overcome in quantum computing, describes how a team at the University of New South Wales (UNSW) in Sydney, Australia has created a working quantum gate in silicon. This advance paves the way for quantum computing to become a reality in the years to come. Background on quantum computing can be found in this ClusterMonkey article: A Smidgen of Quantum Computing.
According to Dr Menno Veldhorst, a UNSW Research Fellow and the lead author of the Nature paper:
"We've morphed those silicon transistors into quantum bits by ensuring that each has only one electron associated with it. We then store the binary code of 0 or 1 on the 'spin' of the electron, which is associated with the electron's tiny magnetic field".
- Published on Wednesday, 23 September 2015 14:23
- Written by Douglas Eadline
- Hits: 97
From the best acronym of the day (BAD) department
The Adept project is developing metrics and tools to help optimize the energy-efficient use of parallel technologies. According to the web site, "Adept builds on the expertise of software developers from high-performance computing (HPC) to exploit parallelism for performance, and on the expertise of embedded systems engineers in managing energy usage. Adept is developing a tool that can guide software developers and help them to model and predict the power consumption and performance of parallel software and hardware."
Recently, Adept released a benchmark suite to help understand and measure power usage on HPC and embedded systems. The suite includes a wide range of benchmarks covering both high-performance embedded and high-performance technical computing. The benchmarks are designed to characterize the efficiency (in terms of both performance and energy) of computer systems, from the hardware and system software stack to the compilers and programming models. More information about the benchmark suite can be found on the EPCC Blog Page.
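The Adept suite ships with its own measurement harness, but for a taste of what low-level energy measurement looks like on commodity Linux hardware, here is a rough sketch that samples the RAPL powercap counter around a workload. Treat everything here as an assumption to verify on your own machine: the counter path varies by CPU and kernel, reading it usually requires root, and the busy-loop is just a stand-in workload.

```java
// Rough sketch: measure energy around a workload via the Linux RAPL
// powercap interface. Not the Adept tool itself, just an illustration.
import java.nio.file.Files;
import java.nio.file.Path;

public class EnergyProbe {
  // Package-level energy counter in microjoules.
  // ASSUMED path; it differs between machines and kernels.
  static final Path COUNTER =
      Path.of("/sys/class/powercap/intel-rapl:0/energy_uj");

  static long readMicroJoules() throws Exception {
    return Long.parseLong(Files.readString(COUNTER).trim());
  }

  public static void main(String[] args) throws Exception {
    long before = readMicroJoules();
    long t0 = System.nanoTime();

    // Stand-in workload: burn some CPU.
    double x = 0;
    for (long i = 1; i < 200_000_000L; i++) x += Math.sqrt(i);

    long after = readMicroJoules();
    double joules = (after - before) / 1e6;  // counter can wrap on long runs
    double seconds = (System.nanoTime() - t0) / 1e9;
    System.out.printf("%.2f J over %.2f s (avg %.2f W), result=%.1f%n",
        joules, seconds, joules / seconds, x);
  }
}
```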
Hopefully the ClusterMonkey crew will carve out some time to play with these tools and report back on their experiences.
- Published on Friday, 03 April 2015 14:04
- Written by Administrator
- Hits: 607
A quick note about some cool new features we are adding to ClusterMonkey. First, we have a new white paper repository. This repository will provide exclusive content that is only available to registered users. Registering on ClusterMonkey is simple (read below for more information on the registration rationale). If you have not already, go to the Registration Page and register (name, email, and password; that is it!). When you are logged in, a "Downloads" item will appear in the blue main menu box on the left side of the page.
When you click on the "Downloads" link, you will be able to download:
- Introducing Limulus Hadoop - Learn about the capabilities of a true desk-side Hadoop workstation. Complete with benchmarks.
- The Limulus HPC Appliance - Learn about the exciting capabilities of the cool, quiet, and fast personal HPC appliance. Includes HPL and NAS benchmarks.
- HPC for Dummies (2nd Edition) - a freely available book from AMD published a few years ago. Provides a good intro to HPC.
- Published on Wednesday, 29 July 2015 09:27
- Written by Number Six
- Hits: 321
Intel and Micron just announced a new type of memory that does not use transistors. Called 3D XPoint memory, the new technology is reported to be 1,000 times faster in both read and write than NAND, as well as ten times denser with 1,000 times more endurance. If this is true and the price is right, a real disruption awaits the industry. The performance numbers are orders of magnitude beyond anything else (HP memristor, where are you?).
Intel states that it is not a phase-change memory process, a memristor technology, or a spin-transfer torque technique. Get more details here: What a New Class of Memory Means for Future Applications (The Platform).
Editor's Note: The stories and articles have slowed to a trickle because of this: Hadoop 2 Quick-Start Guide. The book is now in production, so more Monkey goodness shall be forthcoming.
Update: Intel has announced Optane-branded drives and memory sticks built using 3D XPoint memory. Get more details from The Platform article: Intel Reveals Plans For Optane 3D XPoint Memory.
- Published on Tuesday, 31 March 2015 08:58
- Written by Douglas Eadline
- Hits: 1148
From the elephant in the room department
Talk to most people about Apache™ Hadoop® and the conversation will quickly turn to the MapReduce algorithm. MapReduce works quite well as a processing model for many types of problems. In particular, when multiple mapping processes are used to span TBytes of data, the power of a scalable Hadoop cluster becomes evident. In Hadoop version 1, the MapReduce engine was one of two core components; the other was the Hadoop Distributed File System (HDFS). Once data is stored and replicated in HDFS, MapReduce can move the computation to the servers on which the data reside. The result is a very fast and parallel approach to problems with large amounts of data. But MapReduce is not the whole story.
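For anyone who has heard the term but never seen the model in code, the canonical example is word count. A minimal sketch using the standard Hadoop MapReduce Java API is shown below (it needs the hadoop-client libraries on the classpath): mappers emit a (word, 1) pair for every word in their local slice of the input, and reducers sum the counts for each word.

```java
// Classic Hadoop MapReduce word count, trimmed to the essentials.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // The map step runs on the node holding each HDFS block and emits
  // a (word, 1) pair for every word in its slice of the input.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // The reduce step receives all the counts for a given word and sums them.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Package the class into a jar and launch it with "hadoop jar wordcount.jar WordCount", giving HDFS input and output directories. The data locality described above is what makes this scale: the framework ships the small map task to the data rather than dragging TBytes of data across the network.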