- Written by Douglas Eadline
- Hits: 857
A recent article on Phys.org has announced a breakthrough in quantum computing. The article, Crucial hurdle overcome in quantum computing, describes how a team at the University of New South Wales (UNSW) in Sydney, Australia, has created a working quantum logic gate in silicon. This advance paves the way for quantum computing to become a reality in the years to come. Background on quantum computing can be found in this Cluster Monkey article: A Smidgen of Quantum Computing.
According to Dr. Menno Veldhorst, a UNSW Research Fellow and the lead author of the Nature paper:
"We've morphed those silicon transistors into quantum bits by ensuring that each has only one electron associated with it. We then store the binary code of 0 or 1 on the 'spin' of the electron, which is associated with the electron's tiny magnetic field."
- Written by Douglas Eadline
- Hits: 800
From the best acronym of the day (BAD) department
The Adept project is bringing metrics and tools to help optimize energy-efficient use of parallel technologies. According to the web site, "Adept builds on the expertise of software developers from high-performance computing (HPC) to exploit parallelism for performance, and on the expertise of embedded systems engineers in managing energy usage. Adept is developing a tool that can guide software developers and help them to model and predict the power consumption and performance of parallel software and hardware."
Recently, Adept released a benchmark suite to help understand and measure power usage for HPC and embedded systems. The suite covers a wide range of benchmarks spanning both high-performance embedded and high-performance technical computing. The benchmarks are designed to characterize the efficiency (in terms of both performance and energy) of computer systems, from the hardware and system software stack to the compilers and programming models. More information about the benchmark suite can be found on the EPCC Blog Page.
Hopefully the ClusterMonkey crew will carve out some time to play with these tools and report back on their experiences.
- Written by Administrator
- Hits: 903
A quick note about some cool new features we are adding to ClusterMonkey. First, we have a new white paper repository. This repository will provide exclusive content that is only available to registered users. Registering on ClusterMonkey is simple (read below for more information on the registration rationale). If you have not already, go to the Registration Page and register (name, email, and password, that is it!). When you are logged in, a "Downloads" item will appear in the blue main menu box on the left side of the page.
When you click on the "Downloads" link, you will be able to download:
- Introducing Limulus Hadoop - Learn about the capabilities of a true desk-side Hadoop workstation. Complete with benchmarks.
- The Limulus HPC Appliance - Learn about the exciting capabilities of the cool, quiet, and fast personal HPC appliance. Includes HPL and NAS benchmarks.
- HPC for Dummies (2nd Edition) - a freely available book from AMD published a few years ago. Provides a good intro to HPC.
- Written by Number Six
- Hits: 717
Intel and Micron just announced a new type of memory that does not use transistors. Called 3D XPoint memory, the new technology is reported to be 1,000 times faster in both read and write than NAND, as well as ten times denser, with 1,000 times the endurance. If this is true and the price is right, a real disruption awaits the industry. The performance numbers are orders of magnitude beyond anything else (HP memristor, where are you?).
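To put the claimed multipliers in perspective, here is a back-of-the-envelope sketch. The NAND baseline figures below are assumptions chosen for illustration only (typical order-of-magnitude values, not from the announcement):

```python
# Back-of-the-envelope look at the reported 3D XPoint multipliers.
# Baseline NAND figures are illustrative assumptions, not vendor specs.
nand_read_latency_us = 100.0    # assumed NAND page read, ~100 microseconds
nand_endurance_cycles = 3_000   # assumed NAND program/erase cycles

xpoint_read_latency_us = nand_read_latency_us / 1000    # "1,000 times faster"
xpoint_endurance_cycles = nand_endurance_cycles * 1000  # "1,000 times more endurance"

print(f"read latency: {xpoint_read_latency_us} us")       # 0.1 us (100 ns)
print(f"endurance: {xpoint_endurance_cycles:,} cycles")   # 3,000,000 cycles
```

Under these assumed baselines, reads drop from the ~100 microsecond range into the ~100 nanosecond range, which is what moves this class of memory from "storage" toward "memory" territory.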
Intel states that it is not a phase-change memory process, a memristor technology, or a spin-transfer torque technique. Get more details here: What a New Class of Memory Means for Future Applications (The Platform).
Editor's Note: The stories and articles have slowed to a trickle because of this: Hadoop 2 Quick-Start Guide. The book is now in production, so more Monkey goodness shall be forthcoming.
Update: Intel has announced Optane branded drives and memory sticks built using 3D XPoint memory. Get more details from the Platform article: Intel Reveals Plans For Optane 3D XPoint Memory
- Written by Douglas Eadline
- Hits: 1629
From the elephant in the room department
Talk to most people about Apache™ Hadoop® and the conversation will quickly turn to using the MapReduce algorithm. MapReduce works quite well as a processing model for many types of problems. In particular, when multiple mapping processes are used to span TBytes of data, the power of a scalable Hadoop cluster becomes evident. In Hadoop version 1, the MapReduce engine was one of two core components. The other component was the Hadoop Distributed File System (HDFS). Once data was stored and replicated in HDFS, the MapReduce engine could move computation to the servers on which the specific data resided. The result is a very fast and parallel computational approach to problems with large amounts of data. But MapReduce is not the whole story.
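The map/shuffle/reduce flow described above can be sketched in plain Python. This is a minimal single-process illustration of the model (a word count, the canonical example), not Hadoop's actual implementation; on a real cluster each map task runs next to its data block and each reduce group is processed in parallel:

```python
from collections import defaultdict

def map_phase(record):
    # Emit a (word, 1) pair for every word in the input line.
    for word in record.split():
        yield (word.lower(), 1)

def reduce_phase(key, values):
    # Sum all counts emitted for one word.
    return (key, sum(values))

def mapreduce(records):
    # Shuffle: group mapped values by key.
    groups = defaultdict(list)
    for record in records:
        for key, value in map_phase(record):
            groups[key].append(value)
    # Reduce each key group independently.
    return dict(reduce_phase(k, v) for k, v in groups.items())

counts = mapreduce(["the quick brown fox", "the lazy dog"])
print(counts["the"])  # 2
```

The key property is that map tasks never communicate with each other, which is what lets Hadoop scale the mapping step across thousands of servers.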