Select News

The news in this category has been selected by us because we thought it would be interesting to hard-core cluster geeks. Of course, you don't have to be a cluster geek to read the news stories.

Intel and Micron just announced a new type of memory that does not use transistors. Called 3D XPoint memory, the new technology is reported to be 1,000 times faster than NAND in both reads and writes, as well as ten times denser, with 1,000 times the endurance. If this is true and the price is right, a real disruption awaits the industry. The performance numbers are orders of magnitude beyond anything else (HP memristor, where are you?).

Intel states that it is not a phase-change memory process, a memristor technology, or a spin-transfer torque technique. Get more details here: What a New Class of Memory Means for Future Applications (The Platform).

Editor's Note: The stories and articles have slowed to a trickle because of this: Hadoop 2 Quick-Start Guide. The book is now in production, so more Monkey goodness shall be forthcoming.

Update: Intel has announced Optane-branded drives and memory sticks built using 3D XPoint memory. Get more details from The Platform article: Intel Reveals Plans For Optane 3D XPoint Memory.

Wire-speed and beyond, oh wait

A recent Twitter post by Chris Samuel (@chris_bloke) pointed out some optimizations in Linux networking. There is an article explaining Bulk network packet transmission. The comments are worth reading as well, including this blog post by Jesper Dangaard Brouer.

Addison Snell of Intersect360 Research presented the firm’s findings on HPC Software Environments in 2014 to the members of the HPC500, an elite user group of organizations that represent a worldwide, diverse collection of established HPC professionals from a cross-section of academic, government, and commercial organizations.

The presentation drew on findings from several different reports. Intersect360 Research recently released two HPC User Site Census reports, one on applications and one on middleware. The reports examined the suppliers, products, and primary usage of the application software and middleware reported at a number of sites over the previous year. The presentation also included findings from the research firm’s annual HPC user budget map, which tracks spending patterns and shows the percentage of spending by category, as well as from a special study on “The Big Data Opportunity for HPC,” which surveyed both HPC and non-HPC enterprises (through a partnership with Gabriel Consulting) on Big Data applications and infrastructures.

A petabyte of space should be enough for anybody

Quick: you just got a petabyte of storage. How fast can you fill it? And once it is full, how fast can you dump it? If you estimated anything more than 14 hours, then you are too slow. Way too slow. By combining well-designed hardware with BeeGFS (formerly FhGFS), you get the Scalable Informatics FastPath Unison storage appliance: a big chunk of storage that delivers 20 gigabytes/sec sustained. You can learn more details from the interview on insideHPC.
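The 14-hour figure is easy to sanity-check with back-of-the-envelope arithmetic (a quick sketch; we assume decimal storage units, i.e. 1 PB = 10^15 bytes, as storage vendors typically count):

```python
# Time to fill (or dump) 1 PB at a sustained 20 GB/s.
PETABYTE = 10**15        # bytes, decimal petabyte
RATE = 20 * 10**9        # bytes per second, sustained

seconds = PETABYTE / RATE    # 50,000 seconds
hours = seconds / 3600       # ~13.9 hours
print(f"{hours:.1f} hours")  # → 13.9 hours
```

At a sustained 20 GB/s the whole petabyte moves in just under 14 hours, which is where the "anything more than 14 hours is too slow" line comes from.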

Free HPC secrets! Your key to success!

The Council on Competitiveness recently released Solve. The Exascale Effect: the Benefits of Supercomputing Investment for U.S. Industry (pdf). As the federal government pursues exascale computing to achieve national security and science missions, Solve examines how U.S.-based companies also benefit from leading-edge computation and new technologies first funded by government.



©2005-2023 Copyright Seagrove LLC, Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.