Close to the Edge: If I Don't Use ECC, Have I Made an Error?
- Written by Douglas Eadline

A continuing installment of our (Close to the) Edge Computing series.
A continuing installment of our (Close to the) Edge Computing series.
The main memory in computing devices is an ephemeral repository of information. Processing units (CPUs) may analyze and change the information, but ultimately the results end up back in the main memory store. Of course, the information may move to nonvolatile memory or disk storage, which provides a more permanent resting place.
Main memory integrity is important. If errors occur in main memory, the consequences can range from nothing at all to a full crash of the entire computer. To detect and, where possible, correct memory errors, Error-Correcting Code (ECC) memory has been developed and deployed in systems where data errors can be harmful, such as real-time financial systems. The goal is that data written to memory should be the same when it is read back in the future.
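The core idea behind ECC is easy to show in code. Below is a minimal sketch of the classic Hamming(7,4) code in Python; real ECC DIMMs use a wider SECDED code (typically 72 bits protecting 64 data bits), but the single-bit-correction principle is the same:

```python
# Minimal sketch of Hamming(7,4): 4 data bits protected by 3 parity
# bits, enough to locate and correct any single flipped bit.

def encode(d):
    """Take 4 data bits, return a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4               # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4               # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Return (corrected data bits, error position or 0 if clean)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # recheck parity group 1
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # recheck parity group 2
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # recheck parity group 3
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = 1-based error position
    if pos:                         # single-bit error: flip it back
        c = c[:]
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]], pos

word = [1, 0, 1, 1]
stored = encode(word)
stored[4] ^= 1                      # simulate a single-bit memory flip
data, where = decode(stored)
assert data == word                 # the flip was found and corrected
```

Reading the word back yields the original data even though a bit flipped while "in memory," which is exactly the guarantee ECC provides for single-bit errors.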
Close to the Edge: No Data Center Needed Computing
- Written by Douglas Eadline

Introduction
Welcome to a new series on ClusterMonkey! While the news and articles have been a bit sparse lately, it is not because the head monkey has been idle. Indeed, there is so much to write about and so little time. Another issue we ran into was how to present all the recent, seemingly disparate projects under an easy-to-understand overarching theme. Welcome to edge computing.
Defining edge computing has become tricky because it now has a marketing buzz associated with it. Thus, like many over-hyped technology topics, it may take on several forms, yet it still has some core aspects that allow it to be treated as a "thing."
In this series, the definition of edge is going to be as specific as possible. In general, edge computing is that which does not take place in the data center or the cloud (hint: the cloud is a data center). Such a definition is too broad, however, since computing is everywhere (from cell phones to actual desktop workstations). A more precise definition of edge computing can be written as:
Data center level computing that happens outside of the physical data center or cloud.
That definition eliminates many smaller forms of computing but is still a little gray in terms of what "data center level computing" means. This category of computing usually operates 24/7 and provides a significantly higher level of performance and storage than mass-marketed personal systems.
Sledgehammer HPC
- Written by Douglas Eadline

HPC without coding in MPI is possible, but only if your problem fits into one of several high-level frameworks.
[Note: The following updated article was originally published in Linux Magazine in June 2009. The background presented in this article has recently become relevant due to the resurgence of things like genetic algorithms and the rapid growth of MapReduce (Hadoop). It does not cover deep learning.]
Not all HPC applications are created in the same way. There are applications like Gromacs, Amber, and OpenFoam that allow domain specialists to input their problem into an HPC framework. Although some work is required to "get the problem into the application," these are really application-specific solutions that do not require the end user to write a program. At the other end of the spectrum are the user-written applications. The starting points for these problems include a compiler (C/C++ or Fortran), an MPI library, and other programming tools. The work involved can range from small to large, as the user must concern themselves with the "parallel aspects of the problem." Note that all application software started out at this point at some time in the past.
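To make those "parallel aspects" concrete, here is a minimal sketch using the mpi4py Python bindings (assuming mpi4py and an underlying MPI library are installed). Each process computes a partial sum, and a collective operation combines the results:

```python
# Minimal sketch of user-level MPI programming: the programmer must
# manage ranks, distribute the work, and combine partial results.
# Run with, e.g.: mpirun -np 4 python partial_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()    # this process's ID (0..size-1)
size = comm.Get_size()    # total number of processes

# Each rank sums its own residue class of the integers 1..n.
n = 1000
local = sum(range(rank + 1, n + 1, size))

# A collective reduction combines the partial sums on rank 0.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum 1..{n} = {total}")   # prints: sum 1..1000 = 500500
```

Even in this toy example, the decomposition, communication, and result collection are the programmer's responsibility; that is the work the high-level frameworks discussed here take off your hands.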
Big Data In Little Spaces: Hadoop And Spark At The Edge
- Written by Administrator

Ever wonder what edge computing is all about? Data happens and information takes work. Estimates are that by 2020, 1.7 megabytes of new data will be created every second for every person in the world. That is a lot of raw data.
Two questions come to mind: what are we going to do with it, and where are we going to keep it? Big Data is often described by the three Vs (Volume, Velocity, and Variability), and note that not all three need apply. What is missing is the letter "U," which stands for Usability. A data scientist will first ask: how much of my data is usable? Data usability can take several forms and includes things like quality (is it noisy, incomplete, accurate?) and pertinence (is there any extraneous information that will not make a difference to my analysis?). There is also the issue of timeliness. Is there a "use by" date for the analysis, or might the data be needed in the future for some as-yet-unknown reason? The usability component is hugely important and often determines the size of any scalable analytics solution. Usable data is not the same as raw data.
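To make the usability point concrete, here is a minimal sketch in Python that screens raw records before analysis; the record fields, thresholds, and quality rules are all hypothetical:

```python
# Minimal sketch of a data-usability pass: hypothetical sensor
# records are screened for completeness and plausibility before
# any analysis is attempted.
raw = [
    {"temp": 21.4, "ts": "2019-06-01T12:00"},
    {"temp": None, "ts": "2019-06-01T12:01"},    # incomplete
    {"temp": 999.0, "ts": "2019-06-01T12:02"},   # implausible (noisy)
    {"temp": 21.6, "ts": "2019-06-01T12:03"},
]

def usable(rec):
    """Apply simple quality rules; real pipelines use many more."""
    if rec["temp"] is None:                  # completeness check
        return False
    if not -40.0 <= rec["temp"] <= 60.0:     # plausibility check
        return False
    return True

clean = [r for r in raw if usable(r)]
print(f"{len(clean)} of {len(raw)} records usable")  # 2 of 4 usable
```

Only the usable subset needs to flow into a Hadoop or Spark analysis, which is one reason usability, rather than raw volume, often sets the size of the solution.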
Get the full article at The Next Platform. You may recognize the author.
Answering The Nagging Apache Hadoop/Spark Question
- Written by Douglas Eadline
(or How to Avoid the Trough of Disillusionment)

A recent blog post, Why not so Hadoop?, is worth reading if you are interested in big data analytics, Hadoop, Spark, and all that. The article contains the 2015 Gartner Hype Cycle, and the 2016 version is worth examining as well. Some points similar to those in the blog post can be made here:
- Big data was at the "Trough of Disillusionment" stage in 2014 but does not appear in the 2015/2016 Hype Cycles.
- The "Internet of Things" (a technology that is expected to fill the big data pipeline) sat at the peak for two years and has now been given "platform status."