Introduction
Welcome to a new series on ClusterMonkey! While the news and articles have been a bit sparse lately, it is not because the head monkey has been idle. Indeed, there is so much to write about and so little time. Another issue we ran into was how to present all the recent projects--which may seem rather disparate--under an easy-to-understand overriding theme. Welcome to edge computing.
Defining edge computing has become tricky because it now has a marketing buzz associated with it. Thus, like many over-hyped technology topics, it may take on several forms and have some core aspects that allow it to be treated as a "thing."
In this series, the definition of edge is going to be as specific as possible. In general, edge computing is that which does not take place in the data center or the cloud (hint: the cloud is a data center). Such a definition is too broad, however, since computing is everywhere (from cell phones to actual desktop workstations). A more precise definition of edge computing can be written as:
Data center level computing that happens outside of the physical data center or cloud.
That definition seems to eliminate many smaller forms of computing but still is a little gray in terms of "data center level computing." This category of computing usually operates 24/7 and provides a significantly higher level of performance and storage than mass-marketed personal systems.
A good way to designate this type of edge computing is to categorize it as "No Data Center Needed" (NDN) computing. Thus, a good definition is "those systems that can rival data center system performance, storage, and networking, but are not physically housed in a data center or in the cloud." This categorization is still fairly broad but does keep the focus on the type of computing not normally found in the consumer marketplace. For those interested in technical coverage of edge computing, have a look at the Edge Section on The Next Platform website, and of course, check back here or sign up for our newsletters to read about our continuing series.
Location, Location, Location
In business, location is often a key to success. For edge computing, location is a constraint. Inside a data center, the power, noise, and heat envelope can be quite large and flexible. Outside the data center, the location may actually determine the level of edge computing that can take place. With regard to makeshift "personal data centers" in labs, offices, and closets (I have seen quite a few of these), NDN computing should not require "building a personal remote data center" or require changes to the surrounding infrastructure. To provide workable plug-and-play NDN computing, a standard environment needs to be defined. Unlike the data center, the power, noise, and heat envelope must be smaller and based on unmodified standard office (work or home), factory, lab, and classroom environments. One way to define an NDN edge computing environment is to consider the chart in Figure One below.
Figure One: Defining the edge computing envelope
From a design standpoint, an NDN system should fit within the green cube in Figure One. There is a bit of wiggle room, but in general, the volume of the NDN cube can't get that much bigger before data center level (makeshift or otherwise) services are needed. If systems are engineered to work within the NDN cube, there is a good chance they can be deployed almost anywhere.
This series of articles is going to address engineering high performance--numeric and data--systems to fit within the NDN envelope in Figure One. In other words, our data processing environment is defined by a fixed set of specifications and we will tune the NDN design to fit within these parameters. Thus, our definition of location is derived from power, heat, and noise--all of which are NOT independent. The design aspects and how these play out in terms of other issues are discussed below.
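To make the constraint concrete, here is a minimal sketch of what "designing to the envelope" looks like in code. The limit values and the Envelope/Design names below are illustrative assumptions, not figures taken from Figure One; the one grounded relationship is that essentially every watt a system draws eventually ends up as heat in the room, which is why the power and heat axes cannot be treated independently.

# Illustrative sketch only: the limit values below are placeholder
# assumptions, not the actual NDN envelope from Figure One.
from dataclasses import dataclass

@dataclass
class Envelope:
    max_power_watts: float   # wall-circuit power budget
    max_noise_dba: float     # acceptable sound level in a shared room
    max_heat_btu_hr: float   # heat the room can shed without extra cooling

@dataclass
class Design:
    power_watts: float
    noise_dba: float

    @property
    def heat_btu_hr(self) -> float:
        # Essentially all electrical power becomes heat: 1 W is about 3.412 BTU/hr
        return self.power_watts * 3.412

def fits(design: Design, env: Envelope) -> bool:
    """True only if the design stays inside the envelope on every axis."""
    return (design.power_watts <= env.max_power_watts
            and design.noise_dba <= env.max_noise_dba
            and design.heat_btu_hr <= env.max_heat_btu_hr)

# Assumed office-like envelope and candidate system, for illustration only
office = Envelope(max_power_watts=1440, max_noise_dba=45, max_heat_btu_hr=5000)
candidate = Design(power_watts=1200, noise_dba=40)
print(fits(candidate, office))  # True: 1200 W, ~4094 BTU/hr, 40 dBA are all within the limits

The point of the sketch is not the particular numbers but the workflow: the envelope is fixed by the location, and the design is tuned until every axis passes.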
Available Power
Available power is a fixed constraint for most NDN computing. Adding new electrical service may not be possible or may be prohibitively expensive. In the U.S., typical electrical service is either 15 or 20 Amps of current at 120 Volts. That translates into a maximum of 1800 Watts (15A × 120V) for most office, classroom, or residential locations. Assuming 20% headroom for safety, the usable power is about 1440 Watts. Thus, on any single circuit, an NDN system should not exceed this amount. However, this assumes exclusive use of the circuit. Most outlets are not single runs and are shared with other outlets and possibly overhead lights. These considerations are important when planning power use; otherwise, when an NDN system places a large computational load on the circuit, a breaker may trip. There may also be other shared devices on the same circuit, including monitors, lamps, audio equipment, and even a coffee pot. Thus, the true available power often takes some investigation. There is a hard power ceiling of 1440 Watts (1920 Watts for a 20A service) on most circuits, but the actual ceiling may be much less depending on how the location is wired. Therefore, when designing NDN systems, it is best not to assume--unless verified--that the full 1440 Watts is available and to design accordingly (e.g., a good baseline for unknown circuits is 500-600W). Keep in mind that big CPUs and/or GPUs can add up to 250W each to the underlying system.
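Since this arithmetic comes up every time a new location is considered, it is handy to keep it in a short helper. The sketch below simply restates the numbers from this section (15A or 20A service at 120V, 20% safety headroom, roughly 250W per large CPU or GPU); the shared-load figure is a made-up example, since the real value always takes some investigation.

# Back-of-the-envelope circuit budget using the numbers from this section.
def usable_watts(amps: float, volts: float = 120.0, headroom: float = 0.20) -> float:
    """Circuit capacity minus a safety margin (20% by default)."""
    return amps * volts * (1.0 - headroom)

def remaining_budget(amps: float, shared_load_watts: float) -> float:
    """Power left for the NDN system after other devices on the same circuit."""
    return usable_watts(amps) - shared_load_watts

print(usable_watts(15))            # 1440.0 W on a 15A/120V circuit
print(usable_watts(20))            # 1920.0 W on a 20A/120V circuit
print(remaining_budget(15, 300))   # 1140.0 W left if lights, monitors, etc. draw ~300 W (assumed)

# At up to ~250 W per big CPU or GPU, an 1140 W budget covers roughly
# four such devices plus the rest of the system.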
Another issue lies in the number of wall receptacles. Ideally, the use of plug strips should be avoided and an NDN system should plug directly into a single wall receptacle. This can be difficult if the system has multiple power supplies or components (e.g., an Ethernet switch) that each need their own receptacle. The use of a multi-plug uninterruptible power supply (UPS) or power conditioner can help with this situation; however, several smaller AC/DC power supplies normally create more heat than a single large power supply and can lead to inefficient power use.