

The use of deep neural networks (DNNs) in terrestrial applications went from niche to widespread in a few years, thanks to relatively inexpensive hardware for both training and inference and to the availability of large datasets. The applicability of this paradigm to space systems, where neither large datasets nor inexpensive hardware are readily available, is more difficult and thus still rare. This paper analyzes the impact of DNNs on the system-level capabilities of space systems in terms of on-board decision making (OBDM) and identifies the specific criticalities of deploying DNNs on satellites. The workload of DNNs for on-board image and telemetry analysis is analyzed, and the results are used to drive the preliminary design of a RISC-V vector processor to be employed as a generic platform enabling energy-efficient OBDM for both payload and platform applications. The design of the memory subsystem is carried out in detail to allow full exploitation of the computational resources in typically resource-constrained space systems.
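While the processor itself is only at a preliminary design stage here, the kind of kernel such a vector platform targets is easy to illustrate: the multiply-accumulate loops that dominate DNN inference. Below is a minimal strip-mined dot product (the inner loop of convolutional and fully connected layers) written against the RISC-V Vector extension v1.0 C intrinsics. This is a generic sketch assuming a recent toolchain (e.g., GCC/Clang with -march=rv64gcv and the __riscv_-prefixed intrinsics), not the platform designed in the paper.

    #include <riscv_vector.h>
    #include <stddef.h>

    /* Strip-mined dot product, the core MAC kernel of DNN layers.
     * Generic RVV 1.0 intrinsics sketch; not the paper's processor. */
    float dot_rvv(const float *a, const float *b, size_t n)
    {
        float sum = 0.0f;
        while (n > 0) {
            size_t vl = __riscv_vsetvl_e32m8(n);            /* elements in this strip */
            vfloat32m8_t va = __riscv_vle32_v_f32m8(a, vl); /* unit-stride loads */
            vfloat32m8_t vb = __riscv_vle32_v_f32m8(b, vl);
            vfloat32m8_t vp = __riscv_vfmul_vv_f32m8(va, vb, vl);
            /* Reduce this strip to a scalar and accumulate. */
            vfloat32m1_t vz = __riscv_vfmv_s_f_f32m1(0.0f, 1);
            vfloat32m1_t vs = __riscv_vfredusum_vs_f32m8_f32m1(vp, vz, vl);
            sum += __riscv_vfmv_f_s_f32m1_f32(vs);
            a += vl;
            b += vl;
            n -= vl;
        }
        return sum;
    }

The vector-length-agnostic strip-mining (one vsetvl per iteration) is what lets the same binary scale with whatever vector length the final hardware implements, one reason vector ISAs are attractive as a generic platform for resource-constrained OBDM.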
The success of deep neural networks (DNNs) for terrestrial applications has been mainly due to the availability of large datasets (i.e., the rise of "big data") and of relatively inexpensive hardware that can run learning and inference in reasonable timescales, for instance, graphics processing units (GPUs). The space industry looks at this phenomenon with interest, although the availability of large datasets for space applications is limited, and the hardware employed in space lags behind its commercial counterpart in terms of performance. One of the main hardware issues faced by the space industry is that the platforms employed in terrestrial applications cannot be reused in a straightforward way, given the specific constraints of satellite data systems, especially in terms of robustness to ionizing radiation.

For instance, one GPU tested under irradiation with a high-energy proton beam is reported to fail roughly every 43 s. The main reason behind this very low mean time to failure (MTTF) is that GPUs are much larger (e.g., 2.2 billion transistors, which corresponds to roughly 550 MGE if we assume four transistors per gate equivalent (GE)) than the single-core, single-issue processors typically employed in space (890 kGE for a representative example). As a matter of fact, the failure rate of a processor (for a given technology and environment) is proportional to its area (i.e., to its number of sequential elements, when only upsets in sequential elements are considered). Therefore, even employing a Rad Hard By Design (RHBD) technology, a GPU is expected to fail almost three orders of magnitude more often than a state-of-the-art space processor. A larger soft-error vulnerability, however, is not the only reason why simple microarchitectures with low parallelism still make up the vast majority of processors employed in space.
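As a sanity check on the "almost three orders of magnitude" claim, the stated proportionality between failure rate and area (here measured in gate equivalents) can be applied directly to the figures quoted above, with $\lambda$ denoting the upset-driven failure rate:

$$\frac{\lambda_{\mathrm{GPU}}}{\lambda_{\mathrm{space}}} \approx \frac{A_{\mathrm{GPU}}}{A_{\mathrm{space}}} = \frac{2.2\times10^{9}/4\ \mathrm{GE}}{890\ \mathrm{kGE}} = \frac{550\ \mathrm{MGE}}{0.89\ \mathrm{MGE}} \approx 6.2\times10^{2} \approx 10^{2.8}.$$

This is indeed just short of three orders of magnitude, before any additional derating (e.g., differences in cell-level hardness between the two technologies) is taken into account.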
