I’ve been blogging about GPUs recently, and I think you can tell it’s because I’m excited about the technology. General Purpose Computing on the GPU (GPGPU) promises large performance gains in computationally heavy software, which we find immensely useful. In the past, we’ve engineered web-based applications (see: SmartConservation) that could run complex models by implementing a process-queuing architecture. While those systems run on the web, processing can still take several minutes, so they can neither provide a responsive user experience nor support many users. We’ve also engineered a system that performs fast, distributed raster calculations (see: Walkshed, powered by DecisionTree).
One of the reasons GPGPU is so promising is the growing number of processing cores available on affordable graphics cards, which increases computational capacity by running many processors in parallel. What’s interesting is that this technique is not new. Timothy Mattson, blogging at Intel, has been doing parallel computing since the mid-1980s, and the Library of Congress holds a book on parallel computing structures and algorithms dating back to 1969.
As we delve deeper into our work improving Map Algebra operations, important differences in algorithmic approaches and implementations become apparent: not all parallel architectures are the same. One might be tempted to think that, when switching from single-threaded CPU logic to multithreaded/parallel logic, there would be a single, universal model of parallel computing. This is definitely not the case.
Three of the most popular types of parallel computing today are:
- Shared-memory Multi-Processors (SMP)
- Distributed-memory Massively Parallel Processors (MPP)
- Cluster computing
Each type of parallel computing has its benefits and drawbacks; which one fits really depends on what kind of computing you need to do. I’ll describe these common computing types in detail, starting with the ‘traditional’ CPU model.
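To make the distinction concrete before diving in, here is a minimal sketch (my own illustration, not code from our projects) of a Map Algebra “local” operation: adding two rasters cell by cell, first single-threaded, then split across workers in the shared-memory (SMP) style, where every worker reads the same arrays directly. Python threads share one address space, which mirrors the SMP model, though Python’s GIL limits real CPU speedup here; a production SMP implementation would more likely use C with OpenMP.

```python
# Illustrative sketch of a Map Algebra "local" operation (cell-by-cell
# raster addition), single-threaded vs. shared-memory parallel style.
from concurrent.futures import ThreadPoolExecutor

def add_rasters(a, b):
    """Single-threaded local operation: one output cell per input pair."""
    return [x + y for x, y in zip(a, b)]

def add_rasters_smp(a, b, workers=4):
    """Shared-memory style: each worker computes one contiguous slice.

    All workers read the same lists `a` and `b` directly -- no data is
    copied or sent over a network, which is the hallmark of SMP.
    """
    n = len(a)
    step = (n + workers - 1) // workers  # ceil(n / workers) cells per worker
    starts = range(0, n, step)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(
            lambda i: add_rasters(a[i:i + step], b[i:i + step]), starts
        )
    # Stitch the per-worker slices back into one output raster.
    return [cell for part in parts for cell in part]
```

Contrast this with the distributed-memory (MPP) and cluster models described below, where each node holds only its own slice of the rasters and results must be exchanged over an interconnect rather than read from common memory.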