Parallel Computing
Knuth isn't a huge fan of parallel computing. He acknowledges several fields where parallelism speeds things up, such as computer graphics, but remains unsatisfied with its impact on everyday computing. Tim Mattson, a proponent of OpenMP, on the other hand embraces parallel computing for its power-saving capability. Here we offer a computational-complexity perspective to reconcile some of the contentious issues around parallel computing, such as its longevity.
Computational complexity measures how the number of steps in a computation grows as the input grows. If the number of parallel machines is O(1), parallel speedup cannot change the order of computational complexity; it only affects the constant factor, since with p processors the running time drops at best from c·T(n) to (c/p)·T(n), and a constant p leaves Θ(T(n)) intact. From the traditional point of view, the order of computational complexity is of prime importance, on the presumption that data sets may grow astronomically, while the constant factor is of little interest. However, we wish to emphasize that in many practical and even critical applications, such as medical imaging, data sets are bounded by a fixed size. That size may be large, but the central challenge is not growing inputs; it is how many batch operations can be completed within a limited duration. A smaller constant factor allows more batch operations to be performed and thus leads to higher productivity.
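As a rough illustration (this sketch is ours, not Knuth's or Mattson's; the problem size, thread count, and timing harness are arbitrary choices), the OpenMP loop below sums n numbers with a fixed team of 4 threads. The work is still Θ(n); only the constant in front of n shrinks, by roughly the number of threads.

/* A minimal sketch, assuming a C compiler with OpenMP support (e.g. gcc -fopenmp). */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const long n = 10000000L;            /* fixed problem size */
    double *a = malloc(n * sizeof *a);
    if (!a) return 1;
    for (long i = 0; i < n; ++i)
        a[i] = 1.0 / (double)(i + 1);

    double t0 = omp_get_wtime();
    double sum = 0.0;
    /* p = 4 threads is a constant independent of n,
       so the asymptotic order of the loop is unchanged */
    #pragma omp parallel for reduction(+:sum) num_threads(4)
    for (long i = 0; i < n; ++i)
        sum += a[i];
    double t1 = omp_get_wtime();

    printf("sum = %f, time = %f s\n", sum, t1 - t0);
    free(a);
    return 0;
}

Comparing the timing with and without the pragma shows a roughly constant speedup, never a change in how the time scales with n.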
The constant factor in computational complexity should be emphasized for batch operations.
So far, textbooks on algorithms seldom raise the issue of the constant factor. We believe this is a mistake. Big-Theta analysis is useful when input sizes grow, but much less useful for batch operations on fixed-size inputs. And, yes, since parallel computing has the potential to lower the constant factor, its longevity is assured, and its implementations should be improved so that more batch applications can be built.
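To make the arithmetic concrete with invented numbers (no real benchmark is implied): suppose a single reconstruction of a fixed-size scan costs c·T seconds on one core, and a clinic has a window of D seconds per day. Serially it completes D/(c·T) reconstructions; with four cores and near-ideal speedup the constant drops to c/4 and it completes 4·D/(c·T). The asymptotic order is identical in both cases, yet daily throughput quadruples, and that throughput is exactly what the constant factor governs.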