Parallel Programming: Soon the Only Game in Town

The news is not that parallel programming is here—it has been here for a long time.

The news is that parallel programming will shortly be pervasive, from embedded systems to computer-aided design/computer-aided manufacturing (CAD/CAM) workstations, supervisory systems, and high-performance computing clusters.

“We’re seeing more and more complex control algorithms,” says Casey Weltzin, product manager for LabView Real-Time at test and automation supplier National Instruments Corp. (www.ni.com), in Austin, Texas. “Automation systems will need to be more reliable to incorporate more features. This will require greater processing speed. But we won’t get that performance by relying on faster clock rates to run traditional sequential programs faster. We’re not likely to move much above the current 3 gigahertz (GHz) because of fundamental power consumption and dissipation issues. Instead, we’ll get the performance by distributing the tasks that together solve a single problem across multi-core processors. The only way to realize the potential of these newer processors is with parallel programming strategies.”

“Parallel programming will be ubiquitous. Let there be no doubt about it,” said Sanjiv Shah in a 2007 interview for Thinking Parallel (www.thinkingparallel.com). Shah is a senior principal engineer at Intel Corp. (www.intel.com), the Santa Clara, Calif.-based supplier of multi-core microprocessors, and an architect of OpenMP, the most widely adopted parallel programming standard using compiler directives. “Fast movers will use parallelism as a competitive advantage, and sequential applications may survive due to installed user base and features. But, over time, as more and more cores become available, sequential applications will be at a bigger disadvantage. They will be ignoring too much computing power to compete and survive.”

How it’s done
“Parallel programming means mapping the code into ‘threads,’ ” says Weltzin. “A thread is just a piece of code that tells the operating system that it can run in parallel with other pieces of code. Each thread can then run on a different processor core.”
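A minimal sketch of that idea in standard C++ (the task names here are illustrative, not drawn from any particular automation system):

#include <iostream>
#include <thread>

// Each function below is a "piece of code" in Weltzin's sense.
void read_sensors()   { std::cout << "reading sensors\n"; }
void update_display() { std::cout << "updating display\n"; }

int main() {
    // Creating a std::thread tells the operating system that this code
    // may run in parallel; each thread may land on a different core.
    std::thread t1(read_sensors);
    std::thread t2(update_display);

    t1.join();  // wait for both threads to finish
    t2.join();
}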

Parallel development begins with discovering parallelism in the problem that the program is intended to solve. For example, “Update all pixels in an image” exhibits data parallelism. The same task, “update,” is performed on each pixel datum. On the other hand, “Evaluate multiple properties within a chemical reaction system” exhibits task parallelism. Both the task (“evaluate”) and the data vary. Consultation with domain experts can reveal parallelism that is far less apparent.
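The distinction is easy to see in code. In this sketch (with hypothetical function names), the first half splits one task, “brighten,” across the pixel data, while the second half runs two different evaluation tasks concurrently:

#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Data parallelism: the same task applied to every pixel in a range.
void brighten(std::vector<int>& pixels, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i) pixels[i] += 10;
}

// Task parallelism: different tasks on different data (bodies elided).
void evaluate_temperature() { /* ... */ }
void evaluate_pressure()    { /* ... */ }

int main() {
    std::vector<int> image(1'000'000, 128);

    // Data parallel: split the pixel range across two threads.
    std::thread a(brighten, std::ref(image), std::size_t{0}, image.size() / 2);
    std::thread b(brighten, std::ref(image), image.size() / 2, image.size());
    a.join(); b.join();

    // Task parallel: distinct evaluations run concurrently.
    std::thread t(evaluate_temperature);
    std::thread p(evaluate_pressure);
    t.join(); p.join();
}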

Parallelism discovered in the problem is then used to develop an algorithm and data structures that can be expressed in a programming language. Parallel programming has such broad relevance that it can be profitably implemented at different levels: at a low level, closer to the computer, it offers great control and the last ounce of performance; at a higher level, closer to the physical problem, it offers convenience and domain focus.
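The pixel example illustrates the difference between the levels. At a low level, the programmer creates threads and splits the data by hand, as in the sketch above. At a higher level, a single compiler directive expresses the same intent and leaves the mechanics to a runtime. A sketch using OpenMP (compile with -fopenmp under GCC or Clang):

#include <vector>

// High-level version of the pixel update: one directive, no explicit
// thread management. The OpenMP runtime decides how to split the loop
// across the available cores.
void brighten(std::vector<int>& pixels) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(pixels.size()); ++i)
        pixels[i] += 10;
}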

In a typical workflow, the output of the previous step is still a sequential program: its algorithm could execute in parallel, but nothing yet tells the machine to do so. At this point, the programmer adds explicit parallel constructs in a notation that will allow the program to execute in parallel if multiple cores are available. The ways to do this include compiler directives, message passing, calls to thread libraries and custom-programmed control of individual threads. Debugging parallel programs is critical, and difficult, especially when parallel threads share and modify the same memory. Tuning will almost certainly be necessary because small changes in a parallel program can significantly alter overall performance.
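A sketch of the shared-memory hazard, again using OpenMP: summing an array looks trivially parallel, but if every thread updates the same variable, the program has a data race and its result can vary from run to run. A reduction clause is one standard fix; it gives each thread a private copy and combines the partial sums at the end.

#include <vector>

double total(const std::vector<double>& v) {
    double sum = 0.0;

    // With "#pragma omp parallel for" alone, all threads would read and
    // write `sum` concurrently: a data race, and nondeterministic output.

    // The reduction clause removes the race: each thread accumulates a
    // private partial sum, and OpenMP combines them when the loop ends.
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < static_cast<long>(v.size()); ++i)
        sum += v[i];

    return sum;
}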

Weltzin advises, “If I were to make one recommendation to programmers who want to get into parallel programming, I’d say, ‘Look at the tools that are out there.’ ” But parallel programming will probably never be easy. “Parallel programming is simply a different way of thinking,” Weltzin concludes.

Marty Weil, martyweil@charter.net, is an Automation World Contributing Writer.

National Instruments Corp.
www.ni.com

Thinking Parallel
www.thinkingparallel.com

Intel Corp.
www.intel.com
