
A reusable dynamic resource distribution for different tasks on a workstation cluster

Chen, K.
May, T.

Darmstadt, 2006, 95 pp.
Darmstadt, TU, diploma thesis, 2006
Fraunhofer IGD
parallel computing; clustering; multi-thread handling

Parallel computing is an important approach to improving the efficiency of certain programs. Parts of a parallel program run simultaneously on several processors or computers, so the computation time is greatly reduced. A computer cluster is a set of computers linked via Ethernet; it is the cheapest way to harness the power of parallel computing and can, logically, be extended without limit. To take advantage of a cluster, one must employ an API such as MPI or PVM. These APIs use the process as the smallest unit of computation to distribute across the cluster, and the communication between processes must be programmed explicitly.
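MPI's process-based model means every data exchange is coded explicitly by the programmer. As a rough illustration of that programming style (not the real MPI API, which runs across separate processes on a cluster), the following C++ sketch mimics explicit point-to-point send/recv between two "ranks" using threads and a blocking channel; the `Channel` class and `run_two_ranks` function are illustrative assumptions:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Minimal blocking channel imitating MPI-style point-to-point
// messaging; a hypothetical illustration, not the MPI API itself.
template <typename T>
class Channel {
public:
    void send(T value) {                  // analogous to MPI_Send
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(value));
        cv_.notify_one();
    }
    T recv() {                            // analogous to MPI_Recv: blocks
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};

// Two "ranks": rank 1 computes a partial sum and explicitly
// sends it to rank 0, which waits for the message.
int run_two_ranks() {
    Channel<int> ch;
    std::thread rank1([&ch] {
        int partial = 0;
        for (int i = 1; i <= 10; ++i) partial += i;
        ch.send(partial);                 // explicit communication step
    });
    int result = ch.recv();
    rank1.join();
    return result;                        // 1 + 2 + ... + 10 = 55
}
```

The point of the sketch is the burden it makes visible: with a process-based API, the programmer must place every send and receive by hand, which is exactly what the thesis's thread-based framework aims to take over.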
This diploma thesis proposes a new way to program a cluster: using threads. A framework was designed, and a prototype of it, with which a program can distribute its threads across a cluster, was implemented on top of MPI. The framework takes over the communication between threads and provides tools such as mutexes, semaphores, and wait conditions to control the workflow between threads. A default stepwise load-balancing algorithm was also provided to distribute the workload across the cluster as evenly as possible. Design patterns were used to provide flexibility, so that the algorithms can easily be exchanged. Finally, the prototype was tested on a workstation cluster with the marching cubes algorithm.
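The abstract does not spell out the stepwise load-balancing algorithm. As a hedged sketch of the general idea only, the following greedy balancer assigns each task, step by step, to the currently least-loaded node; the function name `balance`, the integer cost model, and the greedy rule are assumptions for illustration, not the thesis's actual algorithm:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stepwise (greedy) load balancer: each incoming task
// cost is assigned to the node with the smallest accumulated load.
// Returns, for every task, the index of the node it was placed on.
std::vector<int> balance(const std::vector<int>& costs, int nodes) {
    std::vector<int> load(nodes, 0);          // accumulated cost per node
    std::vector<int> placement(costs.size());
    for (std::size_t t = 0; t < costs.size(); ++t) {
        // pick the currently least-loaded node
        auto it = std::min_element(load.begin(), load.end());
        placement[t] = static_cast<int>(it - load.begin());
        *it += costs[t];                      // one balancing step
    }
    return placement;
}
```

For example, `balance({5, 3, 2, 4}, 2)` places the tasks on nodes 0, 1, 1, 0, ending with loads of 9 and 5; a stepwise scheme like this keeps the cluster roughly balanced without knowing all task costs in advance.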