Thursday, 8 May 2014

Digital Image Processing Technique for Blood Glucose Measurements



Scalable algorithms must be developed using parallel techniques to reduce processing time and increase memory efficiency. If the amount of data exceeds the memory of the CPU or GPU, several techniques can be employed, including compressed or packed data representations, decomposition techniques, multi-resolution schemes, or out-of-core techniques. Recent research has combined bricking and decomposition with a hierarchical data structure.
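As a minimal illustration of the bricking idea mentioned above (the struct and function names, sizes, and flat-array layout here are hypothetical, not taken from any particular renderer), the sketch below splits a volume stored as a flat array into fixed-size bricks, so that each brick can later be loaded, compressed, or rendered independently:

```cpp
#include <cstddef>
#include <vector>

// A brick is a small, independently manageable sub-volume.
struct Brick {
    std::size_t x0, y0, z0;     // origin of the brick inside the full volume
    std::vector<float> voxels;  // brick data, copied out of the volume
};

// Split a dim^3 volume (flat array, x fastest) into bricks of edge length b.
// Assumes dim is a multiple of b for simplicity.
std::vector<Brick> makeBricks(const std::vector<float>& volume,
                              std::size_t dim, std::size_t b) {
    std::vector<Brick> bricks;
    for (std::size_t z = 0; z < dim; z += b)
        for (std::size_t y = 0; y < dim; y += b)
            for (std::size_t x = 0; x < dim; x += b) {
                Brick brick{x, y, z, {}};
                brick.voxels.reserve(b * b * b);
                for (std::size_t dz = 0; dz < b; ++dz)
                    for (std::size_t dy = 0; dy < b; ++dy)
                        for (std::size_t dx = 0; dx < b; ++dx)
                            brick.voxels.push_back(
                                volume[(z + dz) * dim * dim +
                                       (y + dy) * dim + (x + dx)]);
                bricks.push_back(std::move(brick));
            }
    return bricks;
}
```

An out-of-core renderer would keep only the currently visible bricks resident in memory and stream the rest from disk on demand.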
           
Different programming steps are used for data management:
(i) decomposition techniques to reach a multi-resolution subdivision of the data,
(ii) streaming techniques to asynchronously fetch the data needed for the current view, and
(iii) algorithms to render the volume visualization or to visualize the zoomed data.
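The multi-resolution subdivision in step (i) can be sketched as a pyramid of successively downsampled levels. The hypothetical example below works on a 1-D signal for brevity, averaging pairs of samples per level; a real volume renderer would average 2x2x2 voxel blocks instead:

```cpp
#include <cstddef>
#include <vector>

// Build a multi-resolution pyramid: level 0 is the full-resolution data,
// and each further level halves the resolution by averaging sample pairs.
std::vector<std::vector<float>> buildPyramid(std::vector<float> level0) {
    std::vector<std::vector<float>> pyramid{std::move(level0)};
    while (pyramid.back().size() > 1) {
        const std::vector<float>& fine = pyramid.back();
        std::vector<float> coarse(fine.size() / 2);
        for (std::size_t i = 0; i < coarse.size(); ++i)
            coarse[i] = 0.5f * (fine[2 * i] + fine[2 * i + 1]);
        pyramid.push_back(std::move(coarse));
    }
    return pyramid;
}
```

During interaction, a viewer can render a coarse level immediately and stream in finer levels as they arrive, which is the point of combining decomposition with streaming.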
The main disadvantage of working with gigabyte- to terabyte-scale volume data is runtime performance. Current research is focused on advanced parallelization techniques in order to reach an acceptable real-time response. These techniques require different hardware architectures, and several programming languages and models have been developed to support them:

1. Parallel CPU-based programming on a single node with shared memory, using threaded programming techniques such as OpenMP or Qt's QThread.
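A minimal OpenMP sketch of this shared-memory approach: the loop iterations are split across the CPU cores, and each thread accumulates a private partial sum that OpenMP combines at the end. If the compiler is not OpenMP-enabled, the pragma is simply ignored and the loop runs serially with the same result:

```cpp
#include <vector>

// Sum a large array using the available CPU cores via an OpenMP
// parallel-for reduction over a shared-memory data structure.
double parallelSum(const std::vector<double>& data) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)data.size(); ++i)
        sum += data[i];
    return sum;
}
```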

2. Parallel GPU-based programming on a single node with one or more GPUs, using programming languages for the massively parallel cores on the graphics card. With advances in GPU architecture, several algorithms have reached higher efficiency by moving the computation from the CPU to the GPU. This means that instead of four to eight parallel CPU cores, 240 to 480 massively parallel processing cores on the graphics card are used. Several languages have been developed by the graphics card industry to code algorithms for execution on the GPU.
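Actual GPU code would be written in a language such as CUDA or OpenCL. As a plain C++ illustration of the underlying idea, the hypothetical sketch below mimics how each of the many GPU cores computes exactly one output element, identified by its global thread index; on a real GPU the inner function would be the kernel and the two loops would be replaced by a parallel launch over the grid of blocks:

```cpp
#include <cstddef>
#include <vector>

// GPU-style work decomposition: thread (block, threadInBlock) computes
// one output element, found via its global index.
void scaleKernel(std::size_t globalIdx, float factor,
                 const std::vector<float>& in, std::vector<float>& out) {
    if (globalIdx < in.size())           // bounds guard, as in a real kernel
        out[globalIdx] = factor * in[globalIdx];
}

// Simulate the kernel launch: iterate over all blocks and threads.
std::vector<float> launchScale(const std::vector<float>& in, float factor,
                               std::size_t blockSize) {
    std::vector<float> out(in.size());
    std::size_t numBlocks = (in.size() + blockSize - 1) / blockSize;
    for (std::size_t block = 0; block < numBlocks; ++block)    // the grid
        for (std::size_t t = 0; t < blockSize; ++t)            // one block
            scaleKernel(block * blockSize + t, factor, in, out);
    return out;
}
```

Because every element is independent, hundreds of GPU cores can execute the kernel simultaneously, which is where the efficiency gain over four to eight CPU cores comes from.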

3. Parallel programming on multiple nodes in a cluster of linked computers connected through a fast local area network (LAN), which is also referred to as grid computing. Special software interfaces, such as the Message Passing Interface (MPI), manage the communication between the processes.
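With MPI, each process is identified by its rank, works on one slice of the data, and the partial results are combined with a collective operation such as MPI_Reduce. Running real MPI code requires an MPI installation and a launcher like mpirun, so the hypothetical sketch below imitates the same rank-based decomposition with one C++ thread per "rank":

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Rank-based decomposition in the style of MPI: rank r of `size` ranks
// sums its contiguous slice of the data; the partial sums are then
// reduced to one total (the role MPI_Reduce plays in real MPI code).
double distributedSum(const std::vector<double>& data, int size) {
    std::vector<double> partial(size, 0.0);
    std::vector<std::thread> ranks;
    std::size_t chunk = (data.size() + size - 1) / size;
    for (int r = 0; r < size; ++r)
        ranks.emplace_back([&, r] {
            std::size_t begin = r * chunk;
            std::size_t end = std::min(begin + chunk, data.size());
            for (std::size_t i = begin; i < end; ++i)
                partial[r] += data[i];  // each rank touches only its own slot
        });
    for (std::thread& t : ranks) t.join();
    double total = 0.0;                 // the "reduce" step
    for (double p : partial) total += p;
    return total;
}
```

In a real cluster the slices would live on different nodes, so the reduce step would move data over the LAN rather than over shared memory, which is exactly what MPI abstracts away.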
