PhD candidates
Graduated PhD candidates
- Chris Broekema. Commodity compute- and data-transport system design in modern large-scale distributed radio telescopes. Jointly with Henri Bal. VU University Amsterdam, 28 September 2020.
- Alessio Sclocco. Accelerating Radio Astronomy with Auto-Tuning. Jointly with Henri Bal. VU University Amsterdam, 11 October 2017.
- Pieter Hijma. Programming Many-Cores on Multiple Levels of Abstraction. Jointly with Henri Bal. VU University Amsterdam, 9 June 2015.
- Niels Drost. Real-World Distributed Supercomputing. Jointly with Henri Bal. VU University Amsterdam, 25 November 2010.
Research interests
My current research focuses on developing radio astronomy and signal processing algorithms for very large radio telescopes, such as LOFAR (operated by ASTRON, the Netherlands Institute for Radio Astronomy) and the Square Kilometre Array (SKA). I implement these algorithms on multi- and many-core accelerator architectures, such as graphics processing units (GPUs) from NVIDIA and AMD. For instance, I developed a software correlator on five different many-core architectures. Per chip, the implementations on NVIDIA GPUs and the Cell processor are more than 20 times faster than the LOFAR production correlator on our IBM Blue Gene/P supercomputer, and the many-core architectures are also considerably more power efficient. I have worked on correlators, beamforming, polyphase filters, and gridding (imaging), and I am also working on real-time Radio Frequency Interference (RFI) mitigation for exascale instruments such as the SKA.
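To illustrate the core computation behind a software correlator, the toy sketch below (plain Java; the array shapes and names are illustrative, not the LOFAR code) accumulates, for every pair of stations, the product of one station's complex samples with the conjugate of the other's:

```java
/**
 * Toy cross-correlator sketch. Inputs are per-station sample streams, split
 * into real and imaginary parts: re[station][time], im[station][time].
 * The result vis[a][b] = {real, imag} is the visibility for baseline (a, b),
 * accumulated over time, computed for b <= a only (the other half is the
 * complex conjugate).
 */
public class ToyCorrelator {
    public static float[][][] correlate(float[][] re, float[][] im) {
        int stations = re.length;
        int times = re[0].length;
        float[][][] vis = new float[stations][stations][2];
        for (int a = 0; a < stations; a++) {
            for (int b = 0; b <= a; b++) {
                float sumRe = 0f, sumIm = 0f;
                for (int t = 0; t < times; t++) {
                    // (re[a] + i*im[a]) * conj(re[b] + i*im[b])
                    sumRe += re[a][t] * re[b][t] + im[a][t] * im[b][t];
                    sumIm += im[a][t] * re[b][t] - re[a][t] * im[b][t];
                }
                vis[a][b][0] = sumRe;
                vis[a][b][1] = sumIm;
            }
        }
        return vis;
    }
}
```

Roughly speaking, it is this multiply-accumulate loop, repeated for many frequency channels and polarizations, that is mapped onto the many-core hardware in a real correlator.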
My research interests also include parallel programming in Java, in particular on distributed systems such as grids, clouds, and clusters. Together with the HPDC group at the VU, I designed and implemented Ibis. Ibis consists of a grid communication library and a set of high-level programming models for writing parallel and distributed (grid) applications. These models include Satin (divide-and-conquer and master-worker), MPJ (the MPI specification for Java), GMI (Group Method Invocation, an object-oriented MPI-like model), and a highly efficient RMI implementation that can be up to ten times faster than the standard implementation.
I designed and developed Satin, one of the programming models of Ibis. With Satin, you can write divide-and-conquer programs in Java: applications that recursively divide a problem into smaller pieces. Such an application can then be deployed on a multi-core machine, a cluster, a grid, or a cloud. The programming model is extremely high-level: the programs are essentially sequential, contain no communication code, and have no concept of remote machines. Still, the programs run highly efficiently on a grid, support speculative parallelism, transparent fault tolerance, and malleability, and adapt to changes in CPU load and network performance. Pieter Hijma extended Satin to use GPUs, resulting in a spin-off called Cashmere.
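As a taste of the programming model, here is a minimal Fibonacci example in the Satin style. It follows the published Satin API (a marker interface extending ibis.satin.Spawnable and a class extending ibis.satin.SatinObject), but treat it as a sketch rather than a complete program: Satin additionally requires processing the class with the Satin bytecode rewriter before it runs in parallel.

```java
import ibis.satin.SatinObject;
import ibis.satin.Spawnable;

// Methods declared in an interface that extends Spawnable are candidates
// for parallel (spawned) execution by the Satin runtime.
interface FibSpawns extends Spawnable {
    long fib(int n);
}

public class Fib extends SatinObject implements FibSpawns {
    public long fib(int n) {
        if (n < 2) {
            return n;            // small problem: solve sequentially
        }
        long x = fib(n - 1);     // spawned: may run on another machine
        long y = fib(n - 2);     // spawned as well
        sync();                  // wait until both spawned results are available
        return x + y;
    }

    public static void main(String[] args) {
        Fib f = new Fib();
        long result = f.fib(Integer.parseInt(args[0]));
        f.sync();
        System.out.println("fib(" + args[0] + ") = " + result);
    }
}
```

Note that the code contains no communication, no threads, and no notion of remote machines; the Satin runtime distributes the spawned calls using work stealing.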
I have developed the Java implementation of the Grid Application Toolkit (JavaGAT). The JavaGAT offers a set of coordinated, generic, and flexible APIs for accessing grid services from application codes, portals, and data management systems. It sits between grid applications and numerous types of grid middleware, such as Globus, Unicore, SSH, or Zorilla, and lifts the burden from grid application programmers by providing a uniform interface for file access, job submission, monitoring, and access to information services. As a result, grid application programmers need to learn only a single API to obtain access to the entire grid. Due to its modular design, the JavaGAT can easily be extended with support for other grid middleware layers. The JavaGAT has been standardized within the Open Grid Forum (OGF) and is now called SAGA (Simple API for Grid Applications). The Java reference implementation of SAGA is built by our group on top of the JavaGAT software.
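A minimal sketch of what programming against the JavaGAT looks like is shown below. The class names follow the org.gridlab.gat packages, but exact signatures may differ between JavaGAT versions, so treat the details as illustrative. The "any" URI scheme asks the GAT engine to select a suitable adaptor (GridFTP, SSH, local, ...) at run time.

```java
import org.gridlab.gat.GAT;
import org.gridlab.gat.URI;
import org.gridlab.gat.io.File;
import org.gridlab.gat.resources.Job;
import org.gridlab.gat.resources.JobDescription;
import org.gridlab.gat.resources.ResourceBroker;
import org.gridlab.gat.resources.SoftwareDescription;

public class GatExample {
    public static void main(String[] args) throws Exception {
        // Copy a remote file; the GAT engine picks the middleware adaptor.
        File remote = GAT.createFile(new URI("any://some.host/data/input.dat"));
        remote.copy(new URI("file:///tmp/input.dat"));

        // Describe and submit a job through whatever middleware is available.
        SoftwareDescription sd = new SoftwareDescription();
        sd.setExecutable("/bin/hostname");
        JobDescription jd = new JobDescription(sd);
        ResourceBroker broker = GAT.createResourceBroker(new URI("any://some.host"));
        Job job = broker.submitJob(jd);
        System.out.println("job state: " + job.getState());

        GAT.end(); // shut down the GAT engine
    }
}
```

The same application code runs unmodified whether the file lives on a GridFTP server or a local disk, and whether the job is submitted through Globus or SSH; only the adaptors differ.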
I have worked on the Virtual Laboratory for e-Science (VL-e) project. Before that, I worked on the GridLab project, where I developed adaptive grid middleware such as Delphoi. Delphoi is an information system that provides information about the grid and can also predict future conditions, such as anticipated network and CPU load. Using this information, it can, for instance, choose the optimal number of parallel data streams for large data transfers, as sketched below.
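The snippet below is purely a hypothetical illustration of that kind of decision, not Delphoi's actual algorithm: it derives a number of parallel TCP streams from a predicted bandwidth and round-trip time via the bandwidth-delay product, assuming each stream is limited by a fixed TCP window.

```java
public class StreamPlanner {
    /**
     * Hypothetical heuristic: the bandwidth-delay product says how many bytes
     * must be "in flight" to fill the predicted pipe. If a single TCP stream
     * is capped by its window size, several parallel streams are needed.
     */
    public static int parallelStreams(double bandwidthMbitPerSec,
                                      double rttMillis,
                                      double tcpWindowBytes) {
        double bandwidthBytesPerSec = bandwidthMbitPerSec * 1e6 / 8.0;
        double bdpBytes = bandwidthBytesPerSec * (rttMillis / 1000.0);
        return Math.max(1, (int) Math.ceil(bdpBytes / tcpWindowBytes));
    }

    public static void main(String[] args) {
        // Example: 1 Gbit/s predicted bandwidth, 100 ms RTT, 256 KiB TCP windows.
        System.out.println(parallelStreams(1000, 100, 256 * 1024)); // prints 48
    }
}
```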