
Software

Radio Astronomy and LOFAR / SKA

I contributed to several software packages and modules for the LOFAR radio telescope. I developed a fast asynchronous data transpose in MPI, and I developed the polyphase filter bank generator for the LOFAR production system. In addition, I performed research into future correlators, often on many-core systems such as GPUs. This research includes work on real-time RFI mitigation, beam forming, polyphase filters, and the correlator algorithm itself. The software is mostly generic and can be used for other telescopes as well. In most cases, the code is available under open source licenses, in C++ for CPUs, and in CUDA and OpenCL for GPUs.

Correlators

As part of my research into many-core radio astronomy correlators, I developed correlator implementations on several different architectures, including CPUs, GPUs from NVIDIA and AMD, and the Cell/BE processor. These codes were used in several research papers. The best starting point for more information is this paper:

Rob V. van Nieuwpoort and John W. Romein:
Correlating Radio Astronomy Signals with Many-Core Hardware
Springer International Journal of Parallel Programming, Special Issue on NY-2009 Intl. Conf. on Supercomputing, Volume 39, Number 1, 88-114, DOI: 10.1007/s10766-010-0144-3, 2011.

The CPU and GPU codes (C++, CUDA, OpenCL) are on GitHub.
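To give a flavor of the computation, the sketch below shows the core correlation step in plain Java: for every pair of stations, the samples of one station are multiplied with the complex conjugate of the other's and integrated over time, independently per frequency channel. This is a minimal illustration written for this page, not the optimized code from the paper, which also handles polarizations and is heavily tiled and vectorized.

    public final class Correlator {
        // samples[station][channel][time][2]: interleaved (re, im) voltages
        public static float[][][] correlate(float[][][][] samples) {
            int nStations = samples.length;
            int nChannels = samples[0].length;
            int nTimes = samples[0][0].length;
            // One visibility per station pair (baseline) per channel.
            int nBaselines = nStations * (nStations + 1) / 2;
            float[][][] vis = new float[nBaselines][nChannels][2];
            int baseline = 0;
            for (int s1 = 0; s1 < nStations; s1++) {
                for (int s2 = 0; s2 <= s1; s2++, baseline++) {
                    for (int ch = 0; ch < nChannels; ch++) {
                        float re = 0, im = 0;
                        for (int t = 0; t < nTimes; t++) {
                            float xr = samples[s1][ch][t][0], xi = samples[s1][ch][t][1];
                            float yr = samples[s2][ch][t][0], yi = samples[s2][ch][t][1];
                            re += xr * yr + xi * yi; // real part of x * conj(y)
                            im += xi * yr - xr * yi; // imaginary part of x * conj(y)
                        }
                        vis[baseline][ch][0] = re;
                        vis[baseline][ch][1] = im;
                    }
                }
            }
            return vis;
        }
    }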

Polyphase filter banks

I wrote the polyphase filter bank generator for LOFAR. A polyphase filter bank splits a signal into multiple frequency channels. It consists of a number of FIR (Finite Impulse Response) filters followed by an FFT (Fast Fourier Transform). This code generates the filter weights for polyphase filter banks with arbitrary numbers of channels and taps, and with configurable windows. The code is part of the LOFAR real-time central processor (also called the correlator). The code for the run-time generation of the filter banks is available from GitHub.
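To illustrate what the generator computes, the sketch below builds the FIR weights as a windowed sinc prototype low-pass filter of length channels × taps, split over the channels so that each channel's FIR filter takes every channels-th coefficient. This is a minimal illustration for this page: it hardcodes a Hamming window, whereas the actual generator supports configurable windows.

    public final class FilterBankWeights {
        // Returns weights[channel][tap] for a polyphase filter bank.
        public static double[][] generate(int channels, int taps) {
            int n = channels * taps;
            double[][] weights = new double[channels][taps];
            for (int i = 0; i < n; i++) {
                // Sinc prototype with cutoff at one channel width, centered.
                double x = (double) i / channels - taps / 2.0;
                double sinc = (x == 0.0) ? 1.0 : Math.sin(Math.PI * x) / (Math.PI * x);
                // Hamming window (hardcoded here for brevity).
                double window = 0.54 - 0.46 * Math.cos(2.0 * Math.PI * i / (n - 1));
                weights[i % channels][i / channels] = sinc * window;
            }
            return weights;
        }
    }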

Together with Karel van der Veldt, Ana Lucia Varbanescu and Chris Jesshope, I worked on A Polyphase Filter For GPUs And Multi-Core Processors. A paper about this implementation was published at the First Workshop on High Performance Computing in Astronomy (AstroHPC 2012). The code runs on Intel CPUs (written in C), on NVIDIA GPUs (with CUDA) and AMD GPUs (with OpenCL), and on the simulated MicroGrid architecture. The source code is available here.

[Photo by John Romein]

Beam Forming

Alessio Sclocco, Ana Lucia Varbanescu, Jan David Mol, and I worked together on several beam forming implementations on GPUs. The work is described in this paper: Radio Astronomy Beam Forming on Many-Core Architectures, 26th IEEE International Parallel & Distributed Processing Symposium (IPDPS) May 21-25, 2012, Shanghai, China. The code is available on GitHub.
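For readers unfamiliar with the technique: a coherent beam is formed by multiplying each station's signal with a complex weight that compensates for the geometric delay toward the desired sky direction, and summing over all stations. The plain-Java sketch below shows only this inner loop; it is an illustration written for this page, while the GPU implementations from the paper compute many beams, channels, and polarizations at once.

    public final class BeamFormer {
        // samples[station][time][2] and weights[station][2] hold
        // interleaved (re, im) values; one complex weight per station.
        public static float[][] formBeam(float[][][] samples, float[][] weights) {
            int nStations = samples.length;
            int nTimes = samples[0].length;
            float[][] beam = new float[nTimes][2];
            for (int t = 0; t < nTimes; t++) {
                float re = 0, im = 0;
                for (int s = 0; s < nStations; s++) {
                    float xr = samples[s][t][0], xi = samples[s][t][1];
                    float wr = weights[s][0], wi = weights[s][1];
                    re += wr * xr - wi * xi; // real part of w * x
                    im += wr * xi + wi * xr; // imaginary part of w * x
                }
                beam[t][0] = re;
                beam[t][1] = im;
            }
            return beam;
        }
    }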

Radio Frequency Interference (RFI) Mitigation

eAstroViz: https://github.com/NLeSC/eAstroViz.

eAstroViz is a visualization and RFI mitigation tool developed in the Netherlands eScience Center eAstronomy project. The tool can convert and visualize radio astronomy measurement sets (i.e., visibilities), as well as most LOFAR intermediate data products, such as raw voltages, filtered data, and beam-formed data. In addition, the tool can perform RFI mitigation.

Radio Frequency Interference (RFI) mitigation is extremely important to take advantage of the vastly improved bandwidth, sensitivity, and field-of-view of exascale telescopes. For current instruments, RFI mitigation is typically done offline, and in some cases (partially) manually. At the same time, it is clear that, due to the high bandwidth requirements, RFI mitigation will have to be done automatically and in real time for exascale instruments.

Although our techniques are generic, we describe how we implemented real-time RFI mitigation for one of the SKA pathfinders: the Low Frequency Array (LOFAR). The RFI mitigation algorithms and operations we introduce are extremely fast, and their computational requirements scale linearly with the number of samples and frequency channels. We evaluate the quality of the algorithms with real LOFAR pulsar observations. By comparing the signal-to-noise ratios of the folded pulse profiles, we can quantitatively assess the impact of real-time RFI mitigation and compare different algorithms.
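The paper cited in the list below describes the actual algorithms we use. As a minimal illustration of what a linear-time flagger looks like, the Java sketch here flags samples whose power exceeds the per-channel mean by a configurable number of standard deviations; this simple thresholding scheme was written for this page and is not the algorithm from the paper.

    public final class SimpleFlagger {
        // power[channel][time]: detected power per sample.
        // Returns flags[channel][time]: true where a sample looks like RFI.
        public static boolean[][] flag(float[][] power, double sigmaCutoff) {
            boolean[][] flags = new boolean[power.length][];
            for (int ch = 0; ch < power.length; ch++) {
                float[] p = power[ch];
                double sum = 0, sumSq = 0;
                for (float v : p) { sum += v; sumSq += (double) v * v; }
                double mean = sum / p.length;
                double stdDev = Math.sqrt(sumSq / p.length - mean * mean);
                double threshold = mean + sigmaCutoff * stdDev;
                flags[ch] = new boolean[p.length];
                for (int t = 0; t < p.length; t++) {
                    flags[ch][t] = p[t] > threshold; // flag outliers as RFI
                }
            }
            return flags; // O(channels * samples): linear in both
        }
    }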

  • This tool was used to produce the results published in this paper:
    Rob V. van Nieuwpoort and the LOFAR team: Exascale Real-Time Radio Frequency Interference Mitigation. Exascale Radio Astronomy, AAS Topical Conference Series Vol. 2. Proceedings of the conference held 30 March – 4 April, 2014 in Monterey, California. Bulletin of the American Astronomical Society, Vol. 46, #3, #403.01
  • In addition, this repository contains the real-time RFI mitigation code developed for LOFAR, in the directory LOFAR-source.
  • Finally, there is a GPU prototype version of the code as well, developed by Linus Schoemaker in the context of his master's project. The code is in the directory GPU-source.

Ibis

Ibis is an open source Java distributed computing software project of the Computer Systems group, which is part of the Computer Science department of the Faculty of Sciences at VU University Amsterdam, The Netherlands.

The main goal of the Ibis project is to create an efficient, portable Java-based platform for distributed computing. The Ibis project currently consists of the IPL (a communication library), a variety of programming models, the Java Grid Application Toolkit, and the Zorilla peer-to-peer middleware. All components can be deployed on any compute platform, thanks to the use of Java.

All Ibis code can be found on GitHub.

The Ibis Portability Layer (IPL)

The Ibis Portability Layer (IPL) is a communication library specifically designed for use in a grid environment. It has a number of properties that help to achieve its goal of providing programmers with an easy-to-use, reliable grid communication infrastructure:

Run anywhere
Ibis is written in Java, so Ibis programs can run anywhere a Java Virtual Machine is available.
Efficient
In addition to the standard network types available in Java, Ibis is able to use fast local networks such as Myrinet where these are available. If multiple network types are present on a single system, Ibis will automatically select the best type available.
Flexible
Because Ibis offers multiple communication models, not just unicast communication, virtually any communication pattern can be expressed easily and efficiently.
Malleable
The availability of resources in a grid system changes constantly, as networks and hardware fail and new resources are added. To address this, Ibis supports keeping track of the machines participating in a computation.
Simple
The design of the IPL is deliberately kept simple. This makes adding support for new network types easy, and allows for easy adaptation of Ibis as a communication framework.

The code for the IPL is on GitHub.

The Satin Divide-and-Conquer Parallel Programming Model

Satin extends Java with Cilk-like primitives that make it very convenient for the programmer to write divide-and-conquer-style programs. Unlike manager/worker programs, divide-and-conquer algorithms operate by recursively dividing a problem into smaller sub-problems. This recursive subdivision goes on until the remaining sub-problems become trivial to solve. After solving the sub-problems, their results are recursively recombined until the final solution is assembled. Due to its hierarchical nature, the divide-and-conquer model maps cleanly onto extremely large-scale systems, which also tend to have a hierarchical structure. Satin contains a simple and efficient load-balancing algorithm, Cluster-aware Random Stealing (CRS), which outperforms existing load-balancing strategies on large multi-cluster systems. In addition, Satin provides efficient fault tolerance, malleability (i.e., the ability to cope with a dynamically changing number of processors), and migration, in a way that is transparent to the application.
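To show the shape of a divide-and-conquer program, here is the classic Fibonacci example written with the standard JDK fork/join framework. This is not Satin code: a Satin program has the same recursive structure but marks methods as spawnable and uses sync(), after which the Satin runtime load-balances the spawned invocations across cluster nodes using CRS.

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class Fib extends RecursiveTask<Long> {
        private final long n;
        Fib(long n) { this.n = n; }

        @Override protected Long compute() {
            if (n < 2) return n;                // trivial sub-problem: solve directly
            Fib left = new Fib(n - 1);
            left.fork();                        // "spawn" one branch
            long right = new Fib(n - 2).compute();
            return right + left.join();         // "sync": combine the sub-results
        }

        public static void main(String[] args) {
            System.out.println(new ForkJoinPool().invoke(new Fib(30)));
        }
    }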

The best paper to read if you are interested in Satin and divide-and-conquer programming in general is:

Rob V. van Nieuwpoort, Gosia Wrzesinska, Ceriel J.H. Jacobs and Henri E. Bal:
Satin: a High-Level and Efficient Grid Programming Model
ACM Transactions on Programming Languages and Systems (TOPLAS), Volume 32 Issue 3, ACM Press New York, NY, USA, 2010. DOI: 10.1145/1709093.1709096.

The code for Satin is on GitHub.

MPJ: MPI for Java

The MPJ programming interface has been defined by the Java Grande Forum to provide MPI-like message passing for Java applications. Ibis MPJ is a pure-Java implementation of this interface that delivers high-performance communication while being deployable on various platforms, from Myrinet-based clusters to grids. See this paper for more details:

Markus Bornemann, Rob V. van Nieuwpoort, Thilo Kielmann:
MPJ/Ibis: a Flexible and Efficient Message Passing Platform for Java,
B. Di Martino et al. (Eds.): EuroPVM/MPI 2005, LNCS Volume 3666, pp. 217-224, Springer-Verlag Berlin Heidelberg, 2005.
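As a minimal sketch of what an MPJ program looks like, here is a two-process ping example. The class and method names follow the mpiJava-style API of the Java Grande MPJ specification; the exact package naming in Ibis MPJ may differ.

    import mpi.MPI; // package name as in the mpiJava/MPJ specification

    public class Ping {
        public static void main(String[] args) throws Exception {
            MPI.Init(args);
            int rank = MPI.COMM_WORLD.Rank();
            int[] buf = new int[1];
            if (rank == 0) {
                buf[0] = 42;
                MPI.COMM_WORLD.Send(buf, 0, 1, MPI.INT, 1, 0); // to rank 1, tag 0
            } else if (rank == 1) {
                MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, 0); // from rank 0, tag 0
                System.out.println("received " + buf[0]);
            }
            MPI.Finalize();
        }
    }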

Efficient Java Remote Method Invocation

We built a Remote Method Invocation (RMI) implementation on top of Ibis. RMI is described here. Ibis boosts RMI performance using several optimizations, especially to avoid the high overhead of the runtime type inspection that current RMI implementations perform. Earlier projects (e.g., Manta) applied similar optimizations by writing parts of the runtime system in native code (C instead of Java), giving up Java’s high portability. The philosophy behind Ibis is to try to obtain good performance without using any native code, but to allow native solutions to further optimize special cases. For example, a grid application developed with Ibis can use a pure-Java RMI implementation over TCP/IP that will run “everywhere”. However, when the application runs on, say, a Myrinet cluster, the RMI runtime system can request Ibis to load a more efficient communication implementation for Myrinet that partially uses native code. For more information, see this paper:

Jason Maassen, Rob V. van Nieuwpoort, Ronald Veldema, Henri E. Bal, and Aske Plaat:
An Efficient Implementation of Java’s Remote Method Invocation,
Proc. Seventh ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP’99),
pp. 173-182, Atlanta, GA, May 4-6, 1999.
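Because Ibis RMI follows the standard Java RMI programming model, application code looks like ordinary Java RMI. Below is a minimal example using the standard java.rmi API (the interface, names, and port are placeholders invented for this page):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // The remote interface: clients invoke add() as if it were a local
    // method, while the runtime handles serialization and communication.
    interface Adder extends Remote {
        int add(int a, int b) throws RemoteException;
    }

    public class AdderServer implements Adder {
        public int add(int a, int b) { return a + b; }

        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("adder", UnicastRemoteObject.exportObject(new AdderServer(), 0));
            // A client on another machine would do:
            //   Adder adder = (Adder) LocateRegistry.getRegistry("server-host").lookup("adder");
            //   int sum = adder.add(1, 2);
        }
    }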

The Java Grid Application Toolkit (JavaGAT)

The Java Grid Application Toolkit (JavaGAT) offers a set of coordinated, generic and flexible APIs for accessing grid services from application codes, portals, data management systems, etc.

JavaGAT sits between grid applications and numerous types of grid middleware, such as Globus, gLite, SGE, SSH or Zorilla. JavaGAT eases the burden on grid application programmers by providing a uniform interface for file operations, job submission, monitoring, and access to information services. As a result, grid application programmers need to learn only a single API to obtain access to the entire grid. Due to its modular design, JavaGAT can easily be extended with support for other grid middleware layers. Later, the Grid Application Toolkit was standardized in the OGF (Open Grid Forum) as SAGA (Simple API for Grid Applications). The Java implementation of SAGA was built on top of JavaGAT. For more information, see this paper:

Rob V. van Nieuwpoort, Thilo Kielmann and Henri E. Bal:
User-Friendly and Reliable Grid Computing Based on Imperfect Middleware,
Proceedings of the ACM/IEEE Conference on Supercomputing (SC’07), November 2007, Reno, NV, USA.

The source code for the JavaGAT is on GitHub.
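To illustrate the uniform interface, here is the classic JavaGAT file-copy example. The class and method names are reproduced from memory of the JavaGAT documentation, so treat them as approximate; the key idea is that the "any" scheme lets JavaGAT pick whichever middleware adaptor (e.g., GridFTP, SSH, local I/O) can actually reach the file.

    import org.gridlab.gat.GAT;
    import org.gridlab.gat.GATContext;
    import org.gridlab.gat.URI;
    import org.gridlab.gat.io.File;

    public class CopyFile {
        public static void main(String[] args) throws Exception {
            GATContext context = new GATContext();
            // "any://" asks JavaGAT to select a suitable adaptor at run time.
            File file = GAT.createFile(context, new URI("any://host-a/data/input.dat"));
            file.copy(new URI("any://host-b/data/input.dat"));
            GAT.end();
        }
    }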