
My Research

Reconfigurable Architectures

Reconfigurable architectures are architectures whose functionality is plastic rather than fixed. Unlike, for example, a general-purpose processor -- whose functionality (number of ALUs, number of cores, etc.) is fixed at fabrication -- a reconfigurable architecture remains flexible. One example of a modern reconfigurable architecture is the Field-Programmable Gate Array (FPGA). I have been working with FPGAs for a long time, particularly on how they can complement or compete with existing compute devices with respect to performance and power efficiency. My research has included both the use of High-Level Synthesis (HLS) tools and their development. Some exciting papers of mine on the subject:
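
To give a flavour of HLS, here is a minimal sketch of an FPGA kernel written in C++, assuming a Vitis/Vivado-HLS-style toolchain; the function name and pragma are illustrative, not taken from any particular paper of mine. The tool compiles the loop into a pipelined hardware datapath rather than into instructions for a fixed processor.

    // Minimal HLS kernel sketch (assumes a Vitis/Vivado-HLS-style tool).
    // The loop is synthesized into a pipelined hardware datapath rather
    // than into instructions executed on a fixed number of ALUs.
    extern "C" void vadd(const int *a, const int *b, int *out, int n) {
        for (int i = 0; i < n; ++i) {
    #pragma HLS PIPELINE II=1  // aim for one loop iteration per clock cycle
            out[i] = a[i] + b[i];
        }
    }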

Neuromorphic Systems

A recent interest of mine is how to build neuromorphic -- biologically inspired -- systems, preferably using reconfigurable architectures. Here I am looking at how to map different neuron- and synapse-models to modern hardware in order to execute (or "simulate") as many and as large systems as possible. Turns out that reconfigurable architectures are a good match for bio-inspired systems, and even using more abstract programming methods such as High-Level Synthesis (HLS) can yields many times better execution performance than on general-purpose systems. Two papers on the subject:
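
As a concrete example of a neuron model, below is a minimal sketch of a leaky integrate-and-fire (LIF) update step in C++; the structure and parameter values are illustrative, not taken from my papers. This small, regular per-neuron update is exactly the kind of kernel that maps well onto reconfigurable hardware.

    // Minimal leaky integrate-and-fire (LIF) neuron, one of the simplest
    // neuron models; all parameter values here are illustrative.
    struct LifNeuron {
        float v = 0.0f;        // membrane potential, relative to rest
        float tau = 20.0f;     // membrane time constant (ms)
        float v_thresh = 1.0f; // spike threshold
        float v_reset = 0.0f;  // potential after a spike

        // Advance the membrane by dt (ms) given input current i;
        // returns true if the neuron spikes on this step.
        bool step(float i, float dt) {
            v += (dt / tau) * (-v + i); // leak toward rest, integrate input
            if (v >= v_thresh) {        // threshold crossing: emit a spike
                v = v_reset;
                return true;
            }
            return false;
        }
    };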

Parallel Computing

My research interest in parallel computing began even before my PhD studies, when I investigated the strengths and weaknesses of different parallel programming libraries (e.g. Cilk-5, TBB, OpenMP, GCD). I have since done research on both homogeneous and heterogeneous systems, focusing in particular on the concept of tasks.

During my PhD studies I also created a prototype system called BLYSK, a task-based runtime system for experimenting with runtime-system schedulers. It is meant to be an API-compatible replacement for GCC's OpenMP runtime system (libgomp), but significantly faster and more versatile. BLYSK was also used in the PaPP project (https://artemis-ia.eu/project/44-papp.html), where my colleague Lars Bonnichsen (PhD, DTU) and I added support for speculation in OpenMP. You can find a version of BLYSK on GitHub. Some selected and exciting publications on the subject of parallel computing, performance visualization, and performance prediction:
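
To illustrate the task model, here is the classic task-parallel Fibonacci in OpenMP. This is standard OpenMP tasking code, so any libgomp-compatible runtime, BLYSK included, can schedule it; the cutoff-free recursion is deliberately naive and only meant to show the constructs.

    #include <cstdio>

    // Classic task-parallel Fibonacci: each recursive call may become a
    // task that the OpenMP runtime schedules. Compile with e.g. -fopenmp.
    static long fib(int n) {
        if (n < 2) return n;
        long x, y;
    #pragma omp task shared(x)  // spawn a child task for one branch
        x = fib(n - 1);
        y = fib(n - 2);         // compute the other branch ourselves
    #pragma omp taskwait        // wait for the child before combining
        return x + y;
    }

    int main() {
    #pragma omp parallel
    #pragma omp single          // one thread creates the initial task tree
        std::printf("fib(30) = %ld\n", fib(30));
        return 0;
    }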