Recent Publications

Submitted Papers

  1. Assessment of sound spatialisation algorithms for sonic rendering with headsets
    Ali Tarzan, Paolo Bientinesi and Marco Alunno
    Submitted to the Journal of New Music Research, November 2017.
    Given an input sound signal and a target virtual sound source, sound spatialisation algorithms manipulate the signal so that a listener perceives it as though it were emitted from the target source. There exist several established spatialisation approaches that deliver satisfactory results when loudspeakers are used to play back the manipulated signal. As headphones have a number of desirable characteristics over loudspeakers, such as portability, isolation from the surrounding environment, cost and ease of use, it is interesting to explore how a sense of acoustic space can be conveyed through them. This article first surveys traditional spatialisation approaches intended for loudspeakers, and then reviews them with regard to their adaptability to headphones.
  2. Extended Pipeline For Content-Based Feature Engineering In Music Genre Recognition
    Tina Raissi and Paolo Bientinesi
    Submitted to the International Conference on Acoustics, Speech, and Signal Processing, October 2017.
    We present a feature engineering pipeline for the construction of musical signal characteristics, to be used for the design of a supervised model for musical genre identification. The key idea is to extend the traditional two-step process of extraction and classification with additional stand-alone phases which are no longer organized in a waterfall scheme. The whole system is realized by traversing backtrack arrows and cycles between various stages. In order to give a compact and effective representation of the features, the standard early temporal integration is combined with other selection and extraction phases: on the one hand, the selection of the most meaningful characteristics based on information gain; on the other hand, the inclusion of the nonlinear correlation between this subset of features, determined by an autoencoder. The results of the experiments conducted on the GTZAN dataset reveal a noticeable contribution of this methodology towards the model’s performance in the classification task.
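    As a rough, hypothetical illustration of the kind of pipeline described above (not the paper's actual implementation), the Python sketch below combines early temporal integration of frame-level features with a mutual-information-based selection step as a stand-in for the information-gain criterion; it assumes scikit-learn and synthetic data, and omits the autoencoder stage.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def temporal_integration(frames):
        # Early temporal integration: collapse a (frames x features) matrix of
        # frame-level descriptors into one clip-level vector (mean and std).
        return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

    # Hypothetical data: 100 clips, 200 frames each, 40 frame-level features, 10 genres.
    rng = np.random.default_rng(0)
    clips = [rng.normal(size=(200, 40)) for _ in range(100)]
    genres = rng.integers(0, 10, size=100)

    X = np.vstack([temporal_integration(c) for c in clips])
    # Mutual-information-based selection followed by a classifier; the paper
    # additionally passes the selected features through an autoencoder.
    model = make_pipeline(SelectKBest(mutual_info_classif, k=20), SVC())
    model.fit(X, genres)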

Journal Articles

  1. Algorithm 979: Recursive Algorithms for Dense Linear Algebra—The ReLAPACK Collection
    Elmar Peise and Paolo Bientinesi
    ACM Transactions on Mathematical Software (TOMS), Volume 44(2), pp. 16:1-16:19, September 2017.
    @article{Peise2017:728,
        author    = "Elmar Peise and Paolo Bientinesi",
        title     = "Algorithm 979: Recursive Algorithms for Dense Linear Algebra—The ReLAPACK Collection",
        journal   = "ACM Transactions on Mathematical Software (TOMS)",
        year      = 2017,
        volume    = 44,
        number    = 2,
        pages     = "16:1--16:19",
        month     = sep,
        publisher = "ACM",
        address   = "New York, NY, USA",
        url       = "http://arxiv.org/pdf/1602.06763v1"
    }
    To exploit both memory locality and the full performance potential of highly tuned kernels, dense linear algebra libraries such as LAPACK commonly implement operations as blocked algorithms. However, to achieve next-to-optimal performance with such algorithms, significant tuning is required. On the other hand, recursive algorithms are virtually tuning free, and yet attain similar performance. In this paper, we first analyze and compare blocked and recursive algorithms in terms of performance, and then introduce ReLAPACK, an open-source library of recursive algorithms to seamlessly replace most of LAPACK's blocked algorithms. In many scenarios, ReLAPACK clearly outperforms reference LAPACK, and even improves upon the performance of optimized libraries.
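    As a toy illustration of the recursive approach (a NumPy/SciPy sketch, not ReLAPACK's actual C implementation), a recursive Cholesky factorization splits the matrix in halves and reduces most of the work to large triangular solves and matrix-matrix products, so no block-size tuning is required; the min_size cutoff below is a hypothetical parameter for switching to a library base case.

    import numpy as np
    from scipy.linalg import solve_triangular

    def rchol(A, min_size=64):
        # Recursive lower Cholesky factorization: returns L with A = L @ L.T.
        n = A.shape[0]
        if n <= min_size:
            return np.linalg.cholesky(A)           # base case: library kernel
        k = n // 2
        L = np.zeros_like(A)
        L11 = rchol(A[:k, :k], min_size)           # factor the top-left block
        # A21 = L21 @ L11.T  =>  solve L11 @ L21.T = A21.T for L21
        L21 = solve_triangular(L11, A[k:, :k].T, lower=True).T
        S = A[k:, k:] - L21 @ L21.T                # Schur complement (one big GEMM)
        L22 = rchol(S, min_size)                   # factor the trailing block
        L[:k, :k], L[k:, :k], L[k:, k:] = L11, L21, L22
        return L

    # Example: a random symmetric positive definite matrix
    X = np.random.rand(500, 500)
    A = X @ X.T + 500 * np.eye(500)
    L = rchol(A)
    assert np.allclose(L @ L.T, A)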
  2. TTC: A high-performance Compiler for Tensor Transpositions
    Paul Springer, Jeff R. Hammond and Paolo Bientinesi
    ACM Transactions on Mathematical Software (TOMS), Volume 44(2), pp. 15:1-15:21, August 2017.
    @article{Springer2017:910,
        author    = "Paul Springer and {Jeff R.} Hammond and Paolo Bientinesi",
        title     = "TTC: A high-performance Compiler for Tensor Transpositions",
        journal   = "ACM Transactions on Mathematical Software (TOMS)",
        year      = 2017,
        volume    = 44,
        number    = 2,
        pages     = "15:1--15:21",
        month     = aug,
        publisher = "ACM",
        url       = "http://arxiv.org/pdf/1603.02297v1"
    }
    We present TTC, an open-source parallel compiler for multidimensional tensor transpositions. In order to generate high-performance C++ code, TTC explores a number of optimizations, including software prefetching, blocking, loop-reordering, and explicit vectorization. To evaluate the performance of multidimensional transpositions across a range of possible use-cases, we also release a benchmark covering arbitrary transpositions of up to six dimensions. Performance results show that the routines generated by TTC achieve close to peak memory bandwidth on both the Intel Haswell and the AMD Steamroller architectures, and yield significant performance gains over modern compilers. By implementing a set of pruning heuristics, TTC allows users to limit the number of potential solutions; this option is especially useful when dealing with high-dimensional tensors, as the search space might become prohibitively large. Experiments indicate that when only 100 potential solutions are considered, the resulting performance is about 99% of that achieved with exhaustive search.
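    For reference, the operation in question is a permuted (and optionally scaled and accumulated) copy of a multidimensional array. The NumPy sketch below merely fixes the assumed semantics; TTC itself generates optimized, explicitly vectorized C++ for one specific permutation rather than calling a generic routine.

    import numpy as np

    def transpose_ref(A, perm, alpha=1.0, beta=0.0, B=None):
        # Assumed reference semantics: B <- alpha * permute(A, perm) + beta * B
        out = alpha * np.transpose(A, perm)
        if B is not None and beta != 0.0:
            out = out + beta * B
        return out

    # Example: a 3D transposition with permutation (2, 0, 1), i.e. B[k, i, j] = A[i, j, k]
    A = np.random.rand(32, 48, 64)
    B = transpose_ref(A, (2, 0, 1))
    assert B.shape == (64, 32, 48) and B[5, 1, 2] == A[1, 2, 5]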
  3. The ELAPS Framework: Experimental Linear Algebra Performance Studies
    Elmar Peise and Paolo Bientinesi
    International Journal of High Performance Computing, August 2017.
    Accepted.
    @article{Peise2017:560,
        author  = "Elmar Peise and Paolo Bientinesi",
        title   = "The ELAPS Framework: Experimental Linear Algebra Performance Studies",
        journal = "International Journal of High Performance Computing",
        year    = 2017,
        month   = aug,
        note    = "Accepted",
        url     = "http://arxiv.org/pdf/1504.08035v1"
    }
    In scientific computing, optimal use of computing resources comes at the cost of extensive coding, tuning and benchmarking. While the classic approach of “features first, performance later” is supported by a variety of tools such as Tau, Vampir, and Scalasca, the emerging performance-centric approach, in which both features and performance are primary objectives, is still lacking suitable development tools. For dense linear algebra applications, we fill this gap with the Experimental Linear Algebra Performance Studies framework (ELAPS), a multi-platform open-source environment for easy and fast, yet powerful performance experimentation and prototyping. In contrast to many existing tools, ELAPS targets the beginning of the development process, assisting application developers in both algorithmic and optimization decisions. With ELAPS, users construct experiments to investigate how performance and efficiency depend on factors such as caching, algorithmic parameters, problem size, and parallelism. Experiments are designed either through Python scripts or a specialized GUI, and run on a spectrum of architectures, ranging from laptops to accelerators and clusters. The resulting reports provide various metrics and statistics that can be analyzed both numerically and visually. In this paper, we introduce ELAPS and illustrate its practical value in guiding critical performance decisions already in early development stages.

Peer Reviewed Conference Publications

  1. Program Generation for Small-Scale Linear Algebra Applications
    Daniele Spampinato, Diego Fabregat-Traver, Paolo Bientinesi and Markus Pueschel
    Proceedings of the International Symposium on Code Generation and Optimization, February 2018.
    @inproceedings{Spampinato2018:858,
        author    = "Daniele Spampinato and Diego Fabregat-Traver and Paolo Bientinesi and Markus Pueschel",
        title     = "Program Generation for Small-Scale Linear Algebra Applications",
        booktitle = "Proceedings of the International Symposium on Code Generation and Optimization",
        year      = 2018,
        address   = "Vienna, Austria",
        month     = feb
    }
    We present SLinGen, a program generation system for linear algebra. The input to SLinGen is an application expressed mathematically in a linear-algebra-inspired language (LA) that we define. LA provides basic scalar/vector/matrix additions/multiplications, higher-level operations including linear systems solvers, Cholesky and LU factorizations, as well as loops. The output of SLinGen is performance-optimized single-source C code, optionally vectorized with intrinsics. The target of SLinGen is small-scale computations on fixed-size operands, for which a straightforward implementation using optimized libraries (e.g., BLAS or LAPACK) is known to yield suboptimal performance (besides increasing code size and introducing dependencies), but which are crucial in control, signal processing, computer vision, and other domains. Internally, SLinGen uses synthesis and DSL-based techniques to optimize at a high level of abstraction. We benchmark our program generator on three prototypical applications: the Kalman filter, Gaussian process regression, and an L1-analysis convex solver, as well as basic routines including Cholesky factorization and solvers for the continuous-time Lyapunov and Sylvester equations. The results show significant speed-ups compared to straightforward C with icc/clang or a polyhedral optimizer, as well as library-based and template-based implementations.
  2. The Generalized Matrix Chain Algorithm
    Henrik Barthels, Marcin Copik and Paolo Bientinesi
    Proceedings of the International Symposium on Code Generation and Optimization, February 2018.
    @inproceedings{Barthels2018:130,
        author    = "Henrik Barthels and Marcin Copik and Paolo Bientinesi",
        title     = "The Generalized Matrix Chain Algorithm",
        booktitle = "Proceedings of the International Symposium on Code Generation and Optimization",
        year      = 2018,
        address   = "Vienna, Austria",
        month     = feb
    }
    In this paper, we present a generalized version of the matrix chain algorithm to generate efficient code for linear algebra problems, a task for which human experts often invest days or even weeks of work. The standard matrix chain problem consists in finding the parenthesization of a matrix product $M := A_1 A_2 \cdots A_n$ that minimizes the number of scalar operations. In practical applications, however, one frequently encounters more complicated expressions, involving transposition, inversion, and matrix properties. Indeed, the computation of such expressions relies on a set of computational kernels that offer functionality well beyond the simple matrix product. The challenge then shifts from finding an optimal parenthesization to finding an optimal mapping of the input expression to the available kernels. Furthermore, it is often the case that a solution based on the minimization of scalar operations does not result in the optimal solution in terms of execution time. In our experiments, the generated code outperforms other libraries and languages on average by a factor of about 5. The motivation for this work comes from the fact that---despite great advances in the development of compilers---the task of mapping linear algebra problems to optimized kernels still has to be done manually. In order to relieve the user from this complex task, new techniques for the compilation of linear algebra expressions have to be developed.
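    For context, the standard matrix chain problem mentioned above can be solved with the classic dynamic program sketched below (a textbook Python version; the paper's generalized algorithm additionally handles transposition, inversion, matrix properties, and the mapping to computational kernels).

    def matrix_chain_order(dims):
        # dims[i-1] x dims[i] is the size of matrix A_i; returns the minimal number
        # of scalar multiplications and one optimal parenthesization.
        n = len(dims) - 1
        cost = [[0] * (n + 1) for _ in range(n + 1)]
        split = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):                    # subchain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                cost[i][j] = float("inf")
                for k in range(i, j):                     # try every split point
                    c = cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                    if c < cost[i][j]:
                        cost[i][j], split[i][j] = c, k

        def paren(i, j):
            if i == j:
                return "A%d" % i
            k = split[i][j]
            return "(%s %s)" % (paren(i, k), paren(k + 1, j))

        return cost[1][n], paren(1, n)

    # A1: 10x100, A2: 100x5, A3: 5x50  ->  (7500, '((A1 A2) A3)')
    print(matrix_chain_order([10, 100, 5, 50]))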
  3. Efficient Pattern Matching in Python
    Manuel Krebber, Henrik Barthels and Paolo Bientinesi
    Proceedings of the 7th Workshop on Python for High-Performance and Scientific Computing, November 2017.
    In conjunction with SC17: The International Conference for High Performance Computing, Networking, Storage and Analysis.
    @inproceedings{Krebber2017:404,
        author    = "{Manuel } Krebber and Henrik Barthels and Paolo Bientinesi",
        title     = "Efficient Pattern Matching in Python",
        booktitle = "Proceedings of the 7th Workshop on Python for High-Performance and Scientific Computing",
        year      = 2017,
        month     = nov,
        note      = "In conjunction with SC17: The International Conference for High Performance Computing, Networking, Storage and Analysis",
        url       = "https://arxiv.org/pdf/1710.00077.pdf"
    }
    Pattern matching is a powerful tool for symbolic computations. Applications include term rewriting systems, as well as the manipulation of symbolic expressions, abstract syntax trees, and XML and JSON data. It also allows for an intuitive description of algorithms in the form of rewrite rules. We present the open source Python module MatchPy, which offers functionality and expressiveness similar to the pattern matching in Mathematica. In particular, it includes syntactic pattern matching, as well as matching for commutative and/or associative functions, sequence variables, and matching with constraints. MatchPy uses new and improved algorithms to efficiently find matches for large pattern sets by exploiting similarities between patterns. The performance of MatchPy is investigated on several real-world problems.
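    A minimal usage example, along the lines of the MatchPy documentation (exact names and signatures may differ between versions): define an operation, build a pattern containing a wildcard, and match it against a subject.

    from matchpy import Arity, Operation, Pattern, Symbol, Wildcard, match

    f = Operation.new('f', Arity.binary)      # a binary function symbol
    a = Symbol('a')
    x_ = Wildcard.dot('x')                    # wildcard matching a single term

    pattern = Pattern(f(x_, a))
    subject = f(a, a)

    for substitution in match(subject, pattern):
        print(substitution)                   # expected: {'x': Symbol('a')}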
  4. Linnea: Compiling Linear Algebra Expressions to High-Performance Code
    Henrik Barthels and Paolo Bientinesi
    Proceedings of the 8th International Workshop on Parallel Symbolic Computation, July 2017.
    @inproceedings{Barthels2017:254,
        author    = "Henrik Barthels and Paolo Bientinesi",
        title     = "Linnea: Compiling Linear Algebra Expressions to High-Performance Code",
        booktitle = "Proceedings of the 8th International Workshop on Parallel Symbolic Computation",
        year      = 2017,
        month     = jul,
        url       = "http://hpac.rwth-aachen.de/~barthels/publications/PASCO_2017.pdf"
    }
    Linear algebra expressions appear in fields as diverse as computational biology, signal processing, communication technology, finite element methods, and control theory. Libraries such as BLAS and LAPACK provide highly optimized building blocks for just about any linear algebra computation; thus, a linear algebra expression can be evaluated efficiently by breaking it down into those building blocks. However, this is a challenging problem, requiring knowledge in high-performance computing, compilers, and numerical linear algebra. In this paper we give an overview of existing solutions, and introduce Linnea, a compiler that solves this problem. As shown through a set of test cases, Linnea’s results are comparable with those obtained by human experts.
  5. MatchPy: A Pattern Matching Library
    Manuel Krebber, Henrik Barthels and Paolo Bientinesi
    Proceedings of the 15th Python in Science Conference, July 2017.
    @inproceedings{Krebber2017:550,
        author    = "{Manuel } Krebber and Henrik Barthels and Paolo Bientinesi",
        title     = "MatchPy: A Pattern Matching Library",
        booktitle = "Proceedings of the 15th Python in Science Conference",
        year      = 2017,
        month     = jul,
        url       = "http://conference.scipy.org/proceedings/scipy2017/pdfs/manuel_krebber.pdf"
    }
    Pattern matching is a powerful tool for symbolic computations, based on the well-defined theory of term rewriting systems. Application domains include algebraic expressions, abstract syntax trees, and XML and JSON data. Unfortunately, no lightweight implementation of pattern matching as general and flexible as Mathematica exists for Python (Mathics, MacroPy, patterns, PyPatt). Therefore, we created the open source module MatchPy, which offers similar pattern matching functionality in Python using a novel algorithm which finds matches for large pattern sets more efficiently by exploiting similarities between patterns.