New Computing Paradigms and Heterogeneous Parallel Architectures for High-Performance and Energy Efficiency of Classification and Optimization Tasks on Biomedical Engineering Applications (HPEE-COBE)

GOALS


In recent years, the basic hypothesis of our previous projects TIN2012-32039, TIN2015-67020-P, and PSI2015-65848-R has been confirmed: the computational capacity of present and future heterogeneous parallel architectures makes possible the efficient processing of new complex machine learning models, such as deep neural networks, thus motivating and increasing confidence in their usefulness for solving some of the relevant challenges facing society. Indeed, the lecture given by J. L. Hennessy and D. A. Patterson after receiving the 2017 ACM A.M. Turing Award, and papers such as [DEA18], describe a new golden age for computer architecture that, as Moore's law and Dennard scaling come to an end, is enabling the transition of Artificial Intelligence from the research laboratory to real applications. These applications pose algorithms of high computational complexity operating on large data sets of high-dimensional patterns, usually affected by the so-called curse of dimensionality.


As in our previous projects, we consider that many techniques for solving classification, clustering, and optimization problems can benefit from present and future heterogeneous parallel computer architectures, not only in terms of execution time but also with respect to energy consumption, once a good compromise between locality and parallelism is reached (a tiled-loop sketch at the end of this section makes this compromise concrete). Based on this central hypothesis, this project pursues two interrelated general objectives:

  1. The proposal of new parallel algorithms to improve applications such as developmental dyslexia diagnosis, brain-computer interfaces (BCI), and context-aware modeling in healthcare, which pose classification problems over high-dimensional patterns requiring feature selection (a minimal feature-selection sketch follows this list); and
  2. Their efficient implementation, in terms of execution time and energy efficiency, on heterogeneous parallel architectures that include superscalar CPU cores and accelerators such as GPUs, vector units, and other application-specific processors such as TPUs (Tensor Processing Units); an offloading sketch closes this section.
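
To make the locality versus parallelism compromise mentioned above concrete, the following minimal sketch combines cache blocking with OpenMP multithreading in a dense matrix product. All identifiers and the tile size are illustrative assumptions, not part of the project's codebase; the best tile size depends on the cache hierarchy of the target CPU.

```cpp
// Minimal sketch (assumed names and sizes): a cache-blocked matrix product
// parallelized with OpenMP. Threads work on disjoint BSxBS tiles of C
// (parallelism) while each block of A and B is reused from cache (locality).
// Compile with, e.g., g++ -O3 -fopenmp. C is assumed zero-initialized.
#include <vector>

constexpr int N  = 1024;  // matrix dimension (illustrative)
constexpr int BS = 64;    // tile size: tunes the locality/parallelism balance

void matmul_tiled(const std::vector<float>& A,
                  const std::vector<float>& B,
                  std::vector<float>& C) {
    #pragma omp parallel for collapse(2) schedule(static)
    for (int ii = 0; ii < N; ii += BS)
        for (int jj = 0; jj < N; jj += BS)
            // Each (ii, jj) tile of C is updated by a single thread,
            // so no synchronization is needed inside the kk loop.
            for (int kk = 0; kk < N; kk += BS)
                for (int i = ii; i < ii + BS; ++i)
                    for (int k = kk; k < kk + BS; ++k) {
                        const float a = A[i * N + k];
                        for (int j = jj; j < jj + BS; ++j)
                            C[i * N + j] += a * B[k * N + j];
                    }
}
```

Larger tiles improve cache reuse but leave fewer independent tiles for the threads; tuning this balance per architecture is exactly the compromise the central hypothesis refers to.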
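
Regarding the first objective, feature selection over high-dimensional patterns can be illustrated by a simple filter method. The sketch below ranks the features of a two-class data set by their Fisher score and keeps the k best; the data layout and all identifiers are assumptions made for illustration, and the project is not committed to this particular criterion.

```cpp
// Minimal sketch (illustrative layout and names): filter-based feature
// selection ranking features by Fisher score, (m0 - m1)^2 / (v0 + v1),
// where m and v are the per-class mean and variance of the feature.
// Assumes both classes are present and labels y[s] are 0 or 1.
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// X is row-major: n_samples x n_features.
std::vector<std::size_t> select_top_k(const std::vector<float>& X,
                                      const std::vector<int>& y,
                                      std::size_t n_samples,
                                      std::size_t n_features,
                                      std::size_t k) {
    std::vector<double> score(n_features, 0.0);
    for (std::size_t f = 0; f < n_features; ++f) {
        double m[2] = {0, 0}, v[2] = {0, 0};
        std::size_t n[2] = {0, 0};
        for (std::size_t s = 0; s < n_samples; ++s) {
            m[y[s]] += X[s * n_features + f];
            ++n[y[s]];
        }
        m[0] /= n[0]; m[1] /= n[1];
        for (std::size_t s = 0; s < n_samples; ++s) {
            const double d = X[s * n_features + f] - m[y[s]];
            v[y[s]] += d * d;
        }
        v[0] /= n[0]; v[1] /= n[1];
        score[f] = (m[0] - m[1]) * (m[0] - m[1]) / (v[0] + v[1] + 1e-12);
    }
    // Indices of the k highest-scoring features.
    std::vector<std::size_t> idx(n_features);
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&](std::size_t a, std::size_t b) {
                          return score[a] > score[b];
                      });
    idx.resize(k);
    return idx;
}
```

Note that the per-feature scores are mutually independent, so this kind of filter parallelizes naturally over features, which is what makes it attractive for the architectures targeted by the second objective.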
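
For the second objective, one portable way to target the accelerators mentioned above from a single source is OpenMP offloading. The following sketch assumes a compiler built with offload support (e.g., g++ or clang++ with -fopenmp and a GPU target); the function and variable names are illustrative.

```cpp
// Minimal sketch (assumed names): offloading a data-parallel kernel to an
// accelerator with OpenMP target directives. The map clauses make the
// host<->device transfers explicit; if no device is available, the loop
// falls back to running on the host CPU.
#include <cstddef>
#include <vector>

void saxpy_offload(float a, std::vector<float>& x, const std::vector<float>& y) {
    float* xp = x.data();          // raw pointers: map clauses require them
    const float* yp = y.data();
    const std::size_t n = x.size();
    #pragma omp target teams distribute parallel for \
        map(tofrom: xp[0:n]) map(to: yp[0:n])
    for (std::size_t i = 0; i < n; ++i)
        xp[i] = a * xp[i] + yp[i];
}
```

The same directives retarget multicore CPUs and GPUs without source changes, which is one way a single code base can be evaluated for both execution time and energy consumption across the heterogeneous platforms the project considers.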