
Advances in parallel programming

Recent advances in hardware technology have led to the availability of increasingly powerful architectures. In the single-system scenario, processors with an impressive number of cores and/or hardware thread contexts are available, often equipped with different kinds of accelerators, from “simple” vector units to GP-GPUs, FPGAs, NPUs and specialized ASIC devices. In the cluster scenario, hardware-accelerated and specialized networking architectures are available, suitable for interconnecting these advanced and highly parallel single-system nodes. These hardware advances demand programming models and tools that raise the level of abstraction presented to programmers while still exploiting the peculiar features of the hardware, so that performance close to the peak can be achieved without requiring highly specific knowledge and expertise from application programmers. In this lecture, we discuss the new programming models and tools available, and we outline how different technologies can be used to improve compiler/interpreter toolchains.
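To give a flavour of what “raising the level of abstraction” means in structured parallel programming, the sketch below shows a minimal map skeleton: the programmer supplies only the sequential business-logic function, while the skeleton hides thread creation, scheduling and result collection. This is an illustrative example in Python, not code from the lecture or from any specific framework; the function name `parallel_map` and the worker count are assumptions made here for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(fn, items, workers=4):
    """Minimal 'map' algorithmic skeleton.

    The caller provides only the sequential function `fn`; the skeleton
    takes care of distributing the items over a pool of workers and
    collecting the results in order.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

# Usage: the application programmer writes no threading code at all.
squares = parallel_map(lambda x: x * x, range(8))
```

The same separation of concerns underlies skeleton frameworks such as FastFlow, where composable building blocks (pipelines, farms) play the role that `parallel_map` plays in this toy sketch.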

Speaker

Marco Danelutto – University of Pisa & CINI lab HPC-KTT

Prof. Marco Danelutto is a full professor at the Department of Computer Science of the University of Pisa. His main research interests are in the field of structured parallel programming models for parallel and distributed architectures, and include the design of parallel programming frameworks, tools to support parallel program development, autonomic management of non-functional features, software components, parallel design patterns and algorithmic skeletons. Danelutto actively participates in the design and development of FastFlow, a structured, highly efficient parallel programming framework targeting heterogeneous multi-/many-core architectures.
He has participated, and continues to participate, in various international research projects, including EU-funded ones: the CoreGRID NoE (as director of the Programming Model Institute) and GridCOMP (FP6), ParaPhrase and REPARA (FP7), RePhrase (H2020), and ADMIRE and TEXTAROSSA (EuroHPC-2019). He is the author of about 200 papers in international refereed journals and conferences.

Event Timeslots (1)

Tue 17 – Programming Models & Tools
M. Danelutto (University of Pisa & CINI lab HPC-KTT)