List of Projects

  • A Task-based Programming Environment to Develop Reactive HPC Applications – Chameleon

    The architecture of HPC systems is becoming increasingly complex. The BMBF-funded Chameleon project addresses dynamic variability in HPC systems, which is constantly increasing. Today's programming approaches are often not designed for highly variable systems and may, in the future, be able to exploit only a fraction of the true performance capabilities of modern systems.

    To this end, Chameleon develops a task-based programming environment that is better prepared for systems with dynamic variability than bulk synchronous programming models commonly used today. Results from Chameleon are expected to influence the OpenMP programming model.

    The ability of the Chameleon runtime environment to react to dynamic variability is evaluated using two applications. SeisSol simulates complex earthquake scenarios and the resulting propagation of seismic waves. Its parallel processing at the node level is based on an explicitly implemented task queue that takes priority relations between the tasks into account. Chameleon's reactive task-based implementation is designed to simplify this task queue and improve scaling.

    sam(oa)² enables finite volume and finite element simulations on dynamic adaptive triangular grids. It implements load balancing with the help of space-filling curves and can be used, among other things, for the simulation of tsunami events. For sam(oa)², Chameleon will enable dynamic execution of tasks on remote MPI processes and develop a reactive infrastructure for general 1D load balancing problems.
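
    As a rough illustration of the node-level pattern described above (a hedged sketch, not Chameleon or SeisSol code), the following C program uses the OpenMP task priority clause, available since OpenMP 4.5, to give urgent tasks precedence; process_cell is a hypothetical placeholder for the per-cell work of a solver:

        #include <stdio.h>
        #include <omp.h>

        /* Sketch only: prioritized node-level tasks with the OpenMP
         * priority clause (OpenMP 4.5). process_cell() is a hypothetical
         * placeholder for per-cell solver work. Compile with -fopenmp. */
        static void process_cell(int cell) {
            printf("cell %2d done by thread %d\n", cell, omp_get_thread_num());
        }

        int main(void) {
            #pragma omp parallel
            #pragma omp single
            for (int cell = 0; cell < 16; cell++) {
                /* Hint to the runtime: cells whose results are needed early,
                 * e.g. at partition boundaries, get a higher priority. */
                int prio = (cell < 4) ? 10 : 0;
                #pragma omp task priority(prio) firstprivate(cell)
                process_cell(cell);
            }
            return 0;
        }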

    Further information can be found on the project homepage.

  • Development of a Scalable Data-Mining-based Prediction Model for ICT and Power Systems – ScaMPo

    The complexity of power grids and, at the same time, of supercomputers is permanently increasing. In particular, the growing share of renewable energy in power generation implies fundamental changes in the power grid: a growing number of participants are able to both produce and consume power, so every change in behavior influences the grid.

    In the area of supercomputing, system complexity increases as well. For instance, the new RWTH cluster CLAIX, scheduled to start operation in November 2016, will have 600 two-socket compute nodes, and each CPU will have 24 cores (48 cores including hyperthreading). Parts of the system will be accelerated by GPUs, and the power consumption of the whole system as well as of each component will be continuously monitored. Changes to the software stack and any replacement of a defective hardware component influence the performance, the power consumption, and also the failure rate. In particular, forecasting the impact of such changes and their long-term effects, such as the reduction of failure rates, is a major challenge.

    Data mining, in principle a computational process of discovering patterns in large data sets, is the key technology for handling such complex systems and, in general, one of the key technologies for the digital society. Based on these two examples, complex power grids and supercomputers, the project ScaMPo creates a scalable framework to collect the data and store it in a cloud infrastructure. Afterwards, the data will be analyzed, patterns will be discovered, and the understanding of the system will be improved. For supercomputers, operating costs will be reduced, while for power grids, stability and the penetration of renewable energy will be increased.

    This project will not develop new data mining techniques. Rather, it will build on open-source data mining approaches and focus on the strength of the project partners: the design of a scalable and robust approach. The long-term vision of the project is to generalize the approach to other research areas and to create a competence center for scalable data mining technologies.
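
    To make the notion of pattern discovery concrete, here is a small, hypothetical C sketch (not ScaMPo code) that flags anomalous node power samples with a simple z-score test; the hard-coded wattage values are invented, and a real monitoring pipeline would apply far more sophisticated methods at much larger scale:

        #include <math.h>
        #include <stdio.h>

        /* Hypothetical illustration: flag power samples that deviate from
         * the mean by more than two standard deviations. Compile with -lm. */
        int main(void) {
            double watts[] = { 310, 305, 312, 308, 470, 309, 306, 311 };
            int n = sizeof watts / sizeof watts[0];

            double mean = 0.0, var = 0.0;
            for (int i = 0; i < n; i++) mean += watts[i];
            mean /= n;
            for (int i = 0; i < n; i++) var += (watts[i] - mean) * (watts[i] - mean);
            double stddev = sqrt(var / n);

            for (int i = 0; i < n; i++) {
                double z = (watts[i] - mean) / stddev;
                if (fabs(z) > 2.0)   /* far from the mean: possible fault */
                    printf("sample %d (%.0f W) is anomalous (z = %.1f)\n",
                           i, watts[i], z);
            }
            return 0;
        }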

  • Efficient Runtime Support for Future Programming Standards – ELP

    Because of the increasing number of cores on a single chip, and in order to profit from the potential of accelerators in compute clusters, message passing paradigms like MPI alone are often no longer sufficient to utilize the hardware in an optimal way. Therefore, a growing number of applications will employ a hybrid approach to parallelization, such as MPI+OpenMP, MPI+OpenACC, or even MPI+OpenMP+OpenACC. The recent version 4.0 of the OpenMP specification addresses this by incorporating programming support for accelerator devices and for the SIMD units of modern microarchitectures. All of this increases the complexity of application development and correctness checking for parallel applications. In ELP, a modified OpenMP runtime will be developed that delivers runtime-internal information to correctness analysis tools like MUST and debuggers like DDT, allowing certain error classes to be detected automatically. The data will also be used by the performance analysis tool Vampir to better understand the performance behavior of an application.
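
    The hybrid style mentioned above can be illustrated with a minimal MPI+OpenMP program (a sketch only, not ELP code); MPI_Init_thread requests the level of thread support that the combined model needs:

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        /* Minimal hybrid MPI+OpenMP example: each MPI rank spawns an
         * OpenMP thread team. MPI_THREAD_FUNNELED means only the master
         * thread will make MPI calls. */
        int main(int argc, char **argv) {
            int provided, rank;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            #pragma omp parallel
            printf("rank %d, thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());

            MPI_Finalize();
            return 0;
        }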

  • Jülich Aachen Research Alliance (JARA)

    The Jülich Aachen Research Alliance, JARA for short, a cooperation between RWTH Aachen University and Forschungszentrum Jülich, provides a research environment of the highest international standing that is attractive to the brightest researchers worldwide. In six sections, JARA conducts research in translational brain medicine, nuclear and particle physics, soft matter science, future information technologies, high-performance computing, and sustainable energy.

    The RWTH Aachen IT Center supports the JARA-HPC section, which aims, first and foremost, to make full use of the opportunities offered by high-performance computers and computer simulations for addressing current research issues. Furthermore, it seeks to provide a joint infrastructure for research and teaching in the fields of high-performance computing and visualization. more

  • MUST Correctness Checking for YML and XMP Programs – MYX

    Exascale systems challenge the programmer to write multi-level parallel programs, i.e. to employ multiple different paradigms to address each individual level of parallelism in the system. The long-term challenge is to evolve existing programming models and to develop new ones that better support application development on exascale machines. In the multi-level programming paradigm FP3C, users can express high-level parallelism in the YvetteML workflow language (YML) and employ parallel components written in the XcalableMP (XMP) paradigm. XMP is a PGAS language specified by Japan's PC Cluster Consortium for high-level programming and the main research vehicle of Japan's post-petascale programming model research targeting exascale. YML is used to describe the parallelism of an application at a very high level, in particular to couple complex applications. By developing correctness checking techniques for both paradigms, and by investigating the fundamental requirements to first design for and then verify the correctness of parallelization paradigms, MYX aims to combine the know-how and lessons learned from different areas to derive the input necessary to guide the development of future programming models and software engineering methods.

    In MYX, we will investigate the application of scalable correctness checking methods to YML, XMP, and selected features of MPI. This will result in clear guidelines on how to limit the risk of introducing errors and how best to express parallelism, as well as in extended and scalable correctness checking methods for errors that, for fundamental reasons, can only be detected at runtime.
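
    One example of an error class that, for fundamental reasons, can only be detected at runtime is the following (an illustration, not MYX code): whether the two blocking sends deadlock depends on the message size and on implementation-internal buffering, which no static check can decide, while runtime tools such as MUST can detect it:

        #include <mpi.h>

        /* Potential deadlock: with large messages, MPI_Send on both ranks
         * may block waiting for a matching receive that is never posted in
         * time. Whether it deadlocks depends on the MPI implementation's
         * eager/rendezvous threshold. Run with exactly 2 ranks. */
        static double buf[1 << 20];

        int main(int argc, char **argv) {
            int rank, peer;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            peer = 1 - rank;

            MPI_Send(buf, 1 << 20, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 1 << 20, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

            MPI_Finalize();
            return 0;
        }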

    Further information can be found on the SPPEXA webpage and the webpage of the German Research Foundation (DFG).

  • OpenMP - Focus on Shared-Memory Parallelization

    Since 1998, the High Performance Computing team of the Center for Computing and Communication of RWTH Aachen University has been engaged in the topic of shared-memory parallelization with OpenMP. OpenMP is used productively on the Center's currently largest shared-memory computers with up to 1024 processor cores. Meanwhile, OpenMP also supports heterogeneous systems as well as vectorization. more
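
    A minimal example of shared-memory parallelization with OpenMP (an illustration only): the parallel for construct splits the loop across the threads of the team, and the simd clause, part of the vectorization support mentioned above, additionally asks the compiler to vectorize each thread's chunk:

        #include <stdio.h>

        #define N 1000000

        static double a[N], b[N];

        int main(void) {
            for (int i = 0; i < N; i++) b[i] = i;

            /* Work-shared across threads, vectorized within each thread. */
            #pragma omp parallel for simd
            for (int i = 0; i < N; i++)
                a[i] = 2.0 * b[i] + 1.0;

            printf("a[42] = %.1f\n", a[42]);   /* expect 85.0 */
            return 0;
        }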

  • Performance, Optimization and Productivity - POP

    Using performance analysis tools and optimizing code for HPC architectures is a cumbersome task that often requires in-depth HPC expertise. Given the current trends in computing architectures toward accelerators, more cores, and deeper memory hierarchies, this complexity will increase further in the foreseeable future. The POP (Performance, Optimization and Productivity) project therefore offers performance analysis and optimization services to code developers in industry and academia, connecting code developers with HPC experts. This makes it possible to integrate performance optimization into the software development process of HPC applications. The POP project gathers experts from the Barcelona Supercomputing Center (BSC), the High Performance Computing Center Stuttgart (HLRS), the Jülich Supercomputing Centre (JSC), the Numerical Algorithms Group (NAG), TERATEC, and the IT Center of RWTH Aachen University.

    POP is one of the eight Centers of Excellence in HPC that have been promoted by the European Commission within Horizon 2020.

    Further information on the project and how you can engage the services of POP can be found here.

  • Process-Oriented Performance Engineering Service Infrastructure - ProPE

    ProPE is a project funded by the German Research Foundation (DFG) from 2017 to 2020. It aims at developing a blueprint for a sustainable, structured, and process-oriented service infrastructure for performance engineering (PE) of high-performance applications at German tier-2 and tier-3 scientific computing centers.

    The vision of ProPE is to have a nationwide support infrastructure which allows application scientists to develop and use code with provably optimal hardware resource utilization on high performance systems, thus reducing IT costs of scientific progress.

    Further information can be found on the project homepage.

  • Scalable Tools for the Analysis and Optimization of Energy Consumption in HPC - Score-E

    For some time, computing centres have been feeling the severe financial impact of the energy consumption of modern computing systems, especially in the area of high-performance computing (HPC). Today, energy already accounts for a third of the total cost of ownership, and this share is continuously growing. The main objective of the Score-E project, funded under the 3rd "HPC software for scalable parallel computers" call of the German Federal Ministry of Education and Research (BMBF), is to provide user-friendly analysis and optimization tools for the energy consumption of HPC applications. more
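
    As a hedged sketch of the kind of low-level measurement such tools can build on (an assumption-laden illustration, not Score-E code), the following C program reads the cumulative package energy counter that Linux exposes through the powercap/RAPL interface; the sysfs path assumes an Intel system, the counter is reported in microjoules and may wrap around, and reading it may require elevated privileges:

        #include <stdio.h>
        #include <unistd.h>

        /* Read the cumulative package energy counter (microjoules) from the
         * Linux powercap/RAPL interface. Path assumes an Intel system. */
        static long long read_energy_uj(void) {
            long long uj;
            FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
            if (!f) return -1;
            if (fscanf(f, "%lld", &uj) != 1) uj = -1;
            fclose(f);
            return uj;
        }

        int main(void) {
            long long before = read_energy_uj();
            sleep(1);                   /* stand-in for the real workload */
            long long after = read_energy_uj();
            if (before >= 0 && after >= before)
                printf("package energy over 1 s: %.3f J\n",
                       (after - before) / 1e6);
            else
                printf("energy counter unavailable (or it wrapped)\n");
            return 0;
        }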

  • UNiform Integrated Tool Environment - UNITE

    High-performance clusters often provide multiple MPI libraries and compiler suites for parallel programming. This means that parallel programming tools, which often depend on a specific MPI library and sometimes on a specific compiler, need to be installed multiple times, once for each combination of MPI library and compiler that has to be supported. In addition, newer versions of the tools get released and installed over time. One way to manage many different versions of software packages, used by computing centers all over the world, is the "module" software. However, each center provides a different set of tools and has a different policy on how and where to install different software packages and how to name the different versions. UNITE tries to improve this situation for debugging and performance tools. more

  • Virtual Institute - High Productivity Supercomputing

    Sponsored by the Helmholtz Association of German Research Centers, the Virtual Institute - High Productivity Supercomputing (VI-HPS) aims at improving the quality and accelerating the development process of complex simulation programs in science and engineering that are being designed for the most advanced parallel computer systems. The IT Center of RWTH Aachen University focuses on improving the usability of state-of-the-art programming tools for high-performance computing developed by the partner institutions. more