"Integrative Production Technology for High-Wage Countries" Excellence Cluster
In the Aachen Cluster of Excellence "Integrative Production Technology for High-Wage Countries", a consortium of 18 institutes at RWTH Aachen University is working on the development of new techniques and concepts to increase the competitiveness of production technology in Germany.
In this interdisciplinary project, the VR Group of RWTH Aachen University is responsible for reducing the occupancy of real manufacturing capacities during process optimization by using virtual production systems, for acquiring process data that is difficult to measure directly with the help of simulation approaches, for the realistic virtual mapping of machine tools, and for coupling highly specialized simulation systems to capture cross-physics effects. More information can be found on the project website.
5G Industry Campus Europe
The aim of the 5G Industry Campus Europe is to research and practically investigate the new 5G technology in the manufacturing industry. New applications and systems are being developed to further digitalize and network production. Edge-cloud systems will also be used to test the fastest possible data processing. The network of the 5G Industry Campus covers an outdoor area of about one square kilometer and 7,000 square meters in the machine halls of the participating partners. There, the project partners will investigate various application scenarios over the next three years. These scenarios will focus, among other things, on 5G sensor technology for monitoring and controlling manufacturing processes, on mobile robotics and logistics, and on the development of cross-location production chains.
As a cooperation partner, the IT Center is responsible for the fiber optic connection of the various subnetworks, the operation of the 5G network and the outdoor antennas of the 5G Industry Campus. More information can be found on the project website.
Applying Interoperable Metadata Standards (AIMS)
The Applying Interoperable Metadata Standards (AIMS) project aims to enable scientists to create, share and reuse metadata standards. To this end, tools and workflows are being developed for the effortless creation of standardized metadata during research, which at the same time increase the efficiency of data processing. As an integral part, the created standards will be stored, indexed and made publicly available. A concept for the adaptation, inheritance and versioning of these metadata standards will allow backward-compatible derivatives to be generated, increasing the reusability of the standards, accelerating collaboration, and letting a standard evolve toward community-wide acceptance.
Project duration: from October 2022 Funding: German Federal Ministry for Economic Affairs and Climate Protection
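The inheritance and versioning idea can be sketched in a few lines. The following is a hypothetical illustration, not AIMS code: the schema fields, names and the `derive` API are invented for this example. A derivative stays backward compatible as long as it only adds optional fields, so every record that validates against the parent also validates against the child.

```python
# Hypothetical sketch of backward-compatible metadata schema inheritance.
# Schema names, fields and the API are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Schema:
    name: str
    version: str
    required: set = field(default_factory=set)
    optional: set = field(default_factory=set)

    def derive(self, name, version, new_optional=()):
        """Create a derivative that inherits all fields; new fields may
        only be optional, which keeps the derivative backward compatible."""
        return Schema(name, version,
                      required=set(self.required),
                      optional=set(self.optional) | set(new_optional))

    def validates(self, record: dict) -> bool:
        keys = set(record)
        return self.required <= keys and keys <= self.required | self.optional

base = Schema("eng-sim", "1.0", required={"author", "date"}, optional={"unit"})
derived = base.derive("eng-sim-cfd", "1.1", new_optional={"solver"})

record = {"author": "A. N. Example", "date": "2024-01-01"}
# A record valid under the base schema stays valid under the derivative.
print(base.validates(record), derived.validates(record))
```

Because `derive` never adds required fields, old records remain valid while new records can carry the extra metadata.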
European Digital Innovation Hub (EDIH)
The "European Digital Innovation Hub" (EDIH) is a European initiative to promote the innovative capacity of small and medium-sized enterprises. Participating companies are supported by the partners in the areas of digitalization, machine learning and artificial intelligence, and high-performance computing (HPC). In the area of HPC, the IT Center, in close cooperation with the Institute of Aerodynamics (AIA) and the Chair for Computational Analysis of Technical Systems (CATS), offers consulting and training in parallel programming and scientific simulation to enable companies to take up high-performance computing for the first time or to make existing developments in this area more efficient and economical.
EE-HPC - Energy Efficient High Performance Computing
Project duration: October 2022 - August 2025 Funding: BMBF Green HPC
Together with HLRS, the IT Center is developing a software library for fine-grained energy optimization in parallel MPI and OpenMP applications. ClusterCockpit, a real-time monitoring and management software, provides valuable insights into the power consumption and utilization of the HPC environment. The IT Center plays a key role in interface development and integrates energy-efficient practices into its operations to promote sustainable computing. The ICON weather-model simulation serves as the test application. Together, the partners are working toward environmentally conscious computing that addresses scientific research and complex computing challenges in a sustainable manner.
More information is available on the project website.
ENSIMA - Energy-Efficient HPC Through Optimized Simulation Methods
Project duration: October 2022 - September 2025 Funding: BMBF Green HPC
The project "Energy-Optimized Simulation Methods for Application-Oriented Computing Problems" - in short "ENSIMA" - is one of nine third-party funded projects in the BMBF program "Energy-Efficient HPC (GreenHPC)". The project started on October 1, 2022 and is funded for three years. In addition to RWTH Aachen University as coordinator, the project partners are TU Darmstadt, Forschungszentrum Jülich, Gesellschaft für numerische Simulation mbH, GNS Systems GmbH, SIMCON kunststofftechnische Software GmbH and Technische Hochschule Würzburg-Schweinfurt, bringing together partners from academia and industry.
The basic goal of the project is to use AI methods to improve the selection of design parameters in production processes and to reduce the execution time of simulation processes through approximate and heterogeneous computing. Innovative solutions should reduce the number of finite element simulations required, which are used for many engineering problems such as crash simulations. For the use case of sheet metal forming in the automotive industry, the project partners want to reduce the computation time by 50%, which should lead to a 15% reduction in the use of steel and thus indirectly to a reduction in both manufacturing-related emissions and the energy required for vehicle production.
More information can be found on the project website.
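One common way to cut down the number of expensive simulation runs (shown here as an illustration, not as ENSIMA's actual method) is surrogate-based screening: a cheap model is fitted to a handful of real runs and used to pre-select promising design parameters, so the expensive simulation is only invoked for the shortlist. The `expensive_simulation` function and all numbers below are invented stand-ins.

```python
# Illustrative surrogate-based screening sketch (not ENSIMA's method).
import numpy as np

def expensive_simulation(x):
    """Invented stand-in for a finite element run (cost vs. one design parameter)."""
    return (x - 0.3) ** 2 + 0.05 * np.sin(25 * x)

rng = np.random.default_rng(0)
train_x = rng.uniform(0, 1, 8)                  # only 8 real simulation runs
train_y = np.array([expensive_simulation(x) for x in train_x])

coeffs = np.polyfit(train_x, train_y, 3)        # cheap polynomial surrogate
surrogate = np.poly1d(coeffs)

candidates = np.linspace(0, 1, 200)             # 200 candidate designs
best = candidates[np.argsort(surrogate(candidates))[:5]]   # shortlist of 5

# Only 8 + 5 expensive runs instead of 200:
verified = min(best, key=expensive_simulation)
print(f"chosen design parameter: {verified:.3f}")
```

The same pattern scales to multi-dimensional parameter spaces, where the savings over exhaustive simulation grow accordingly.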
Heuristics for Heterogeneous Memory (H2M)
In the DFG-funded project "Heuristics for Heterogeneous Memory" (H2M), RWTH Aachen University and the French project partner Inria are jointly developing support for new memory technologies such as High Bandwidth Memory (HBM) and Non-Volatile Memory (NVRAM). These technologies are increasingly being used alongside traditional Dynamic Random Access Memory (DRAM) in HPC systems. HBM offers higher bandwidth but smaller capacity than DRAM; NVRAM offers greater capacity but lower speed. Given these differences, the question is how to use systems with heterogeneous memory efficiently and where data should be stored.
For more information, see the project website.
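A placement heuristic of the kind the project studies can be sketched as a toy model (this is an invented illustration, not H2M's API; tier capacities and access intensities are made up): pack the most bandwidth-hungry allocations into the small, fast tiers first and let cold, large data overflow to the slower, larger tiers.

```python
# Toy greedy placement heuristic for heterogeneous memory tiers.
# Tiers, capacities and workload numbers are invented for illustration.
TIERS = [("HBM", 16), ("DRAM", 64), ("NVRAM", 512)]  # capacity in GiB

def place(allocations):
    """allocations: list of (name, size_gib, accesses_per_byte).
    Hottest data goes to the fastest tier that still has room."""
    free = {name: cap for name, cap in TIERS}
    placement = {}
    for name, size, heat in sorted(allocations, key=lambda a: -a[2]):
        for tier, _ in TIERS:            # TIERS is ordered fastest first
            if free[tier] >= size:
                free[tier] -= size
                placement[name] = tier
                break
        else:
            raise MemoryError(f"no tier can hold {name}")
    return placement

demo = [("checkpoint", 200, 0.01), ("field", 12, 5.0), ("halo", 2, 9.0)]
placement = place(demo)
print(placement)
```

Here the frequently accessed halo and field data land in HBM, while the rarely touched checkpoint spills past DRAM into NVRAM, which is exactly the kind of decision real heuristics must make from measured access patterns.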
HPC.NRW Competence Network
The HPC.NRW competence network offers an extensive range of HPC infrastructures together with a number of thematic clusters for low-threshold training, consulting and coaching services. The aim is to make effective and efficient use of high-performance computing and storage facilities and to support researchers at all levels, particularly at the post-graduate and post-doctoral stages. The sites of the competence network regularly exchange in depth on service and support for HPC users in NRW, on software and operation, and on HPC usage. More information can be found on the project page.
Human Brain Project - Interactive Visualization, Analysis, and Control
Within the Human Brain Project (HBP), started in October 2013, the Virtual Reality Group is leading the work package "Interactive Visualization, Analysis, and Control". The work package is part of the HBP subproject "Interactive Supercomputing" headed by the Jülich Supercomputing Centre, and includes partners from the École Polytechnique Fédérale de Lausanne, the Swiss Supercomputing Centre, the Universidad Politécnica de Madrid, and the Universidad Rey Juan Carlos de Madrid. Furthermore, the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia serves as an associate partner in this work package.
Hyperslice Visualization of Metamodels for Manufacturing Processes
In the second phase of the Cluster of Excellence “Integrative Production Technology for High-Wage Countries” the Virtual Production Intelligence (VPI) platform is being developed. It is a holistic and integrative concept for the support of collaborative planning, monitoring and control of core processes within production and product development in various application fields. In the context of the VPI we are developing a tool for the explorative visualization of multi-dimensional meta-model data, which is based on the concept of hyperslices, linked with a 3D volume representation. The tool is applicable to various scenarios of multi-dimensional data analysis, but the concrete use case in the current project is to find ideal configuration parameters for laser cutting machines in the context of factory planning.
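The hyperslice idea can be made concrete with a small sketch (an invented illustration, not the VPI tool itself; the metamodel and parameter names are made up): for an n-dimensional metamodel, fix a "focal point" and vary only two parameters at a time, producing a 2-D field that can be rendered as one panel of a hyperslice matrix.

```python
# Minimal hyperslice sketch: 2-D slices of an n-D metamodel around a
# focal point. The metamodel below is an invented stand-in.
import numpy as np

def metamodel(p):
    # Stand-in for a laser-cutting metamodel, e.g. p = (power, speed, focus, gas).
    return np.sin(p[0]) * np.cos(p[1]) + 0.1 * p[2] - 0.05 * p[3] ** 2

def hyperslice(f, focal, i, j, lo, hi, n=32):
    """Evaluate f on an n x n grid over dimensions i and j,
    keeping all other dimensions fixed at the focal point."""
    axis = np.linspace(lo, hi, n)
    out = np.empty((n, n))
    for a, xi in enumerate(axis):
        for b, xj in enumerate(axis):
            p = list(focal)
            p[i], p[j] = xi, xj
            out[a, b] = f(p)
    return out

focal = [0.5, 0.5, 0.5, 0.5]
panel = hyperslice(metamodel, focal, 0, 1, 0.0, 1.0)
print(panel.shape)
```

Plotting one such panel per parameter pair yields the matrix of slices that, linked with a 3-D volume view, lets an engineer explore where favorable machine configurations lie.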
Interactive Physically-based Sound Synthesis
While visual feedback is often the main focus in Virtual Environments, sound is another important modality. Auditory feedback can enhance the immersion and believability of virtual scenes. However, the common approach of using pre-recorded sound samples is often not feasible for highly interactive environments, because too many sounds would have to be recorded or designed upfront. Especially with physically simulated objects, a lot of sounds are generated by collisions between objects. The sound of such collisions depends on different factors: the material and shape of the objects, as well as the positions, strength and temporal distribution of the collision forces. Especially the latter aspects are difficult to reproduce with sound samples.
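A standard way to generate such sounds at runtime is modal synthesis: each material is modelled as a bank of damped sinusoids, and a collision excites the modes in proportion to the impact force. The sketch below illustrates the principle only; the mode frequencies, dampings and gains are made-up values, not measured material data.

```python
# Hedged sketch of modal impact-sound synthesis: a collision excites a
# bank of damped sinusoids, so no pre-recorded samples are needed.
import math

def impact_sound(modes, force, sr=44100, dur=0.5):
    """modes: list of (frequency_hz, damping, gain); returns raw samples."""
    n = int(sr * dur)
    out = [0.0] * n
    for f, d, g in modes:
        for t in range(n):
            time = t / sr
            out[t] += force * g * math.exp(-d * time) * math.sin(2 * math.pi * f * time)
    return out

# A "metallic" material: few, high, slowly decaying partials (invented values).
metal = [(440.0, 3.0, 0.6), (1247.0, 5.0, 0.3), (2310.0, 8.0, 0.1)]
samples = impact_sound(metal, force=0.8, dur=0.1)
print(len(samples))
```

Because the excitation scales with the collision force delivered by the physics engine, harder impacts automatically sound louder and brighter, covering the case that sample playback handles poorly.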
IT-ZAUBER - Digital Twins for Energy Efficient Data Centers
Project duration: September 2022 - August 2025 Funding: BMBF Green HPC
In the current BMBF project IT-ZAUBER, RWTH Aachen University, TU Dresden and ROM-Technik GmbH & Co. KG, together with the associated partners Sachsen Energie AG and ICT Facilities GmbH, are developing a digital twin as a digital image of the data center that captures its properties, condition and behavior through models, information and data. Its application possibilities as a digital tool for the planning and operation of data centers are to be demonstrated and evaluated by way of example at the two HPC centers of RWTH and TUD. The digital twin will be aware of the condition and behavior of the IT infrastructure as well as the cooling system and its integration into the other supply structures, and will be used for condition evaluations and target value determinations, thus going far beyond the state of current operating concepts. The project is one of nine collaborative research projects in the field of energy-efficient HPC (GreenHPC) with three years of funding from the BMBF (FKZ 16ME0614).
NFDI4Chem - National Data Infrastructure for Chemistry
The National Research Data Infrastructure for Chemistry, NFDI4Chem, represents all scientific disciplines in chemistry.
The vision of NFDI4Chem is to digitize all major steps in chemical research to support scientific personnel in the collection, storage, processing, analysis, disclosure and reuse of research data. The consortium is headed by the Friedrich Schiller University of Jena. Prof. Sonja Herres-Pawlis from RWTH Aachen University acts as co-spokesperson; the IT Center is a participant. More information can be found on the project website.
NFDI4Ing - National Data Infrastructure for Engineering
NFDI4Ing focuses on creating the necessary infrastructures for the engineering sciences and on further developing process analysis and description as well as research data publication and citation as criteria of scientific reputation. Formalized qualification and training concepts for scientific personnel and for experts in the infrastructure facilities are also offered.
The FAIR (Findable, Accessible, Interoperable, Reusable) principles for the recording and reuse of research data, most of which are already internationally accepted, are being developed further in a discipline-specific way.
In addition, the focus is on the appropriate networking and coordination of the various stakeholders, representatives of the respective disciplines and research data infrastructures.
Among other things, the consortium deals with cross-sectional topics in the engineering sciences and acts as a reliable partner in the NFDI. The tasks arising from the cross-sectional topics are solved in close cooperation with other consortia.
The consortium is headed by Prof. Robert Schmitt from RWTH Aachen University. The head office is located at the IT Center and at the TU Darmstadt. More information can be found on the project website.
NHR4CES - National High Performance Computing Center for Computational Engineering Science
In NHR4CES (National High Performance Computing Center for Computational Engineering Science), RWTH Aachen University and Technical University Darmstadt join forces to combine their strengths in HPC applications, algorithms and methods, and the efficient use of HPC hardware. The goal is to create an ecosystem combining best practices of HPC and research data management to address questions that are of central importance for technical developments in economy and society. More information can be found on the project website.
Regional Anesthesia Simulator and Assistant
The aim of this project is to gather European experts from very diverse fields, from computer science to anesthesiology, to put an innovative tool into the hands of medical doctors so that regional anesthesia can be performed more safely for the patient and at reduced cost to society. For that purpose, a virtual reality simulator and assistant will be developed, providing an innovative way for medical doctors to train extensively on virtual patients and to be assisted by additional patient-specific information during the procedure. The project, which started on 1 November 2013 and runs for three years, gathers experts from 10 European countries in a consortium of 14 academic, industrial and clinical partners. More information can be found on the project website of the University Hospital.
Research on AI- and Simulation-Based Engineering at Exascale (RAISE)
Project duration: June 2022 - May 2024 Funding: EU’s Horizon 2020
The “Research on AI- and Simulation-Based Engineering at Exascale” (RAISE) project is a European Center of Excellence in Exascale Computing funded by the European Commission under the Horizon 2020 Framework Programme. A consortium of 13 partners from academia and industry works together on the convergence of traditional HPC and innovative AI techniques along various use cases from different domains. Use cases involve compute-driven applications, such as AI for turbulent boundary layers, as well as data-driven applications like event reconstruction and classification at the CERN High-Luminosity Large Hadron Collider. Results obtained with these use cases are integrated into a Unique AI Framework that contains the trained models and documentation on how to use the developed AI techniques on current petaflop and future exaflop HPC systems.
More information can be found on the project website.
targetDART - Dynamic, Adaptive and Reactive Distribution of Compute Tasks on Heterogenous Exascale Architectures
Project duration: October 2022 - September 2025 Funding: BMBF SCALEXA
In the BMBF project targetDART, the Technical University of Munich (TUM), the High Performance Computing Center Stuttgart (HLRS) and RWTH Aachen University, represented by the IT Center, are developing a task-based approach for highly scalable simulation software. It compensates for unpredictable load imbalances on heterogeneous exascale systems by dynamically, adaptively and reactively distributing computational tasks between compute nodes.
The already completed BMBF project Chameleon serves as the basis for targetDART and provides valuable insights and results on dynamic, library/API-based migration of tasks between nodes.
The project employs the OpenMP target construct for GPU utilization and MPI, particularly the new 4.0 standard, for efficient communication between nodes. The migration approach, particularly for GPUs, will be further explored and refined. The project will also apply its approach to SeisSol, a dynamic earthquake and seismic wave simulator, and to ExaHyPE, a solver for hyperbolic partial differential equations, for comprehensive testing and evaluation.
The IT Center focuses on OpenMP, target constructs, and CPU-GPU migration, while HLRS prioritizes MPI and node migration optimization. TUM's emphasis lies in optimizing the two applications. Together, the project's dynamic approach and collaboration will drive innovation in high-performance computing.
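The reactive redistribution principle can be illustrated with a toy scheduler (an invented sketch, not targetDART's implementation; the rank loads and task costs are made up): when one rank falls behind, pending tasks migrate to the least loaded rank instead of waiting in place.

```python
# Toy illustration of reactive task redistribution between "ranks".
# Queues and costs are invented; real systems migrate via MPI/OpenMP.

def balance(queues, threshold=2):
    """queues: rank -> list of task costs. Repeatedly migrate a pending
    task from the most loaded rank to the least loaded one until the
    load gap is small or migration would no longer reduce it."""
    migrations = []
    while True:
        load = {r: sum(q) for r, q in queues.items()}
        hi = max(load, key=load.get)
        lo = min(load, key=load.get)
        gap = load[hi] - load[lo]
        if gap <= threshold or not queues[hi]:
            return migrations
        task = queues[hi][-1]
        if task >= gap:              # migrating would not reduce the imbalance
            return migrations
        queues[lo].append(queues[hi].pop())
        migrations.append((task, hi, lo))

queues = {0: [1, 1], 1: [5, 5, 5], 2: [1]}
moved = balance(queues)
print(moved, {r: sum(q) for r, q in queues.items()})
```

Each migration strictly reduces the pairwise imbalance, so the loop terminates; a real runtime additionally has to account for data movement cost and for tasks that can run on either CPU or GPU.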
For more information, visit the project website.
Virtual Institute - High Productivity Supercomputing
Sponsored by the Helmholtz Association of German Research Centers, the Virtual Institute - High Productivity Supercomputing (VI-HPS) aims to improve the quality and accelerate the development process of complex simulation programs in science and engineering that are being designed for the most advanced parallel computer systems. The IT Center of RWTH Aachen University focuses on improving the usability of the state-of-the-art programming tools for high-performance computing developed by the partner institutions. More information can be found on the project website.
Virtual Reality-based Medical Training Simulator for Bilateral Sagittal Split Osteotomy
This project aims to develop a specialized training system, based on Virtual Reality technology, dedicated to teaching a major maxillofacial surgery procedure. The medical procedure of interest, the bilateral sagittal split osteotomy (BSSO), allows the displacement of the lower jaw in all three spatial dimensions, e.g., to correct an under- or overbite. The procedure is performed via an intraoral approach and includes the creation of an osteotomy line in the mandible using a saw or a burr, followed by a controlled splitting of the bone by a reversed twisting of one or two chisels inserted into the line.
VisNEST – Visualization of Neural Brain Activity
The general idea of brain simulation is to reverse engineer the brain using large-scale simulations. Understanding the relationships between structure and dynamics on multiple scales, ranging from microscopic circuits to networks at brain scale, is a formidable challenge in neuroscience and produces large amounts of data. These data must be analyzed in a timely fashion to understand the simulated model and extract insight into its functioning at the aforementioned scales.