The High Performance Big Data Research Team at RIKEN R-CCS researches and develops system software for large-scale HPC (High Performance Computing) systems such as the supercomputer Fugaku. In particular, we study state-of-the-art techniques for the convergence of HPC, AI, and big data technologies, alongside fundamental R&D in HPC. To this end, we develop system software that accelerates deep learning and big data processing on large-scale HPC systems (HPC for AI/BD). We also apply AI and big data processing techniques to resolve technical challenges in large-scale HPC systems, and we study techniques for designing next-generation large-scale HPC systems. Our research topics include (but are not limited to):

1. Scalability and acceleration of big data processing with massively parallel I/O, making use of next-generation storage and file systems
2. Development of massively parallel algorithms and programming models for next-generation non-volatile memory and deeply hierarchical memory/storage architectures
3. Research and development on big data collection, transfer, accumulation, management, and utilization for data science
4. Development of system software for integrating applications (Society 5.0 simulation as well as system operation), artificial intelligence training, and training data collection
5. Scalability and acceleration of large-scale deep learning and inference
6. Scalability and acceleration of high-reliability technologies, such as checkpointing with large-scale I/O
7. Architecture exploration for the development of next-generation large-scale computers
8. Development of tools to support application development and execution environments for large-scale computers
9. Other research and development related to high performance computing
- November 26, 2019 DL4Fugaku Project
- March 24, 2021 The webpage is now open!