
2020 BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench'20)


Nov 15-16, 2020, 8:00 am EST

Program

Sunday, November 15th, 2020, Begin at 8 am EST (UTC-5)

Time (EST) Event Video
08:00 - 08:10 Opening Remarks (Professor Jianfeng Zhan) [Slides]
Session A
08:10 - 09:00 Chair: Professor Geoffrey Fox, Indiana University
Keynote - Award Lecture (BenchCouncil Rising Star Award): Scientific Benchmarking of Parallel Computing Systems
Professor Torsten Hoefler, ETH Zurich
Abstract: Measuring and reporting performance of parallel computers constitutes the basis for scientific advancement of high-performance computing (HPC). Most scientific reports show performance improvements of new techniques and are thus obliged to ensure reproducibility or at least interpretability. Our investigation of a stratified sample of 120 papers across three top conferences in the field shows that the state of the practice is not sufficient. For example, it is often unclear if reported improvements are in the noise or observed by chance. In addition to distilling best practices from existing work, we propose statistically sound analysis and reporting techniques and simple guidelines for experimental design in parallel computing. We aim to improve the standards of reporting research results and initiate a discussion in the HPC field. A wide adoption of this minimal set of rules will lead to better reproducibility and interpretability of performance results and improve the scientific culture around HPC.
Bio: Torsten is a Professor of Computer Science at ETH Zürich, Switzerland. Before joining ETH, he led the performance modeling and simulation efforts of parallel petascale applications for the NSF-funded Blue Waters project at NCSA/UIUC. He is also a key member of the Message Passing Interface (MPI) Forum where he chairs the "Collective Operations and Topologies" working group. Torsten won best paper awards at the ACM/IEEE Supercomputing Conference 2010 (SC10), EuroMPI 2013, SC13, SC14, SC19, IPDPS'15, ACM HPDC'15 and HPDC'16, ACM OOPSLA'16, and other conferences. He published numerous peer-reviewed scientific conference and journal articles and authored chapters of the MPI-2.2 and MPI-3.0 standards. For his work, Torsten received the ACM Gordon Bell Prize in 2019, the IEEE TCSC Award of Excellence (MCR) in 2019, ETH Zurich's Latsis Prize in 2015, the SIAM SIAG/Supercomputing Junior Scientist Prize in 2012, and the IEEE TCSC Young Achievers in Scalable Computing Award in 2013. Following his Ph.D., he received the Young Alumni Award 2014 from Indiana University. Torsten was elected into the first steering committee of ACM's SIGHPC in 2013 and he was re-elected in 2016. He was the first European to receive many of those honors. His research interests revolve around the central topic of "Performance-centric System Design" and include scalable networks, parallel programming techniques, and performance modeling. Additional information about Torsten can be found on his homepage at htor.inf.ethz.ch.
[Slides]
Best Paper (Chair: Dr. Wanling Gao, Institute of Computing Technology, Chinese Academy of Sciences)
9:00 - 9:15 Characterizing the Sharing Behavior of Applications using Software Transactional Memory
Douglas Pereira Pasqualin (Universidade Federal de Pelotas), Matthias Diener (University of Illinois at Urbana-Champaign), André Rauber Du Bois (Universidade Federal de Pelotas) and Mauricio Lima Pilla (Universidade Federal de Pelotas)
[Video]
9:15 - 9:30 swRodinia: A Benchmark Suite for Exploiting Architecture Properties of Sunway Processor
Bangduo Chen (Beihang University), Mingzhen Li (Beihang University), Hailong Yang (Beihang University), Zhongzhi Luan (Beihang University), Lin Gan (Tsinghua University), Guangwen Yang (Tsinghua University) and Depei Qian (Beihang University)
[Video]
9:30 - 9:45 Break
Session B
09:45 - 10:35 Chair: Professor Lizy Kurian John, University of Texas at Austin
Keynote - Award Lecture (BenchCouncil Achievement Award): It’s a Random World: Learning from Mistakes, Errors, and Noise
Professor David J. Lilja, Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis (USA)
Abstract: The world is a random place. As computer performance analysts, we learn how to statistically quantify randomness and errors to prevent mistakes – usually. For example, computer designers must use yesterday’s benchmark programs to design systems today that will be evaluated with tomorrow’s benchmarks. The results of this “benchmark drift” can be ugly, as we learn from measurements on a real microprocessor that showed significantly less performance improvement than simulations predicted. Additionally, the continued scaling of devices introduces greater variability, defects, and noise into circuits, making it increasingly challenging to build systems that rigidly transform conventional binary inputs into binary outputs. Yet these changes also provide opportunities through some unexpected connections. We have been investigating a stochastic computing model that treats randomness as a valuable computational resource by transforming probability values into probability values. Through these examples, I hope to demonstrate how we can learn from mistakes, errors, and noise.
Bio: David J. Lilja received Ph.D. and M.S. degrees in Electrical Engineering from the University of Illinois at Urbana-Champaign, and a B.S. in Computer Engineering from Iowa State University in Ames. He is currently a Professor of Electrical and Computer Engineering, and is a member of the graduate faculties in Computer Science and Data Science, at the University of Minnesota in Minneapolis. Previously, he served as the head of the ECE department at the University of Minnesota, worked as a research assistant at the Center for Supercomputing Research and Development at the University of Illinois, and was a development engineer at Tandem Computers Incorporated in California. His main research interests include computer architecture, parallel processing, high-performance storage systems, and computer systems performance analysis. He is a Fellow of both the Institute of Electrical and Electronics Engineers (IEEE) and the American Association for the Advancement of Science (AAAS) for contributions to the statistical analysis of computer performance.
[Slides]
Supercomputing (Chair: Dr. Zhen Jia, Amazon)
10:35 - 10:45 Optimization of the Himeno Benchmark for SX-Aurora TSUBASA
Akito Onodera (Tohoku University), Kazuhiko Komatsu (Tohoku University), Soya Fujimoto (NEC Corporation), Yoko Isobe (NEC Corporation), Masayuki Sato (Tohoku University) and Hiroaki Kobayashi (Tohoku University)
[Video]
Data Management & Storage (I) (Chair: Dr. Zhen Jia, Amazon)
10:45 - 11:00 Impact of Commodity Networks on Storage Disaggregation with NVMe-oF
Arjun Kashyap, Shashank Gugnani and Xiaoyi Lu (The Ohio State University)
[Video]
11:00 - 11:15 K2RDF: A Distributed RDF Data Management System on Kudu and Impala
Xu Chen, Boyu Qiu, Jungang Xu and Renfeng Liu (University of Chinese Academy of Sciences)
[Video]
11:15 - 11:25 OStoreBench: Benchmarking Distributed Object Storage Systems Using Real-world Application Scenarios
Guoxin Kang (Institute of Computing Technology, Chinese Academy of Sciences), Defei Kong (ByteDance), Lei Wang (Institute of Computing Technology, Chinese Academy of Sciences) and Jianfeng Zhan (Institute of Computing Technology, Chinese Academy of Sciences)
[Video]

Monday, November 16th, 2020, Begin at 8 am EST (UTC-5)

Time (EST) Event Video
Session C
08:00 - 08:50 Chair: Professor Felix Wolf, Department of Computer Science, Technische Universität Darmstadt, Germany
AIBench and Its Performance Rankings
Professor Jianfeng Zhan, Chair of BenchCouncil Steering Committee
Bio: Dr. Jianfeng Zhan is a Full Professor at the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), and the University of Chinese Academy of Sciences (UCAS), and director of the Software Systems Labs, ICT, CAS. He has supervised over 90 graduate students, post-docs, and engineers in the past two decades. His research areas span chips, systems, and benchmarks. A common thread is benchmarking, designing, implementing, and optimizing parallel and distributed systems. He has made strong and effective efforts to transfer his academic research into advanced technology that impacts general-purpose production systems. Several technical innovations and research results from his team, including 36 patents, have been widely adopted in benchmarks, operating systems, and cluster and cloud system software, contributing directly to the advancement of parallel and distributed systems in China and worldwide.
Dr. Jianfeng Zhan founded and chairs BenchCouncil. He has served as an Associate Editor of IEEE TPDS since 2018. He received the second-class Chinese National Technology Promotion Prize in 2006, the Distinguished Achievement Award of the Chinese Academy of Sciences in 2005, and the IISWC Best Paper Award in 2013. Jianfeng Zhan received his B.E. in Civil Engineering and M.Sc. in Solid Mechanics from Southwest Jiaotong University in 1996 and 1999, respectively, and his Ph.D. in Computer Science from the Institute of Software, CAS and UCAS in 2002.
[Slides]
Data Management & Storage (II) (Chair: Professor Zhibin Yu, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
8:50 - 9:05 Artemis: An Automatic Test Suite Generator for Large Scale OLAP Database
Kaiming Mi, Chunxi Zhang, Weining Qian and Rong Zhang (East China Normal University)
[Video]
9:05 - 9:15 ConfAdvisor: An Automatic Configuration Tuning Framework for NoSQL Database Services with a Black-box Approach
Pengfei Chen, Zhaoheng Huo, Xiaoyun Li, Hui Dou and Chu Zhu (Sun Yat-sen University)
[Video]
Benchmarking on GPU (Chair: Professor Zhibin Yu, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
9:15 - 9:30 Parallel sorted sparse approximate inverse preconditioning algorithm on GPU
Chen Qi (Nanjing Normal University), Gao Jiaquan (Nanjing Normal University), Chu Xinyue (Nanjing Normal University) and He Guixia (Zhejiang University of Technology)
[Video]
9:30 - 9:45 ComScribe: Identifying Intra-node GPU Communication
Palwisha Akhtar, Erhan Tezcan, Fareed Mohammad Qararyah and Didem Unat (Koç University, Turkey)
[Video]
9:45 - 10:00 Break
Session D (Chair: Professor Felix Wolf, Department of Computer Science, Technische Universität Darmstadt, Germany)
10:00 - 10:50 Keynote: Benchmarking Quantum Computers
Professor Kristel Michielsen, Institute for Advanced Simulation, Jülich Supercomputing Centre
Abstract: Significant advances in the system- and application-oriented development of quantum computers open up new approaches to hard optimization problems, efficient machine learning and simulations of complex quantum systems.
In order to evaluate quantum computing as a new compute technology, profound test models and benchmarks are needed to compare quantum computing and quantum annealing devices with trustworthy simulations on digital supercomputers. These simulations provide essential insight into their operation, enable benchmarking and contribute to their design.
We present results of benchmarking quantum computing hardware and software. We show benchmarking outcomes for the IBM Quantum Experience and CAS-Alibaba gate-based quantum computers, the D-Wave quantum annealers D-Wave 2000Q and Advantage, and for the quantum approximate optimization algorithm (QAOA) and quantum annealing. For this purpose, simulations of both types of quantum computers are also performed by first modeling them as zero-temperature quantum systems of interacting spin-1/2 particles and then emulating their dynamics by solving the time-dependent Schrödinger equation.
Bio: Prof. Dr. Kristel Michielsen received her PhD from the University of Groningen (the Netherlands) in 1993 for work on the simulation of strongly correlated electron systems. Since 2009 she has been group leader of the research group Quantum Information Processing at the Jülich Supercomputing Centre, Forschungszentrum Jülich (Germany), and she is also Professor of Quantum Information Processing at RWTH Aachen University (Germany). Kristel Michielsen and her research group have ample experience in performing large-scale simulations of quantum systems. She has expertise in simulating quantum computers and quantum annealers on the one hand, and in benchmarking and studying prototype applications for this new compute technology on the other, using the various quantum computing and quantum annealing systems that are available today. Together with Prof. Lippert she is building up JUNIQ, the Jülich UNified Infrastructure for Quantum computing, at the Jülich Supercomputing Centre.
Application & Dataset
10:50 - 11:05 A Benchmark of Ocular Disease Intelligent Recognition: One Shot for Multi-disease Detection
Ning Li, Tao Li, Chunyu Hu, Kai Wang and Hong Kang (Nankai University)
[Video]
11:05 - 11:20 MAS3K: An Open Dataset for Marine Animal Segmentation
Lin Li (Ocean University of China), Eric Rigall (Ocean University of China), Junyu Dong (Ocean University of China) and Geng Chen (Inception Institute of Artificial Intelligence, United Arab Emirates)
[Video]
11:20 - 11:30 Benchmarking Blockchain Interactions in Mobile Edge Cloud Software Systems
Hong-Linh Truong (Aalto University) and Filip Rydzi (Independent)
[Video]
11:30 - 12:00 Invited Talk: DataBench Toolbox – supporting Big Data and AI Benchmarking
Dr. Arne J. Berre (Chief Scientist, SINTEF Digital), Tomás Pariente Lobo (Associate Head of AI, Data & Robotics Unit, Atos), Dr. Todor Ivanov (Senior consultant at Lead Consult)
Abstract: The DataBench Toolbox offers support for big data and AI benchmarking, building on existing efforts in the benchmarking community. The DataBench framework classifies benchmarks using a generic pipeline structure for Big Data and AI pipelines, related to the Big Data Value Association (BDVA) Reference Model and the ISO SC42 AI Framework. Building on existing big data benchmarking efforts and enabling the inclusion of new benchmarks that may arise in the future, the DataBench Toolbox provides an environment to search, select, and deploy big data benchmarking tools. It makes it possible to identify technical metrics and also to relate them to, and derive, business KPIs for an organization, in order to support evidence-based Big Data and AI benchmarking that improves business performance. The Handbook and the DataBench Toolbox are essential components of the DataBench project results. The DataBench Toolbox is a software tool that provides access to benchmarking services, KPIs, and various types of knowledge; the DataBench Handbook plays a complementary role to the Toolbox by providing a comprehensive view of the benchmarks referenced in the Toolbox and of how technical and business benchmarking can be linked. The DataBench Handbook and Toolbox are aimed at industrial users and technology developers who need to make informed decisions on Big Data and AI technology investments by optimizing technical and business performance.
Bio: Dr. Arne J. Berre works on Digital Platforms and Systems Interoperability, focusing on Big Data and processing support for Analytics/AI/Machine Learning. He is involved in a number of ongoing Norwegian and European Horizon 2020 projects, including the DataBench project, where he is the technical coordinator. Arne is the originator of the HyperModel Benchmark. He is the Innovation Director of NorwAI (Norwegian Research Center for AI Innovation), the leader of SN/K 586 AI (the Norwegian Standard Committee for Artificial Intelligence and Big Data), and lead of the Norwegian ISO SC42 AI committee. He is the Leader of BDVA (Big Data Value Association) TF6 Technical Priorities and co-chair of the TF6 Benchmarking group. He is Chief Scientist at SINTEF in Oslo, Norway.
Tomás Pariente Lobo has more than 30 years of experience in IT. His technical expertise is mainly in Artificial Intelligence, Big Data, Linked Data, and knowledge management. Since June 2006, Tomás has worked as a project manager and technical coordinator for EU-funded projects, leading a group of researchers dealing with all aspects of the data value chain, with a special focus on data architectures, data analysis, and technologies such as Natural Language Processing and semantics. Tomás is responsible for the Toolbox in the DataBench project.
Dr. Todor Ivanov is an expert in the design, implementation, and benchmarking of distributed big data systems and data-intensive applications. Prior to that, he worked as a senior researcher on multiple projects in the field of databases and big data benchmarking, as well as a senior software engineer developing Flight Information Display Systems (FIDS) for different international airports. Todor has been involved in the SPEC benchmarking group and is one of the authors of the ABench benchmark. Todor is responsible for the Toolbox evaluation in the DataBench project.
[Slides 1]
[Slides 2]
[Slides 3]