Each year, BenchCouncil presents the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), the BenchCouncil Distinguished Doctoral Dissertation Award ($1,000), the BenchCouncil Best Paper Award ($1,000), the BenchCouncil Tony Hey Best Student Paper Award ($1,000), and the BenchCouncil Award for Excellence for Reproducible Research.
The BenchCouncil award committees consist of the BenchCouncil steering committee members and past awardees.
We welcome industry leaders as sponsors to provide the honorariums.
This award recognizes a senior member who has made long-term contributions to benchmarks, data, standards, evaluations, and optimizations. The winner is eligible to become a BenchCouncil Fellow. ($3,000)
Contributions: "Novel contributions to workload characterization methodologies and tools; he is the main contributor of Sniper, a fast, accurate, and parallel x86 multi-core simulator."
Selected Publications
Information about Professor Eeckhout's contributions is available here: https://users.elis.ugent.be/~leeckhou/
[1] Carlson, Trevor E., Wim Heirman, and Lieven Eeckhout. "Sniper: Exploring the level of abstraction for scalable and accurate parallel multi-core simulation." In Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-12. 2011.
[2] Georges, Andy, Dries Buytaert, and Lieven Eeckhout. "Statistically rigorous java performance evaluation." ACM SIGPLAN Notices 42, no. 10 (2007): 57-76.
[3] John, Lizy Kurian, and Lieven Eeckhout, eds. Performance evaluation and benchmarking. CRC Press, 2018.
2023 Award Committee
Prof. D. K. Panda, the Ohio State University
Prof. Geoffrey Fox, Indiana University
Prof. Jianfeng Zhan, University of Chinese Academy of Sciences
Prof. Jack J. Dongarra, University of Tennessee (Since 2022)
John L. Henning, SPEC CPU Subcommittee and Oracle (Since 2023)
Prof. Tony Hey, Rutherford Appleton Laboratory STFC (Retired)
Prof. David J. Lilja, University of Minnesota, Minneapolis (Retired)
Prof. Lizy Kurian John, the University of Texas at Austin (Retired)
Contributions: "The primary contributor to SPEC CPU2000, SPEC CPU2006, and SPEC CPU2017, which are widely used in CPU design and evaluation in both industry and academia."
Selected Publications
Information about John Henning's contributions is available here: https://dl.acm.org/profile/81100057428
[1] John L. Henning. SPEC CPU2006 benchmark descriptions. SIGARCH Comput. Archit. News 34, 4 (September 2006), 1–17.
[2] John L. Henning. SPEC CPU2000: Measuring CPU performance in the new millennium. Computer 33.7 (2000): 28-35.
[3] John L. Henning. A Retrospective Look at SPEC Benchmarking, including Successes and Failures. ModSim 2022. August 10–12, 2022.
2022 Award Committee
Prof. D. K. Panda, the Ohio State University
Prof. Lizy Kurian John, the University of Texas at Austin (Retired)
Prof. Geoffrey Fox, Indiana University
Prof. Jianfeng Zhan, University of Chinese Academy of Sciences
Prof. Tony Hey, Rutherford Appleton Laboratory STFC (Since 2020) (Retired)
Prof. David J. Lilja, University of Minnesota, Minneapolis (Since 2021) (Retired)
Prof. Jack J. Dongarra, University of Tennessee (Since 2022)
Contributions: "Novel and substantial contributions to the development, testing, and documentation of high-quality mathematical software and to benchmarking HPC systems."
Selected Publications
[1] Anderson, E., Bai, Z., Bischof, C., Blackford, L. S., Demmel, J., Dongarra, J., ... & Sorensen, D. (1999). LAPACK users' guide. Society for industrial and applied mathematics.
[2] Barrett, R., Berry, M., Chan, T. F., Demmel, J., Donato, J., Dongarra, J., ... & Van der Vorst, H. (1994). Templates for the solution of linear systems: building blocks for iterative methods. Society for Industrial and Applied Mathematics.
[3] Gabriel, E., Fagg, G. E., Bosilca, G., Angskun, T., Dongarra, J. J., Squyres, J. M., ... & Woodall, T. S. (2004). Open MPI: Goals, concept, and design of a next generation MPI implementation. In Recent Advances in Parallel Virtual Machine and Message Passing Interface: 11th European PVM/MPI Users’ Group Meeting Budapest, Hungary, September 19-22, 2004. Proceedings 11 (pp. 97-104). Springer Berlin Heidelberg.
2021 Award Committee
Prof. D. K. Panda, the Ohio State University
Prof. Lizy Kurian John, the University of Texas at Austin
Prof. Geoffrey Fox, Indiana University
Prof. Jianfeng Zhan, University of Chinese Academy of Sciences
Prof. Tony Hey, Rutherford Appleton Laboratory STFC (Since 2020)
Prof. David J. Lilja, University of Minnesota, Minneapolis (Since 2021)
Contributions: "Summarizing practical methods of measurement, simulation, and analytical modeling," "proposing MinneSPEC for simulation-based computer architecture research," and "exploiting hardware-software interactions and architecture-circuit interactions to improve system performance."
Selected Publications
[1] David J Lilja. (2005). Measuring computer performance: a practitioner's guide. Cambridge university press.
[2] AJ KleinOsowski, David J Lilja. (2002). MinneSPEC: A new SPEC benchmark workload for simulation-based computer architecture research. IEEE Computer Architecture Letters, 1(1), 7-7.
[3] David J Lilja. (1993). Cache coherence in large-scale shared-memory multiprocessors: Issues and comparisons. ACM Computing Surveys (CSUR), 25(3), 303-338.
2020 Award Committee
Prof. D. K. Panda, the Ohio State University
Prof. Lizy Kurian John, the University of Texas at Austin
Prof. Geoffrey Fox, Indiana University
Prof. Jianfeng Zhan, University of Chinese Academy of Sciences
Prof. Tony Hey, Rutherford Appleton Laboratory STFC (Since 2020)
Contributions: For devising the first parallel benchmark suite, the "Genesis" benchmarks, for performance evaluation of distributed-memory parallel machines; "coauthoring the first draft of the MPI message-passing standard with Jack Dongarra, David Walker, and Rolf Hempel"; and "the recent work on a large-scale data science benchmark."
Selected Publications
[1] Cliff Addison, Vladimir Getov, Tony Hey, Roger W. Hockney, I.C. Wolton. (1993). The GENESIS distributed-memory benchmarks. Elsevier.
[2] Jeyan Thiyagalingam, Mallikarjun Shankar, Geoffrey Fox, & Tony Hey. (2022). Scientific machine learning benchmarks. Nature Reviews Physics, 4(6), 413-420.
[3] Jack Dongarra, Rolf Hempel, Tony Hey, David Walker. A preliminary draft proposal of MPI. November 1992.
2019 Award Committee
Prof. D. K. Panda, the Ohio State University
Prof. Lizy Kurian John, the University of Texas at Austin
Prof. Geoffrey Fox, Indiana University
Prof. Jianfeng Zhan, University of Chinese Academy of Sciences
This award recognizes a young researcher who demonstrates outstanding research and practice in benchmarks, data, standards, evaluations, and optimizations. The winner is eligible for BenchCouncil Senior Member. ($1000)
Contributions: "Contributions to natural language processing evaluation and benchmarking, including SentEval, Adversarial NLI, and Dynabench."
Selected Publications
Information about Dr. Douwe Kiela is available here:
https://douwekiela.github.io/
https://scholar.google.com/citations?user=Q0piorUAAAAJ&hl=en
[1] Conneau, Alexis and Douwe Kiela. “SentEval: An Evaluation Toolkit for Universal Sentence Representations.” ArXiv abs/1803.05449 (2018): n. pag.
[2] Nie, Yixin, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston and Douwe Kiela. “Adversarial NLI: A New Benchmark for Natural Language Understanding.” ACL (2020).
[3] Kiela, Douwe, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts and Adina Williams. “Dynabench: Rethinking Benchmarking in NLP.” ArXiv abs/2104.14337 (2021): n. pag.
2022 Award Committee
Prof. D. K. Panda, the Ohio State University
Prof. Lizy Kurian John, the University of Texas at Austin
Prof. Geoffrey Fox, Indiana University
Prof. Jianfeng Zhan, University of Chinese Academy of Sciences
Prof. Torsten Hoefler, ETH Zürich (Since 2021)
Prof. Vijay Janapa Reddi, Harvard University (Since 2022)
Dr. Peter Mattson, Google, USA (Since 2022)
Dr. Wanling Gao, ICT, Chinese Academy of Sciences (pending)
Contributions
Dr. Peter Mattson, Google Senior Engineer:
Contributions: "As a lead researcher, proposing AI training benchmarks and performing large-scale industry testing" and "co-proposing a memory access scheduling technique that reorders memory references to exploit locality within the 3-D memory structure."
Dr. Vijay Janapa Reddi, Associate Professor, Harvard University:
Contributions: "As a lead researcher, proposing AI inference benchmarks and performing large-scale industry testing" and "co-proposing Pin, a framework for building customized program analysis tools with dynamic instrumentation."
Dr. Wanling Gao, Associate Research Fellow, Chinese Academy of Sciences:
Contributions: "As one of the primary researchers, proposing AI scenario, AI training, and HPC AI benchmarks" and "proposing a data motif abstraction that unifies big data and AI workloads."
Selected Publications
Dr. Peter Mattson, Google Senior Engineer:
[1] Rixner, S., Dally, W. J., Kapasi, U. J., Mattson, P., & Owens, J. D. (2000). Memory access scheduling. ACM SIGARCH Computer Architecture News, 28(2), 128-138.
[2] Mattson, P., Cheng, C., Diamos, G., Coleman, C., Micikevicius, P., Patterson, D., ... & Zaharia, M. (2020). MLPerf training benchmark. Proceedings of Machine Learning and Systems, 2, 336-349.
[3] Mattson, P., Reddi, V. J., Cheng, C., Coleman, C., Diamos, G., Kanter, D., ... & Wu, C. J. (2020). MLPerf: An industry standard benchmark suite for machine learning performance. IEEE Micro, 40(2), 8-16.
Dr. Vijay Janapa Reddi, Associate Professor, Harvard University:
[1] Luk, C. K., Cohn, R., Muth, R., Patil, H., Klauser, A., Lowney, G., ... & Hazelwood, K. (2005). Pin: building customized program analysis tools with dynamic instrumentation. ACM SIGPLAN Notices, 40(6), 190-200.
[2] Reddi, V. J., Cheng, C., Kanter, D., Mattson, P., Schmuelling, G., Wu, C. J., ... & Zhou, Y. (2020, May). MLPerf inference benchmark. In 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA) (pp. 446-459). IEEE.
[3] Kanev, S., Darago, J. P., Hazelwood, K., Ranganathan, P., Moseley, T., Wei, G. Y., & Brooks, D. (2015, June). Profiling a warehouse-scale computer. In Proceedings of the 42nd Annual International Symposium on Computer Architecture (pp. 158-169).
Dr. Wanling Gao, Associate Research Fellow, Chinese Academy of Sciences:
[1] Gao, W., Tang, F., Zhan, J., Wen, X., Wang, L., Cao, Z., ... & Jiang, Z. (2021, September). AIBench scenario: Scenario-distilling AI benchmarking. In 2021 30th International Conference on Parallel Architectures and Compilation Techniques (PACT) (pp. 142-158). IEEE.
[2] Gao, W., Zhan, J., Wang, L., Luo, C., Zheng, D., Tang, F., ... & Ren, R. (2018, November). Data motifs: A lens towards fully understanding big data and AI workloads. In Proceedings of the 27th International Conference on Parallel Architectures and Compilation Techniques (pp. 1-14).
[3] Tang, F., Gao, W., Zhan, J., Lan, C., Wen, X., Wang, L., ... & Ye, H. (2021, March). AIBench training: Balanced industry-standard AI training benchmarking. In 2021 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS) (pp. 24-35). IEEE.
2021 Award Committee
Prof. D. K. Panda, the Ohio State University
Prof. Lizy Kurian John, the University of Texas at Austin
Prof. Geoffrey Fox, Indiana University
Prof. Jianfeng Zhan, University of Chinese Academy of Sciences
Prof. Torsten Hoefler, ETH Zürich (Since 2021)
Contributions
"Proposing the fastest routing algorithm for arbitrary topologies with J. Domke," "co-authoring the latest versions of the MPI message-passing standard with Jack Dongarra and Rajeev Thakur," and "the recent work on the Deep500 project, a deep learning meta-framework and HPC AI benchmarking library."
Selected Publications
[1] Domke, J., Hoefler, T., & Nagel, W. E. (2011, May). Deadlock-free oblivious routing for arbitrary topologies. In 2011 IEEE International Parallel & Distributed Processing Symposium (pp. 616-627). IEEE.
[2] Ben-Nun, T., Besta, M., Huber, S., Ziogas, A. N., Peter, D., & Hoefler, T. (2019, May). A modular benchmarking infrastructure for high-performance and reproducible deep learning. In 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS) (pp. 66-77). IEEE.
[3] Gropp, W., Hoefler, T., Thakur, R., & Lusk, E. (2014). Using advanced MPI: Modern features of the message-passing interface. MIT Press.
2020 Award Committee
Prof. D. K. Panda, the Ohio State University
Prof. Lizy Kurian John, the University of Texas at Austin
Prof. Geoffrey Fox, Indiana University
Prof. Jianfeng Zhan, University of Chinese Academy of Sciences
The BenchCouncil Distinguished Doctoral Dissertation Award consists of two tracks: computer architecture and other areas. The computer architecture track has its own nomination submission form and award subcommittee. For each track, all candidates are encouraged to submit articles to the BenchCouncil Transactions on Benchmarks, Standards, and Evaluation (TBench) (see the article submission guideline below). Among the submissions in each track, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the BenchCouncil Bench Conferences and to contribute research articles to TBench. Finally, one of the four finalists in each track will receive the award, which carries a $1,000 honorarium.
Note that the two tracks share the same submission rules, including eligibility, submission guidelines, submission deadline, and review criteria.
This award recognizes and encourages superior research and writing by doctoral candidates on benchmarks, workload characterization, and evaluations within the computer architecture community.
Online Nomination Form: Submission Site (Deadline: October 15, 2023, End of Day, AoE)
Award Subcommittee
2022 BenchCouncil Distinguished Doctoral Dissertation Award in Computer Architecture
This award recognizes and encourages superior research and writing by doctoral candidates in the broad community of benchmarks, data, standards, evaluations, and optimizations.
Online Nomination Form: Submission Site (Deadline: October 15, 2023, End of Day, AoE)
Award Committee
2021 BenchCouncil Distinguished Doctoral Dissertation Award
The committee welcomes proposals from, but not limited to, the following communities: architecture, systems, database, high-performance computing, machine learning or AI, scientific computing, medicine, and other disciplines.
Eligibility
Only those who were awarded a Ph.D. in the past two years are eligible for this award.
Only the accepted final version of a nominated Ph.D. dissertation will be considered, and it must have been filed with the writer’s institution during the nomination cycle.
The writer or the writer’s Ph.D. advisor can nominate a dissertation, and a dissertation may be nominated only once.
The benchmarks, data, or tools that are the essential contributions of the dissertation should be open-sourced.
The committee/subcommittee members cannot nominate their own students.
Submissions
Submission Deadline: October 15, 2023 End of Day, Anywhere on Earth (AoE)
Online Nomination Form (Computer Architecture, Other Areas)
Name, address, phone number, and email address of the candidate's thesis advisor.
Name, address, and email address of the candidate. Affiliation should be the name of the school.
Suggested citation. The citation should be a concise statement (maximum of 25 words) describing the critical technical or professional accomplishment for which the candidate merits this award. Note that the final wording for awardees will be at the discretion of the Award Committee.
Nomination statement (200-300 words in length) addressing why the candidate should receive this award. This should address the significance of the dissertation, not simply repeat the information in the abstract.
A copy of the dissertation. Each nominated dissertation must include an English abstract of at most 3,000 words.
Endorsement letters. At least two supporting letters should be included from experts in the field who can provide additional insights or evidence of the dissertation's impact. (The nominator/advisor may not write a letter of support.) Each letter should include the name, address, and telephone number of the endorser. The nominator should collect the letters and bundle them for submission. The endorsement letter and supporting letters can be combined into one file in your PDF upload.
Article Submission Guideline
According to the submission guide, we request that you submit an article to the BenchCouncil Transactions on Benchmarks, Standards, and Evaluations (TBench). Since you may have already published some material from your dissertation, a survey article is also eligible.
A submission should use the TBench template, available at https://www.elsevier.com/authors/policies-and-guidelines/latex-instructions. The author list can (a) include your name only, or (b) list you and your Ph.D. supervisor, with you as the first author. In your submission, you should remove the authors' names, as the follow-up review will be double-blind.
The article should include the following contents. (a) The fundamental issue your dissertation tackles. Why is it essential and challenging? (10%). (b) The summary of state-of-the-art and state-of-the-practice (30%). (c) How do you advance state-of-the-art and state-of-the-practice? What are your innovative approaches, systems, tools, and insights? (40%) (d) Open issues and future work (20%).
Please directly submit your article to the TBench editorial system. The web link is https://www.editorialmanager.com/tbench/default1.aspx.
Review Criteria
Dissertations will be reviewed for technical depth, significance of the research contribution, and potential impact on theory and practice.
In the first round, four candidates will be selected as finalists. Each will give a 30-minute presentation in the distinguished Ph.D. dissertation session chaired by the committee.
Finally, one candidate will be awarded the BenchCouncil Distinguished Doctoral Dissertation Award carrying a $1,000 honorarium.
The award is presented each year at the Awards Banquet during the BenchCouncil Bench Conference.
This award recognizes a paper presented at the BenchCouncil conferences that demonstrates potential impact on research and practice in benchmarks, data, standards, evaluations, and optimizations ($1,000). Since 2021, this award is given only to papers whose first authors are not students.
An Analysis of Long-tailed Network Latency Distribution and Background Traffic on Dragonfly+
Majid Salimi Beni (University of Salerno), Biagio Cosenza (University of Salerno)
Comparative Evaluation of Deep Learning Workload for Leadership-class Systems
Junqi Yin (Oak Ridge National Laboratory), Aristeidis Tsaris (Oak Ridge National Laboratory), Sajal Dash (Oak Ridge National Laboratory), Ross Miller (Oak Ridge National Laboratory), Feiyi Wang (Oak Ridge National Laboratory), Arjun Shankar (Oak Ridge National Laboratory)
Characterizing the Sharing Behavior of Applications using Software Transactional Memory
Douglas Pereira Pasqualin (Universidade Federal de Pelotas), Matthias Diener (University of Illinois at Urbana-Champaign), André Rauber Du Bois (Universidade Federal de Pelotas) and Mauricio Lima Pilla (Universidade Federal de Pelotas)
swRodinia: A Benchmark Suite for Exploiting Architecture Properties of Sunway Processor
Bangduo Chen (Beihang University), Mingzhen Li (Beihang University), Hailong Yang (Beihang University), Zhongzhi Luan (Beihang University), Lin Gan (Tsinghua University), Guangwen Yang (Tsinghua University) and Depei Qian (Beihang University)
Performance Analysis of GPU Programming Models using the Roofline Scaling Trajectories
Khaled Ibrahim, Samuel Williams and Leonid Oliker (Lawrence Berkeley National Laboratory)
RISC-V Track First Prize: Jiageng Yu, Yuxia Miao, Yang Tai (Institute of Software, Chinese Academy of Sciences)
RISC-V Track Second Prize: Yangyang Kong (Institute of Information Engineering, Chinese Academy of Sciences)
Cambricon Track First Prize: Guangli Li, Xueying Wang, Xiu Ma (ICT, CAS)
Cambricon Track Second Prize: Zihan Jiang, Jiansong Li (ICT, CAS)
Cambricon Track Second Prize: Yifan Wang, Chen Zeng, Chundian Li (ICT, CAS)
Cambricon Track Third Prize: Peng He, Ge Chen, Kai Deng (ICT, CAS)
X86 Track First Prize: Weixin Deng, Jing Wang, Pengyu Wang (Shanghai Jiao Tong University)
X86 Track Second Prize: Tianshu Hao (ICT, CAS), Ziping Zheng (Google)
X86 Track Second Prize: Maosen Chen (360), Qianyun Chen (Georgia Institute of Technology), Tun Chen (ICT, CAS)
X86 Track Third Prize: Yi Liang, Shaokang Zeng, Yande Liang, Kaizhong Chen (Beijing University of Technology)
Algorithm Track First Prize: Xingwang Xiong, Xu Wen, Cheng Huang (ICT, CAS)
Algorithm Track Second Prize: Tongyan Gong (ICT, CAS), Huiqian Niu (JD.com)
Algorithm Track Second Prize: Heming Sun, Xi Xiong (The Ohio State University)
Thanks to his generosity, Prof. Tony Hey made a donation to the BenchCouncil Award committee to spin off the Best Student Paper Award. The committee presents this award ($1,000) to a student who, as first author, publishes a paper at the BenchCouncil conference with potential impact on benchmarking, measuring, and optimizing. Prof. Tony Hey is the Chief Data Scientist at Rutherford Appleton Laboratory STFC; a Fellow of the ACM, the American Association for the Advancement of Science, and the Royal Academy of Engineering; and the 2019 recipient of the BenchCouncil Achievement Award.
MSDBench: Understanding the Performance Impact of Isolation Domains on Microservice-based IoT Deployments
Sierra Wang (University of California, Santa Barbara), Fatih Bakir (University of California, Santa Barbara), Tyler Ekaireb (University of California, Santa Barbara), Jack Pearson (University of California, Santa Barbara), Chandra Krintz (University of California, Santa Barbara), Rich Wolski (University of California, Santa Barbara)
Latency-Aware Automatic CNN Channel Pruning with GPU Runtime Analysis
Jiaqiang Liu (University of Science and Technology of China), Jingwei Sun (University of Science and Technology of China), Zhongtian Xu (University of Science and Technology of China), Guangzhong Sun (University of Science and Technology of China)
BenchCouncil incubates and hosts benchmark projects, and further encourages reliable and reproducible research using publicly available benchmarks. We present the BenchCouncil Award for Excellence for Reproducible Research to papers related to those projects. (Each paper receives a $100 prize; up to 30 articles per year.)
Characterizing the Sharing Behavior of Applications using Software Transactional Memory
Douglas Pereira Pasqualin (Universidade Federal de Pelotas), Matthias Diener (University of Illinois at Urbana-Champaign), André Rauber Du Bois (Universidade Federal de Pelotas) and Mauricio Lima Pilla (Universidade Federal de Pelotas)
swRodinia: A Benchmark Suite for Exploiting Architecture Properties of Sunway Processor
Bangduo Chen (Beihang University), Mingzhen Li (Beihang University), Hailong Yang (Beihang University), Zhongzhi Luan (Beihang University), Lin Gan (Tsinghua University), Guangwen Yang (Tsinghua University) and Depei Qian (Beihang University)
MAS3K: An Open Dataset for Marine Animal Segmentation
Lin Li (Ocean University of China), Eric Rigall (Ocean University of China), Junyu Dong (Ocean University of China) and Geng Chen (Inception Institute of Artificial Intelligence, United Arab Emirates)