
2021 BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2021)


Nov 14-16, 2021

Call for Papers

Overview

Benchmarks, data, standards, measurements, and optimizations are fundamental human activities and assets. Around these themes, the International Open Benchmark Council (BenchCouncil) organizes this symposium (Bench'21).

The Bench conference has three defining characteristics. First, it provides a high-quality, single-track forum for presenting results and discussing ideas that advance the understanding of benchmarks, data, standards, measurements, and optimizations as a whole. Second, it is multi-disciplinary: past meetings have attracted researchers and practitioners from the architecture, system, algorithm, and application communities. Third, it includes both invited and contributed sessions.

There are three submission opportunities per year, and the reviewing process is double-blind. Upon acceptance, papers will be scheduled for publication in the BenchCouncil Transactions on Benchmarks, Standards, and Evaluation (TBench) and for presentation at the Bench'21 conference. All accepted and eligible papers will be considered by a panel of reviewers for the BenchCouncil Best Paper Award and the BenchCouncil Award for Excellence for Reproducible Research.

Topics

We solicit papers describing original and previously unpublished research. Specific topics of interest include, but are not limited to, the following.

Benchmark and standard specifications, implementations, and validations of:

  • Big Data
  • Artificial intelligence (AI)
  • High performance computing (HPC)
  • Machine learning
  • Big scientific data
  • Datacenters
  • Cloud
  • Warehouse-scale computing
  • Mobile robotics
  • Edge and fog computing
  • Internet of Things (IoT)
  • Blockchain
  • Data management and storage
  • Financial domains
  • Education domains
  • Medical domains
  • Other application domains

Data:

  • Detailed descriptions of research or industry data sets, including the methods used to collect the data and technical analyses supporting the quality of the measurements.
  • Analyses or meta-analyses of existing data and original articles on systems, technologies and techniques that advance data sharing and reuse to support reproducible research.
  • Evaluations of the rigor and quality of the experiments used to generate data and the completeness of the descriptions of the data.
  • Tools for generating large-scale data while preserving the original data characteristics.

Workload characterization, quantitative measurement, design and evaluation studies of:

  • Computer and communication networks, protocols and algorithms
  • Wireless, mobile, ad-hoc and sensor networks, IoT applications
  • Computer architectures, hardware accelerators, multi-core processors, memory systems and storage networks
  • HPC
  • Operating systems, file systems and databases
  • Virtualization, data centers, distributed and cloud computing, fog and edge computing
  • Mobile and personal computing systems
  • Energy-efficient computing systems
  • Real-time and fault-tolerant systems
  • Security and privacy of computing and networked systems
  • Software systems and services, and enterprise applications
  • Social networks, multimedia systems, web services
  • Cyber-physical systems, including the smart grid

Methodologies, abstractions, metrics, algorithms and tools for:

  • Analytical modeling techniques and model validation
  • Workload characterization and benchmarking
  • Performance, scalability, power and reliability analysis
  • Sustainability analysis and power management
  • System measurement, performance monitoring and forecasting
  • Anomaly detection, problem diagnosis and troubleshooting
  • Capacity planning, resource allocation, run time management and scheduling
  • Experimental design, statistical analysis and simulation

Measurement and evaluation:

  • Evaluation methodologies and metrics
  • Testbed methodologies and systems
  • Instrumentation, sampling, tracing and profiling of large-scale, real-world applications and systems
  • Collection and analysis of measurement data that yield new insights
  • Measurement-based modeling (e.g., workloads, scaling behavior, assessment of performance bottlenecks)
  • Methods and tools to monitor and visualize measurement and evaluation data
  • Systems and algorithms that build on measurement-based findings
  • Advances in data collection, analysis and storage (e.g., anonymization, querying, sharing)
  • Reappraisal of previous empirical measurements and measurement-based conclusions
  • Descriptions of challenges and future directions that the measurement and evaluation community should pursue

Optimization methodologies and tools.

Important Dates

There are three submission opportunities over the year.

  • Spring submission website: https://bench2021.hotcrp.com/
    • Abstract registration: June 15, 2021
    • Paper submission: June 21, 2021
    • First-round author notification: July 21, 2021
    • Rebuttal and Revision Period: July 21-August 21, 2021
    • Second-round author notification: September 10, 2021
  • Summer submission website: TBD
    • Abstract registration: August 1, 2021
    • Paper submission: August 7, 2021
    • First-round author notification: September 7, 2021
    • Rebuttal and Revision Period: September 7-October 7, 2021
    • Second-round author notification: November 7, 2021
  • Winter submission website: TBD
    • Abstract registration: December 15, 2021
    • Paper submission: December 21, 2021
    • First-round author notification: January 21, 2022
    • Rebuttal and Revision Period: January 21-February 21, 2022
    • Second-round author notification: March 21, 2022

Deadlines are hard and fall at 11:59 AM (noon) US Eastern Time (New York).
Papers may be submitted to any deadline. Upon acceptance, articles will be scheduled for publication in the BenchCouncil Transactions on Benchmarks, Standards, and Evaluation (TBench) and for presentation at the BenchCouncil Bench’21 conference. Accepted papers will appear in the issue of TBench immediately following acceptance.

Publication: All accepted papers will be presented at the Bench’21 conference and published in a special issue of the BenchCouncil Transactions on Benchmarks, Standards, and Evaluation (TBench).

Awards: The Bench’21 conference will present the BenchCouncil Achievement Award ($3000), the BenchCouncil Rising Star Award ($1000), the BenchCouncil Distinguished Doctoral Dissertation Award ($1000), and the BenchCouncil Best Paper Award ($1000). To encourage reliable and reproducible research using benchmarks from all organizations, the Bench conference presents the BenchCouncil Award for Excellence for Reproducible Research to papers that use publicly available benchmarks. Each such article receives a $100 prize, for up to 12 articles. Please check the conference webpage for more information.

Steering Committee: Jack Dongarra (University of Tennessee), Geoffrey Fox (Indiana University), D. K. Panda (The Ohio State University), Felix Wolf (TU Darmstadt), Xiaoyi Lu (University of California, Merced), Wanling Gao (ICT, Chinese Academy of Sciences & UCAS), Jianfeng Zhan (ICT, Chinese Academy of Sciences & BenchCouncil).

General Chairs: Lei Wang (ICT, Chinese Academy of Sciences), Axel Ngonga (Paderborn University), Chen Liu (Clarkson University).

Special Session Chair: Xiaoyi Lu (The University of California, Merced)

Publications Chair: Chunjie Luo (Institute of Computing Technology, Chinese Academy of Sciences)

Registration Chair: Fanda Fan (University of Chinese Academy of Sciences)

Technical Support Chair: Ke Liu (University of Chinese Academy of Sciences)

Publicity Chairs: Chen Zheng (Institute of Software, Chinese Academy of Sciences), Zhen Jia (Amazon), Biwei Xie (Institute of Computing Technology, Chinese Academy of Sciences), Pengfei Chen (Sun Yat-sen University).

Web Chair: Guoxin Kang (University of Chinese Academy of Sciences)

The Reviewing Process

The reviewing process for submissions is a hybrid of the traditional conference and journal models. There are three possible outcomes from the first round of submission:

  • Accept with Shepherding: A PC member will shepherd every accepted paper to ensure that the reviewers' essential suggestions are incorporated into the article's final version. This is similar to the “Minor Revision” outcome at a journal.
  • One-shot Revision: This is similar to the “Major Revision” outcome at a journal. The authors will receive a list of issues that must be addressed before the paper can be accepted, and they may submit a revised version during the rebuttal and revision period. The revision should include the authors' response to the reviewers' issues as an appendix to the article. If the revision is not submitted within this period, any resubmission will be treated as a new paper. The outcome after resubmission of a one-shot revision will be either “Accept with Shepherding” or “Reject”; the revision may be rejected, for example, if the reviewers find that the issues they raised were not satisfactorily addressed.
  • Reject: If the paper is rejected, it may not be resubmitted to any Bench deadline within 12 months following the paper's initial submission.