2023 BenchCouncil International Conference on Benchmarking, Measuring and Optimizing
The Bench conference encompasses a wide range of topics in benchmarking, measurement, evaluation methods and tools. Bench’s multi-disciplinary emphasis provides an ideal environment for developers and researchers from the architecture, system, algorithm, computing, and application communities to discuss practical and theoretical work covering workload characterization, benchmarks and tools, evaluation, measurement and optimization, and dataset generation.
Researchers both from academia and industry are invited to submit proposals for special sessions. A special session could consist of a set of individual presentations or a panel. Special sessions complement the regular technical program by highlighting emerging research topics. We welcome special sessions in all areas of Benchmarking and Optimizations (See https://www.benchcouncil.org/bench2023/cfp.html for topics). The participation of speakers from industry in a special session proposal is encouraged and will be positively evaluated when the proposal is assessed.
Please submit your proposals (maximum 3 pages) in PDF format by email to the Special Sessions Chair. Proposals should include the following information:
1. Title and Abstract: Propose a title and a brief abstract of no more than 200 words that will allow conference attendees to understand the topic and the focus of the special session.
2. Rationale: Please explain why the topic of the special session is timely and compelling and why it is relevant to the Bench community.
3. Session Format: Please describe the session format (e.g., workshop, tutorial, panel, a series of invited talks, and so on).
4. Expected Length and/or Agenda: Please describe the expected length of your session (e.g., one hour, two hours, half day, or one day). Feel free to include an expected agenda if needed.
5. Biographies: Please provide the list of organizers/speakers with a short bio for each, including recent activity relevant to the topic of the special session.
6. Other Information: Please feel free to describe other important information about your proposed session. Note that the total proposal length should be within 3 pages.
* Notification of acceptance will be sent to the organizers by September 15, 2023.
Full Papers: TBD
Final Papers Due: TBD
The Bench conference encompasses a wide range of topics in benchmarks, datasets, metrics, indexes, measurement, evaluation, optimization, supporting methods and tools, and other best practices in computer science, medicine, finance, education, management, etc. Bench’s multidisciplinary and interdisciplinary emphasis provides an ideal environment for developers and researchers from different areas and communities to discuss practical and theoretical work. The areas cover:
Architecture: The benchmarking of architecture and hardware, e.g., benchmark suites for CPUs, GPUs, memory, and HPC.
Data Management: The evaluation of data management and storage, e.g., benchmark specifications and tools for databases.
Algorithm: The evaluation of algorithms, e.g., evaluation rules and datasets in machine learning, deep learning, and reinforcement learning.
System: The testing of software systems, e.g., operating systems, distributed systems, and web servers.
Network: The measurement of communication networks, e.g., networks in data centers, and wireless, mobile, ad-hoc, and sensor networks.
Reliability and Security: The measurement of reliability and security.
Multidisciplinary Application: The measurement of multidisciplinary applications, e.g., in medicine, finance, education, and management.
We solicit papers describing original and previously unpublished work. The topics of interest include, but are not limited to, the following.
Benchmark and standard specifications, implementations, and validations: Big Data, Artificial intelligence (AI), High performance computing (HPC), Machine learning, Warehouse-scale computing, Mobile robotics, Edge and fog computing, Internet of Things (IoT), Blockchain, Data management and storage, Financial, Education, Medical or other application domains.
Dataset Generation and Analysis: Research or industry data sets, including the methods used to collect the data and technical analyses supporting the quality of the measurements; Analyses or meta-analyses of existing data and original articles on systems, technologies and techniques that advance data sharing and reuse to support reproducible research; Evaluations of the rigor and quality of the experiments used to generate data and the completeness of the descriptions of the data; Tools generating large-scale data.
Workload characterization, quantitative measurement, design and evaluation studies: Characterization and evaluation of Computer and communication networks, protocols and algorithms; Wireless, mobile, ad-hoc and sensor networks, IoT applications; Computer architectures, hardware accelerators, multi-core processors, memory systems and storage networks; HPC systems; Operating systems, file systems and databases; Virtualization, data centers, distributed and cloud computing, fog and edge computing; Mobile and personal computing systems; Energy-efficient computing systems; Real-time and fault-tolerant systems; Security and privacy of computing and networked systems; Software systems and services, and enterprise applications; Social networks, multimedia systems, web services; Cyber-physical systems.
Methodologies, abstractions, metrics, algorithms and tools: Analytical modeling techniques and model validation; Workload characterization and benchmarking; Performance, scalability, power and reliability analysis; Sustainability analysis and power management; System measurement, performance monitoring and forecasting; Anomaly detection, problem diagnosis and troubleshooting; Capacity planning, resource allocation, run time management and scheduling; Experimental design, statistical analysis and simulation.
Measurement and evaluation: Evaluation methodologies and metrics; Testbed methodologies and systems; Instrumentation, sampling, tracing and profiling of large-scale, real-world applications and systems; Collection and analysis of measurement data that yield new insights; Measurement-based modeling (e.g., workloads, scaling behavior, assessment of performance bottlenecks); Methods and tools to monitor and visualize measurement and evaluation data; Systems and algorithms that build on measurement-based findings; Advances in data collection, analysis and storage (e.g., anonymization, querying, sharing); Reappraisal of previous empirical measurements and measurement-based conclusions; Descriptions of challenges and future directions that the measurement and evaluation community should pursue.
1. The online discussion is blind. While the reviewers discuss the papers, they don’t know others’ identities beyond reviewer #A, #B, …. Hence, a single reviewer cannot easily assert seniority and silence other voices, or influence them beyond the strength of their arguments.
2. When a reviewer points out closeness to prior work that informs their decision to lower the novelty and contribution of a paper, they should provide a full citation to that prior work.
3. When reviewers ask authors to draw a comparison with concurrent work (e.g., work that was published or appeared online *after* the paper submission deadline) or with preliminary work (e.g., a poster or abstract that is not archival), this comparison should not inform a lower score by the reviewer.
4. Provide useful and constructive feedback to the authors. Be respectful, professional and positive in your reviews and provide suggestions for the authors to improve their work.
5. Score each paper both absolutely and relative to the group of papers you are reviewing.
Absolute overall merit - There are 4 grades you can give to each paper for absolute overall merit; the top 2 ratings mean that you think the paper is acceptable to the conference, and the bottom 2 ratings mean that in your opinion the paper is below the threshold for the conference. Please assign these values by considering whether the paper is above or below the threshold for the conference.
Relative overall merit - is based on the papers that you are reviewing. You can rank your papers and then group them into the 4 bins. Except for fractional errors, you should divide your papers equally among the 4 categories.
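The relative-merit binning above can be sketched in a few lines. This is an illustrative helper, not part of the official review process; the function name and its behavior are assumptions chosen to match the guideline that bin sizes differ only by "fractional errors":

```python
def bin_papers(ranked_papers, n_bins=4):
    """Split a best-to-worst ranked list into n_bins groups whose sizes
    differ by at most one paper (the allowed 'fractional errors')."""
    n = len(ranked_papers)
    base, extra = divmod(n, n_bins)  # base size per bin, leftover papers
    bins, start = [], 0
    for i in range(n_bins):
        # The first `extra` bins each absorb one leftover paper.
        size = base + (1 if i < extra else 0)
        bins.append(ranked_papers[start:start + size])
        start += size
    return bins

# Example: 10 reviewed papers split into bins of sizes 3, 3, 2, 2.
print([len(b) for b in bin_papers(list(range(10)))])
```

With 10 papers, the top bin holds the 3 highest-ranked submissions and the bottom bin the 2 lowest-ranked, keeping the four categories as close to equal as the paper count allows.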
6. Reviewers must treat all submissions as strictly confidential and destroy all papers once the technical program has been finalized.
7. Reviewers must contact the PC chair or EIC if they feel there is an ethical violation of any sort (e.g., authors seeking support for a paper, authors seeking to identify who the reviewers are).
8. Do not actively look for author identities. Reviewers should judge a paper solely on its merits.
9. If you happen to know the authors, do not disclose their identities. If you would like to recuse yourself from the review task, contact the PC Chair.
10. Reviewers should review the current submission. If you have reviewed a previous submission, make sure your review is based on the current submission.
11. Reviewers must not share the papers with students/colleagues.
12. Reviewers must compose the reviews themselves and provide unbiased reviews.
13. Do not solicit external reviews without consulting the PC chairs or EIC. If you regularly involve your students in the review process as part of their PhD training, contact the PC chairs. You are still responsible for the reviews. You may do this on no more than one of your reviews.
14. Reviewers must keep review discussions (including which papers you reviewed) confidential.
15. Do not discuss the content of a submitted paper or its reviews with anyone other than through the official submission management system (e.g., HotCRP or EasyChair) during the online discussion period or the PC meeting (from now until the paper is published in any venue).
16. Do not reveal the name of paper authors in case reviewers happen to be aware of author identity. (Author names of accepted papers will be revealed after the PC meeting; author names of rejected papers will never be revealed.)
17. Do not disclose the outcome of a paper until its authors are notified of its acceptance or rejection.
18. Do not download or acquire material from the review site that you do not need access to.
19. Do not disclose the content of reviews, including the reviewers' identities, or discussions about a paper.
This set of review ethics is derived from the MICRO 2020, ASPLOS 2020-2021, and ISCA 2020-2021 review guidelines.
Papers must be submitted in PDF. For a full paper, the page limit is 15 pages in the LNCS format, not including references. For a short paper, the page limit is 8 pages in the LNCS format, not including references. Submissions will be judged on the merit of the ideas rather than the length. After the conference, the proceedings will be published by Springer LNCS (Pending, Indexed by EI). Please note that the LNCS format is the final format for publication. Distinguished papers will be recommended to and published by the BenchCouncil Transactions on Benchmarks, Standards and Evaluation (TBench).
At least one author must pre-register for the symposium, and at least one author must attend the symposium to present the paper. Papers for which no author is pre-registered will be removed from the proceedings.
Springer needs all source files (LaTeX files with all associated style files, special fonts, and EPS files, or Word or RTF files) and the final PDF of your paper. References should be supplied as .bbl files to avoid loss of data during conversion from .bib to .bbl. A mixture of LaTeX and Word files is fine.
Please make sure your submission satisfies ALL of the following requirements:
Aoying Zhou, East China Normal University
Weining Qian, East China Normal University
Biwei Xie, Institute of Computing Technology, CAS
BenchCouncil Distinguished Doctoral Dissertation Award Committee in Other Areas:
Prof. Jack Dongarra, University of Tennessee
Dr. Xiaoyi Lu, The University of California, Merced
Dr. Jeyan Thiyagalingam, STFC-RAL
Dr. Lei Wang, ICT, Chinese Academy of Sciences
Dr. Spyros Blanas, The Ohio State University
BenchCouncil Distinguished Doctoral Dissertation Award Committee in Computer Architecture:
Dr. Peter Mattson, Google
Dr. Vijay Janapa Reddi, Harvard University
Dr. Wanling Gao, Chinese Academy of Sciences
Prof. Dr. Jack Dongarra, University of Tennessee
Prof. Dr. Geoffrey Fox, Indiana University
Prof. Dr. D. K. Panda, The Ohio State University
Prof. Dr. Felix Wolf, TU Darmstadt
Prof. Dr. Xiaoyi Lu, University of California, Merced
Prof. Dr. Resit Sendag, University of Rhode Island, USA
Dr. Wanling Gao, ICT, Chinese Academy of Sciences & UCAS
Prof. Dr. Jianfeng Zhan, BenchCouncil
|Diamond Level - ￥250,000 and above|
|Platinum Level - ￥200,000|
|Gold Level - ￥150,000|
|Silver Level - ￥100,000|
|Bronze Level - ￥50,000|