Call For Papers
Important Dates:
Full Papers: TBD
Notification: TBD
Final Papers Due: TBD
The Bench conference encompasses a wide range of topics in benchmarks, datasets, metrics, indexes, measurement, evaluation, optimization, supporting methods and tools, and other best practices in computer science, medicine, finance, education, management, etc. Bench's multidisciplinary and interdisciplinary emphasis provides an ideal environment for developers and researchers from different areas and communities to discuss practical and theoretical work. The areas covered include:
Architecture: The benchmarking of architecture and hardware, e.g., benchmark suites for CPUs, GPUs, memory, and HPC systems.
Data Management: The evaluation of data management and storage, e.g., benchmark specifications and tools for databases.
Algorithm: The evaluation of algorithms, e.g., evaluation rules and datasets in machine learning, deep learning, and reinforcement learning.
System: The testing of software systems, e.g., the testing of operating systems, distributed systems, and web servers.
Network: The measurement of communication networks, e.g., the measurement of data center, wireless, mobile, ad-hoc, and sensor networks.
Reliability and Security: The measurement of reliability and security.
Multidisciplinary Applications: The measurement of multidisciplinary applications, e.g., in medicine, finance, education, and management.
We solicit papers describing original and previously unpublished work. The topics of interest include, but are not limited to, the following.
Benchmark and standard specifications, implementations, and validations: Big Data, Artificial Intelligence (AI), High Performance Computing (HPC), Machine Learning, Warehouse-scale computing, Mobile robotics, Edge and fog computing, Internet of Things (IoT), Blockchain, Data management and storage, Finance, Education, Medicine, or other application domains.
Dataset Generation and Analysis: Research or industry data sets, including the methods used to collect the data and technical analyses supporting the quality of the measurements; Analyses or meta-analyses of existing data and original articles on systems, technologies and techniques that advance data sharing and reuse to support reproducible research; Evaluations of the rigor and quality of the experiments used to generate data and the completeness of the descriptions of the data; Tools generating large-scale data.
Workload characterization, quantitative measurement, design and evaluation studies: Characterization and evaluation of Computer and communication networks, protocols and algorithms; Wireless, mobile, ad-hoc and sensor networks, IoT applications; Computer architectures, hardware accelerators, multi-core processors, memory systems and storage networks; HPC systems; Operating systems, file systems and databases; Virtualization, data centers, distributed and cloud computing, fog and edge computing; Mobile and personal computing systems; Energy-efficient computing systems; Real-time and fault-tolerant systems; Security and privacy of computing and networked systems; Software systems and services, and enterprise applications; Social networks, multimedia systems, web services; Cyber-physical systems.
Methodologies, abstractions, metrics, algorithms and tools: Analytical modeling techniques and model validation; Workload characterization and benchmarking; Performance, scalability, power and reliability analysis; Sustainability analysis and power management; System measurement, performance monitoring and forecasting; Anomaly detection, problem diagnosis and troubleshooting; Capacity planning, resource allocation, run time management and scheduling; Experimental design, statistical analysis and simulation.
Measurement and evaluation: Evaluation methodologies and metrics; Testbed methodologies and systems; Instrumentation, sampling, tracing and profiling of large-scale, real-world applications and systems; Collection and analysis of measurement data that yield new insights; Measurement-based modeling (e.g., workloads, scaling behavior, assessment of performance bottlenecks); Methods and tools to monitor and visualize measurement and evaluation data; Systems and algorithms that build on measurement-based findings; Advances in data collection, analysis and storage (e.g., anonymization, querying, sharing); Reappraisal of previous empirical measurements and measurement-based conclusions; Descriptions of challenges and future directions that the measurement and evaluation community should pursue.
• Paper Submission: The full version of the paper should be submitted as a PDF file following the submission guidelines. The submission site will open soon.
• Publication: All accepted papers will be presented at the Bench 2023 conference, and will be published by Springer LNCS (Pending). Distinguished papers will be recommended to and published by the BenchCouncil Transactions on Benchmarks, Standards and Evaluation (TBench).
• Awards: The Bench conference regularly presents the BenchCouncil Achievement Award ($3000), the BenchCouncil Rising Star Award ($1000), the BenchCouncil Best Paper Award ($1000), and the BenchCouncil Distinguished Doctoral Dissertation Award in Computer Architecture ($1000) and in other areas ($1000). This year, the BenchCouncil Distinguished Doctoral Dissertation Award includes two tracks: computer architecture and other areas. From the submissions to each track, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the Bench 2023 conference and to contribute research articles to the BenchCouncil Transactions on Benchmarks, Standards, and Evaluation (TBench). Finally, one of the four finalists in each track will receive the award, which carries a $1,000 honorarium.