TBench (BenchCouncil Transactions on Benchmarks, Standards and Evaluations) Calls for Papers
BenchCouncil Transactions on Benchmarks, Standards and Evaluations (TBench) is an open-access multi-disciplinary journal dedicated to benchmarks, standards, evaluations, optimizations, and industry best practices. This journal is a peer-reviewed, subsidized, open-access journal in which the International Open Benchmark Council pays the open-access fee; authors do not pay any open-access publication fee. However, at least one of the authors must register for the BenchCouncil International Symposium on Benchmarking, Measuring, and Optimizing (Bench) (https://www.benchcouncil.org/bench/) and present the work there. The journal offers fast-track publication with an average turnaround time of one month.
TBench publishes position papers that open new research areas, research articles that address new problems, methodologies, and tools, survey articles that build comprehensive knowledge, and comment articles that engage critically with previously published articles. Particular areas of interest include, but are not limited to:
1. Benchmark science and engineering across multiple disciplines, including but not limited to:
- the formulation of problems or challenges in emerging and future computing
- benchmarks, datasets, and indexes in multidisciplinary applications (e.g., medicine, finance, education, management, and psychology)
- benchmark-based quantitative approaches to tackling multidisciplinary and interdisciplinary challenges
- industry best practices
2. Benchmark and standard specifications, implementations, and validations of:
- Big Data
- AI
- HPC
- Machine learning
- Big scientific data
- Datacenter
- Cloud
- Warehouse-scale computing
- Mobile robotics
- Edge and fog computing
- IoT
- Blockchain
- Data management and storage
- Financial domains
- Education domains
- Medical domains
- Other application domains
3. Datasets
- Detailed descriptions of research or industry datasets, including the methods used to collect the data and technical analyses supporting the quality of the measurements.
- Analyses or meta-analyses of existing data and original articles on systems, technologies, and techniques that advance data sharing and reuse to support reproducible research.
- Evaluating the rigor and quality of the experiments used to generate the data and the completeness of the data description.
- Tools that can generate large-scale data while preserving their original characteristics.
4. Workload characterization, quantitative measurement, design, and evaluation studies of:
- Computer and communication networks, protocols, and algorithms
- Wireless, mobile, ad-hoc and sensor networks, IoT applications
- Computer architectures, hardware accelerators, multi-core processors, memory systems, and storage networks
- High-Performance Computing
- Operating systems, file systems, and databases
- Virtualization, data centers, distributed and cloud computing, fog, and edge computing
- Mobile and personal computing systems
- Energy-efficient computing systems
- Real-time and fault-tolerant systems
- Security and privacy of computing and networked systems
- Software systems and services and enterprise applications
- Social networks, multimedia systems, Web services
- Cyber-physical systems, including the smart grid
5. Methodologies, metrics, abstractions, algorithms, and tools for:
- Analytical modeling techniques and model validation
- Workload characterization and benchmarking
- Performance, scalability, power, and reliability analysis
- Sustainability analysis and power management
- System measurement, performance monitoring, and forecasting
- Anomaly detection, problem diagnosis, and troubleshooting
- Capacity planning, resource allocation, run time management, and scheduling
- Experimental design, statistical analysis, simulation
6. Measurement and evaluation:
- Measurement standards
- Evaluation methodologies and metrics
- Testbed methodologies and systems
- Instrumentation, sampling, tracing, and profiling of large-scale real-world applications and systems
- Collection and analysis of measurement data that yield new insights
- Measurement-based modeling (e.g., workloads, scaling behavior, assessment of performance bottlenecks)
- Methods and tools to monitor and visualize measurement and evaluation data
- Systems and algorithms that build on measurement-based findings
- Advances in data collection, analysis, and storage (e.g., anonymization, querying, sharing)
- Reappraisal of previous empirical measurements and measurement-based conclusions
- Descriptions of challenges and future directions the measurement and evaluation community should pursue
Guide for authors
Types of paper
Contributions falling into the following categories will be considered for publication: Position Papers, Full-Length Articles/Research Articles, Review Papers, Short Communications, Discussions, Editorials, Case Studies, Practice Guidelines, Product Reviews, Conference Reports, and Opinion Papers. Please ensure that you select the appropriate article type from the list of options when making your submission. Authors contributing to special issues should ensure that they choose the special issue article type from this list.
Page limits below are in double-column pages and exclude references and author biographies.
- Position Papers – no page limit
- Full-Length Articles/Research Articles – 12 pages
- Review Papers – no page limit
- Short Communications – 4 pages
- Discussions – 2 pages
- Editorials – 10 pages
- Case Studies – 8 pages
- Practice Guidelines – 12 pages
- Product Reviews – 4 pages
- Conference Reports – 10 pages
- Opinion Papers – 4 pages
Peer review
This journal operates a double-anonymized review process. All contributions are typically sent to at least two independent expert reviewers to assess their scientific quality. The Editor is responsible for the final decision regarding the acceptance or rejection of articles, and the Editor's decision is final. Editors are not involved in decisions about papers that they have written themselves, that have been written by family members or colleagues, or that relate to products or services in which the Editor has a conflict of interest. Any such submission is subject to the journal's usual procedures, with peer review handled independently of the relevant Editor and their research groups.
If you have any issues or concerns, please contact the Editor-in-Chief at jianfengzhan.benchcouncil@gmail.com.
About BenchCouncil
The International Open Benchmark Council (BenchCouncil) is a non-profit international organization that aims to benchmark, standardize, evaluate, and incubate emerging technologies. Since its founding, BenchCouncil has borne four fundamental responsibilities: establishing unified benchmark science and engineering across multiple disciplines; defining the problems or challenges of emerging and future computing; keeping the benchmarks and standards community open, inclusive, and growing; and promoting benchmark-based quantitative approaches to tackling multidisciplinary and interdisciplinary challenges. BenchCouncil also hosts a series of influential benchmark projects and presents the achievement and rising star awards each year at its flagship Bench conference. The BenchCouncil website is https://www.benchcouncil.org/.