Virtuoso TPC-H
The TPC-H benchmark is the industry-standard decision-support benchmark from the Transaction Processing Performance Council (TPC).
TPC-H simulates complex analytical workloads (22 ad-hoc queries over 8 tables, e.g., Lineitem, Orders, Supplier) and is widely used for data-warehousing evaluations. Virtuoso treats TPC-H data as Linked Data, so the same dataset can be queried via both SQL and SPARQL. Official and community results show Virtuoso scaling to 100 TB+ datasets on clusters and cloud instances. Key benchmark result documents include:
- In Hoc Signo Vinces (Part 1): Virtuoso meets TPC-H (2013 analysis of Virtuoso’s TPC-H implementation).
- Annuit Coeptis, or, Star Schema and The Cost of Freedom (2013 deep dive into schema optimizations for TPC-H).
- E Pluribus Unum, or, Star Schema Meets Cluster (2013 cluster scaling for TPC-H workloads).
- TPC-H Kit Now on V7 Fast Track (2014 update on Virtuoso V7 enhancements for TPC-H).
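To make the workload concrete, the following is a minimal sketch of the shape of TPC-H Query 1 (the pricing summary report) run against a toy, in-memory subset of the LINEITEM table. The table rows and the reduced column set here are illustrative assumptions; the official schema and the full 22 query templates are defined by the TPC-H specification, and in a Virtuoso deployment the same data could also be exposed to SPARQL.

```python
import sqlite3

# Toy subset of the TPC-H LINEITEM table (illustrative columns and rows only).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE lineitem (
        l_returnflag    TEXT,
        l_linestatus    TEXT,
        l_quantity      REAL,
        l_extendedprice REAL
    )
""")
conn.executemany(
    "INSERT INTO lineitem VALUES (?, ?, ?, ?)",
    [("N", "O", 17.0, 21168.23),
     ("N", "O", 36.0, 45983.16),
     ("R", "F", 8.0, 13309.60)],
)

# Heavily simplified shape of TPC-H Q1: grouped aggregates over lineitem.
rows = conn.execute("""
    SELECT l_returnflag, l_linestatus,
           SUM(l_quantity)      AS sum_qty,
           SUM(l_extendedprice) AS sum_base_price,
           COUNT(*)             AS count_order
    FROM lineitem
    GROUP BY l_returnflag, l_linestatus
    ORDER BY l_returnflag, l_linestatus
""").fetchall()
for r in rows:
    print(r)
```

The real Q1 adds discount/tax expressions, averages, and a date predicate, but the grouped-aggregate pattern above is what makes TPC-H a scan- and aggregation-heavy workload.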
LDBC Benchmarks
The Linked Data Benchmark Council (LDBC) develops vendor-neutral, industry-driven benchmarks for both Labeled Property Graph (LPG) systems and RDF systems queried via SPARQL.
Virtuoso-specific results:
- LDBC SNB Interactive Workload Results (2015 paper showing Virtuoso outperforming other systems on short reads and updates on the SF300 dataset).
- Graphalytics Benchmark on Virtuoso (2018 overview of Virtuoso's results in audited LDBC runs).
BSBM (Berlin SPARQL Benchmark)
The Berlin SPARQL Benchmark (BSBM) is one of the earliest and most widely used SPARQL benchmarks.
Built around an e-commerce scenario (products, vendors, reviews, offers), it defines three use cases:
- Explore use case (browsing/search)
- BI use case (aggregations, analytics)
- Update use case (concurrent inserts)
BSBM is available in multiple versions (V1–V3) and at dataset sizes from thousands to hundreds of millions of triples. It is commonly used to compare native RDF stores (Virtuoso, Blazegraph, GraphDB, Stardog) against RDBMS-to-RDF mappings.
Virtuoso-specific results:
- BSBM Results for Virtuoso (April 2013, V3.1) (10M to 150B triples, Explore/BI use cases; a Virtuoso 7 cluster achieves up to 1,170 QMpH on 1B triples).
- 150 Billion Triple BSBM on LOD2 Virtuoso Cluster (2013; first cluster results, 750x scale increase over prior benchmarks).
- BSBM V3 Results (Feb 2011) (100M/200M triples, Explore/Update; Virtuoso leads in throughput).
- BSBM V2 Results (Nov 2009) (100M/200M triples; Virtuoso fastest among tested stores).
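BSBM reports throughput as QMpH, Query Mixes per Hour: the number of complete query mixes a store executes, extrapolated to an hour. The figures above (e.g., 1,170 QMpH) use this metric. A minimal calculation, with hypothetical run numbers:

```python
def qmph(num_mixes: int, elapsed_seconds: float) -> float:
    """Query Mixes per Hour: BSBM's headline throughput metric."""
    return num_mixes * 3600.0 / elapsed_seconds

# Hypothetical run: 50 complete query mixes finished in 180 seconds.
print(qmph(50, 180.0))  # 1000.0
```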
DBpedia Benchmarks
These benchmarks use real-world data and queries from DBpedia (the structured data extracted from Wikipedia).
- DBpedia SPARQL Benchmark (DBPSB) – Mines actual query logs from the public DBpedia SPARQL endpoint, clusters queries by features, and builds representative query templates. Tests real query diversity and endpoint robustness.
- Training Benchmark & Cold-Start Benchmark – Variants for evaluating federated querying and knowledge-graph completion systems.
- Often combined with BEIR or S-DBpedia for spatial and retrieval-augmented generation tasks.

Virtuoso-specific results:
- DBpedia Benchmark on Virtuoso (DBPSB on 198M triples; Virtuoso v6 cluster reduces cold query times to 33s vs. 210s on v5).
- DBPSB Performance Assessment (2011; Virtuoso leads in real-query scalability over Sesame, Jena-TDB, BigOWLIM).
- FEASIBLE SPARQL Benchmark on DBpedia (2015; Virtuoso 59% faster than Fuseki on DBpedia query mixes).
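DBPSB's core idea, grouping logged queries by the SPARQL features they use so that representative templates can be derived per group, can be sketched as follows. The feature list and the identical-vector grouping here are simplifying assumptions; the actual benchmark considers a richer feature set and a proper clustering step.

```python
from collections import defaultdict

# A small subset of SPARQL features used to characterise logged queries
# (assumption: DBPSB's real feature set is larger).
FEATURES = ["OPTIONAL", "FILTER", "UNION", "DISTINCT", "LIMIT"]

def feature_vector(query: str) -> tuple:
    """Binary vector: which of the listed features appear in the query."""
    q = query.upper()
    return tuple(int(f in q) for f in FEATURES)

def cluster_by_features(queries):
    """Group queries whose feature vectors are identical."""
    clusters = defaultdict(list)
    for q in queries:
        clusters[feature_vector(q)].append(q)
    return clusters

# Toy stand-in for a DBpedia endpoint query log.
log = [
    "SELECT DISTINCT ?s WHERE { ?s ?p ?o } LIMIT 10",
    "SELECT ?s WHERE { ?s ?p ?o FILTER(?o > 5) }",
    "SELECT DISTINCT ?x WHERE { ?x a ?c } LIMIT 5",
]
clusters = cluster_by_features(log)
for vec, qs in clusters.items():
    print(vec, len(qs))
```

From each resulting group, DBPSB then derives a parameterised query template, so the generated workload mirrors the feature distribution of real endpoint traffic.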
| Benchmark | Primary Focus | Data Type | Key Use Cases | Main Maintainer |
|---|---|---|---|---|
| Virtuoso TPC-H | Decision support / data warehousing | Relational & RDF | Complex analytics, ad-hoc queries | TPC + OpenLink Software |
| LDBC | Graph & RDF systems | Property & RDF graphs | Interactive, BI, analytics | Linked Data Benchmark Council |
| BSBM | SPARQL engine performance | RDF triples | Explore, BI, updates | Freie Universität Berlin / OpenLink |
| DBpedia Benchmarks | Real-world SPARQL queries | DBpedia knowledge graph | Query-log driven testing | AKSW Group, Leipzig University |