Linked Data View definition. "The quad storage is edited by other client"

During the process I am getting these errors:

22023 The quad storage "http://www.openlinksw.com/schemas/virtrdf#DefaultQuadStorage" is edited by other client, started 2022-06-08 15:38:12.101491
00000 Quad storage http://www.openlinksw.com/schemas/virtrdf#DefaultQuadStorage is flagged as being edited 2022-06-08 15:38:12.101491
42000 Can not process data that are being edited by someone else.
00000 To force tests/bugfixing, pass 1 as first argument and either '2ab126ce83b4e0581bbddb4185587c1e' or '*' as second argument of the DB.DBA.RDF_AUDIT_METADATA() call
22023 The quad storage "http://www.openlinksw.com/schemas/virtrdf#DefaultQuadStorage" is edited by other client, started 2022-06-08 15:38:12.101491
00000 Quad storage http://www.openlinksw.com/schemas/virtrdf#DefaultQuadStorage is flagged as being edited 2022-06-08 15:38:12.101491
42000 Can not process data that are being edited by someone else.
00000 To force tests/bugfixing, pass 1 as first argument and either '2ab126ce83b4e0581bbddb4185587c1e' or '*' as second argument of the DB.DBA.RDF_AUDIT_METADATA() call
00000 OK

I first tried:
SQL> DB.DBA.RDF_AUDIT_METADATA(1, '2ab126ce83b4e0581bbddb4185587c1e');
Then:
SQL> DB.DBA.RDF_AUDIT_METADATA(1, '*');

It is still not working properly…

Thanks.

What are the steps being performed to get to that Linked Data View quad map storage creation phase?

Normally the DB.DBA.RDF_AUDIT_METADATA(1, '*'); command does resolve the "42000 Can not process data that are being edited by someone else" error messages.
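
For example, it can be run non-interactively via isql against the default SQL port 1111 (a sketch; substitute your own dba password):

$ isql 1111 dba <dba-password> exec="DB.DBA.RDF_AUDIT_METADATA (1, '*');"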

Also, have you tried restarting the Virtuoso instance and then retrying the Linked Data View creation?
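
If needed, the restart can be done along these lines (a sketch, assuming a standard single-server installation; the database directory path is a placeholder for wherever your virtuoso.ini lives):

SQL> CHECKPOINT;
SQL> SHUTDOWN();
$ cd /path/to/virtuoso/database
$ virtuoso-t +configfile virtuoso.ini +wait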

The Linked Data View after the fix command is working, but I think not properly, because as a result I have:

SQL Relations (Tables) to RDF Statements (Predicate / Property Graph) Mappings

http://localhost:8890/schemas/CSV/qm-VoidStatistics
http://localhost:8890/schemas/CSV/qm-pfai2019_csv

Sample Graph IRIs & Linked Data Entity URIs

RDF Document (Named Graph) IRIs:

Transient Views: http://localhost:8890/CSV#
http://localhost:8890/CSV/stat#
http://localhost:8890/CSV/stat#Stat

Metadata Data Document (VoiD) URI/URL: http://localhost:8890/CSV/stat#

Linked Data Ontology URI: http://localhost:8890/schemas/CSV/

In the RDF Document (Named Graph) IRIs section, the following was missing:

http://localhost:8890/CSV/qm-pfai2019_csv/ID/1#this

Sample IRIs to test the generated Linked Data View IRIs should have been presented. I presume the http://localhost:8890/CSV/qm-pfai2019_csv/ID/1#this IRI you reference was generated previously, assuming qm-pfai2019_csv to be the table name and ID to be a primary key column. Is the URI actually dereferenceable, i.e. does it generate a Linked Data View page when loaded into a browser?
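
One quick way to test dereferenceability outside the browser is with curl (note that the #this fragment is never sent to the server, so the request targets the document part of the IRI):

$ curl -iL -H "Accept: text/html" "http://localhost:8890/CSV/qm-pfai2019_csv/ID/1"
$ curl -iL -H "Accept: text/turtle" "http://localhost:8890/CSV/qm-pfai2019_csv/ID/1"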

Hi,
the problem is that in a new (“good”) instance of OpenLink Virtuoso the IRI http://localhost:8890/CSV/qm-pfai2019_csv/ID/1#this is provided, whereas in the “bad” instance it is not provided, and I don't know how to debug the case.
I also can't clean the Virtuoso DB (11 GB) that was only partially loaded because of “load errors” when using the IMPORT function in the Conductor.
Regards.

What are the “good” vs “bad” instances? With the “good” instance, was the database recreated from scratch, starting with an empty database and a data reload, or is it some repaired variant of the “bad” database?

When the RDF Linked Data Views were created, were any error messages reported during the quad map storage creation, and do you still have this output such that it can be provided for review?

Does the http://localhost:8890/sparql/rdfviews.vsp page list the expected RDF View definitions?
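
They can also be listed from isql, since the RDF Views metadata is itself stored as triples in the virtrdf: graph. A sketch, assuming the quad maps are typed as virtrdf:QuadMap per that schema:

SQL> SPARQL
SELECT ?qm
FROM <http://www.openlinksw.com/schemas/virtrdf#>
WHERE { ?qm a <http://www.openlinksw.com/schemas/virtrdf#QuadMap> };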

If you query the http://localhost:8890/CSV# transient graph name, does the http://localhost:8890/CSV/qm-pfai2019_csv/ID/1#this IRI not exist as a subject value of some of the triples?
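
For instance, from isql a query along these lines (using the graph and IRI names above) should show whether that subject is present:

SQL> SPARQL
SELECT ?p ?o
FROM <http://localhost:8890/CSV#>
WHERE { <http://localhost:8890/CSV/qm-pfai2019_csv/ID/1#this> ?p ?o }
LIMIT 10;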

Hi,
I solved the issue by deleting the table and recreating it with a bulk load (a very long execution time: 647015 msec); still, the database is very big (11 GB).

Thanks.

Glad to hear you have solved the issue…

647015 msec is about 10 mins. How long did the data take to load previously, and how many rows of CSV data are being bulk loaded (I presume)?

When you delete data, Virtuoso does not release the allocated space, on the premise that the database will grow again and can reuse that space without needing to allocate new space, gaining some performance.

If you want to reclaim the space, you would have to perform a backup-dump and restore of the database, which would rebuild it with only the required space allocated.
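
A rough outline of that procedure, using the built-in online backup and restore (a sketch, assuming a single-server setup; the backup prefix and paths are placeholders, and the second argument of backup_online() is the maximum number of 8 KB pages per backup file):

SQL> CHECKPOINT;
SQL> backup_context_clear();
SQL> backup_online ('virtuoso-bkp_', 30000);
SQL> SHUTDOWN();
$ cd /path/to/virtuoso/database
$ mv virtuoso.db virtuoso.db.old
$ virtuoso-t +configfile virtuoso.ini +restore-backup virtuoso-bkp_
$ virtuoso-t +configfile virtuoso.ini +wait

The +restore-backup run rebuilds virtuoso.db from the backup files and then exits, after which the server is started normally, with the database file containing only the pages actually in use.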