Deadlock in loader issue

Hi Team,
There was a scenario where rdf_loader_run() was executed, but the process appeared to hang, so we exited from it and, some time later, triggered the same command again without stopping the previous one.
While looking into the log for that timeframe, we observed a large number of lock waits and locks held for a long time. Can we avoid this kind of situation in the future, and what is the recommended solution for it?

05:16:31 * Monitor: Locks are held for a long time
05:18:31 * Monitor: Many lock waits
05:18:31 * Monitor: Locks are held for a long time
05:20:31 * Monitor: Many lock waits
05:20:31 * Monitor: Locks are held for a long time
05:22:32 * Monitor: Many lock waits
05:22:32 * Monitor: Locks are held for a long time
05:24:32 * Monitor: Many lock waits
05:24:32 * Monitor: Locks are held for a long time
05:26:32 * Monitor: Many lock waits
05:26:32 * Monitor: Locks are held for a long time
05:28:32 * Monitor: Many lock waits
05:28:32 * Monitor: Locks are held for a long time
05:30:32 * Monitor: Many lock waits
05:30:32 * Monitor: Locks are held for a long time
05:32:32 * Monitor: Many lock waits
05:32:32 * Monitor: Locks are held for a long time
05:34:32 * Monitor: Many lock waits
05:34:32 * Monitor: Locks are held for a long time
05:36:33 * Monitor: Many lock waits
05:36:33 * Monitor: Locks are held for a long time
05:38:33 * Monitor: Many lock waits
05:38:33 * Monitor: Locks are held for a long time
05:40:33 * Monitor: Many lock waits
05:40:33 * Monitor: Locks are held for a long time
05:41:20 Checkpoint started
05:41:20 Checkpoint finished, log reused
05:42:33 * Monitor: Many lock waits
05:42:33 * Monitor: Locks are held for a long time
05:44:33 * Monitor: Many lock waits
05:44:33 * Monitor: Locks are held for a long time
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:33 * Monitor: Many lock waits
05:46:55 * Monitor: Locks are held for a long time
05:48:41 * Monitor: Many lock waits
05:49:03 * Monitor: Locks are held for a long time
05:49:05 PL LOG: deadlock in loader, waiting 253 milliseconds
05:49:08 PL LOG: deadlock in loader, waiting 906 milliseconds
05:49:10 PL LOG: deadlock in loader, waiting 868 milliseconds
05:49:13 PL LOG: deadlock in loader, waiting 857 milliseconds
05:49:15 * Monitor: Should read for update because lock escalation from shared to exclusive fails frequently (2)
05:49:18 PL LOG: deadlock in loader, waiting 680 milliseconds
05:49:22 PL LOG: deadlock in loader, waiting 292 milliseconds
05:49:24 PL LOG: deadlock in loader, waiting 343 milliseconds
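
For reference, before re-triggering the loader it is possible to check from isql whether a previous run is still working through its file list. A minimal sketch, assuming the standard DB.DBA.load_list bookkeeping table used by the bulk loader (ll_state 0 = to load, 1 = loading, 2 = loaded):

-- files still queued or currently being processed by a running rdf_loader_run()
SELECT ll_file, ll_state, ll_started, ll_error
  FROM DB.DBA.load_list
 WHERE ll_state <> 2;

If rows with ll_state = 1 from the earlier invocation are still present, the previous run was still active when the command was issued again.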

What is the Virtuoso version being used? This can be obtained as detailed in the link.

Has your Virtuoso instance been configured to run on the system in use, as detailed in the RDF Performance Tuning Guide?

How many CPUs and how much RAM are available on the machine Virtuoso is running on?

When the bulk load operation is running, what does the Virtuoso status(); command, run from isql, report about the state of the Virtuoso server at that point?
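
For example, that state can be captured from the command line while the load is running; a minimal sketch, assuming the default isql port 1111 and the dba account (the password and output filename are placeholders):

isql 1111 dba <password> EXEC="status();" > status_during_load.txt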

Virtuoso version: 07.20.3234

Has your Virtuoso instance been configured? Yes

CPUs and RAM
16 CPUs, 128 GB RAM

What does the output of running the Virtuoso status(); command report?
I don’t have that trace available for now.

The issue was resolved once the DB was restarted, but is there any way to avoid it, or to resolve it without restarting the DB?

Can’t comment on how to avoid this without more information, such as the status(); output while in that state, and knowing more about the triples in the database and how many are being loaded, etc.
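
As a general precaution only (a sketch, assuming the documented bulk-loader procedures rdf_load_stop() and rdf_loader_run(), and noting that the exact recovery steps depend on how the first run was interrupted), a stuck run can usually be stopped and restarted from isql rather than restarting the whole server:

-- ask any running bulk-load threads to stop
rdf_load_stop ();
-- check whether any files were left in the 'loading' state (ll_state = 1)
SELECT ll_file, ll_state, ll_error FROM DB.DBA.load_list WHERE ll_state = 1;
-- once the list is clean, re-run the loader and persist the result
rdf_loader_run ();
checkpoint;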