Hi Team,
There was a scenario where rdf_loader_run() was executed, but unfortunately the process got stuck, so we exited from it. Some time later we triggered the same command again, without stopping the previous run.

While looking into the log for that timeframe, we observed a large number of lock waits and deadlocks. How can we avoid such a situation in the future? What are the possible solutions?
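For context, the session went roughly as follows (a sketch, assuming the standard ld_dir()/rdf_loader_run() bulk-loader workflow; the file path and graph IRI here are hypothetical):

    -- register the files to load
    ld_dir('/data/rdf', '*.ttl', 'http://example.com/graph');

    -- first attempt: this call appeared to hang, and the client session
    -- was exited while rdf_loader_run() was still running on the server
    rdf_loader_run();

    -- later, the same command was issued again from a new session,
    -- without first stopping the earlier run (e.g. with rdf_load_stop())
    rdf_loader_run();

The relevant log excerpt: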
05:16:31 * Monitor: Locks are held for a long time
05:18:31 * Monitor: Many lock waits
05:18:31 * Monitor: Locks are held for a long time
05:20:31 * Monitor: Many lock waits
05:20:31 * Monitor: Locks are held for a long time
05:22:32 * Monitor: Many lock waits
05:22:32 * Monitor: Locks are held for a long time
05:24:32 * Monitor: Many lock waits
05:24:32 * Monitor: Locks are held for a long time
05:26:32 * Monitor: Many lock waits
05:26:32 * Monitor: Locks are held for a long time
05:28:32 * Monitor: Many lock waits
05:28:32 * Monitor: Locks are held for a long time
05:30:32 * Monitor: Many lock waits
05:30:32 * Monitor: Locks are held for a long time
05:32:32 * Monitor: Many lock waits
05:32:32 * Monitor: Locks are held for a long time
05:34:32 * Monitor: Many lock waits
05:34:32 * Monitor: Locks are held for a long time
05:36:33 * Monitor: Many lock waits
05:36:33 * Monitor: Locks are held for a long time
05:38:33 * Monitor: Many lock waits
05:38:33 * Monitor: Locks are held for a long time
05:40:33 * Monitor: Many lock waits
05:40:33 * Monitor: Locks are held for a long time
05:41:20 Checkpoint started
05:41:20 Checkpoint finished, log reused
05:42:33 * Monitor: Many lock waits
05:42:33 * Monitor: Locks are held for a long time
05:44:33 * Monitor: Many lock waits
05:44:33 * Monitor: Locks are held for a long time
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:09 PL LOG: Loader started
05:46:33 * Monitor: Many lock waits
05:46:55 * Monitor: Locks are held for a long time
05:48:41 * Monitor: Many lock waits
05:49:03 * Monitor: Locks are held for a long time
05:49:05 PL LOG: deadlock in loader, waiting 253 milliseconds
05:49:08 PL LOG: deadlock in loader, waiting 906 milliseconds
05:49:10 PL LOG: deadlock in loader, waiting 868 milliseconds
05:49:13 PL LOG: deadlock in loader, waiting 857 milliseconds
05:49:15 * Monitor: Should read for update because lock escalation from shared to exclusive fails frequently (2)
05:49:18 PL LOG: deadlock in loader, waiting 680 milliseconds
05:49:22 PL LOG: deadlock in loader, waiting 292 milliseconds
05:49:24 PL LOG: deadlock in loader, waiting 343 milliseconds