
SongKong and Jaikoz Music Tagger Community Forum

Estimate remaining time

Should I restart?

By the way, I still have this lying on my disk:

Can I assume I can safely get rid of the Prefs_backup folder and that SongKong now uses Prefs?

I would restart.

SongKong has never used a Prefs_backup folder, that must be something you created.

OK, I sent the report files so you can have a look at them as well. :wink:


Okay, actually reviewing it I remembered something. There are two separate queues:

A limited-size queue that controls matching for most tasks. Once this queue is full no more songs are loaded; it has one thread per CPU, and a timeout is triggered for any job that takes too long to complete.

An unbounded queue for song saving. It has no timeout for jobs; this is to ensure that if a task is cancelled we don't interrupt any jobs while they are writing data to a file, which could corrupt the file. Song saving does use CPU, but the majority of the work is I/O, hence we only provide two threads so that we don't get too much contention writing multiple files to the disk.
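To illustrate the two-queue design described above, here is a minimal sketch using the standard `ThreadPoolExecutor`. The class and method names are hypothetical (SongKong's actual implementation is not shown here); it only demonstrates the bounded-queue/one-thread-per-CPU matcher pool versus the unbounded two-thread saver pool:

```java
import java.util.concurrent.*;

public class PipelineQueuesSketch {

    // Matching pool: one thread per CPU and a bounded queue, so song loading
    // pauses once the queue is full (caller-runs gives natural back-pressure).
    static ThreadPoolExecutor newMatcherPool(int queueCapacity) {
        int cpus = Runtime.getRuntime().availableProcessors();
        return new ThreadPoolExecutor(
                cpus, cpus, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>(queueCapacity),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    // Saving pool: only two threads (the work is mostly I/O) and an unbounded
    // queue, so save jobs are never rejected or interrupted mid-write.
    static ThreadPoolExecutor newSaverPool() {
        return new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
    }
}
```

With a queue capacity of 100 the matcher pool matches the `QueueSize:Worker:100` maximum seen later in the monitor logs.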

Once I had fixed the server, song saving sped up but was still way behind song matching. So I don't know if this was because of the issue with the server, or because there is a mismatch between the many threads used for song matching and only two threads for song saving.

This is a summary from the monitor logs:

21/06/2023 09.14.54:CEST:MonitorExecutors:outputQueueSize:SEVERE: QueueSize:Worker:100
21/06/2023 09.14.54:CEST:MonitorExecutors:outputQueueSize:SEVERE: QueueSize:SongSaver:21800
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: DbConnectionsOpen:39
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: SongPreMatcherMatcher:0:0:0:0
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: NaimMatcher:0:0:0:0
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: AcoustidSubmitter:111718
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: StartMatcher:27356:27321:27321:260000
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher1:27321:27321:27287:259804
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzUpdateSongOnly:0:0:0:0
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzMetadataMatches:0:0:0:0
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher2(SubGroupAnalyser):630:630:630:10043
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher3(WithoutDuplicates):182:182:182:1772
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher4(AnalyserArtistFolder):0:0:0:0
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsMultiFolderSongGroupMatcher:14886:14821:14820:160768
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsSongGroupMatcher:14886:14821:14820:160768
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongMatcher:0:0:0:0
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsSongMatcher:275:275:275:275
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsUpdateSongGroup:2631:2631:2631:21343
21/06/2023 09.14.54:CEST:MonitorExecutors:outputPipelines:SEVERE: SongSaver:27784:5984:4410:257829

So we see that at that point in time the Worker queue is at its maximum of 100 tasks, and the SongSaver queue is at 21,800 tasks.

Then we list the different pipeline tasks:

MusicBrainzSongGroupMatcher1:27321:27321:27287:259804

Means the MusicBrainzSongGroupMatcher1 task queued 27,321 jobs, 27,321 were started, 27,287 completed (this can include timeouts), and 259,804 music files were loaded into the task for processing; each job usually groups multiple related songs, e.g. an album.

SongSaver:27784:5984:4410:257829

Whereas here only 5,984 of the 27,784 jobs have even started; but you said the disks were not busy, so that is confusing.
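The counter format explained above can be captured with a small parser. This is an illustrative sketch (the class is hypothetical, not SongKong code), assuming the trailing field of each monitor line is always `Name:queued:started:completed:filesLoaded`:

```java
public class PipelineStats {
    final String name;
    final long queued, started, completed, filesLoaded;

    PipelineStats(String name, long queued, long started,
                  long completed, long filesLoaded) {
        this.name = name;
        this.queued = queued;
        this.started = started;
        this.completed = completed;
        this.filesLoaded = filesLoaded;
    }

    // Parses the "Name:queued:started:completed:filesLoaded" portion of a
    // monitor line, e.g. "SongSaver:27784:5984:4410:257829".
    static PipelineStats parse(String field) {
        String[] p = field.split(":");
        return new PipelineStats(p[0],
                Long.parseLong(p[1]), Long.parseLong(p[2]),
                Long.parseLong(p[3]), Long.parseLong(p[4]));
    }

    // Jobs queued but not yet completed.
    long backlog() {
        return queued - completed;
    }
}
```

Applied to the SongSaver line above, the backlog is 27,784 − 4,410 = 23,374 jobs, which is why the save queue lags so far behind matching.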

Unfortunately, the part of monitoring that shows a trace of each thread only shows the Worker threads, not the SongSaver threads, so I cannot see what the SongSaver threads were doing when this snapshot was taken. But I have fixed this for the next version, due next week.

Every time I start SongKong we end up with a new release the next week, man… ^^

Don’t know if you restarted Fix Songs or are waiting on this, but although not mentioned in the release notes, the SongKong 9.2 Brutalism release of 26th June 2023 has the extra debugging.

Hey @paultaylor, it’s much appreciated. I am running a fix task though. I think I’ll keep it running for now :wink:

Thanks for letting me know! :slight_smile:

Great, could you just post a screenshot of progress so far?

Sure,

So far, 688,058 songs have been processed out of a total of 3,339,210. This means there are 2,651,152 songs remaining to be processed.

The process started on June 22, 2023, at 11:46:19 AM, so approximately 6 days and 20 hours (164 hours) have elapsed since the beginning.

Total number of songs processed per hour: 688,058 / 164 ≈ 4,195 songs/hour

Estimated time remaining: 2,651,152 remaining songs / 4,195 songs/hour ≈ 632 hours

Now, to convert that into days and hours, we can divide 632 hours by 24 hours per day:

632 hours / 24 hours/day ≈ 26.3 days

So, approximately, processing may take around 26 days and 8 hours from June 27, 7:48 AM to complete.
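The arithmetic above can be sketched as a tiny estimator. This is just the rate calculation from this post written out in code (the class name is made up for illustration):

```java
public class EtaEstimate {

    // Throughput in songs per hour, given work done and elapsed hours.
    static double songsPerHour(long processed, double elapsedHours) {
        return processed / elapsedHours;
    }

    // Remaining time in hours at the current rate.
    static double hoursRemaining(long total, long processed, double rate) {
        return (total - processed) / rate;
    }

    public static void main(String[] args) {
        double rate = songsPerHour(688_058, 164);                 // ~4,195 songs/hour
        double hours = hoursRemaining(3_339_210, 688_058, rate);  // ~632 hours
        System.out.printf("rate=%.0f songs/h, remaining=%.1f h (%.1f days)%n",
                rate, hours, hours / 24);
    }
}
```

Note this assumes a constant rate; if throughput varies (e.g. a server slowdown, as discussed earlier in the thread), the estimate will drift.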

What do you think about this 4,195 songs per hour number? Is it about average?

Yes, I think that is okay.

For some reason SK stopped processing my files, and now this:

So, it reports it only saved 43 tracks after 206 hours of processing…

I started a new fix task; it instantly quit, reporting 0 tracks processed.

Any idea @paultaylor?

Is this the same run as the screenshot above shows?

Makes no sense, because it had already saved 450,000 songs by then. You had better send the support files, but I won’t be able to look until tomorrow.

Okay, so if you go to Browse/Browse by Album you can see what it actually did, which is a lot!

The summary page is completely wrong, showing invalid values in all columns, not just the Saved column.

So, questions:

Why did it stop?

According to the logs there were no errors; it was cancelled by the user. Could you have cancelled it accidentally?

30/06/2023 22.14.59:CEST:CmdRemote:lambda$start$179:SEVERE: >>>>>/fixsongs.cancel
30/06/2023 22.14.59:CEST:TaskMonitor:setUserCancelled:SEVERE: ***********User Cancelled Task
30/06/2023 22.14.59:CEST:SongKong:stopTask:SEVERE: Set StopTask
java.lang.Exception
	at com.jthink.songkong.cmdline.SongKong.stopTask(SongKong.java:1517)
	at com.jthink.songkong.server.callback.serverstartcontroller.AbstractServerModifySongs.cancelTask(AbstractServerModifySongs.java:212)
	at com.jthink.songkong.server.CmdRemote.lambda$configureWebEndPoints$70(CmdRemote.java:202)
	at spark.RouteImpl$1.handle(RouteImpl.java:72)
	at spark.http.matching.Routes.execute(Routes.java:61)
	at spark.http.matching.MatcherFilter.doFilter(MatcherFilter.java:134)
	at spark.embeddedserver.jetty.JettyHandler.doHandle(JettyHandler.java:50)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1598)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
	at org.eclipse.jetty.server.Server.handle(Server.java:516)
	at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)
	at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
	at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
	at java.base/java.lang.Thread.run(Thread.java:832)

How much work has it actually done?

According to the logs we have:

30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: SongPreMatcherMatcher:0:0:0:0
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: NaimMatcher:0:0:0:0
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: AcoustidSubmitter:154530
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: StartMatcher:103042:103042:103042:1010560
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher1:87292:87292:87292:857335
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzUpdateSongOnly:0:0:0:0
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzMetadataMatches:99:99:99:100
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher2(SubGroupAnalyser):0:0:0:0
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher3(WithoutDuplicates):851:851:851:14714
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher4(AnalyserArtistFolder):205:205:205:2298
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsMultiFolderSongGroupMatcher:58571:58570:58570:645039
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsSongGroupMatcher:58571:58570:58570:645039
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongMatcher:0:0:0:0
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsSongMatcher:825:825:825:825
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsUpdateSongGroup:5697:5697:5697:43453
30/06/2023 22.16.20:CEST:MonitorExecutors:outputPipelines:SEVERE: SongSaver:103707:103706:103706:1010532

So it had processed about 1 million songs.

Why does the summary report show wrong figures?

That I don’t know yet, but I’m wondering if it is something to do with going over 1 million songs and us not handling numbers over 1 million.

I started a new task, and even with skip already fixed tracks set, it is processing all files again.

So for some reason SongKong takes the 47 tracks processed as truth.

It’s not really processing them; it’s sending them straight to the completed count without searching for matches or updating tags.

Don’t understand the comment about 47 songs.

It might be, but then, even with the skip already matched setting, it’s still pretty damn slow:

360k files processed in 28 hours. It’s faster than the original matching time, sure, but it’s still a lot of time just to check the DB for whether this or that file was already written to it, right?

111,754 songs have been saved, which indicates that perhaps these files were not part of the set matched before, but it is difficult to say really. Also, if you only have skip already fixed tracks set, it will still retry fixing tracks it has tried before but didn’t fix the first time round.

You need to check Ignore songs previously checked that could not be matched to skip songs that failed the first time round.

Bit stumped on the incorrect summary page: the 1 million song boundary does not seem to be the issue, and I cannot find anything else that would cause it. Have to see if it happens again.

Now I remember why I stopped the previous task: it stopped processing my files. I started it again (as you know) and it is blocked again.

As you can see my disks are not running, meaning my array spun them down due to inactivity:

This made me have a look at the status screen of SK, and it is stuck here:

Here is what the debug log shows, repeatedly:

05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: ----Worker:6
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/jdk.internal.misc.Unsafe.park(Native Method)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.LockSupport.park(LockSupport.java:341)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(AbstractQueuedSynchronizer.java:505)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3137)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1614)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1056)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1116)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.lang.Thread.run(Thread.java:832)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: ----Worker:11
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/jdk.internal.misc.Unsafe.park(Native Method)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.LockSupport.park(LockSupport.java:341)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(AbstractQueuedSynchronizer.java:505)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3137)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1614)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1056)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1116)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.lang.Thread.run(Thread.java:832)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: ----Worker:32
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/jdk.internal.misc.Unsafe.park(Native Method)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.LockSupport.park(LockSupport.java:341)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(AbstractQueuedSynchronizer.java:505)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3137)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1614)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1056)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1116)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.lang.Thread.run(Thread.java:832)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: ----Worker:36
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/jdk.internal.misc.Unsafe.park(Native Method)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.LockSupport.park(LockSupport.java:341)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(AbstractQueuedSynchronizer.java:505)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3137)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1614)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1056)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1116)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
05/07/2023 23.44.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.lang.Thread.run(Thread.java:832)
05/07/2023 23.49.07:CEST:MonitorExecutors:outputQueueSize:SEVERE: QueueSize:Worker:0:32
05/07/2023 23.49.07:CEST:MonitorExecutors:outputQueueSize:SEVERE: QueueSize:SongSaver:0:2
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: DbConnectionsOpen:3
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: SongPreMatcherMatcher:0:0:0:0
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: NaimMatcher:0:0:0:0
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: AcoustidSubmitter:53199
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: StartMatcher:75320:75320:53835:719713
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher1:26053:26053:26053:229154
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzUpdateSongOnly:0:0:0:0
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzMetadataMatches:0:0:0:0
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher2(SubGroupAnalyser):0:0:0:0
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher3(WithoutDuplicates):61:61:61:1789
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongGroupMatcher4(AnalyserArtistFolder):0:0:0:0
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsMultiFolderSongGroupMatcher:5693:5693:5693:61189
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsSongGroupMatcher:5693:5693:5693:61189
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: MusicBrainzSongMatcher:0:0:0:0
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsSongMatcher:225:225:225:225
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: DiscogsUpdateSongGroup:13322:13322:13322:103179
05/07/2023 23.49.07:CEST:MonitorExecutors:outputPipelines:SEVERE: SongSaver:75534:75534:75534:719713
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: MusicBrainzQueries:109464
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: DiscogsQueries:37636
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: ----Worker:8
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/jdk.internal.misc.Unsafe.park(Native Method)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.LockSupport.park(LockSupport.java:341)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(AbstractQueuedSynchronizer.java:505)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3137)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1614)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1056)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1116)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.lang.Thread.run(Thread.java:832)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: ----Worker:14
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/jdk.internal.misc.Unsafe.park(Native Method)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.LockSupport.park(LockSupport.java:341)

I see no reason why this would have crashed on my end, so I suspect something bad happened on the server side again. Thing is, I have already spent almost 20 days trying to get this to work. Each time I restart the process, I’m losing several days again waiting for SK to check the already processed files.

So should I stop it now and start again? :frowning:

Hi, okay I have checked server and it is fine with no errors reported.

Looking at the log extract, all the pipelines including MusicBrainzSongGroupMatcher1 and SongSaver have completed all the tasks sent to their queues, except StartMatcher, which has only completed 53,835 of 75,320 tasks; it is not clear why.

There are a number of worker threads available to do work on the queue; when their stack trace says

 java.base@14.0.2/jdk.internal.misc.Unsafe.park(Native Method)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.LockSupport.park(LockSupport.java:341)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(AbstractQueuedSynchronizer.java:505)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3137)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1614)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1056)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1116)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
05/07/2023 23.49.07:CEST:MonitorExecutors:run:SEVERE: java.base@14.0.2/java.lang.Thread.run(Thread.java:832)

this means they are just waiting for something to do; they have no tasks.
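That idle-worker stack trace can be reproduced in miniature. The sketch below (hypothetical demo class, not SongKong code) starts a thread blocked in `LinkedBlockingQueue.take()`; while parked there its state is WAITING, exactly the `LockSupport.park` pattern seen in the trace above:

```java
import java.util.concurrent.*;

public class ParkedWorkerDemo {

    // Returns the state of a thread while it is blocked in
    // LinkedBlockingQueue.take() waiting for work: expected WAITING.
    public static Thread.State stateWhileBlockedInTake() {
        try {
            BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
            Thread worker = new Thread(() -> {
                try {
                    queue.take(); // parks via LockSupport.park until a task arrives
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.start();
            Thread.sleep(300);                   // give the worker time to park
            Thread.State state = worker.getState();
            queue.offer(() -> { });              // unblock so the thread can exit
            worker.join(1000);
            return state;
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

So a trace full of these parked frames is normal for idle pool threads; the puzzle is why no new tasks were being fed to them.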

So the questions are: are all the threads just waiting for something to do, or are some busy?
What is the last actual work done by the tasks? Have tasks been timing out, with the timeout not working properly?

If you could leave it running for now, but zip up the logs folder and send it to support@jthink.net, that would be helpful.

If at some time point you do restart then two points to consider:

  1. If the database is fine, then SongKong reads every file already added to the database directly from the database to check it. But if the database is corrupted for some reason, then it will have to read the song from the music file itself, which is significantly slower.

  2. You need to have checked Ignore songs previously checked that could not be matched to prevent rematching of songs that failed to match the first time, as well as setting For songs already matched to Ignore to skip over songs already matched.