Hello everyone and thanks for any feedback.
Please note that my exposure to MATLAB and cluster submission is at the university level, so my background may not map cleanly onto this question. I'm consulting for a small business that runs what I'd consider a micro-cluster: a small heterogeneous setup with 4 nodes.
They've asked for some input on why workers show as busy during a parfor loop (based on looking at Windows Task Manager) even though they "know" that there are many iterations of the loop left.
After a quick-and-dirty look at their code and workflow, I see they are using MJS (MATLAB Job Scheduler) on a head node. My assumption is that when the parfor is submitted as a batch, MJS deterministically and evenly divides the iteration range among the workers (e.g. 10,000 iterations over 10 workers produces 1,000 iterations per worker). If some segment of the iteration range has a longer processing time (assuming all nodes are essentially equal in performance), and that segment is allocated to, say, workers #3 and #4, then those workers may keep processing for a noticeably longer time than the rest of the pool.
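To make the concern concrete, here is a minimal back-of-the-envelope sketch of what I suspect is happening. The numbers and the "expensive segment" are my own illustration, not their code: it just tallies how long each worker would run if the range were split into equal contiguous blocks and one block happened to be 10x more costly.

```matlab
% Hypothetical illustration: 10,000 iterations split evenly across 10 workers,
% with iterations 2001-4000 assumed to be 10x more expensive. Under a
% contiguous equal split, that segment lands on workers 3 and 4.
N = 10000; W = 10;
perWorker = N / W;
cost = ones(1, N);
cost(2001:4000) = 10;                % the assumed expensive segment
workerTime = zeros(1, W);
for w = 1:W
    idx = (w-1)*perWorker + (1:perWorker);
    workerTime(w) = sum(cost(idx));  % total work if this worker gets one block
end
disp(workerTime)  % workers 3 and 4 carry roughly 10x the load of the others
```

If this model is right, most workers go idle early while a couple stay busy, which would match what they see in Task Manager.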
I know answering this without any code reference is difficult, but I think the condition is code-agnostic.
That said, is there any white paper I can direct them to that covers creating a round-robin-style allocation of parfor iterations? Or is the method for granular management to create tasks in lieu of the parfor and distribute them via multiple batch jobs?
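In case it helps frame what I'm asking: assuming a reasonably recent release, I believe parforOptions lets you shrink the contiguous subranges parfor hands out, and parfeval gives fully dynamic per-iteration dispatch. A rough sketch of both, where myExpensiveStep is a placeholder for whatever their loop body actually computes:

```matlab
% Two possible mitigations (myExpensiveStep is a hypothetical stand-in
% for their real loop body).
pool = gcp;

% 1) Smaller fixed subranges, so no single worker is stuck with one big
%    expensive block; each scheduling unit here is 100 iterations.
opts = parforOptions(pool, 'RangePartitionMethod', 'fixed', ...
                     'SubrangeSize', 100);
parfor (i = 1:10000, opts)
    out(i) = myExpensiveStep(i);
end

% 2) Fully dynamic dispatch: each iteration is queued as its own future,
%    so an idle worker immediately pulls the next pending one.
for i = 10000:-1:1
    f(i) = parfeval(pool, @myExpensiveStep, 1, i);
end
out = fetchOutputs(f);
```

I'd welcome correction if either of these is the wrong tool for an MJS-backed pool.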
As my university has a large cluster, I've not run into this myself, but it stands to reason that as available workers increase (significantly) and each worker receives fewer iterations per job, the effect becomes less significant. I'm trying to understand whether the condition is being mitigated by the university's job scheduler, in which case I could advise them to consider an alternative to MJS, and/or whether I am missing something.
Thanks for direction and feedback.