We are on 10.2.100.12. We have been experiencing an issue with our nightly scheduled process set on nights we run MRP in Regen mode.
Our regenerative MRP has recently been running for 11 hours. We run MRP and several other tasks through a process set, with MRP being first. We kick off MRP at 9pm. If MRP is still running at 6am, the next task kicks off anyway. If MRP runs for less than 9 hours, everything runs normally.
Thoughts on why the next task in the process set runs nine hours after the first is kicked off? Is there any way to change this threshold?
The other tasks execute correctly after the second task finishes. These tasks also will not cancel through the system monitor without bringing the task agent down, but we do not want to do that with MRP running.
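One way to confirm exactly when each task in the set started and ended (and what state the stuck ones are in) is to pull the task history straight from the database. This is only a minimal sketch: it assumes a read-only SQL connection via pyodbc, placeholder server/database names, and that your build exposes the standard Ice.SysTask columns (SysTaskNum, TaskDescription, TaskStatus, StartedOn, EndedOn); verify the names against your own schema first.

```python
# Sketch: list recent system tasks with start/end times so the 9-hour gap
# between MRP and the next process-set task can be seen directly.
# Assumes Ice.SysTask columns as named below; adjust to your schema.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=YourSqlServer;DATABASE=YourEpicorDB;Trusted_Connection=yes;"
)

QUERY = """
SELECT TOP 50 SysTaskNum, TaskDescription, TaskStatus, StartedOn, EndedOn
FROM Ice.SysTask
WHERE StartedOn >= DATEADD(day, -7, GETDATE())
ORDER BY StartedOn DESC
"""

with pyodbc.connect(CONN_STR) as conn:
    for row in conn.cursor().execute(QUERY):
        # EndedOn is NULL while a task is still running.
        duration = (row.EndedOn - row.StartedOn) if row.EndedOn else None
        print(row.StartedOn, row.TaskDescription, row.TaskStatus, duration)
```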
I understand that an 11-hour MRP could be an issue in itself, and I am working to bring that runtime down. We were running regenerative MRP on a nightly basis, but I was able to get stakeholders to authorize a change to Net Change during the week with a full regen on Friday.
I say, first we need to attack the long MRP run…
How many MRP processors are you running?
How many scheduling processors are you running?
Are you running FINITE scheduling?
Are you running MRP Pegging at the same time as MRP?
I have worked with many "large" accounts… with huge databases, and have seen a couple where MRP took more than 4-5 hours, but that is rare. One was taking over 24 hours (incredibly huge). But most take under 2 hours for a complete regen.
"Are you running MRP Pegging at the same time as MRP?" No; multi-level pegging is our 2nd task in the process set.
We upgraded from 9.05 to 10.2.100 last August, and these were the settings we ran in E9.
In E9, we averaged a bit over 9 hours per MRP. MRP averaged about 10-12k jobs generated.
In E10, since go-live, we have averaged a 5.87-hour runtime with an average of 26k jobs generated.
Our last 7 regens have run for an average of 9 hours with an average of 37k jobs generated.
Last night's regen (2/1) ran for 11.8 hours and generated a record 44.5k jobs.
With the job increase over the last couple of weeks, the unfirm job purge that the regen runs has gone up from an average of 50 minutes to an average of 90 minutes (last night's was 120 minutes).
MRP does run to completion, and that is why my initial thought was to focus on the process set issue.
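On the purge side, a rough way to track how many unfirm jobs the regen has to delete each night is to count them just before MRP starts. Another hedged sketch: it assumes the same kind of read-only pyodbc connection as above and the standard Erp.JobHead JobFirm/JobClosed flags, which you should verify against your own schema.

```python
# Sketch: count open, unfirm jobs (the population the regen's unfirm job
# purge works through). Assumes Erp.JobHead.JobFirm and JobClosed exist.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=YourSqlServer;DATABASE=YourEpicorDB;Trusted_Connection=yes;"
)

QUERY = """
SELECT COUNT(*) AS UnfirmJobs
FROM Erp.JobHead
WHERE JobFirm = 0 AND JobClosed = 0
"""

with pyodbc.connect(CONN_STR) as conn:
    row = conn.cursor().execute(QUERY).fetchone()
    print("Open unfirm jobs:", row.UnfirmJobs)
```

Logging that number nightly alongside the purge duration would show whether the 90-120 minute purge is tracking the job count or something else.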
One quick thing to try… click the flag that will cause it to re-use the unfirm jobs… this way it doesn't delete and re-create the same thing over and over again… that should save THAT block of time, although it still may need to do some adjustments with those unfirm jobs that are still there.
Have you considered limiting how far out into the future you are planning your jobs? 44+k jobs is a ton of work… you only need to plan what you will need to buy/make in the "immediate" future.
Also… you could try changing the number of processors… instead of 8/2, change it up and bump up the number of scheduling processors… since you are running finite, the scheduling portion may be your critical path. For example, you could try 4 and 4, which would double the number of scheduling processes.
You can't run multiple scheduling processes when you are running finite. They can't see what the other processes are doing and can't constrain the finite resources properly.
I will take a look at using the re-use unfirm jobs flag. There is usually plenty of hesitancy to alter the current MRP process, so I will have to go through a test process. I have feelers out with the business side to see why we decided to plan the way we do and if we can scale it back.
For the scheduling processes portion: @Mercer_Sisson indicates I cannot run multiple scheduling processes when running finite scheduling. Mercer, could you expand on that explanation, or point me to something that explains the issue in more depth?
Sorry, I am new to E10Help and missed your response.
When MRP does finite scheduling, each job needs to see the previously scheduled load on the resources as it decides which resource to assign to each operation detail. If you are running multiple scheduling processes, a single process will not see the load being scheduled by the other processes until it is committed to the DB, which generally doesn't happen until the whole job is scheduled. So it is possible that resources could get overloaded even in a finite environment.
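To illustrate the point with a toy sketch (this is only a hypothetical model of the race, not Epicor's actual scheduler code): if two scheduling processes each look only at the committed load on a finite resource before placing a job, both can conclude there is room, and together they overload it.

```python
# Toy illustration: two scheduling "processes" that each see only the load
# already committed to the DB. Neither sees the other's in-flight job, so
# the finite resource ends up over capacity.

RESOURCE_CAPACITY_HRS = 8.0          # finite capacity for the day
committed_load = {"LATHE-01": 5.0}   # load already committed to the DB

def schedule_job(job, hours, resource, committed):
    """Place the job only if the *committed* load leaves enough capacity."""
    visible_load = committed.get(resource, 0.0)
    if visible_load + hours <= RESOURCE_CAPACITY_HRS:
        return {"job": job, "resource": resource, "hours": hours}
    return None

# Two processes each pick up a 3-hour job at roughly the same time.
# Both read committed load = 5.0 and both decide 5.0 + 3.0 <= 8.0 is fine.
plan_a = schedule_job("JOB-1001", 3.0, "LATHE-01", committed_load)
plan_b = schedule_job("JOB-1002", 3.0, "LATHE-01", committed_load)

# Only now are the jobs committed, one after the other.
for plan in (plan_a, plan_b):
    if plan:
        committed_load[plan["resource"]] += plan["hours"]

print(committed_load)  # {'LATHE-01': 11.0} -> 3 hours over the finite capacity
```

A single scheduling process avoids this because each job is placed against the load left behind by every job scheduled before it.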