I am working on automatically adding records to the XFileRef and XFileAttch tables. The process went swimmingly, right up until it got to the end of my list, then refused to stop making records.
My condition to not go over 2400 rows clearly didn’t stop the process from continuing. After about 5 minutes of unresponsiveness, I killed the Epicor program on my end, then logged back in to survey the damage. Both tables now have over 60,000 records in them, and the count grows every second. I waited about 20 minutes and they are still growing.
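In hindsight, a cap that gets re-tested inside the loop and bails out hard would have saved me. A minimal sketch of what I mean (the collection name and loop body are hypothetical, not my actual BPM code):

int added = 0;
foreach (var lot in lotsToProcess)  // hypothetical collection of part lots to import
{
    // Re-test the cap on every pass; a count checked once before the loop never fires again
    if (added >= 2400)
        throw new Ice.BLException("Row cap reached; stopping import.");

    // ... create the XFileRef / XFileAttch records here ...
    added++;
}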
I submitted a support ticket to resolve the issue, but I wanted to reach out to you all. Have you ever made this mistake? Is there a way to kill server side processes to stop these records from being constantly created?
EDIT: I am on Dedicated Cloud Tenancy, so I don’t have direct access to our servers.
No idea why it keeps growing if you kill the client making the calls, but also no idea what your code does. I think all you can do is wait for the team to restart the app.
Can you share the code to maybe see what happened?
Did you do it from a BPM? If so, the BPM will keep spinning even if you close the client. The only way to fix it is to recycle the app server. Sometimes you can log in and disable the BPM or cause it to error out, which will stop it from continuing.
No it is not, but go into the UBAQ and make a breaking change to the BPM. Something that makes it throw an exception, then save it.
I believe the next “loop” should pick that up and blow up. I’ve done it before on regular BPMs.
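Something as small as a one-line custom code block will do it; for example (the message text is arbitrary):

// Any execution of the directive now fails immediately, which kills the loop on its next pass
throw new Ice.BLException("Kill switch: stopping the runaway import.");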
So I opened my BPM and just deleted the first arrow to the first widget. Then I added a raise exception widget, so it just goes start > raise exception. I saved and closed the BPM and BAQ, but the XFile tables are still growing.
This worked!! I put in an in-transaction data directive to raise an exception on both tables XFileRef and XFileAttch. After I saved it, the records have finally stopped increasing! Now I can run my delete BPM to clear out those tables and try again!
Thank you so much for the little workaround @Mark_Wonsil!
Here is my ImportCerts BAQ. Ideally this will look at all the part lots with an on-hand quantity and a lot number that starts with VA. From there, it will add a record to the XFileRef and XFileAttch tables. The filename is the same as the lot number with .pdf tacked on at the end. The base URL is defined at the document type level, and this query assumes that you have at least one cert attachment on a part lot. It uses that first record to grab the base URL and reuse it for the rest of the records.
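If it helps the review, the selection logic boils down to roughly this LINQ (field names are from the standard Erp schema, and the PartBin join for on-hand quantity is my assumption, so check it against the actual BAQ):

// Hypothetical sketch of the BAQ's selection criteria, not the BAQ itself
var lots = (from lot in Db.PartLot
            join bin in Db.PartBin
              on new { lot.Company, lot.PartNum, lot.LotNum }
              equals new { bin.Company, bin.PartNum, bin.LotNum }
            where lot.LotNum.StartsWith("VA") && bin.OnhandQty > 0
            select new { lot.PartNum, lot.LotNum, FileName = lot.LotNum + ".pdf" })
           .Distinct()
           .ToList();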
The BPM has been put into debug mode, where it confirms each and every record addition. To run it wide open, without the confirmation, just move the arrow from the “More Rows?” true condition to the second execute custom code block.
I made a few changes in the hopes that my infinite loop will not continue. But I have to wait until tomorrow to test it again, as we still need Epicor support to recycle our app server.
Be warned! If you use this UBAQ, it may have unintended consequences. My advice is just to load it for code review. I will post back here and on another thread once I have the whole thing working correctly.
Recycling the app server seems to have stopped the process properly. I still have over 140,000 records in those tables to delete. Yesterday, I ran this delete code in a custom code widget, and it seemed to work fine on 2,000 or so records. Today I tried to run the same code on my 140k+ records. I figured it would take a while, so I let it run for 15 minutes or so. At the end I got a server error, and none of the records had been deleted.
using (var txScope = IceContext.CreateDefaultTransactionScope())
{
    // Materialize the list first so we aren't deleting rows out of a result set we're still enumerating
    foreach (var XFile in (from row in Db.XFileAttch
                           where row.RelatedToFile == "PartLot" && row.RelatedToSchemaName == "Erp"
                           select row).ToList())
    {
        var XRef = (from row in Db.XFileRef
                    where row.XFileRefNum == XFile.XFileRefNum
                    select row).FirstOrDefault();

        // FirstOrDefault can return null when no matching XFileRef exists; only delete when we found one
        if (XRef != null)
        {
            Db.XFileRef.Delete(XRef);
        }
        Db.XFileAttch.Delete(XFile);
    }
    Db.Validate();
    txScope.Complete();
}
Is there something wrong with my code? Is there a better, faster, cleaner way to clear out my erroneous records?
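One idea I’m toying with is chunking the deletes so each transaction stays small enough to commit before the server times out. A rough sketch, with the batch size just a guess:

const int batchSize = 500;  // guess; tune to whatever commits comfortably
bool more = true;
while (more)
{
    using (var txScope = IceContext.CreateDefaultTransactionScope())
    {
        // Pull one small batch per transaction instead of all 140k rows at once
        var batch = (from row in Db.XFileAttch
                     where row.RelatedToFile == "PartLot" && row.RelatedToSchemaName == "Erp"
                     select row).Take(batchSize).ToList();
        more = (batch.Count == batchSize);  // a short batch means we've reached the end

        foreach (var XFile in batch)
        {
            var XRef = (from row in Db.XFileRef
                        where row.XFileRefNum == XFile.XFileRefNum
                        select row).FirstOrDefault();
            if (XRef != null) Db.XFileRef.Delete(XRef);
            Db.XFileAttch.Delete(XFile);
        }
        Db.Validate();
        txScope.Complete();  // commit this batch before starting the next
    }
}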
Thanks all!
Nate
Yes, I could ask that, and probably will have to. It is an all-day process for them to reimage live to pilot.
I should really stop doing these DB breaking tests in the morning.