Has anyone noticed that DMT Version 11+ has been a memory hog?
We have used DMT for mass additions and updates to our system since our implementation 4 years ago. I had never run into memory issues until recently, when some long-running DMTs started crashing on me.
I am currently 30 minutes into a 10-hour DMT, and it is already consuming 2 GB and growing.
Hi
If you are importing large amounts of data, Epicor recommends splitting the file into batches of no more than 5,000 records (3,000 if on SaaS) and running multiple DMT import instances at the same time.
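If it helps anyone, here is a minimal Python sketch of that split: it chops a source CSV into batches of at most 5,000 rows and repeats the header row in each output file. The file name is hypothetical, and the batch size should drop to 3,000 on SaaS:

```python
import csv
from pathlib import Path

SOURCE = Path("Part_Combined.csv")  # hypothetical input file name
BATCH_SIZE = 5000                   # drop to 3000 on SaaS per Epicor's guidance

def write_batch(header, rows, batch_no):
    out = SOURCE.with_name(f"{SOURCE.stem}_{batch_no:03}.csv")
    with out.open("w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        writer.writerow(header)     # DMT needs the column headers in every file
        writer.writerows(rows)

with SOURCE.open(newline="", encoding="utf-8-sig") as src:
    reader = csv.reader(src)
    header = next(reader)
    batch, batch_no = [], 1
    for row in reader:
        batch.append(row)
        if len(batch) == BATCH_SIZE:
            write_batch(header, batch, batch_no)
            batch, batch_no = [], batch_no + 1
    if batch:                       # write the final partial batch
        write_batch(header, batch, batch_no)
```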
I am using 2023.1.9 and have just tried to load 5,000 lines through the Part Combined DMT using the standard DMT, and it has been crashing out at about 3,000 lines. I have been loading this many lines and more with the standard DMT on our Pilot Cutovers over the last couple of months without an issue. Something must have changed in a recent update, because as soon as the DMT hits 2 GB of memory usage it shuts down.
I was able to get over 2 GB of usage, but my machine has 32 GB. I was able to run my DMTs; I just had to break them into 18 files and run them 4 at a time in separate instances of the application. Not ideal, as I used to be able to run one massive file overnight and not worry about it.
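For anyone who wants to script that "N files, 4 at a time" pattern rather than babysitting instances, here is a rough Python sketch using a small worker pool. The DMT.exe path, credentials, and command-line switches are assumptions based on DMT's documented automation interface; verify the exact flag names for your release before relying on this:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

DMT_EXE = r"C:\Epicor\ERP11\ClientCache\DMT.exe"  # assumption: adjust to your install
MAX_PARALLEL = 4                                  # four DMT instances at a time

def run_dmt(batch_file: Path) -> int:
    # The switches below are assumptions based on DMT's command-line
    # automation support; check the exact names for your version.
    cmd = [
        DMT_EXE,
        "-NoUI",
        "-User", "manager",          # placeholder credentials
        "-Pass", "secret",
        "-Import", "Part Combined",  # the DMT template name
        "-Source", str(batch_file),
        "-Add", "true",
        "-Update", "true",
    ]
    return subprocess.run(cmd).returncode

batches = sorted(Path("batches").glob("Part_Combined_*.csv"))
with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
    # Threads just block on their subprocess, so this caps how many
    # DMT instances run concurrently.
    for batch, rc in zip(batches, pool.map(run_dmt, batches)):
        print(f"{batch.name}: exit code {rc}")
```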
Submitting a ticket today, now that I know I am not alone.
I have just tried it in 64-bit and that froze when it reached 5 GB of memory usage. Support have said it is a performance issue and to break the files down, BUT I am using exactly the same files and sizes as I did in 2021 and 2022 without issue.
We are hitting similar DMT issues with 2023.1.6.
Imports slow down, the process hangs (no error messages or any indication of a problem in the log files), memory usage increases over time, etc.
And it's very slow: we are only getting 140 rows per minute on parts, and we have 200k parts to import (roughly 24 hours at that rate)!
We have also tried restarting the Task Agent and app server before imports; no change.
DMT is the only process running on the server, and AV exclusions have been applied to the servers.
We are running DMT on the app server, so no client PC is involved.
Server Configuration:
Hyper-V virtual servers with Intel Xeon Gold 6248 @ 2.50 GHz, SSDs
SQL Server 2019 Standard – 12 virtual processors, 96 GB RAM
2 app servers, 4 virtual processors, 24 GB RAM each
Hyper-threading enabled, servers not in “green” (power-saving) mode
No BPMs, no logging, server in simple recovery mode (no log shipping), index rebuilds run
Memory use on the app servers does not go above 50% during the import; CPU use is minimal.
Steve, can I suggest raising a support ticket on this as well? That would be three in a day, which to me shows it is not a “performance issue” as I keep being told. As I have previously said and told support, I am using the same scripts in the same format, with the same number of records in each, and I am now having issues in 2023.
I have never had to use 64-bit DMT before, and I have never had an out-of-memory exception in 32-bit DMT before either.
We are currently doing our last pre-go-live cutover and this is not helping, especially when we are going to have to load hundreds of thousands of records.
They did send me the hotfix, but I haven’t had to DMT anything since, and now they are claiming it is fixed in the version that is currently in cloud pilot, so I haven’t had much desire to test it.
I was experiencing extremely slow rows per minute on a DMT update of Job Mtl descriptions, while other DMTs worked quickly.
What finally worked - I sorted my data by JobNum, AssemblySeq, MtlSeq (instead of by PartNum), and then it went super fast.
Makes sense in hindsight - DMT needs to access the job once instead of bouncing between jobs. I’m sure there’s an optimal sort order for each table, but I don’t know that it’s written down anywhere.
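For anyone wanting to do the same re-sort without opening Excel, here is a short pandas sketch. The file name is hypothetical, the column names match the Job Material template, and the sequence columns are cast to int so row 10 sorts after row 9:

```python
import pandas as pd

# Hypothetical file name; columns as in the JobMtl DMT template.
df = pd.read_csv("JobMtl_updates.csv", dtype=str)

# Sort so DMT touches each job (and assembly) once instead of
# bouncing between jobs on every row.
df["AssemblySeq"] = df["AssemblySeq"].astype(int)
df["MtlSeq"] = df["MtlSeq"].astype(int)
df = df.sort_values(["JobNum", "AssemblySeq", "MtlSeq"])

df.to_csv("JobMtl_updates_sorted.csv", index=False)
```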