I would say this is not usual or expected behavior. I would first rule out a few common PC issues: how full is your hard drive, how saturated is your network connection, and how heavily utilized are your CPU and RAM?
After those common issues are ruled out, check the performance on more than one machine. Is it poor for all users? Do reports run in a normal amount of time? You can view the processes and history for all users (if you are a system user) by going to System Monitor > Actions > Display All Tasks. Then take a look through your scheduled, pending and history tasks to see if anything stands out. Are there reports running over and over? Are reports failing for anyone? Are there reports running longer than they should?
Being an MT Cloud client, there is little I can do about RPC compression.
The inconsistency of the performance is driving me crazy. Occasionally I get reasonable response times and it feels normal. That lasts about 2-5 minutes, then it drops to completely unusable: any mouse click takes a minute to register, and screen updates take forever.
Whilst I have a case open with Support, I'm not expecting much joy, as their own experience seems to suggest this is normal behavior. Yet all my users see the same slowdown, and there is nothing I can do to fix it.
For what it's worth, since we moved off MT to DT, performance has been much more consistent. When it was working, MT was about as fast, but it was rarely working at full speed.
Migrating wasn't too hard, and we would have done it much, much sooner if Epicor hadn't discouraged it with endless fees and scary quotes. They are doing themselves an incredible disservice by not moving everyone over and sunsetting MT.
Are you using Windows SSO? I've just found on two systems that the client sends two requests for each call to the appserver at 2023.2. The first is anonymous and gets rejected; the second is sent with Windows credentials and gets a proper reply. This meant one (customised) screen took 15s to load data versus 5s when using basic auth (which I do NOT recommend). See here
There is a CPU threshold on Dynamic Compression (DynamicCompressionDisableCpuUsage) which, if exceeded for 30 seconds, disables compression. According to the Microsoft docs the default for this is 90%, but every Epicor server I have looked at has it set to 70%. So the setup in @josecgomez's post might be correct, but if the CPU is busy (as it might be in MT), then maybe compression isn't actually working. The best way I can think of to validate is to add a custom field in IIS Logging on the Response Header Content-Encoding. This should show gzip for all logged requests over 2,700 bytes (sc-bytes can also be added to see the size of each response).
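On a server where you do have access, a sketch of checking that threshold and adding the log field via appcmd might look like the below. The site name "Default Web Site" is an assumption — substitute whatever site hosts your Epicor app server:

```shell
REM Show current httpCompression settings, including
REM dynamicCompressionDisableCpuUsage (Microsoft default 90; often 70 on Epicor servers)
%windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/httpCompression

REM Add a custom log field capturing the Content-Encoding response header
REM (requires IIS 8.5+ enhanced logging). "Default Web Site" is a placeholder.
%windir%\system32\inetsrv\appcmd.exe set config -section:system.applicationHost/sites ^
  "/+[name='Default Web Site'].logFile.customFields.[logFieldName='ContentEncoding',sourceName='Content-Encoding',sourceType='ResponseHeader']" ^
  /commit:apphost
```

After this, each W3C log line gains a ContentEncoding column, so you can grep the logs for responses over 2,700 bytes that are missing gzip.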
Obviously with MT you can’t do anything with logging - but maybe you can ask Epicor to.
Finally Epicor has admitted to a problem and provided an edit to the hosts file that has dramatically improved performance.
It's a bit of a pain having to edit every user's hosts file, but it has made a massive difference.
Not much to tell other than that they admitted to a problem with their network, and the hosts fix is an interim measure.
Being in Australia (a bit of a backwater, I know), auseastdtapp00.epicorsaas.com used to resolve to 4.147.81.58 but now resolves to 20.193.41.167.
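For anyone applying the same interim fix, the hosts entry would presumably look something like this (use whatever address Epicor gives you for your region — the IP below is just the one that works for us):

```
# C:\Windows\System32\drivers\etc\hosts
# Interim fix supplied by Epicor - pin the SaaS app server to the new address
20.193.41.167    auseastdtapp00.epicorsaas.com
```

Remember to remove the entry once Epicor fixes DNS properly, or it will silently override any future address change.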
The difference is night and day. It used to take an hour to load a 250-line journal via DMT; it's now down to less than 2 minutes.