Can you be more specific about the “client lock ups”? We’ve been running it on a live system with 500+ users all last week and this week without issues. Do you guys have more details?
We are having System Agent issues which we are investigating, but they might not be related.
We are running 2022.1.29
The client effectively seizes up from a UI perspective. It was very interesting: the CPU on Kinetic.exe and the EPO subprocesses still seemed to be doing work, but the UI was completely frozen. Normally when we see a lockup the application shows ~0% CPU, or pins a whole thread so CPU usage sits at a fixed percentage and stays there. That was not happening here; the CPU usage kept bouncing around as if it were still trying to do things.
This was happening both in MES and the Full Desktop Client.
We saw it in the Move Inventory screen in Epicor Handheld mode, Purchase Order Entry, and a tracker. We didn’t get a chance to investigate further, since it was affecting production.
Changed it back last week, haven’t seen the client lockups since.
Our remote location said the Epicor client was much faster with the compression setting in the config file. Unfortunately, it was causing the regular client and MES to freeze, so we removed the setting.
Same here. I can confirm we saw better speeds, but the clients started locking up. After 2 hours we rolled back and got the all clear from production. Digging in deeper to see if we can pinpoint why this is causing lockups.
On 2022.2 we saw DMT locking up after enabling RPC compression.
Also, I just sent out a company-wide alert earlier about this change we were supposed to be implementing tonight - kind of embarrassing, but better than the alternative of affecting production.
We confirmed with A/B testing across 2 appservers that this will lock up clients. The issue is more frequent the more calls are made, e.g. Task Agent and DMT.
I think even though it appears the client is locking up, this issue must occur server side when there are a lot of simultaneous connections, and the client is just waiting for the server response… I am guessing the people with the issue have a large number of simultaneous sessions on a server that might not have enough CPU cores available…
Not here, this was in my test environment with 2 connected users.
We are trying to narrow down whether it’s a server or client issue, but I can get it to happen with only 3 active sessions on an appserver. It’s not a connection-count issue at all. During our A/B test, the PC that locked up was actually logged into Epicor but wasn’t even being actively manned.
Setting up a testing station now so we can try to capture the lockup with Fiddler. We’re looking to find out: are we calling out and not getting a response, are we failing to decode a response, and are calls still being made after the UI seemingly locks up?
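Alongside Fiddler, something like this little console probe can help answer those questions outside the client. It’s just a sketch - placeholder URL, no auth shown - that requests gzip explicitly and tries to gunzip the raw bytes itself, so “no response at all” and “response we can’t decode” look different:

// Rough, hypothetical one-shot probe (placeholder URL, no auth shown): request gzip
// explicitly, read the raw bytes, and try to gunzip them ourselves. This separates
// "we called out and got nothing back" from "we got a response but couldn't decode it".
using System;
using System.IO;
using System.IO.Compression;
using System.Net.Http;
using System.Threading.Tasks;

class DecodeProbe
{
    static async Task Main()
    {
        using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(30) };
        // Ask for gzip ourselves instead of using AutomaticDecompression,
        // so the Content-Encoding header and the raw compressed bytes are preserved.
        client.DefaultRequestHeaders.TryAddWithoutValidation("Accept-Encoding", "gzip");

        // Placeholder endpoint -- substitute a real call (plus auth) for your environment.
        var response = await client.GetAsync("https://appserver/placeholder-endpoint");
        var raw = await response.Content.ReadAsByteArrayAsync();
        var encoding = string.Join(",", response.Content.Headers.ContentEncoding);
        Console.WriteLine($"Status {(int)response.StatusCode}, Content-Encoding: '{encoding}', {raw.Length} bytes");

        if (encoding.Contains("gzip"))
        {
            // If this throws, the body was advertised as gzip but could not be decoded.
            using var gz = new GZipStream(new MemoryStream(raw), CompressionMode.Decompress);
            using var decoded = new MemoryStream();
            await gz.CopyToAsync(decoded);
            Console.WriteLine($"Decoded OK: {decoded.Length} bytes");
        }
    }
}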
It’s a whole new world of troubleshooting now with the web involved. Thanks for documenting the steps and what not y’all.
I assume that you are testing with Clients newly started after making the Compression change on the Server.
@Rich oh yes for sure. We did appserver recycles between. Full client re-installs. The works.
New development: we can’t get it to happen when Fiddler is running for packet inspection, on PCs that otherwise exhibit the issue.
Race condition? We’re running out of ideas. I’m at the point where I’m wondering if it’s a bug flaring up in .NET 4.8. I can’t imagine it would be a compatibility issue between 4.8 on the client and 6.0 on the server, since GZIP is GZIP, isn’t it?
This has been running, calling BOs every 10 seconds, for about 4 hours without issue while Fiddler is running. Otherwise it poops out after 20-ish minutes. We can 100% isolate and correlate it with compression, but… WHY?!
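For anyone who wants to replicate that setup, a minimal sketch of that kind of polling loop (placeholder URL, not our actual harness) looks something like this - every call logs its status and elapsed time, so a stall or timeout stands out in the log:

// Rough sketch of the 10-second polling loop described above (placeholder URL).
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class PollingHarness
{
    static async Task Main()
    {
        var handler = new HttpClientHandler { AutomaticDecompression = DecompressionMethods.GZip };
        using var client = new HttpClient(handler) { Timeout = TimeSpan.FromSeconds(60) };

        for (int i = 1; ; i++)
        {
            var sw = Stopwatch.StartNew();
            try
            {
                // Placeholder endpoint -- in the real test this was a BO call.
                var response = await client.GetAsync("https://appserver/placeholder-endpoint");
                Console.WriteLine($"{DateTime.Now:HH:mm:ss} call {i}: {(int)response.StatusCode} in {sw.ElapsedMilliseconds} ms");
            }
            catch (Exception ex)
            {
                // When the lockup pattern hits, the call times out or fails; log it and keep going.
                Console.WriteLine($"{DateTime.Now:HH:mm:ss} call {i}: FAILED after {sw.ElapsedMilliseconds} ms -- {ex.GetBaseException().Message}");
            }

            await Task.Delay(TimeSpan.FromSeconds(10));
        }
    }
}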
Looks like something that was fixed in 2023.1, as Jose does not see it.
I would take a process dump to see what the client process is doing, but without symbols you probably can’t see much inside it yourself.
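For anyone who wants to try that, Sysinternals ProcDump can grab a full dump of the frozen client - a rough example, assuming the client process is named Kinetic.exe and procdump.exe is on the PATH:

procdump -ma Kinetic.exe kinetic_lockup.dmp

Adding -h makes it wait and write the dump only when Windows reports the window as hung, which matches the frozen-UI symptom. You can then open the .dmp in Visual Studio or WinDbg, though as noted above, without Epicor’s symbols you won’t be able to see much inside it.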
Might be worth trying to set it up to use deflate instead of gzip… Add the following as the first child element under httpCompression:
<scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" />
Gzip is just deflate internally, wrapped with an extra header and checksum. All browsers support both.
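For reference, that httpCompression section lives in applicationHost.config (typically %windir%\System32\inetsrv\config\applicationHost.config), and with the extra scheme added the relevant part ends up looking roughly like this - the dynamicTypes/staticTypes lists will vary per install:

<httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
    <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" />
    <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
    <dynamicTypes>
        <!-- existing entries unchanged -->
    </dynamicTypes>
    <staticTypes>
        <!-- existing entries unchanged -->
    </staticTypes>
</httpCompression>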
Right @Olga, we aren’t getting client lockups. We are seeing some intermittent issues with the System Agent, which we can’t (yet) tell whether they are compression-related or not. Doing a lot of testing and logging to make sure we can prove it out; it might still be a red herring.
@HLalumiere funny enough, I was just about to set up System Agent testing using deflate… I’ll report back. Running a few different experiments.
I did quite a bit of reading, and it looks like .NET 6 did quite the cupid shuffle with compression. They implemented new algorithms and all sorts of fun stuff. This is so low-level I’m not even sure Epicor has a ton of control over it. We’ll keep poking and report back. This is why I said at the top: test, test, test, test, and test some more, and why Epicor hasn’t yet come back and officially said to turn it on for everyone!
Cautiously optimistic that we’ll get past these issues. Rest assured, not only are we testing the hell out of this, but so is Epicor - I’m sure they have an entire SaaS deployment that could benefit from this, but they need to make sure everything is in tip-top shape.
Note: even after enabling deflate server-side, all requests from the client still go out with
Accept-Encoding: gzip
so IIS does not respond with deflate. It either defaults to gzip (as requested), or, if you disable the gzip scheme completely and leave only deflate, it responds with no compression, since the client isn’t advertising that it accepts deflate.
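If anyone wants to confirm that on their own install, a quick sketch like this (placeholder URL, no auth shown) advertises deflate only and prints the raw Content-Encoding header - an empty value means the body came back uncompressed:

// Rough check (placeholder URL, no auth shown): advertise deflate only and print the raw
// Content-Encoding header that IIS returns.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class DeflateCheck
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // No automatic decompression, so the Content-Encoding header is left intact.
        client.DefaultRequestHeaders.TryAddWithoutValidation("Accept-Encoding", "deflate");

        // Placeholder endpoint -- point it at your appserver.
        var response = await client.GetAsync("https://appserver/placeholder-endpoint");
        var encoding = string.Join(",", response.Content.Headers.ContentEncoding);
        Console.WriteLine($"Status {(int)response.StatusCode}, Content-Encoding: '{encoding}'");
    }
}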