Application Errors for Multiple Users - Exception of type 'System.OutOfMemoryException' was thrown

We have random occurrences where different users are receiving “Application Error: Exception of type ‘System.OutOfMemoryException’ was thrown”. It happens intermittently for about 4 of our users, sometimes once or twice a day.

Each user uses different forms and resources within Epicor so it isn’t one particular resource.

Has anybody else received these errors before?

Oh hey! Interesting, when did that start?

Hi Joshua,

According to our end users, recently… but after another conversation and asking around a bit more… some users have been getting random errors similar to this for what seems like a couple of months now.

The further you dig, the more interesting it gets, it seems.
Fun times. :sweat_smile:

Shoot. We started seeing a similar server-side error starting Monday, after server maintenance, when one of our UBAQs was trending down. Is it a Windows update that caused it?

This is our error:

Server Side Exception
Unable to write message due to insufficient memory.
Exception caught in: mscorlib

Error Detail
============
Correlation ID: 00000000-0000-0000-0000-000000000000
Description: Unable to write message due to insufficient memory.
Inner Exception: Exception of type 'System.OutOfMemoryException' was thrown.
Program: System.ServiceModel.Internals.dll
Method: AllocateByteArray
Original Exception Type: InsufficientMemoryException
Framework Method: WriteMessage
Framework Line Number: 0
Framework Column Number: 0
Framework Source: WriteMessage at offset 282 in file:line:column <filename unknown>:0:0

Server Trace Stack:
   at System.Runtime.Fx.AllocateByteArray(Int32 size)
   at System.Runtime.InternalBufferManager.PooledBufferManager.TakeBuffer(Int32 bufferSize)
   at System.Runtime.BufferedOutputStream.ToArray(Int32& bufferSize)
   at System.ServiceModel.Channels.BufferedMessageWriter.WriteMessage(Message message, BufferManager bufferManager, Int32 initialOffset, Int32 maxSizeQuota)
   at System.ServiceModel.Channels.BinaryMessageEncoderFactory.BinaryMessageEncoder.WriteMessage(Message message, Int32 maxMessageSize, BufferManager bufferManager, Int32 messageOffset)
   at Epicor.ServiceModel.Channels.CompressionEncoder.WriteMessage(Message message, Int32 maxMessageSize, BufferManager bufferManager, Int32 messageOffset)

Inner Trace:
Exception of type 'System.OutOfMemoryException' was thrown.:
   at System.Runtime.Fx.AllocateByteArray(Int32 size)

Client Stack Trace
==================
Server stack trace:
   at Epicor.ServiceModel.Channels.CompressionEncoder.WriteMessage(Message message, Int32 maxMessageSize, BufferManager bufferManager, Int32 messageOffset)
   at System.ServiceModel.Channels.FramingDuplexSessionChannel.EncodeMessage(Message message)
   at System.ServiceModel.Channels.FramingDuplexSessionChannel.OnSendCore(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.TransportDuplexSessionChannel.OnSend(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.OutputChannel.Send(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityOutputChannel.Send(Message message, TimeSpan timeout)
   at System.ServiceModel.Dispatcher.DuplexChannelBinder.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
   at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

Exception rethrown at [0]:
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
   at Ice.Contracts.DynamicQuerySvcContract.RunCustomAction(DynamicQueryTableset queryDS, String actionID, DataSet queryResultDataset)
   at Ice.Proxy.BO.DynamicQueryImpl.RunCustomAction(DynamicQueryDataSet queryDS, String actionID, DataSet queryResultDataset)
   at Ice.Adapters.DynamicQueryAdapter.<>c__DisplayClass33_0.<RunCustomAction>b__0(DataSet datasetToSend)
   at Ice.Adapters.DynamicQueryAdapter.ProcessUbaqMethod(String methodName, DataSet updatedDS, Func`2 methodExecutor, Boolean refreshQueryResultsDataset)
   at Ice.Adapters.DynamicQueryAdapter.RunCustomAction(DynamicQueryDataSet queryDS, String actionId, DataSet updatedDS, Boolean refreshQueryResultsDataset)

Inner Exception
===============
Failed to allocate a managed memory buffer of 67108864 bytes. The amount of available memory may be low.
   at System.Runtime.Fx.AllocateByteArray(Int32 size)
   at System.Runtime.InternalBufferManager.PooledBufferManager.TakeBuffer(Int32 bufferSize)
   at System.Runtime.BufferedOutputStream.ToArray(Int32& bufferSize)
   at System.ServiceModel.Channels.BufferedMessageWriter.WriteMessage(Message message, BufferManager bufferManager, Int32 initialOffset, Int32 maxSizeQuota)
   at System.ServiceModel.Channels.BinaryMessageEncoderFactory.BinaryMessageEncoder.WriteMessage(Message message, Int32 maxMessageSize, BufferManager bufferManager, Int32 messageOffset)
   at Epicor.ServiceModel.Channels.CompressionEncoder.WriteMessage(Message message, Int32 maxMessageSize, BufferManager bufferManager, Int32 messageOffset)

Inner Exception
===============
Exception of type 'System.OutOfMemoryException' was thrown.
   at System.Runtime.Fx.AllocateByteArray(Int32 size)
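For what it's worth, the tail of that trace is the client-side UBAQ custom-action call. As a rough sketch only (the adapter and method names are taken from the stack trace, not verified against a particular Epicor release; oTrans is assumed to be the usual client customization transaction object, and "TrendDown" is a hypothetical action ID), the call pattern looks like this. The point is that the whole updated DataSet goes out in a single WCF message, which is the 64 MB buffer the encoder fails to allocate:

// Sketch of the client-side call pattern implied by the trace above.
// Names come from the stack trace; exact API may differ by Epicor version.
using System.Data;
using Ice.Adapters;
using Ice.BO;

private void RunUbaqAction(DynamicQueryDataSet queryDS, DataSet updatedResults)
{
    DynamicQueryAdapter dqa = new DynamicQueryAdapter(oTrans);
    dqa.BOConnect();
    try
    {
        // RunCustomAction serializes the entire updated DataSet into one WCF message.
        // A large result set is what forces the single 67,108,864-byte (64 MB) buffer
        // allocation that fails in the trace, so trimming updatedResults to only the
        // rows the action actually needs keeps the message small.
        dqa.RunCustomAction(queryDS, "TrendDown", updatedResults, false);
    }
    finally
    {
        dqa.Dispose();
    }
}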

Monday sounds about right, but that is me speculating. Windows updates are a good shout; I will check server-side when we last received/applied updates.

Yes!! I have one user who reports getting this error all the time, and has been for months. I actually got it myself yesterday though I had about a million windows open so I wasn’t too concerned. I’d love to know how to fix it.

We did updates Sunday afternoon. The first occurrence we have reported is Monday. However, I believe I only took our Epicor server to the October cumulative update. I am checking on that now.

Yes, confirmed: I took the server to the October cumulative, KB5005573.

Our server was last updated in July (KB5004230).

Well shoot. Now we are grasping at straws. I plan to take us to the December Cumulative (KB5008207) this weekend in response to this. What was your last applied Cumulative for .NET?

We have July and no issues, although we are polling users just in case.

We seem to be having the same issue, although we can’t pinpoint when it started. On our app server, this is the last update:

2021-10 Security and Quality Rollup for .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows Server 2012 R2 for x64 (KB5006763)

There is a 2022-01 update waiting, but can anyone confirm whether it would get rid of this issue?

Can you get more info, maybe from the server log or Event Viewer?

Hi Fred.

I believe I have remedied this problem, at least for now. I fail to see the logic here, but 10 days on from the fix we seem to be getting positive feedback (no errors, and a slight improvement in performance and response times).

In previous roles I’ve used this “fix” to combat performance issues with software that works with database engines, e.g. SQL or InterSystems Caché…

Go to the NIC (network adapter) and change the speed and duplex settings under IPv4 from Auto Negotiation to 1 Gbit Full Duplex on the client PC. Also make the same change on the Epicor app server so that you are getting a true 1 Gbit connection.

Restart the server and the client PC. If you run Performance Monitor or any network utilization tool that tracks resources, you will notice an improvement in performance right away.
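If you want to verify the link actually negotiated at 1 Gbit after the restart, here is a small throwaway check (plain .NET, nothing Epicor-specific; run it on both the client PC and the app server):

using System;
using System.Net.NetworkInformation;

class LinkSpeedCheck
{
    static void Main()
    {
        foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
        {
            // Skip loopback and down adapters; Speed is reported in bits per second.
            if (nic.OperationalStatus == OperationalStatus.Up &&
                nic.NetworkInterfaceType != NetworkInterfaceType.Loopback)
            {
                Console.WriteLine("{0}: {1} Mbps", nic.Name, nic.Speed / 1000000);
            }
        }
    }
}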

Hope this makes a difference, it seems to have done it for us.

We have started experiencing this now on 2021.2.10.

It happens randomly, but often enough now that I see it at least a few times a week. Restarting the Epicor client solves it, but it’s annoying.

Hi Josec, try the solution above and see if it works for you. Maybe try a couple of PCs and monitor over a week or so.

Hi Daniel,
Thanks, but to be honest it doesn’t make any sense for this to be a network issue. We are using HTTPS endpoints, not net.tcp, so the connection is RESTful, and TCP jitter wouldn’t generally cause issues other than performance.

Also, the issue is a memory exception, and again I’m not sure how the network would cause memory issues. We are behind a load balancer, so changing anything network-related is not an easy thing to do. If the payoff were clearer I’d give it a go, but it makes no sense; it sounds like a red herring.

I think (from very little experience) it appears to be related to the 129084567819561 copies of the EOBrowser that Epicor is spinning up in Kinetic, but that’s just a guess.

Thanks for the suggestion though I’ll keep it in mind.

1 Like

The fun thing about OOM errors is that the stack trace is totally useless. The allocation that pushes the system over the edge likely has nothing to do with whatever is hogging memory. The real cause can only be identified with a heap analyzer. Unless you’re on prem and running a debug build, there’s very little a user can do to help Epicor track this down.
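If nothing else, a cheap breadcrumb you can add yourself (a sketch only, and no substitute for a real heap analysis) is to log how much memory the client process actually held at the moment an OOM surfaced. Hook it up early in a test harness or a client customization you control:

using System;
using System.Diagnostics;

static class OomBreadcrumb
{
    public static void Install()
    {
        // FirstChanceException fires before any catch blocks run, so this sees the
        // OOM even if something upstream swallows or re-wraps it.
        AppDomain.CurrentDomain.FirstChanceException += (sender, args) =>
        {
            if (args.Exception is OutOfMemoryException)
            {
                long managedBytes = GC.GetTotalMemory(false);
                long workingSet = Process.GetCurrentProcess().WorkingSet64;
                Trace.WriteLine(string.Format(
                    "OOM at {0:O}: managed heap {1} MB, working set {2} MB",
                    DateTime.Now,
                    managedBytes / (1024 * 1024),
                    workingSet / (1024 * 1024)));
            }
        };
    }
}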

Can you share the complete exception or log details?

Do you see the memory being full by EOBrowser instances?
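One rough way to answer that from an affected PC (a sketch only; the process-name filters below are assumptions, so check Task Manager for what the EO.WebBrowser helper processes are actually called on your install) is to total up their working sets:

using System;
using System.Diagnostics;
using System.Linq;

class EoBrowserMemoryCheck
{
    static void Main()
    {
        // "EOBrowser" and "eowp" are guesses at the helper process names - adjust to
        // match what Task Manager shows alongside the Epicor client.
        var helpers = Process.GetProcesses()
            .Where(p => p.ProcessName.IndexOf("EOBrowser", StringComparison.OrdinalIgnoreCase) >= 0
                     || p.ProcessName.IndexOf("eowp", StringComparison.OrdinalIgnoreCase) >= 0)
            .ToList();

        long totalBytes = 0;
        foreach (var p in helpers)
        {
            Console.WriteLine("{0} (PID {1}): {2} MB", p.ProcessName, p.Id, p.WorkingSet64 / (1024 * 1024));
            totalBytes += p.WorkingSet64;
        }
        Console.WriteLine("{0} helper processes, {1} MB total", helpers.Count, totalBytes / (1024 * 1024));
    }
}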