Sorry to chime in late here, but we, too, are looking to move our Epicor appservers and database servers to our private Azure cloud. We used Azure’s Disaster Recovery tool to replicate our two test servers to the cloud. When we run Epicor’s PDT Configuration Check, we consistently fail the SQL Query Test. Are there certain tweaks or configuration changes that should be made to pass this test and get closer to matching our on-prem performance?
Thank you,
Bob Beaghan
@Hally had a good post that makes one wonder how the infrastructure was set up in Azure. Are the app server and DB on the same machine? If not (hopefully not!), are the two servers on the same vnet in Azure? What kind of disk did you procure for your SQL Server?
I’m sure some of the other private Azure folks will also chime in.
I keep hearing conflicting opinions on this… curious to hear why you prefer separate machines over one for both db and app (assuming it has plenty of horsepower). Is there anything Azure/cloud-specific that affects your answer?
Not Azure. I keep hearing that SQL Server doesn’t play well with others, even internal disks. It eats as many resources as it can get. It’s recommended, in production anyway, not to even log into it with SSMS. So running a whole app server on the same box seems to go against the grain of that advice.
I’ve always heard not to use SSMS on the server while logged in as the account that’s running the SQL Server service; I haven’t heard not to do it at all. Although I typically don’t do it anyway, only remotely.
The counterargument I’ve heard is that when you separate the DB from the app server, you add the latency of pushing data over the network between the two machines…
And that if you set SQL’s maximum memory to a number that leaves plenty of memory available to the Epicor appserver, then SQL can play nicely enough with other things to make it worth it (again, assuming the machine has plenty of horsepower to go around).
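For what it’s worth, capping SQL’s memory is only a couple of lines. The 96 GB figure below is purely an illustration for a box with 128 GB, not a recommendation:

```sql
-- Sketch: cap SQL Server at 96 GB so the OS and an Epicor app server on the
-- same box keep roughly 32 GB to themselves; adjust for your own hardware.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 98304;  -- 96 GB expressed in MB
RECONFIGURE;
```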
I don’t know enough to take a firm stance either way, just playing devil’s advocate…
Same. For example, would it make a difference to split them into separate VMs on the same physical host? The network latency goes away, and there’s still logical separation of CPU, memory, and storage.
I think questioning the infrastructure is your best bet here. We made sure to replicate our production performance specifications on our DEV server for initial viability testing and have no issues with the SQL Query Test. Make sure you followed SQL best practices with drive separation (if your on-prem hardware was Fusion-io or something similar, you may have had everything on a single disk; we’ll likely have that issue to address with our own production server). I would expect disk to be your big problem here, but check for other resource constraints as well.
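If it helps, a quick way to see what you actually ended up with after the migration is something like this (just a sketch):

```sql
-- Which physical drives are the data files, log files, and tempdb on?
-- If everything comes back on the same drive letter, the on-prem
-- separation probably didn't survive the move.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,            -- ROWS = data file, LOG = transaction log
       physical_name
FROM   sys.master_files
ORDER  BY LEFT(physical_name, 1), database_name;
```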
If you modified the amount of RAM on the system, make sure your SQL RAM limit is still valid.
We do have separate app and db servers. Both are running in the same zone, same vnet.
App server config:
4 vCPUs, 32 GB memory, standard SSD drives.
DB server config:
8 vCPUs, 128 GB memory, premium SSD drives (read-only host caching is enabled on the DB disk).
Separate disks for data and transaction logs. TempDB is on yet another, ephemeral SSD (the temp disk).
SQL memory allocation is set to a minimum of 32 GB and a max of 96 GB.
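In case it matters, here’s roughly how I verified those settings took effect (nothing fancy, just standard DMV queries):

```sql
-- Confirm the min/max memory settings actually in effect,
-- and that tempdb really landed on the ephemeral temp disk.
SELECT name, value_in_use
FROM   sys.configurations
WHERE  name IN (N'min server memory (MB)', N'max server memory (MB)');

SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM   tempdb.sys.database_files;
```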
PDT check is otherwise clear with the exception of a ‘data file available space’ warning and a ‘log file size’ warning, neither of which seemed to affect performance in our on-prem version of the server.
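For reference, my understanding is those two warnings look at something along these lines (run in the Epicor database itself; just a sketch):

```sql
-- How full is each data file, and how big is the log?
SELECT name,
       type_desc,
       size * 8 / 1024                                     AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024 AS free_mb
FROM   sys.database_files;
```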
I’m actually running PDT on my client. But I have tried running it from the appserver VM with no difference.
VM sizes are as follows:
DB server: Standard E16-8as v4
Appserver: Standard E4s v3
Which of the specs came from Epicor’s recommendations, and which are the most critical? IOPS? Read/write speeds?
We did migrate from on-prem. The DB server was a physical server and the app server was a VM.
The DB server was 2 sockets, 16 cores, for 32 CPUs, and 128 GB of RAM (at least).
That’s only at rest, so it should not have any impact.
Personally, I think it is the workload that determines how it behaves, and I agree that having two applications on the same server competing for the same resources (disk, and potentially memory) is not an ideal situation. In this instance the app server is reading files, processing them, and dishing them out, while SQL Server is accepting those requests, performing operations on data in memory, and reading and writing to disk, both at pretty high volumes. That’s why Epicor bases the sizing on user counts in the sizing document. Ideally, having separate servers is great, but someone has to pay for that, and SQL licensing is not cheap. Unfortunately I don’t think using SQL Managed Instance or Azure SQL is supported, not to mention your hands get a bit tied if you go with those options.
The beauty of being in the cloud is that you have the opportunity to experiment without burning too big a hole in your pocket.
I do recall that in the early days of physical-to-virtual migrations there were performance issues in the resulting VM; I’m not sure whether that issue has been fixed by now or not.
To prove it is a storage issue, you could run some performance monitoring on those servers and check. That may help you determine the cause.
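If you’d rather check from inside SQL Server than set up PerfMon counters, something like this shows average I/O latency per file since the last restart (the 20–30 ms figure in the comment is only a rule of thumb, not an Epicor number):

```sql
-- Per-file I/O latency since the last restart. Sustained averages much
-- above ~20-30 ms on the data or log files would point at the disks.
SELECT DB_NAME(vfs.database_id)                              AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_ms
FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN   sys.master_files AS mf
       ON mf.database_id = vfs.database_id
      AND mf.file_id     = vfs.file_id
ORDER  BY avg_read_ms DESC;
```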
I don’t think you mentioned how many users you are running.
@Mark_Wonsil is on the money; I think the storage for your SQL Server may be misconfigured.
On a high level, you also want to ensure that your databases are sized correctly from the outset and that you don’t have some ridiculous default causing file growth events to happen too frequently. You should also ensure that your database maintenance processes are up to snuff and that you don’t have fragmented indexes.
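A rough way to eyeball both of those in one pass (the thresholds are just my own habits, not an Epicor requirement):

```sql
-- 1) Autogrowth settings: tiny fixed increments or percent growth on large
--    files mean frequent growth events.
SELECT name,
       type_desc,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM   sys.database_files;

-- 2) Heavily fragmented indexes in the current (Epicor) database.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN   sys.indexes AS i
       ON i.object_id = ips.object_id
      AND i.index_id  = ips.index_id
WHERE  ips.avg_fragmentation_in_percent > 30
ORDER  BY ips.avg_fragmentation_in_percent DESC;
```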
One final note. I do recall Haso wrote something that simulated the CPU test from the PDT tool; I’m not sure if he did one for the SQL test. It would be useful to know what that test actually does programmatically. I have never run SQL Profiler while the test was being run; it may be interesting to see, possibly along with some network monitoring for that two-server scenario.
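Rather than Profiler, a lightweight Extended Events session ought to show what the test actually sends (pdt_trace is just a name I made up; drop the session when you’re done):

```sql
-- Throwaway Extended Events session to capture what the PDT SQL test runs:
-- statements, durations, reads, and writes.
CREATE EVENT SESSION [pdt_trace] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.client_app_name, sqlserver.sql_text)),
ADD EVENT sqlserver.rpc_completed
    (ACTION (sqlserver.client_app_name, sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'pdt_trace.xel');

ALTER EVENT SESSION [pdt_trace] ON SERVER STATE = START;
-- ... run the PDT SQL Query Test here, then open the .xel file in SSMS ...
ALTER EVENT SESSION [pdt_trace] ON SERVER STATE = STOP;
```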