We built a pretty meaty SQL server for about $11k. With SQL Server, you need to put some thought into the layout of your disk subsystem; in most cases it's the biggest bottleneck. Don't skimp on your hardware or you will certainly hear it from your users. As a general rule of thumb, any operation that makes a user wait more than 4 seconds = complaints, and Epicor 9 is a bit of a beast.
Use two drives in RAID 1 for the OS. Use RAID 10 (RAID 1+0) for your .MDF data file. RAID 10 gives you the highest Input/Output Operations Per Second (IOPS) with fault tolerance. The downside is you need a minimum of 4 drives, and you take a 50% capacity hit. If possible, put your transaction log (.LDF) files on a separate RAID 10 array, and better still on a separate controller on the PCIe bus. The more drives (spindles) you add, the better the performance; a 6-drive RAID 10 array beats a 4-drive one, and so on.
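Just to make the file-to-array mapping concrete, here's a minimal sketch of a database laid out that way. The database name, drive letters, and sizes are made-up examples, not our actual setup:

CREATE DATABASE ExampleDB
ON PRIMARY
    ( NAME = ExampleDB_Data,
      FILENAME = 'D:\SQLData\ExampleDB.mdf',   -- D: = RAID 10 data array
      SIZE = 10GB, FILEGROWTH = 1GB )
LOG ON
    ( NAME = ExampleDB_Log,
      FILENAME = 'L:\SQLLogs\ExampleDB.ldf',   -- L: = separate RAID 10 log array
      SIZE = 2GB, FILEGROWTH = 512MB );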
I wouldn't use RAID 5 at all anymore. RAID 5 can only survive a single drive failure, and the array is vulnerable during the rebuild. With current drives in the terabyte range, you run a real risk of hitting an Unrecoverable Read Error (URE) during that rebuild, at which point the whole array fails and people are really unhappy with you.
RAID 6 is an option, but I wouldn't use it with anything fewer than 9 drives. It can survive two drive failures, but it has the same rebuild vulnerability as RAID 5. RAID 10 is really the only way to go in my opinion.
We couldn't afford a FusionIO, so we built a budget version. We used 6 Intel 32 GB X25-E Solid State Drives in a RAID 10 array for the .MDF data file (our Epicor database is just over 10 GB). A typical database I/O pattern is roughly 75% random reads, 25% writes, and SSDs make sense here because these drives are unholy fast at random reads.

We placed the .LDF log files on a 6-drive RAID 10 array of Western Digital RE4 2TB SATA II drives. For that job they're actually snappier than the SSDs, since transaction logs are written sequentially.

I also moved the SQL Server tempdb database to a 4-drive RAID 0 array (the move itself is just a couple of ALTER DATABASE statements, sketched below). The risk here is that if one of those drives fails, SQL Server will shut down; however, no data is actually lost, as tempdb is recreated each time SQL Server starts. Worst case, we're down until the drive is replaced or the RAID 0 array is recreated without the failed drive. You could also use RAID 10 here, but I think RAID 0 is worth the performance boost. Just make sure to ONLY put tempdb files there. As always, a hot spare drive is a great idea.
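For reference, relocating tempdb looks like this. The T:\TempDB path is just an example; check your logical file names in sys.master_files first (tempdev and templog are the defaults):

ALTER DATABASE tempdb
MODIFY FILE ( NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf' );
ALTER DATABASE tempdb
MODIFY FILE ( NAME = templog, FILENAME = 'T:\TempDB\templog.ldf' );
-- Restart the SQL Server service and the files are recreated at the new location.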
If you can afford it, go for 15k rpm SAS drives. We chose to save a little money and used higher-end SATA drives. You'll find that with the larger-capacity drives, the increased platter areal density lets a 2TB 7200rpm SATA drive keep pace with a 300 GB 15k rpm SAS drive, at least on throughput. But like I said, SAS is preferred.
SQL Server is very greedy and will attempt to use all available system memory. Ideally you want enough RAM to cache your entire database in memory. With your 26 users you are right at the edge of where Epicor wants you to split your SQL Server and application server onto separate physical machines. We're about the same with 22 users and we get by with a single server doing both roles. It's never a bad idea to overcompensate with RAM here.
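One thing I'd add: if you do run both roles on one box, it's worth capping SQL Server so the OS and the app server keep some breathing room. Something along these lines; the 20 GB figure is just an example for a 24 GB box, size it for yours:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 20480;
RECONFIGURE;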
Also, SQL Server 2008 stores data in 8 KB pages grouped into 64 KB extents and does much of its I/O at the extent level, so you will generally get the best performance using a 64 KB allocation unit size when formatting the partition for your .MDF array. During testing, I got the best performance with a 256 KB stripe size, but your mileage may vary depending on your RAID controller.
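Once it's all running, you can sanity-check the layout from inside SQL Server with the file stats DMV. This is just a quick health-check query I use; as a loose rule of thumb, sustained averages much over 20 ms on the data files usually mean the array is struggling:

SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.physical_name,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id;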
So here's a rough breakdown of what we spent, and we used eBay pretty heavily for components.
$800 - Supermicro chassis and motherboard
$800 - Adaptec 51645 Unified SAS RAID controller
$400 - Adaptec 5805 Unified SAS RAID controller
$2,600 - 2 x Intel X5570 2.93 GHz Quad Core Xeon Nehalem processors
$2,400 - 6 x Intel X25-E SSDs
$2,700 - 11 x Western Digital RE4 2TB SATA II drives
$500 - 2 x Western Digital 300 GB Velociraptor drives
$800 - 24 GB DDR3 RAM
$11,000 total (ballpark figure, some rounding involved)
If you start talking SANs, prices usually start around $5k, so we chose the direct-attached storage route.
So yeah, I could see them telling you $20k for a really high-end server. I think what I just listed would easily be over that if purchased complete from HP or Dell, but if you do some research on which components work well together (and it's hard to find ones that don't if you stick with late-model, brand-name components), it's not that difficult to build your own. Assuming you are handy with a Phillips head screwdriver, of course!
I wrote a lengthy post on using SSDs in the enterprise with SQL Server here a while ago, but I think it's too old for the group search to find. I still have a copy in my email, so contact me offline if you want to check it out.
Hope that helps. Feel free to contact me offline if you want any clarification.
Jared
-----------------------
Jared Allmond
jallmond@...
Wright Coating Technologies
--- In vantage@yahoogroups.com, "Caster" <casterconcepts@...> wrote:
>
> I need a little input here. We are looking to move from Vista 8.03 to 9.05 soon. We consulted Epicor and their recommendations for hardware would cost about 20 grand. We are on Progress now and want to move to SQL. Our SQL server is on a 4-drive RAID 5 array. Epicor says not to do this; they won't support it. They say we need a RAID 1 or 0. I won't put a RAID 0 in a production environment because of the absence of fault tolerance. This leaves a RAID 1. They recommend we span (for our 21 office and 5 MES users) over 6 drives. This means 12 drives and a very expensive server or SAN to make this happen. Is all this really necessary? What are others out there running? Is anyone out there running on RAID 5 or something less than recommended? I can't believe that a system for 26 users should require such high hardware requirements. Any input here would be welcome.
>
> Rob