We went live with Epicor 10 this week. Our production transaction log file is growing at an enormous rate. We are currently syncing our production DB to AWS, which means our recovery model is set to Full. If we set it to Simple, we break our sync. How can we automate shrinking the log while leaving the recovery model set to Full?
Support actually told us to set our max log file size to 10GB and our initial size to 256MB which will “prevent growth.” And of course Epicor crashed this morning because our trans log hit its limit.
If you can prevent it, you do NOT want your log auto-growing - only in emergency situations. I would set the initial size to something more rational. I would look at making it 25% of the size of your database, and if you are seeing average log growth of x MB per day, I would set it according to what that looks like. For example, if your log is growing 2500 MB every day, I would allow for at least twice that in initial size so you get rid of auto-grow events. The next item would be to properly checkpoint the log and then add proper log backups / truncation, which will keep the log file in check. I guess one question would be: when you say “syncing,” what are you using for this? There might also be an underlying reason causing the log to grow that can be eliminated.
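As a rough sketch of the sizing advice above (the database name, logical file name, and sizes below are all placeholders - substitute your own numbers based on the growth you actually observe):

```sql
-- Hypothetical example: pre-size the log so normal daily activity
-- never triggers an auto-grow event. Sizes are placeholders.

-- First, check current log size and percent used
DBCC SQLPERF(LOGSPACE);

-- Then set the log to a size that covers normal activity (e.g. at
-- least 2x observed daily growth), with a sane growth increment as
-- the emergency fallback
ALTER DATABASE MyEpicorDB
MODIFY FILE (NAME = MyEpicorDB_log, SIZE = 8192MB, FILEGROWTH = 1024MB);
```

The point of the fixed `FILEGROWTH` increment (rather than a percentage) is that any emergency growth event stays predictable instead of ballooning as the file gets bigger.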
10 GB is far from being enormous. Short of performing fewer transactions in the system, you can’t prevent the transaction log from growing, and you shouldn’t restrict its size. The most important thing is to make sure you back up the transaction log multiple times a day, which will prevent it from growing out of control. At some point, it will reach its “maximum” size based on what is going on in your system.
OK. So it should be as simple as setting up a transaction log backup to run every hour, and that will prevent the log file from growing the way it has been?
Every transaction going on in the system is recorded in the transaction log. Processes like MRP can generate a lot of data movement and therefore increase the size of the transaction log.
Again, 10 GB is not that huge. I can’t evaluate what you can expect since I don’t know what is going on in your system, but you should plan to reserve something like 50 GB for transaction logs.
And YES, it is critical to take transaction log backups.
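A minimal sketch of the kind of log backup you would schedule (via SQL Server Agent or similar) - the database name and backup path are placeholders:

```sql
-- Hypothetical hourly job step: back up the transaction log, which
-- lets SQL Server mark the inactive portion of the log as reusable.
-- Only meaningful under the FULL (or BULK_LOGGED) recovery model.
DECLARE @file nvarchar(260) =
    N'D:\Backups\MyEpicorDB_log_'
    + CONVERT(nvarchar(8), GETDATE(), 112) + N'_'                      -- yyyymmdd
    + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), ':', '') + N'.trn'; -- hhmmss

BACKUP LOG MyEpicorDB TO DISK = @file;
```

Note this keeps the *log file* from filling internally; it does not shrink the file on disk. Once the log is pre-sized correctly and backed up regularly, there should be no need to shrink it at all.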
One thing that may cause a roadblock with the theory of log backups keeping your log clean is long-running processes / active transactions. If you have this scenario going on, no log backup will help you, because of the uncommitted transactions. I would track down the source of your log growth and make sure you understand why it is occurring. There are quite a few tips on the web for tracking down long-running processes, etc. in SQL.
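To check whether an open transaction (rather than missing log backups) is what is pinning the log, queries along these lines against the standard DMVs can help - the database name is a placeholder:

```sql
-- Why can't the log be reused right now?
-- (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyEpicorDB';

-- Oldest active transaction in the database
DBCC OPENTRAN('MyEpicorDB');

-- Long-running requests that hold open transactions
SELECT session_id, start_time, status, command, open_transaction_count
FROM sys.dm_exec_requests
WHERE open_transaction_count > 0
ORDER BY start_time;
```

If `log_reuse_wait_desc` says `ACTIVE_TRANSACTION`, no amount of log backups will release the space until that transaction commits or rolls back; if it says `REPLICATION`, the sync tooling itself may be the culprit.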
I had been battling this problem since upgrading to 10.1 in 2016 - the solution I went with, which has worked for the last two months, was to switch the recovery model to Simple.
I tried everything under the sun (increasing resources, having consultants analyze it) to prevent the replication lock / transaction log swell with the recovery model set to Full. The only solution I could find was to take transaction log backups every 15 minutes - which is why I chose Simple recovery instead.
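For anyone weighing this option, the switch itself looks like the following (database name is a placeholder) - but note the trade-off: Simple recovery gives up point-in-time restore and, as the original poster noted, breaks anything that depends on log backups, such as log shipping or some sync tools:

```sql
-- Switch to SIMPLE recovery: the log truncates itself at each
-- checkpoint, so it no longer needs log backups to stay small.
ALTER DATABASE MyEpicorDB SET RECOVERY SIMPLE;

-- Optional one-time shrink if the file already ballooned;
-- 1024 is the target size in MB (placeholder)
DBCC SHRINKFILE (MyEpicorDB_log, 1024);
```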
How big does your transaction log get at the 1 hour backup interval? And does your server take any performance impact during the transaction log backup?
I would not say “unfortunately”. Transaction log backups are important!!! If you need to restore your data to a specific point in time, you need them. It’s way more secure than having only last night’s backup available, which could mean losing multiple hours of transactions if data gets corrupted or something.
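For context, the point-in-time restore those log backups make possible looks roughly like this (file names and the timestamp are placeholders):

```sql
-- Restore the last full backup without recovering, then replay log
-- backups in sequence, stopping just before the corruption occurred.
RESTORE DATABASE MyEpicorDB
    FROM DISK = N'D:\Backups\MyEpicorDB_full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG MyEpicorDB
    FROM DISK = N'D:\Backups\MyEpicorDB_log_1.trn'
    WITH NORECOVERY;

RESTORE LOG MyEpicorDB
    FROM DISK = N'D:\Backups\MyEpicorDB_log_2.trn'
    WITH STOPAT = '2017-06-01 10:30:00', RECOVERY;
```

Under Simple recovery none of this is possible - you can only go back to the last full (or differential) backup.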