Direct access to Microsoft SQL Database?

I ditto @Mark_Wonsil on the common-sense MS / AD Auth access-group approach. You can also use that group to lock down payroll, which a lot of VPs get sensitive about for some reason.

The third item in the list @embedded mentioned is calculated fields. Some fields do not exist in the db at all and are calculated on the fly. Sure, you can emulate the calculations in some scenarios - until the next feature enhancement changes the math or the db structure.

Really, a lot of this is about future-proofing. If you enjoy constantly updating your processing and slowing your adoption of new releases, you are welcome to do so. I would not like explaining to my boss why we cannot absorb new features because we built complex integrations that went against the vendor's best practices - but if you think you can handle it, you are the domain expert for your company and IT group.

To your specific question on caching: there are two levels. I'll use User as the example because I know it off the top of my head, and I am in a hotel without source code in front of me.

In the beginning there was the UserFile table (V8 or before). It had the name, email address, password, and a few other columns. When the user sent up the password, it went through an encryption process to convert ‘Epicor123’ into some blob of characters. That blob was compared against the db column, and if they matched, the user was let into the server. On every server call - many times per form. Needless to say, the db (and the DBA) was not happy.
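To make the cost concrete, here is a minimal Python sketch of that check-on-every-call pattern. It is not the actual Epicor code - the real hashing scheme, table layout, and names are internal, and I am using SHA-256 purely as a stand-in for "some encryption process":

```python
import hashlib

# Hypothetical stand-in for the UserFile table; the real schema and
# encryption scheme are Epicor internals.
USER_FILE = {"embedded": hashlib.sha256(b"Epicor123").hexdigest()}

def authenticate(username: str, password: str) -> bool:
    # Every single server call repeated this: encrypt the submitted
    # password and compare the blob against the stored db column.
    blob = hashlib.sha256(password.encode()).hexdigest()
    return USER_FILE.get(username) == blob
```

In the real thing, `USER_FILE.get(...)` was a db round trip - which is exactly why the DBAs were unhappy.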

Then a cache was created. A static cache in the app server memory:

//Horrid oversimplification
static Dictionary<string, string> Users; // username -> encrypted password, as name/value pairs

When the user logged in, the same encryption occurred and the result was compared against the cache. If a user changed their password, the static cache was refreshed with the new data. Performance improved, the DBAs put down their pitchforks, and all rejoiced.
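In Python terms, the single-app-server version behaved something like this (illustrative only - the function names and hashing are made up for the sketch):

```python
import hashlib

def _blob(password: str) -> str:
    # Stand-in for the real encryption process.
    return hashlib.sha256(password.encode()).hexdigest()

# One in-memory cache per app server process: username -> encrypted password.
users_cache: dict[str, str] = {}

def login(username: str, password: str) -> bool:
    # Compare against the cache instead of hitting the db every call.
    return users_cache.get(username) == _blob(password)

def change_password(username: str, new_password: str) -> None:
    # ...write the new blob to the db here...
    users_cache[username] = _blob(new_password)  # refresh the local cache
```

This works beautifully right up until there is more than one copy of `users_cache` in the world - which is the next part of the story.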

Then a second (or more) app server was added (or a second garden on a single server). Now a user would log into app server 1 to change their password, and when the load-balancing gods routed them to another app server, the login failed because that server's cached password was stale. To solve this, a 5-minute refresh was created: on every app server, every 5 minutes, the cache would be cleared and reloaded from the db User table. That lovely dialog in User Maintenance would tell you to please wait 5 minutes for the password to refresh. It SORT of worked for User - with a horrid UX - but other caches could not wait like this. A new BPM, for example - really anything based on the customization framework (Posting Engine, BPM, BAQ, UBaq, Configurator…) - could not handle a 5-minute delay.
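The 5-minute refresh amounts to a crude time-based cache flush. A rough Python sketch of the idea (names and structure are mine, not the product's):

```python
import time

REFRESH_SECONDS = 300  # the infamous 5 minutes

class TimedUserCache:
    """Illustrative sketch: each app server clears and reloads its user
    cache from the db on a timer. Until the next flush, an entry changed
    on another app server is simply stale - hence the 'please wait
    5 minutes' dialog."""

    def __init__(self, load_from_db):
        self._load_from_db = load_from_db    # callable returning {user: blob}
        self._cache = load_from_db()
        self._loaded_at = time.monotonic()

    def get(self, username):
        if time.monotonic() - self._loaded_at >= REFRESH_SECONDS:
            self._cache = self._load_from_db()  # full refresh from the db
            self._loaded_at = time.monotonic()
        return self._cache.get(username)
```

The flaw is visible right in the sketch: between flushes, every server happily serves whatever it loaded last, correct or not.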

Finally, in 10.1 we tackled the farm issue for real; it is the first version I would want to use in a cluster.

There is a pluggable notification system for app-server-to-app-server messaging, and the cache is hooked into the update pipeline. When an update to user ‘embedded’ is detected, a note is sent to the cache to clear ‘embedded’. How we notify the other app servers is the interesting part.

We support db as the default, but we also have a UDP version for direct app-server notification and have played with a few other out-of-process caching systems whose names folks would probably recognize. Those have never shipped because, honestly, the db one has performed so well and has not affected scale to any significant degree. I assume one day in SaaS I will have to add another cache server, but for now keeping admin simple was the higher priority.

Next, we built a base class for the cache that knows about the notification engine and sends it a note -

Psst - User Cache, Embedded updated their password

On their next db access, the other app servers would get a note that the entry was invalidated, and they removed or refreshed the data as appropriate to the cache's usage.
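Putting the pieces together, here is a rough Python analogue of the invalidation flow. To be clear, the real base class, the notification transport, and the APIs are all internal (and, as noted below, have changed across releases) - every name here is hypothetical:

```python
class Notifier:
    """Stand-in for the pluggable app-server-to-app-server channel
    (db-backed by default in the real product)."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        self._subscribers.append(handler)

    def publish(self, cache_name, key):
        # "Psst - User Cache, Embedded updated their password"
        for handler in self._subscribers:
            handler(cache_name, key)

class FarmFriendlyCache:
    """Sketch of a cache base class hooked into the update pipeline."""
    def __init__(self, name, notifier, load_one):
        self.name = name
        self._load_one = load_one    # fetch a single entry from the db
        self._notifier = notifier
        self._entries = {}
        notifier.subscribe(self._on_note)

    def get(self, key):
        if key not in self._entries:
            self._entries[key] = self._load_one(key)  # lazy reload from db
        return self._entries[key]

    def update(self, key, value, write_db):
        write_db(key, value)                    # pipeline writes the db...
        self._entries[key] = value              # ...updates locally...
        self._notifier.publish(self.name, key)  # ...and notes the farm

    def _on_note(self, cache_name, key):
        if cache_name == self.name:
            # Invalidate; the next get() reloads fresh data from the db.
            self._entries.pop(key, None)
```

In this toy version even the updating server invalidates its own entry on the note, which is harmless - the next access just rereads the db. The real design point is that staleness lasts one notification, not five minutes.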

A note here - if you go spelunking in JustDecompile or the like across a few releases, you will find we have actually built three versions of that cache base class. It was private, so we optimized algorithms and improved the API over time as more areas picked up the ‘app farm friendly’ cache. (I suggested that as a class name, but for some reason the naming czars shot me down.)

That last point is critical. We changed the API as other service owners wanted to leverage the cache. Who will start using it next release? No idea. Probably a service that needs more performance. Like one used a lot. Like a common one for customers to modify and tweak in BPMs and integrate with.

In other words, one most likely to hurt you if you go against a db table directly.

Hopefully that more fully explains why I am rather against the idea. We spent a lot of time on REST and OData to make consuming data externally easier and less prone to breakage, and we will continue to make tweaks in the coming releases - for our ‘integrations’ like Active Home Page and Mobile CRM and a few more projects in the pipeline. So if you have external feedback, please raise it here or in the EUG. It probably aligns with the internal complaints, but it's also nice to tie a customer request to an internal request to bump up priorities.

I don’t know what service will be the next to leverage the latest cache classes. I just hope it is not one you choose to integrate against.
