Service Connect - Automate Deletion of Document Traces

Hello,

We are starting to use Service Connect more and more to help automate many of our business processes. Right now we use it for Order Entry, Pick List Inventory Transfers, Labor Entry (vacations and holidays), Packing Slip Creation, Job Closings, and we are getting ready to start using it on each of our product lines to report quantity on Jobs and complete them. This means a huge increase in the number of Traces that are created in Document Tracking, and therefore an increase in the size of the Service Connect database. I’d like to know how best to automate the clean-up of these Traces on a regular basis, without having to go in every day and manually delete large groups of them.

I have tried to set the Document Tracking - Properties - Database Cleanup - Archiving options but I’m not sure I completely understand the settings and how Archiving works (even after reading about it in the Help). Or at least it doesn’t seem to work like I thought it should.

Why would I want to Archive my traces? I’m assuming it just helps to separate the traces between those that you want to keep and those that are ready for deletion. Does archiving help to compress the data in any way and therefore take up less space in the database?

I can get the system to automatically delete all but 1 Process/Trace in the Archive (seems silly that they don’t let you choose 0 processes - why would you only want to keep ONE?). But I’m having problems getting the Processes/Traces to move from the Main Document Tracking to the Archives. Is anyone successfully using the Database Cleanup options and could you share with me how you are using it?

Or is there another way in which I could automatically delete these Traces? Maybe some settings that I don’t know about or a better way through SQL? (Which I just might look into considering these SC DB Cleanup settings seem pretty quirky…)

We have about 50-75 Processes/Traces that show up in Document Tracking a day (not sure if that is a lot or not compared to others), and within each of those there are sometimes hundreds of Iterations, with collectively hundreds or even thousands more workflow node traces. These numbers will probably double soon when we roll out our Job Report Quantity workflow, which means our SC SQL DB could get huge pretty quickly. So it would be nice to know how to automate deleting these traces.

We are on Service Connect 10.1.4020

Thanks so much for any help anyone can provide,
-Heather

First of all, I’ve learned to create custom searches to minimize the information pulled when bringing up traces. This keeps the screen from taking a long time to load, quite apart from any archiving. I find it useful to have a search for traces within the past hour (mainly for testing), from today, and from yesterday, for example.

Second, if you use .NET references in particular (or possibly SQL updates too), you can keep the verbosity, and therefore the load time, of your traces to a minimum by structuring data loads with for-each relationships. For example, I learned that in order to update about 50k PartWhse records, instead of iterating 50k times and updating each record from one giant SQL query used as the data source, I could break it up: one query for the part numbers, then roughly 5k repeated SQL calls (one per part), and a single .NET call per part that imports that part’s records with a for-each on the PartWhse table. It is much faster than any other method I’ve found, and it produces a much shorter trace.
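
To show the shape of that pattern outside of the workflow designer, here’s a rough C# sketch. The connection string, the staging table and column names, and the per-part update method are all hypothetical placeholders for whatever your actual data source and .NET reference call are:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient; // classic ADO.NET client

class PartWhseChunkedUpdate
{
    // Hypothetical connection string and staging table - substitute your own.
    const string ConnStr = "Server=.;Database=MyStagingDb;Integrated Security=true;";

    static void Main()
    {
        using (var conn = new SqlConnection(ConnStr))
        {
            conn.Open();

            // 1) One query for the list of part numbers,
            //    instead of one giant 50k-row query used as the data source.
            var parts = new List<string>();
            using (var cmd = new SqlCommand("SELECT DISTINCT PartNum FROM PartWhseStaging", conn))
            using (var rdr = cmd.ExecuteReader())
                while (rdr.Read())
                    parts.Add(rdr.GetString(0));

            // 2) One small query per part, then a single update call per part
            //    (this stands in for the .NET reference call with a for-each on PartWhse).
            foreach (var part in parts)
            {
                var rows = new List<(string Whse, decimal Qty)>();
                using (var cmd = new SqlCommand(
                    "SELECT WarehouseCode, OnHandQty FROM PartWhseStaging WHERE PartNum = @p", conn))
                {
                    cmd.Parameters.AddWithValue("@p", part);
                    using (var rdr = cmd.ExecuteReader())
                        while (rdr.Read())
                            rows.Add((rdr.GetString(0), rdr.GetDecimal(1)));
                }

                UpdatePartWarehouse(part, rows); // hypothetical stand-in for the per-part Epicor call
            }
        }
    }

    // Placeholder for the .NET reference / business object method that writes
    // all of one part's warehouse records in a single call.
    static void UpdatePartWarehouse(string partNum, List<(string Whse, decimal Qty)> rows)
    {
        Console.WriteLine($"Updating {partNum}: {rows.Count} warehouse rows");
    }
}
```

The point is the chunking: one cheap query for the keys, then one small, bounded call per key, which keeps each trace node small instead of one enormous iteration.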

Also, if you use SQL, performing one SQL call to do all your calculations before sending the information to write to Epicor really speeds things up, simplifies workflows, and minimizes your trace.

Heather,

50-75 workflows a day isn’t that heavy. I’ve been doing Service Connect for 10 years and I’ve got some clients with a lot more than that.

I’ve never really found much of a need for archiving the document traces. Going back through Service Connect transactions is very time consuming as it is - I couldn’t imagine storing them any longer. I’ve always taken the defaults; it seems like it keeps around 2000 document traces and deletes them off FIFO, other than those that had some type of error. So with some clients I can go back 3 or 4 days, with others a couple weeks or more.

What I often do is archive the files that are going into the Service Connect channel. So, for instance if I’m taking input files to upload into Customer Shipments, I’m copying those off to a backup area on disk BEFORE copying into the Service Connect workflow. That way if someone has a question about data, I can at least go back to the source file. And, if I’ve got error handling in my workflow, I know about any errors that occur in real-time.
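
The “copy it off first” step doesn’t need to be anything fancy. Here’s a minimal C# sketch of the idea - the folder paths are made up, so point them at your own backup area and channel input folder:

```csharp
using System;
using System.IO;

class ArchiveThenSubmit
{
    // Hypothetical paths - substitute your own backup area and SC channel input folder.
    const string IncomingDir = @"C:\Integrations\Shipments\Incoming";
    const string BackupDir   = @"C:\Integrations\Shipments\Backup";
    const string ChannelDir  = @"C:\ServiceConnect\Channels\CustShipIn";

    static void Main()
    {
        Directory.CreateDirectory(BackupDir);

        foreach (var file in Directory.GetFiles(IncomingDir, "*.csv"))
        {
            var name = Path.GetFileName(file);

            // 1) Copy to the backup area FIRST, time-stamped so reruns don't collide.
            var backupName = $"{DateTime.Now:yyyyMMdd_HHmmss}_{name}";
            File.Copy(file, Path.Combine(BackupDir, backupName));

            // 2) Only then hand the file to the Service Connect channel folder.
            File.Move(file, Path.Combine(ChannelDir, name));
        }
    }
}
```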

One other thing that I’ve found really helpful is breaking input files down into smaller files. So, say I’m running something that’s going to generate hundreds of report quantity transactions. I’d write something that generates them 50 at a time into a file, then closes that file and opens a new one. That way, each “document trace” is only 50 transactions instead of hundreds. That comes in real handy because (a) if you’ve got an error that occurred SOMEWHERE in the workflow, it’s a lot easier looking at a workflow with 50 sub-workflows versus hundreds, and (b) if for some reason Service Connect or your server is having problems, you can more easily disable the channel or move the files out of the channel, and effectively stop the process part way through.

Not sure I really answered your question, but hopefully gave some helpful information.

Kevin Simon
SimsTrak Consulting, LLC

Wow. You let it keep the full 2000 document traces and auto-delete them FIFO as you go? :astonished: Now I’m embarrassed for complaining about my little 50-75 workflow document traces a day. :blush: LOL It just takes so long to load the traces (even when I have different views set up with filters on how many and which traces to display), and they greatly increase the size of our SQL database. (I’m using SQL Server Express, which has a 10GB database size limit. Once, I let the traces accumulate for a while without deleting them, the database grew beyond the 10GB limit, and I think it started causing some strange issues for us with Service Connect. Maybe I should stop using Express, but I still don’t think I want a Service Connect SQL DB larger than 10GB - full of nothing but trace files - sitting out there.) But that’s all very interesting…

Yes, thank you for the tip. We definitely keep a limited number of archived input files out on our network from before we submit them to Service Connect.

:nerd: Fun Fact: This might be a “duh” moment (especially if you’ve worked with Service Connect for 10 years) but did you guys know that you can easily get the input file back from the trace if you go into Activity Progress and double-click the very top node and then click the Inbound Document tab in the Trace Details window? I’m only mentioning this because I attended a Service Connect training class (many moons ago, mind you) where the instructor didn’t even know that was there….

Still a good idea to keep the Archived Input Files for when the traces get deleted.

I do kind of like the idea of breaking the input files into smaller files and submitting them. Forgive my ignorance, but what are you using to do this? Keeping in mind that we are a Progress database and right now we use a lot of Crystal Reports that get exported as an Excel or CSV file and then submitted to Service Connect by our users. (I’m sure there are probably better ways to do this but this is what has worked for us at this time.)

I also like the idea of trying to use more for-each relationships. I understand the concept, but I’m not quite sure exactly how to implement it within our workflows. We do use .NET references, but we are a Progress database (we hope to migrate to SQL in a year or so).

I still would like to figure out how the Database Cleanup Settings can be used to help me delete my traces just so that my database doesn’t get too large. Probably will just require more testing on my part with that I guess.

Maybe I need to take a look at my workflows and try to optimize them more so that there isn’t so much information being captured in the trace. I have a feeling that I’m probably making unnecessary Method calls and could probably pull more into just the Update method of the workflow. But also, maybe if I worked more on my error reporting and made it a little more detailed, I wouldn’t even have to rely on loading the traces and sorting through them to find the problem. I do struggle some with how best to capture errors and report them (grab them in a container somehow and send them all at once after the entire input document has been processed, or shoot them out one at a time as they occur?).

I had some basic training on Service Connect and have learned most of what I know along the way through trial and error, but do you guys know of any more in-depth training sessions or documentation that could help me fill in the missing pieces of the Service Connect puzzle? How did you guys learn all that you know? I’m not looking for basic info but more Advanced User info. I’m sure I’m not even scratching the surface of what we can do with Service Connect or of making my workflows as efficient as possible. I know the help files are great, but I’ve been wondering about best practices from expert experience.

I know you guys are that (expert, experienced users), but I sometimes wish someone could actually take a look at my workflows and give me suggestions, advice, etc. I don’t want someone to make the changes for me but rather to teach me how to build my workflows in the most efficient way possible.

Sorry I’m so long-winded….just really passionate about my Service Connect. :heart_eyes: It is so much fun to work with and it is amazing (to me) what all it can do. Can’t believe the time and money it has saved us already (even with my primitive workflows :wink:) and I only hope to use it even more to help automate processes in Epicor.

Thanks for taking the time to read this and for all of the thoughts, suggestions, advice and help! I really appreciate it!
-Heather


A couple standard filters that I use often:

  • Last2Hours
  • Today
  • Yesterday

Then, I have a filter based on each of the various workflows. Typically, I don’t find waiting for these to load to be obtrusive, as long as I’m using the filters. Since I’m getting emailed on any issues, the “Last2Hours” and “Today” filters get used most of the time.

As for breaking files up into multiple smaller ones, that depends on whether I’m creating the files in ABL code (or C# in E10) or whether the file is coming from an external system. In ABL code, I set a variable that’s the max records per file, and as I open my output file I reset a counter to 1. When I hit the max, I close the file, increment a sequence that’s part of my output file name, and open a new file.

If it’s coming from an external source, I basically have a script that does the same thing: a scheduled Windows vbscript that reads any files in a folder and writes them to my SC channel, using a counter to break them into multiple files.
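
If it helps to see it spelled out, here’s roughly the same logic sketched in C# (rather than ABL or vbscript). The paths, the chunk size, and the assumption that the first line is a header row are all just placeholders for your own setup:

```csharp
using System;
using System.IO;

class SplitForChannel
{
    // Hypothetical values - adjust to your own source file and channel folder.
    const string SourceFile = @"C:\Integrations\ReportQty\big_input.csv";
    const string ChannelDir = @"C:\ServiceConnect\Channels\ReportQtyIn";
    const int    MaxPerFile = 50;    // max transactions per output file
    const bool   HasHeader  = true;  // repeat the header line in every chunk

    static void Main()
    {
        string header = null;
        int count = 0, seq = 0;
        StreamWriter writer = null;

        foreach (var line in File.ReadLines(SourceFile))
        {
            if (HasHeader && header == null) { header = line; continue; }

            // Start a new output file on the first record or when the chunk is full.
            if (writer == null || count >= MaxPerFile)
            {
                writer?.Dispose();
                seq++;
                count = 0;
                writer = new StreamWriter(Path.Combine(ChannelDir, $"reportqty_{seq:D4}.csv"));
                if (HasHeader) writer.WriteLine(header);
            }

            writer.WriteLine(line);
            count++;
        }

        writer?.Dispose(); // close the last chunk
    }
}
```

One thing I’d watch for: if the channel polls the folder aggressively, it’s safer to write each chunk to a temporary name and rename it once it’s closed, so Service Connect never picks up a half-written file.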

As for education - there’s nothing like experience when it comes to Service Connect. Service Connect is DEFINITELY more an art than a science. There are certainly “rules” to follow - but it’s more like spelling in the English language - every time you turn around there’s an exception to the rule.

Contact me offline if you’re stuck on something.

Kevin Simon
SimsTrak Consulting, LLC


Hi Guys,

We have Service Connect 10.2.500 and we have a space issue because of all the traces in the db. I am not an expert on Service Connect but I need to remove the old traces somehow.

We have 44M rows in the ScaTrace table and it is consuming 395GB.

I have disabled archiving on the Document Tracking properties but that is not clearing any of these records. Am I able to delete these traces in SQL? I tried deleting some traces in the Admin Console but that is limiting me to 100 at a time and it takes a long time.

Please can someone assist me with clearing these traces out? Thanks.

Hi @epicGeoff ,

There are two EpicCare KBs on reducing the size of the ESC DB - cleaning up the Trace and TraceAttach tables:

  1. ESC Reducing the size of the trace tables ScaTrace and ScaTraceArch
     https://epicorcs.service-now.com/epiccare?id=epiccare_kb_article&sys_id=9113fdb6db2c878460d8f8621f961971

  2. Managing the Size of the ESCDB Database for Service Connect
     https://epicorcs.service-now.com/epiccare?id=epiccare_kb_article&sys_id=37a5252adb323bc048e959b2ca961935

That should solve your issue(s).

Best Regards,
PSE


Thanks Carlos.

That is what I needed.