UD Total Field vs Calculated Field Best Practice

I have a question about best practices regarding UD fields versus calculated fields in BAQs/SSRS reports. I want to add line weights for parts to the quote and order applications. I have decided that the part unit weight should be a UD field on both QuoteDtl and OrderDtl, rather than a data view column pulled from the part record, so we can handle a "Part-on-the-Fly" and also make the unit weight editable (never have we ever fudged the weights of our parts so that a whole order can fit on a truckload).

Beyond that, I don't know whether the extended weights for the lines and the header total weights in each application should be data view columns and calculated fields in BAQs/reports, derived from the unit weights and quantities, or whether they should be UD columns stored in the database. Epicor appears to have actual DB fields for extended line price totals as well as header price totals. Is this done for system performance? To ease the user experience in report writing/BAQs? I'm not sure if I should follow the system's lead. Does anyone have a strong opinion? Does it matter? I would appreciate input on philosophies regarding DB design.

One reason to store a calculated total is because you are preserving the value that it was at the time. For example, if you just recalculate on the fly every time the screen is open (or something is printed), then you open the possibility that the total can come out differently in the future due to future code changes or data changes. When you store the calculation result, then you have a record of what it was (and what was printed) at the time that isn’t subject to change later.
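The point above can be sketched in a few lines. This is an illustrative example only (not Epicor code, and the field names are hypothetical): a total stored at entry time preserves what was printed, while a value recalculated on the fly drifts when the underlying part data later changes.

```python
# Illustrative sketch: why storing a calculated total preserves history.
# All names here are hypothetical, not actual Epicor fields.

def extended_weight(unit_weight, qty):
    """Extended line weight = unit weight x quantity."""
    return unit_weight * qty

# At order-entry time the part weighs 2.5, so we snapshot the total.
line = {"qty": 10, "stored_ext_weight": extended_weight(2.5, 10)}  # 25.0

# Later the part record is corrected to 3.0. A live recalculation drifts...
recalculated = extended_weight(3.0, line["qty"])  # 30.0

# ...but the stored snapshot still matches what was printed at the time.
print(line["stored_ext_weight"], recalculated)  # 25.0 30.0
```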

We have a similar requirement (but haven’t implemented it so comprehensively as you propose). The headache becomes recalculating every time the part changes, the quantity changes, or the line status changes (a void line shouldn’t count towards the weight even if there is a quantity on the line).
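The recalculation headache described above can be illustrated with a minimal sketch (hypothetical field names, not Epicor code): a header total that must skip voided lines, and therefore has to be re-run whenever the part, quantity, or line status changes.

```python
# Hypothetical sketch of the recalculation logic described above:
# a header total weight that must exclude voided lines.

def header_total_weight(lines):
    """Sum extended weights of all non-void lines."""
    return sum(l["unit_weight"] * l["qty"] for l in lines if not l["void"])

lines = [
    {"unit_weight": 2.0, "qty": 5, "void": False},  # counts: 10.0
    {"unit_weight": 1.5, "qty": 4, "void": True},   # voided: skipped
    {"unit_weight": 3.0, "qty": 2, "void": False},  # counts: 6.0
]
print(header_total_weight(lines))  # 16.0
```

If these totals were stored UD columns instead, each of the triggering events (part change, quantity change, void) would need logic to refresh the stored value, which is exactly the maintenance burden being weighed here.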

Thanks for the input, Alisa!

I am completely on board with @aosemwengie1

Use it when it makes sense and make sure you create Field Mappings so the weight flows in from Part → QuoteDtl → OrderDtl → ShipDtl.

When you say create "Field Mappings," are you referring to the "UD Column Map" application? We were planning on populating the flow through method directives. Is there an easier way to set QuoteDtl.UnitGrossWeight_c to Part.GrossWeight for a part on a quote line?

If there is a UD Column Map available, use it. Sometimes you will find that a UD Column Map is missing, and then you will need to use a Method Directive (BPM) to create your own pull-down, for example on the OnPartNumChange method (you'll need to run a trace to find the right one).

Thank you for telling me about this feature. I have never played around with it…

So this tool would replace the need to create a BPM to copy the value from one table to another during a transaction? For instance, copying the value from Part to a Job or Quote, etc?

Dan, that’s what I am trying to discern!

This feature existed in Vantage 8. It’s a bit buggy and not all table-to-table definitions are there. Play with it!

Good to know, Mark. I haven't seen many people bring it up on here.

@Mark_Wonsil @hkeric.wci so if I don't see the table in the UD mappings, I assume I don't have the option to use the UD Column Map program?

That is correct. Then you need to use a BPM.

Thanks Haso.

There is also this question: why not just use BPMs to begin with and stay standardized, period? Chances are the UD Mapping may go away someday.

Well… if you have ever tried to implement your own CheckIn, CheckOut, CheckInAll, CheckOutAll, GetDetails, AppendDetails, Job Order Wizard (Order to Job), and Quote to Order BPMs just to map a single field, you'll know the pain. And then there is the Resequence Materials action, which deletes and re-creates your Mtls.

You will very quickly come to appreciate UD Mappings, despite their limitations.

@Ross.Fireball.Kuiper recently had to map OprInsp via a BPM, and it took a lot of Method Directives to be accurate; Data Directives lacked enough variables to make good decisions.

I think he ended up with 8-13 Method Directives, for a SINGLE FIELD.

Early-stage drawing, from before we found out Data Directives sucked.

That's what I am saying too; I can't imagine how much effort that took and how many traces had to be run. Nice work @Ross.Fireball.Kuiper!

It was a great learning experience, honestly. It helped me understand the data flow and the way parts move through the system in relation to jobs. Every time I thought I was done, we would find a case we missed and have to create a new BPM. I count 18 BPMs across 13 BOs, with more work in the future to include quotes, and all of this was just on ECOOpr, PartOpr, and JobOper.

Pretty amazing, Ross. Did you find out after you put it in production, or via testing?

Via testing. Some of it was my own testing during development, and some of it was others' testing, after which they taught me what I missed and why it was important. I kind of went solo without much direction on this, so it took a long time, but it taught me a lot of valuable things about tracing, debugging, and much more.

Nice work, Ross. It's tough to truly complete something like that successfully, given the endless ways the data can flow and the limitations of data directives and BPMs.

When you and Haso write these customizations, do you measure the impact they have on business object method execution times?
