There’s nothing wrong with joining to multiple tables; it’s a normal thing to do and unavoidable in normalized databases. It just looks funky in BAQ artist mode.
To the point… Your SSMS query is reusing cached data pages and a cached plan. It’s common for a text query in SSMS, or for views and most functions or stored procedures, to run far faster on later executions once everything is cached in RAM, because RAM is fast. Applying OPTION (RECOMPILE) to your SSMS query makes that first-run, worst-case performance repeatable.
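For example, a minimal sketch (Erp.OrderHed and the parameter are just stand-ins for whatever you are testing), with the hint tacked onto the end of the statement in SSMS:

declare @CompanyID nvarchar(8) = N'MYCO';  -- placeholder company; adjust size to your schema

select OrderHed.OrderNum, OrderHed.OrderDate
from Erp.OrderHed
where OrderHed.Company = @CompanyID
option (recompile);  -- compile a fresh plan every execution instead of reusing a cached one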
BAQ runs your query through sp_executesql. That’s a very hard one to get cached in a way the SQL Server optimizer will usefully reuse when you’re artisanally crafting your query, let alone when you’re throwing end users’ spaghetti at the wall. You know something’s gone wrong when OPTION (RECOMPILE) improves performance. Not much to do about that in BAQ other than be aware and accommodate.
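For illustration only, this is the shape of a parameterized sp_executesql call; the actual statement text and parameter list the BAQ server generates will differ, and the plan is cached against that exact text rather than against your hand-written batch:

exec sp_executesql
    N'select OrderHed.OrderNum from Erp.OrderHed where OrderHed.Company = @p0',
    N'@p0 nvarchar(8)',
    @p0 = N'MYCO';  -- placeholder value; the real BAQ statement and parameters are generated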
Here’s what I do in BAQ when this comes up:
Make a BAQ copy you can chop up. Drop subqueries first, then tables. Reload without saving so you’re only testing one thing at a time. Usually it’s one or two troublemakers chucking a clog in the gears.
What I see most often: referencing large tables in a BAQ can push your query into spilling to disk if you’re RAM constrained. The optimizer may assume the worst when it sees sp_executesql referencing big tables and request a larger memory grant than is available, just in case. Subquerying to narrow down the set of rows and columns of the table you’re looking at gives the optimizer half a chance of making better decisions.
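A rough sketch of that idea, with made-up column picks; the point is that the derived table hands the optimizer a pre-trimmed OrderRel instead of the whole thing:

declare @CompanyID nvarchar(8) = N'MYCO';  -- placeholder

select d.OrderNum, d.OrderLine, rel.OrderRelNum, rel.ReqDate
from Erp.OrderDtl d
inner join (
    -- narrowed set: only open releases for one company, only the columns we actually need
    select r.Company, r.OrderNum, r.OrderLine, r.OrderRelNum, r.ReqDate
    from Erp.OrderRel r
    where r.Company = @CompanyID
      and r.OpenRelease = 1
) rel on rel.Company = d.Company
     and rel.OrderNum = d.OrderNum
     and rel.OrderLine = d.OrderLine
where d.Company = @CompanyID;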
If you were doing RBAR-type things (row-by-agonizing-row logic) in joins or WHERE clauses, I’d say rethink those, but I’m not seeing any.
Side note: that’s a lot of GROUP BY. Window functions are normally faster, but usually not by orders of magnitude. Sometimes, though.
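A quick sketch of the trade-off (placeholder columns): the windowed version carries the aggregate alongside the detail rows instead of aggregating in a separate GROUP BY subquery and joining it back:

declare @CompanyID nvarchar(8) = N'MYCO';  -- placeholder

-- GROUP BY + join-back version
select d.OrderNum, d.OrderLine, d.ExtPriceDtl, t.OrderTotal
from Erp.OrderDtl d
inner join (
    select Company, OrderNum, sum(ExtPriceDtl) as OrderTotal
    from Erp.OrderDtl
    where Company = @CompanyID
    group by Company, OrderNum
) t on t.Company = d.Company and t.OrderNum = d.OrderNum
where d.Company = @CompanyID;

-- windowed version: one pass over OrderDtl, no self-join
select d.OrderNum, d.OrderLine, d.ExtPriceDtl,
       sum(d.ExtPriceDtl) over (partition by d.OrderNum) as OrderTotal
from Erp.OrderDtl d
where d.Company = @CompanyID;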
Try adding a table criteria on the OrderRel table for Company = BaqConstants.CompanyID.
Also, if OPTION (RECOMPILE) causes a change in performance, that generally indicates the indexes need rebuilding or the database statistics need updating. This is a common maintenance task for any database and is supposed to be done periodically by the database maintenance plan. Ask your DBA to do it, or file a ticket with Epicor if you are on Epicor Cloud.
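For reference, the manual versions of those maintenance steps look like this (the table name is just an example; on Epicor Cloud you won’t have this access, hence the ticket):

-- rebuild all indexes on one table
alter index all on Erp.OrderRel rebuild;

-- refresh statistics on that table...
update statistics Erp.OrderRel with fullscan;

-- ...or refresh statistics database-wide
exec sp_updatestats;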
To @Randy’s point: we’re not cloud, but on-prem this is fairly quick. @Banderson has posted some code to add the territory info so SSMS can run queries that touch Customer. I have Brandon’s headers saved with my common parameters and can run with territories in just a few minutes, all thanks to @Banderson, because SQL is not my best area.
-- Assumes the header declarations from Brandon's snippet are already in place:
-- the @TerritoryLst table variable and the @CurrentCompany / @CurrentUserID parameters.
with [_TerrCTE_SalesRep] as
(
    select uc.Company as Company,
           sr.SalesRepCode,
           isnull(sr.ViewAllTer, 0) as ViewAllTer,
           case when em.FeatureID is null then 0 else 1 end as TrCrAllowed
    from [Erp].[UserComp] uc
    left join [Erp].[SaleAuth] sa on sa.Company = uc.Company and sa.DcdUserID = uc.DcdUserID
    left join [Erp].[SalesRep] sr on sr.Company = uc.Company and sr.SalesRepCode = sa.SalesRepCode
    left join [Ice].[SysCompany] sc on sc.Company = uc.Company
    left join [Ice].[EnabledModule] em on em.InstallationID = sc.InstallationID
        and em.FeatureID = '00000003-9662-4B1D-AD67-8D90C3599092' -- feature check used for the CRM enabled/disabled branches below
    where (uc.Company = N'' or uc.Company = @CurrentCompany)
      and uc.DcdUserID = @CurrentUserID
)
insert into @TerritoryLst
select st.Company, st.TerritoryID -- CRM disabled
from [_TerrCTE_SalesRep]
inner join [Erp].[SalesTer] st on st.Company = [_TerrCTE_SalesRep].Company and st.Inactive = 0
where [_TerrCTE_SalesRep].TrCrAllowed = 0
union
select st1.Company, st1.TerritoryID -- CRM enabled and ViewAllTer = 1
from [_TerrCTE_SalesRep]
inner join [Erp].[SalesTer] st1 on st1.Company = [_TerrCTE_SalesRep].Company and st1.Inactive = 0
where [_TerrCTE_SalesRep].ViewAllTer = 1 and [_TerrCTE_SalesRep].TrCrAllowed = 1
union
select st2.Company, st2.TerritoryID -- CRM enabled and no SaleAuth
from [_TerrCTE_SalesRep]
inner join [Erp].[SalesTer] st2 on st2.Company = [_TerrCTE_SalesRep].Company and st2.Inactive = 0
where [_TerrCTE_SalesRep].TrCrAllowed = 1
  and not exists (select top 1 1
                  from [Erp].[SaleAuth] sa2
                  where sa2.Company = [_TerrCTE_SalesRep].Company
                    and sa2.DcdUserID = @CurrentUserID
                    and sa2.SalesRepCode = [_TerrCTE_SalesRep].SalesRepCode)
Typically, I will use SQL Profiler to get the true query, or use the execution plan in SSMS to see any recommended additional indexes… I use that as a starting point if I’m new to a really complex BAQ.
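If Profiler isn’t handy, a plan-cache DMV query is another way to fish out the statement and plan the BAQ actually ran; this is a generic sketch, filtering on a table name you know the BAQ touches:

select top 20
       st.text as statement_text,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count as avg_cpu,
       qp.query_plan
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan(qs.plan_handle) qp
where st.text like N'%OrderRel%'  -- substitute a table from the BAQ
order by qs.total_worker_time desc;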
One thing I have come across in the past: where someone has joined a UD field in one table to a key field in another, it will impact performance. Which makes sense, as you are really modifying the schema of the DB to make it behave in ways it was not designed to do.
So this clearly shows there is some sort of caching issue happening somewhere in Epicor. If you can introduce something just different enough to kick it, it comes out of its funk. Rebuilding indexes and updating statistics don’t do it. Does anyone have any better ways to do this besides adding in unnecessary CTEs?
Did you add the table criteria for OrderRel.Company = BaqConstants.CompanyID? I know it sounds bogus and dumb, but it’s critical… If that filter isn’t there, the indexes are not being used… I mention OrderRel, but really that filter should be there for every first table of every subquery.
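In SQL terms, the reason is that Epicor’s indexes generally lead with Company, so each driving table needs a sargable Company predicate before those indexes come into play. A minimal sketch of the effect (the BAQ generates its own text; OpenRelease is just a stand-in for whatever criteria the subquery already had):

declare @CompanyID nvarchar(8) = N'MYCO';  -- at run time, BaqConstants.CompanyID becomes a parameter like this

select OrderRel.OrderNum, OrderRel.OrderLine, OrderRel.OrderRelNum
from Erp.OrderRel
where OrderRel.Company = @CompanyID  -- the table criteria you add in the BAQ designer
  and OrderRel.OpenRelease = 1;      -- your existing criteria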
Next, run the SQL in SSMS and see if the execution plan suggests any missing indexes. If yes, create those.
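If the plan does flag one, the suggestion comes out roughly as a CREATE INDEX statement you can adapt; this one is entirely made up (name and columns are hypothetical) and worth sanity-checking before adding to an Epicor database:

create nonclustered index IX_OrderRel_Company_ReqDate  -- hypothetical name and columns
    on Erp.OrderRel (Company, ReqDate)
    include (OrderNum, OrderLine, OrderRelNum);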
This is why I put in that idea that BAQs should generate properly named SQL queries and/or stored procedures.
Stuff run through sp_executesql just doesn’t seem to cache the same way as everything else.
We have Query Store, and we should be able to see these queries in it and even manipulate or force plans, but there’s no way I’ve ever found to locate these BAQs in Query Store; they simply don’t seem to show up.
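For reference, a sketch of the kind of Query Store search I mean, using the standard catalog views and filtering on text that should appear in the BAQ’s statement:

select q.query_id, p.plan_id, qt.query_sql_text,
       rs.count_executions, rs.avg_duration
from sys.query_store_query_text qt
join sys.query_store_query q on q.query_text_id = qt.query_text_id
join sys.query_store_plan p on p.query_id = q.query_id
join sys.query_store_runtime_stats rs on rs.plan_id = p.plan_id
where qt.query_sql_text like N'%OrderRel%'  -- a table you know the BAQ uses
order by rs.avg_duration desc;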
If it were not being cached, OPTION (RECOMPILE) would never make any difference.
Also, you don’t have one plan here that works better than the others, which is what you would need before you could force its usage in the Query Store.
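(For completeness: if a clearly better plan did show up in Query Store, forcing it is just the two IDs from the catalog views; the IDs below are hypothetical.)

exec sp_query_store_force_plan @query_id = 42, @plan_id = 7;    -- pin the plan
exec sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;  -- and later, unpin it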
So in any case, you have to adjust your query in some way so SQL Server can rethink its execution plan, which is what you did.
A sproc won’t change anything here.
This would require a complete rework of BAQ but the advantages are not clear.
Also, functions are known to often hurt query performance.
And don’t forget that some BAQ consumers don’t just execute a BAQ by ID; they modify the SQL statement on the fly.
Maybe you like Postgres only because you’ve never worked with it seriously.
They don’t have “Clippy”, as Brent Ozar calls the index creation suggestions in SQL Server.
I’ve just recalled one more trick that sometimes helps when SQL Server is upset with an internal BAQ: try specifying Alternate Company Security in the BAQ execution settings; sometimes it helps.