flamblaster
Constraint Violating Yak Guru
384 Posts

Posted - 2011-09-05 : 14:37:37
Folks, I've been reading articles on how to interpret the execution plan in SQL Server and, for the most part, I think I get the basic concept, but I still have a tough time determining which portions of a query I should try to go back and reconfigure. Since everything has to add up to 100%, is it better to have every portion of the plan even out? Or does it matter when certain portions have a higher percentage of the batch while other portions are low?

I've been tinkering with a calendar that runs based on input dates and ran the execution plan. The way it looks to me, even though I'm using a cross join, there seems to be very low overhead on the cross join itself, and the majority of the overhead seems to be on the nested loop. Here is the query, and I've taken a screen shot of the execution plan if anyone wants to weigh in. Thanks!

DECLARE @From DATE,
        @To   DATE,
        @N    INT  -- was mistyped as VARCHAR(MAX) in the original post

SET @From = '2012-01-01'
SET @To   = '2012-06-30'
SET @N    = DATEDIFF(DD, @From, @To)

SELECT DATEADD(DD, N.Number, @From) AS CalendarDate
FROM (
    SELECT (N1.number * 100) + N2.number AS Number
    FROM (
        SELECT number FROM master..spt_values
        WHERE type = 'P' AND number BETWEEN 0 AND 99
    ) N1
    CROSS JOIN (
        SELECT number FROM master..spt_values
        WHERE type = 'P' AND number BETWEEN 0 AND 99
    ) N2
) N
WHERE N.Number BETWEEN 0 AND @N

Edit: Didn't realize you couldn't add attachments. Here is the execution plan:

SELECT: 0%
COMPUTE SCALAR: 0%
NESTED LOOP (Inner Join): 83%
COMPUTE SCALAR: 0%
TABLE SPOOL (Lazy Spool): 14%
CLUSTERED INDEX SEEK (Clustered) [spt_values].[spt_valuesclust] Cost: 2%
CLUSTERED INDEX SEEK (Clustered) [spt_values].[spt_valuesclust] Cost: 2%
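[Editor's note, not from the thread: the percentages in a graphical plan are the optimizer's *estimated* costs, not measured time, so they can be misleading when deciding what to tune. A minimal sketch of cross-checking with actual I/O and CPU statistics, wrapping the query above:]

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run the calendar query from the post here; the Messages tab
-- then reports logical reads and CPU/elapsed time per statement,
-- which can be compared against the plan's cost percentages.

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```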
Lumbago
Norsk Yak Master
3271 Posts

Posted - 2011-09-06 : 04:18:53
As always: it depends. I generally try to get the highest cost as far to the right in the execution plan as possible, and I also try to avoid any scans. Index seeks are always the fastest way of retrieving data. Then I usually create two different queries and compare the "Query cost (relative to the batch)" to find out which is more effective. I'm by no means an expert, but after some practice you will learn which methods are faster...

Here is a "tally table" that outperforms a cross join to spt_values by far:

;WITH a(n) AS (SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0),
      b(n) AS (SELECT 0 FROM a x CROSS JOIN a y CROSS JOIN a z),
      c(n) AS (SELECT 0 FROM b x CROSS JOIN b y CROSS JOIN b z),
      d(n) AS (SELECT 0 FROM c x CROSS JOIN c y),
      e(rownumber) AS (SELECT ROW_NUMBER() OVER (ORDER BY n) FROM d)
SELECT DATEADD(DD, rownumber, @From) AS CalendarDate
FROM e
WHERE rownumber BETWEEN 0 AND @N

- Lumbago
My blog -> http://thefirstsql.com/2011/07/08/how-to-find-gaps-in-identity-columns-at-the-speed-of-light/
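[Editor's note, not from the thread: the size of this tally comes from simple multiplication, since each CROSS JOIN multiplies row counts. Sketching the arithmetic per CTE level:]

```sql
-- Row counts produced by each CTE level in the snippet above:
--   a : 3 rows (three UNION ALL'd constants)
--   b : 3 * 3 * 3        = 27
--   c : 27 * 27 * 27     = 19,683
--   d : 19,683 * 19,683  ~ 387 million potential rows
-- Far more than the ~180 row numbers a half-year calendar needs,
-- so the BETWEEN filter discards almost everything generated.
```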
flamblaster
Constraint Violating Yak Guru
384 Posts

Posted - 2011-09-06 : 09:34:27
Thanks Lumbago! That's a start. I've never really been sure which portion of the execution plan to value one way or another, other than the seeks versus scans. Also, very cool method for the tally table...very simple but effective. Just out of curiosity though, the spt_values table has a clustered index on it; wouldn't that increase the performance? I noticed that it's performing clustered index seeks on that table. I normally see folks multiply by 2048, but that returns around 4 million rows when you cross join it to itself. When you only need 10K rows or so, setting the max values to 99 and multiplying by 100 returns 10K rows, which seems fairly quick.
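[Editor's note, not from the thread: the "multiply by 2048" figure comes from the range of numbers spt_values holds, which can be checked directly. A sketch, assuming a typical SQL Server installation:]

```sql
-- Check the contiguous number range available in master..spt_values
SELECT MIN(number) AS MinN, MAX(number) AS MaxN, COUNT(*) AS Cnt
FROM master..spt_values
WHERE type = 'P';
-- Typically returns 0..2047 (2,048 values), so a self cross join
-- yields 2048 * 2048 ~ 4.2 million combinations, matching the
-- "around 4 million rows" mentioned above.
```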