mike123
Master Smack Fu Yak Hacker
1462 Posts
Posted - 2007-08-27 : 21:10:01
Hi, just a general question here. I'm designing a web application that might have 50 million to 100 million rows or more. Basically it's a simple logging table, each row probably only 24 bytes wide; however, I can see queries taking quite a while to execute. The query is basically a GROUP BY showing the number of "hits" per day. Are there any special strategies I should implement? Or is a properly designed structure with indexes likely sufficient (on the right hardware, of course)?

Thanks for any advice!

Mike
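For illustration, here is a minimal sketch of the sort of table and daily-hits query being described. The table and column names (dbo.HitLog, HitDate, and so on) are assumptions for the example, not taken from the original post.

-- Hypothetical narrow logging table
-- (8 + 8 + 4 + 4 = 24 bytes of fixed-width data per row, matching the size mentioned above)
CREATE TABLE dbo.HitLog
(
    HitID   BIGINT IDENTITY(1,1) NOT NULL,
    HitDate DATETIME NOT NULL,
    PageID  INT NOT NULL,
    UserID  INT NOT NULL
);

-- The "hits per day" aggregate the question refers to
SELECT CONVERT(CHAR(10), HitDate, 120) AS HitDay,
       COUNT(*) AS Hits
FROM   dbo.HitLog
GROUP BY CONVERT(CHAR(10), HitDate, 120)
ORDER BY HitDay;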
rmiao
Master Smack Fu Yak Hacker
7266 Posts
Posted - 2007-08-27 : 21:28:32
Do you have an execution plan? Any stored procedure code and the table schema?
evjo
Starting Member
20 Posts
Posted - 2007-08-28 : 05:15:42
Like most things in SQL Server... it depends; we need more specifics. We need to know whether your table has a primary key, what the GROUP BY column is, and what the aggregate column is. Then we might be able to help. Also, posting the query plan would be very beneficial.

Whenever I see an old lady slip and fall on a wet sidewalk, my first instinct is to laugh. But then I think, what if I was an ant, and she fell on me. Then it wouldn't seem quite so funny. - Jack Handey
AndrewMurphy
Master Smack Fu Yak Hacker
2916 Posts
Posted - 2007-08-28 : 05:27:44
Split the table into two: one for today's activity (small, highly active) and one for the historical stuff (large, but static in size and read-only, so ideal for queries).
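A rough sketch of that idea, using illustrative names only (dbo.HitLog_Active and dbo.HitLog_History are assumptions): today's inserts land in a small active table, and a nightly job sweeps older rows into a large, effectively read-only history table.

-- Small, hot table that takes the current day's inserts
CREATE TABLE dbo.HitLog_Active
(
    HitDate DATETIME NOT NULL,
    PageID  INT NOT NULL,
    UserID  INT NOT NULL
);

-- Large, mostly static table used for reporting
CREATE TABLE dbo.HitLog_History
(
    HitDate DATETIME NOT NULL,
    PageID  INT NOT NULL,
    UserID  INT NOT NULL
);

-- Nightly job: move everything older than today into the history table
BEGIN TRANSACTION;

INSERT INTO dbo.HitLog_History (HitDate, PageID, UserID)
SELECT HitDate, PageID, UserID
FROM   dbo.HitLog_Active
WHERE  HitDate < CONVERT(CHAR(10), GETDATE(), 120);   -- before midnight today

DELETE FROM dbo.HitLog_Active
WHERE  HitDate < CONVERT(CHAR(10), GETDATE(), 120);

COMMIT TRANSACTION;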
mike123
Master Smack Fu Yak Hacker
1462 Posts
Posted - 2007-08-28 : 15:23:49
quote: Originally posted by AndrewMurphy: Split the table into two: one for today's activity (small, highly active) and one for the historical stuff (large, but static in size and read-only, so ideal for queries).
This is kind of what I was thinking might be an option, but I haven't read up on the best approach to doing it. I realize it's difficult or impossible to answer this properly, as the other replies point out, but I'm happy with something general for now. I will be back with the table structures and everything else once it's fully designed, for more feedback. Thanks very much for the opinions so far, much appreciated!

mike123
SwePeso
Patron Saint of Lost Yaks
30421 Posts
Posted - 2007-08-28 : 15:35:48
A clustered index on the "theDate" column would help speed things up.

E 12°55'05.25" N 56°04'39.16"
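Against the hypothetical dbo.HitLog sketch from earlier in the thread (where SwePeso's "theDate" corresponds to the assumed HitDate column), that might look like:

-- Cluster on the date column so rows are physically ordered by date;
-- date-range filters and the per-day GROUP BY then read contiguous ranges
CREATE CLUSTERED INDEX IX_HitLog_HitDate
    ON dbo.HitLog (HitDate);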
Kristen
Test
22859 Posts
Posted - 2007-08-29 : 07:47:10
We have a write-once-then-update-once logging table. It gets about 10 million rows added a day, and we delete stale data after about 7 days. There is quite a lot to this table, and ad hoc queries can cripple the web site if you're not careful!

We query with NOLOCK (this is one of the VERY few occasions where we do this - repeatability and 100% accuracy of the query is not a requirement in my mind).

We have a few peculiar things we do because we could not persuade SQL Server to produce a query plan that looked good - might be because the stats are useless within about an hour of having been freshened.

We delete the more-than-7-days-old rows every night, in a loop, automatically adjusting the number of rows deleted per iteration according to how busy the server is, and we change the TLog backup routine to run more frequently during this time to prevent excessive LDF file growth.

We have routines that run every hour that aggregate the most-recent data into a separate table, and that is generally what we query from - users mostly just want aggregate queries and are not too fussed about the most recent hour - but of course when they get some unexpected publicity:

1) They want to know what is "hurting" on the server so they can mitigate it
2) Their very query compounds the strain on the server

So not a quick-fix area, but FWIW I've quite enjoyed fiddling with this one over the years.

Kristen
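A bare-bones sketch of that nightly batched delete, again against the assumed dbo.HitLog table; the batch size is fixed here for simplicity, whereas the approach described above adjusts it per iteration based on server load.

-- Delete rows older than 7 days in small batches so locks stay short
-- and the transaction log can be backed up between batches
DECLARE @BatchSize INT;
DECLARE @Cutoff DATETIME;

SET @BatchSize = 10000;
SET @Cutoff = DATEADD(DAY, -7, GETDATE());

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM   dbo.HitLog
    WHERE  HitDate < @Cutoff;

    -- A partial batch means nothing older is left, so stop
    IF @@ROWCOUNT < @BatchSize
        BREAK;

    -- Brief pause so concurrent activity (and TLog backups) can keep up
    WAITFOR DELAY '00:00:01';
END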
jen
Master Smack Fu Yak Hacker
4110 Posts
Posted - 2007-09-03 : 16:45:26
+1 Kristen, and kudos to DBA life's simple pleasures. A statistics table to query from: compact, semi-summarized, and independent of the messed-up-supposed-to-be-normalized schema (if you have one, that is).

--------------------
keeping it simple...
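For what it's worth, a sketch of that kind of statistics table, refreshed by a scheduled job; every name here (dbo.HitStats, dbo.HitLog, HitDate) is an assumption carried over from the earlier sketches, not from the thread itself.

-- Compact summary table that reporting queries hit instead of the raw log
CREATE TABLE dbo.HitStats
(
    HitDay CHAR(10) NOT NULL,
    Hits   INT      NOT NULL,
    CONSTRAINT PK_HitStats PRIMARY KEY CLUSTERED (HitDay)
);

-- Hourly job: rebuild the summary from the raw logging table
BEGIN TRANSACTION;

DELETE FROM dbo.HitStats;

INSERT INTO dbo.HitStats (HitDay, Hits)
SELECT CONVERT(CHAR(10), HitDate, 120), COUNT(*)
FROM   dbo.HitLog WITH (NOLOCK)   -- approximate counts are acceptable here, as described above
GROUP BY CONVERT(CHAR(10), HitDate, 120);

COMMIT TRANSACTION;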