Please start any new threads on our new
site at https://forums.sqlteam.com. We've got lots of great SQL Server
experts to answer whatever question you can come up with.
Author: maxi_ng
Starting Member
6 Posts
Posted - 2012-03-05 : 21:01:23
I usually deal with much less data, where adding an index would be just fine. But this time my science project generates 15 samples per second per channel, each node has 4 channels, and there are many nodes, so 15*60*60*24*4 is the number of rows one node produces per day. There will be 20~50 nodes running 24*7 for at least a year, so the table would grow to more than 37,843,200,000 rows.

After normalization my table looks like "Id", "datetime", "Value", "ChannelId", and my queries are mostly against datetime:

select * from table where datetime between t1 and t2

I have loaded only 1 day of data from 1 node so far, and this select already takes about 3s. I expect it to get a lot worse once all the data is loaded. I don't have the SQL analyzer; what can I do to improve select performance? If I break the normalization by splitting the date into year, month, and day columns and index them, would that help?
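As a sanity check on the row-count arithmetic in the post (15 samples/sec, 4 channels per node, 20 nodes at the low end, one year of continuous capture), a quick calculation reproduces the 37,843,200,000 figure:

```python
samples_per_sec = 15
channels_per_node = 4
seconds_per_day = 60 * 60 * 24  # 86,400

# Rows produced by one node in one day (the 15*60*60*24*4 from the post)
rows_per_node_per_day = samples_per_sec * channels_per_node * seconds_per_day
print(rows_per_node_per_day)  # 5184000

# One year at the low end of 20~50 nodes
rows_per_year = rows_per_node_per_day * 365 * 20
print(rows_per_year)  # 37843200000 -- matches the figure in the post
```

So the quoted total corresponds to 20 nodes for 365 days; at 50 nodes the table would be roughly 2.5x larger still.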
tkizer
Almighty SQL Goddess
38200 Posts
Posted - 2012-03-06 : 00:08:31
Well, you are going to need to install the SQL client tools for us to help you. If your queries are mostly by date, did you put the clustered index on that column? And how big a range are you specifying for the BETWEEN?

I don't see any reason to break the table up just yet, although table partitioning would help. Get the SQL client tools installed, and then show us the execution plan as well as the statistics io and time output. We need those to help you.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog
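A minimal T-SQL sketch of what the reply suggests, assuming a hypothetical table name `dbo.SampleData` (the post only gives the column names "Id", "datetime", "Value", "ChannelId") and illustrative date values:

```sql
-- Cluster on the column the range queries filter by,
-- so a BETWEEN becomes a range seek rather than a scan.
CREATE CLUSTERED INDEX IX_SampleData_datetime
    ON dbo.SampleData ([datetime]);

-- Turn on the diagnostics requested above before running the query.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DECLARE @t1 datetime = '2012-03-05 00:00:00';  -- illustrative range
DECLARE @t2 datetime = '2012-03-05 01:00:00';

SELECT [Id], [datetime], [Value], [ChannelId]
FROM dbo.SampleData
WHERE [datetime] BETWEEN @t1 AND @t2;
```

Listing the columns explicitly (rather than `SELECT *`) also keeps the option open of covering narrower queries with a nonclustered index later.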