Mielie
Starting Member
5 Posts |
Posted - 2015-02-26 : 02:03:28
Morning,

We are running SQL 2008 R2 and I have a question about publishing data. We have a batch server which runs daily transforms, and after the transforms have completed we need to publish this data to the web SQL servers. Currently we snap and clone the LUNs the SQL databases are on to get the data from one server to the other. The database is about 400 GB in size and the cloning process takes about 1.5 hours to complete. We need to find another way of getting this data to the web SQL servers. How do you people out there do this?
gbritton
Master Smack Fu Yak Hacker
2780 Posts |
Posted - 2015-02-26 : 03:31:22
Try snapshot replication.
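For context, setting up a snapshot publication is done with the standard replication stored procedures. A minimal sketch follows; the publication, subscriber, and database names are hypothetical, and it assumes a distributor is already configured:

```sql
-- Hypothetical names: publication 'WebPub', subscriber 'WEBSQL01',
-- source database 'BatchDB', destination 'WebDB'.
-- Run on the publisher (the batch server).
USE BatchDB;
EXEC sp_replicationdboption
    @dbname = N'BatchDB', @optname = N'publish', @value = N'true';

EXEC sp_addpublication
    @publication = N'WebPub',
    @repl_freq   = N'snapshot',   -- full snapshot per run, no continuous log reading
    @status      = N'active';

EXEC sp_addpublication_snapshot @publication = N'WebPub';

-- One article per table to publish
EXEC sp_addarticle
    @publication   = N'WebPub',
    @article       = N'History',
    @source_owner  = N'dbo',
    @source_object = N'History';

EXEC sp_addsubscription
    @publication       = N'WebPub',
    @subscriber        = N'WEBSQL01',
    @destination_db    = N'WebDB',
    @subscription_type = N'push';
```

Note that snapshot replication bulk-copies the full article set on every run, so for a 400 GB database it may not beat the 1.5-hour clone.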
Mielie
Starting Member
5 Posts |
Posted - 2015-02-26 : 03:52:33
The way it currently works is: we have two databases on the web SQL box, one live and one offline. The database names differ by an X and Y suffix. So X is now active, and after the transforms have run on the batch box we publish Y to the web SQL box and swap the databases around; now Y is active and X is offline. This publishing process needs to be as quick as possible. We use merge replication as well as transactional replication on our other databases, and the thing with replication is that it takes quite some time to replicate the data. The one table that contains all the history data has about 280 million rows and is about 180 GB in size. I have not yet worked with snapshot replication; how fast is it?
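The X/Y swap described above is commonly done by renaming the databases. A minimal sketch, assuming hypothetical names SalesX (live) and SalesY (freshly published):

```sql
-- Kick users off both databases, swap the names, then reopen them.
ALTER DATABASE SalesX SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesY SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

ALTER DATABASE SalesX MODIFY NAME = SalesTmp;
ALTER DATABASE SalesY MODIFY NAME = SalesX;   -- new data becomes live
ALTER DATABASE SalesTmp MODIFY NAME = SalesY; -- old data becomes the offline copy

ALTER DATABASE SalesX SET MULTI_USER;
ALTER DATABASE SalesY SET MULTI_USER;
```

The rename itself is a metadata operation and is near-instant; the cost is kicking off active connections, so it is usually scheduled in a quiet window.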
gbritton
Master Smack Fu Yak Hacker
2780 Posts |
Posted - 2015-02-26 : 03:58:35
Confused. First you said you were cloning in the SAN, now you say it's replication. What's the whole picture?
Mielie
Starting Member
5 Posts |
Posted - 2015-02-26 : 04:01:48
Sorry for the confusion. We use cloning for the large database, but for the other, smaller databases we use replication. The cloning of the large database is the problem, and we need to find another way of getting the data across. Does this make sense?
gbritton
Master Smack Fu Yak Hacker
2780 Posts |
Posted - 2015-02-26 : 04:15:30
OK, so instead of cloning the whole thing every time, can you just replicate the changed tables or partitions?
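One way to move only the changed partitions, sketched here under the assumption that the history table is partitioned by load date (the table, function, and staging names are hypothetical): switch the newly built partition into a staging table and transfer just that, rather than all 280 million rows.

```sql
-- Assumes dbo.History is partitioned on a load-date column via partition
-- function pfLoadDate, and dbo.History_Staging has an identical schema on
-- the same filegroup as the source partition (required for SWITCH).
ALTER TABLE dbo.History
    SWITCH PARTITION $PARTITION.pfLoadDate('2015-02-26')
    TO dbo.History_Staging;
```

The switch is a metadata-only operation; the staging table can then be copied across with bcp or SSIS and switched into the destination table the same way.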
Mielie
Starting Member
5 Posts |
Posted - 2015-02-26 : 04:20:29
When the transforms run on the batch server, they change a lot of data every time. We have about 6 transforms running during the night; our cloning normally starts at 05:00 in the morning and the website must be updated by 06:30 at the latest. So replicating all that changed data across will take some time. I don't think replication will work for us, based on my experience with the current replication in our environment. Replication is slow...
gbritton
Master Smack Fu Yak Hacker
2780 Posts |
Posted - 2015-02-26 : 05:27:42
Well then, it sounds like you're hitting hardware limits.
Mielie
Starting Member
5 Posts |
Posted - 2015-02-26 : 05:36:49
What is the biggest table or database you replicate, and which type of replication do you use? It would be interesting to know.
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2015-02-26 : 12:28:29
We used transactional replication for a database that was half a terabyte. All of the big tables were replicated, and most of the medium and small tables were too. It was a high-transaction environment; when I opened a case with Microsoft a while back, the engineer indicated he'd never seen such busy replication. So if you are having performance issues with replication, I'd be looking at your hardware, as gbritton mentioned, plus your replication architecture. Is the distributor on a separate box? How much retention is in the distribution database? Is the cleanup job keeping up (ours wasn't, so we had to upgrade the hardware)? Etc.

Tara Kizer
SQL Server MVP since 2007
http://weblogs.sqlteam.com/tarad/
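The distributor checks mentioned above (retention settings and cleanup backlog) can be sketched in T-SQL, assuming the default distribution database name:

```sql
-- Run on the distributor.
USE distribution;

-- Reports the distribution database's properties, including
-- min/max distribution retention and history retention (in hours).
EXEC sp_helpdistributiondb;

-- A large and growing row count here suggests the distribution
-- cleanup job is not keeping up with the command backlog.
SELECT COUNT(*) AS pending_commands
FROM dbo.MSrepl_commands WITH (NOLOCK);
```

If retention is set higher than needed, `sp_changedistributiondb` can reduce it so the cleanup job has less history to carry.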