CorpDirect
Yak Posting Veteran
92 Posts
Posted - 2006-02-03 : 12:19:59
Does anyone know if it is possible to create triggers on the subscriber side for transactional replication? I would like to have triggers fire only on the replication subscriber when queued transactions are applied to the target tables.

Thank you,
Daniel
TG
Master Smack Fu Yak Hacker
6065 Posts
Posted - 2006-02-03 : 13:30:56
Yes, you can create a trigger on a table that is a subscriber table for transactional replication. It may not be a good idea, though. If something in your trigger ever causes a transaction to roll back, it would also roll back the replication transaction.

There may be a "better way". If you want to explain your objective, we may have an alternative for you.
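For what it's worth, a subscriber-side trigger is just an ordinary trigger on the destination table; here is a minimal sketch (table and column names are only placeholders). The one thing to watch is the NOT FOR REPLICATION option: if you add it, the trigger will not fire when the distribution agent applies the replicated changes, which is the opposite of what you want here.

-- Minimal sketch only; table and column names are placeholders.
-- Do NOT add the NOT FOR REPLICATION option here, or the trigger will be
-- skipped when the distribution agent applies the replicated rows.
CREATE TRIGGER trg_Orders_ToReporting
ON dbo.Orders                    -- subscriber copy of the replicated table
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Record the replicated change in a subscriber-side reporting table.
    INSERT INTO dbo.OrderActivity (OrderID, Status, ChangedAt)
    SELECT i.OrderID, i.Status, GETDATE()
    FROM inserted AS i;
END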
Be One with the Optimizer
TG
CorpDirect
Yak Posting Veteran
92 Posts
Posted - 2006-02-03 : 18:18:23
Alright, I'll give it a shot. This is somewhat complex, at least for me...

Here's the situation: we have an ERP system (relevant applications are order processing, invoicing and accounting) under an increasingly heavy transactional load. We also have a demanding reporting requirement for another application to have access to near-realtime data from the ERP database. We cannot use WITH (NOLOCK), as "dirty reads" are not acceptable for accounting data. Locking is becoming an issue, causing transaction timeouts in the ERP database while heavy reporting is under way.

We have some triggers for INSERT, UPDATE on tables in the ERP app's database which create records in the reporting application's tables. Data is updated there, then those tables are used in joins with tables in the ERP database for some of the reports, so they may also contribute to locking timeouts while under heavy reporting load.

The objective is to eliminate the locking issue by maintaining a near-realtime copy of key ERP tables in a reporting database, and to use this as the data source for reports.

My thought is to use transactional replication to maintain this reporting database, and queue transactions for replication. It is my understanding that this method will allow SQL Server to apply these transactions to the subscriber as resources allow.

The idea with firing triggers on the subscriber side is to avoid the possibility of transactions timing out in the ERP application database due to triggers timing out trying to obtain locks.

Does this sound like it could work? Or am I just pushing the problem further down the line?
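For reference, the rough shape of what I'm planning on the publisher side is something like the following. All the server, database, publication and table names are placeholders, and I still need to verify the exact parameters against Books Online (the sp_replicationdboption call and the agent jobs are left out to keep it short).

-- Rough sketch of the planned publisher-side setup (all names are placeholders;
-- parameters still to be verified; sp_replicationdboption and agent jobs omitted).
USE ERP;

-- Transactional publication for the key ERP tables.
EXEC sp_addpublication
    @publication = N'ERP_Reporting',
    @repl_freq   = N'continuous',
    @status      = N'active';

-- One article per table that should be copied to the reporting database.
EXEC sp_addarticle
    @publication   = N'ERP_Reporting',
    @article       = N'InvoiceHeader',
    @source_owner  = N'dbo',
    @source_object = N'InvoiceHeader';

-- Push subscription to the reporting server/database.
EXEC sp_addsubscription
    @publication       = N'ERP_Reporting',
    @subscriber        = N'REPORTSRV',
    @destination_db    = N'Reporting',
    @subscription_type = N'push';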
Thanks,
Daniel
nr
SQLTeam MVY
12543 Posts
Posted - 2006-02-03 : 21:49:47
With replication, the publisher just stores data in the tr log. The tr log will therefore keep growing until the data can be sent to the distributor (and the tr log is backed up). Once this happens, the publisher is finished. The transactions are held on the distributor until they can be applied (via stored proc calls using the PK for each table) to the subscriber. This sort of scenario is ideal for running reports or batch aggregations, which usually cause problems on an OLTP server.

As to putting triggers on the subscriber - I've thought of this a few times but always decided against it, mainly due to the extra admin when a table has to be re-snapshotted. I've always found another way of dealing with it. Remember that whatever you do, you need to cater for recreating all the data from the publisher.

A lot can be done with just a single server by using a trigger to put the changed data into a table with an identity (and maybe put that id into another sequencing table if you need to process changes across multiple tables). You can then have a scheduled job running to process the changed data. This is a good way of keeping aggregations up-to-date-ish - things like stock levels and account balances/positions/NAV/P&L - without delaying or locking the client access tables. As long as it doesn't have to be done before the transaction completes (in which case replication won't help), it can be OK.
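Something along these lines is what I mean by the single-server pattern - just a sketch, all the names are made up, and new rows would need an INSERT step that I've left out:

-- Sketch of the single-server alternative (all names made up).
-- 1) A trigger queues the keys of changed rows, with an identity for sequencing.
CREATE TABLE dbo.InvoiceChangeQueue
(
    ChangeID  int IDENTITY(1,1) PRIMARY KEY,
    InvoiceID int NOT NULL,
    ChangedAt datetime NOT NULL DEFAULT GETDATE()
);
GO

CREATE TRIGGER trg_Invoice_QueueChange
ON dbo.InvoiceHeader
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.InvoiceChangeQueue (InvoiceID)
    SELECT i.InvoiceID FROM inserted AS i;
END
GO

-- 2) A scheduled job runs something like this every minute or two.
DECLARE @maxid int;
SELECT @maxid = MAX(ChangeID) FROM dbo.InvoiceChangeQueue;

UPDATE r
SET    r.Amount = h.Amount,
       r.Status = h.Status
FROM   Reporting.dbo.InvoiceSummary AS r
       JOIN dbo.InvoiceChangeQueue AS q ON q.InvoiceID = r.InvoiceID
       JOIN dbo.InvoiceHeader      AS h ON h.InvoiceID = q.InvoiceID
WHERE  q.ChangeID <= @maxid;

DELETE FROM dbo.InvoiceChangeQueue WHERE ChangeID <= @maxid;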
==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
CorpDirect
Yak Posting Veteran
92 Posts
Posted - 2006-02-07 : 18:43:07
quote: Originally posted by nr
The transactions are held on the distributor until they can be applied (via stored proc calls using the PK for each table) to the subscriber.
"...until they can be applied..." -- does that mean if the target table has shared locks from the reporting application, the transactions will be held until the table is open for writing? If so that alleviates my concern. The tables to be updated by triggers should be busy and free (locked and open) at the same times.quote: Originally posted by nrAs to putting triggers on the subscriber - I've thought of this a few times but always decided against it mainly due to the extra admin when a table has to be re-snapshot. I've always found another way of dealing with it.
The snapshot should need to be run only monthly, for the accounting period close. The idea is to drop the subscription and subscriber tables before closing the period (a resource-intensive process), then re-snapshot and rebuild the subscriber environment, triggers and all. This should all be scripted so as to be repeatable. Hopefully I'm up to the task...
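Roughly what I have in mind for the monthly rebuild script is something like this. Server, database and publication names are placeholders and I still need to check the exact procedure parameters, but it shows the intent:

-- Monthly rebuild sketch (all names are placeholders; parameters to be verified).
-- Run at the publisher, in the published database, before the period close.

-- 1) Drop the push subscription so nothing is applied during the close.
EXEC sp_dropsubscription
    @publication = N'ERP_Reporting',
    @article     = N'all',
    @subscriber  = N'REPORTSRV';

-- ... the period close runs here; subscriber tables can also be dropped/cleared ...

-- 2) Re-create the subscription and initialize it from a new snapshot.
EXEC sp_addsubscription
    @publication       = N'ERP_Reporting',
    @subscriber        = N'REPORTSRV',
    @destination_db    = N'Reporting',
    @subscription_type = N'push',
    @sync_type         = N'automatic';   -- initialize from a snapshot

-- 3) Generate the new snapshot (SQL 2005; on SQL 2000, start the snapshot agent job).
EXEC sp_startpublication_snapshot @publication = N'ERP_Reporting';

-- 4) Re-create the subscriber-side triggers after the snapshot has been applied,
--    because the snapshot drops and re-creates the destination tables by default.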
quote: Originally posted by nr
A lot can be done with just a single server by using a trigger to put the changed data into a table with an identity (and maybe put that id into another sequencing table if you need to process changes across multiple tables). You can then have a scheduled job running to process the changed data.

True, but if queued transactional replication will pick the first available window and apply the transactions at the subscriber, I won't have to worry about the scheduled job running into resource contention -- especially since updates to the reporting database need to be as near to real-time as possible.

quote: Originally posted by nr
As long as it doesn't have to be done before the transaction completes (in which case replication won't help), it can be OK.
Not within the transaction, just as soon as possible thereafter. A delay of a few minutes would impact operations workflow.

My biggest remaining concern was raised by TG:

quote: Originally posted by TG
If something in your trigger ever causes a transaction to roll back, it would also roll back the replication transaction.
Could this happen if a trigger's target table is locked and the trigger operation times out? That would really be bad news! I know the ERP application doesn't use SQL-level transactions, so this could result in part of a transaction (from the app's perspective) being undone while the other parts remain. What a nightmare that could cause!

Thoughts? Is this a serious concern?

Thank you,
Daniel
nr
SQLTeam MVY
12543 Posts
Posted - 2006-02-07 : 20:24:06
To answer the first and last questions, which are the only ones you have doubts about.

When an update is applied to the publisher, it is distributed to the subscriber as a series of single-row update SP calls, using the PK, within a transaction. You will note the problems this can cause when updating every row of a large table: it can be reasonably quick on the publisher but take hours on the subscriber, so thought should be given to whether transactional replication is applicable for this. Also, all updates are distributed in order.

If the subscriber tables are blocked, the updates will be held until they can be completed. After a time the command will time out and the transaction (of all the update SP calls generated by the single update at the publisher) will roll back. In the replication monitor you will see a distribution agent error. When the distribution agent restarts, the failed transaction will be resent. The following commands will not be applied until this completes. You will see the distribution database growing as it accumulates updates waiting to be applied to the subscriber. In the worst case this will continue until the distribution database runs out of room, at which time it cannot receive transactions from the publisher. This still will not be a problem on the publisher, as the transactions will just accumulate in the transaction log (although there may be resources taken up as it tries to transfer them to the distributor). Eventually the tr log will run out of space and everything stops - but you will have sorted it out before then.

If everything is in a single publication (could be the wrong term), then everything that happens on the publisher will be applied in order on the subscriber and any failures retried until complete (well, that's the theory). If you have multiple publications then these can be applied out of order.
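If you want to keep an eye on the backlog and any agent errors, something like this against the distribution database is a reasonable starting point (column names are from memory, so check them; on SQL 2005 there is also sp_replmonitorsubscriptionpendingcmds for a pending-command count):

-- Quick checks in the distribution database (verify column names on your build).
USE distribution;

-- Most recent distribution agent history rows, including any error ids.
SELECT TOP 20 h.[time], h.runstatus, h.comments, h.error_id
FROM dbo.MSdistribution_history AS h
ORDER BY h.[time] DESC;

-- Details for any errors the agents have logged.
SELECT TOP 20 e.[time], e.error_text
FROM dbo.MSrepl_errors AS e
ORDER BY e.[time] DESC;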
==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
CorpDirect
Yak Posting Veteran
92 Posts
Posted - 2006-02-08 : 09:14:18
Righto, thanks nr. This seems like it will work then -- I'm going to give it a try and see how it goes.

Thanks for your help.
Daniel