I have a need: when an Update, Insert or Delete is done to a record in DB "A", it should send the appropriate UID (Update/Insert/Delete) to a different table in a different DB "B".
My first thought was to have a trigger on the table in DB "A" simply call a stored procedure on DB "B" and do the UID.
However - my question is what is the best approach, and what's the best way to establish the connection to DB "B" for the UID from within DB "A"? We can't use linked servers - a DSN-less connection string would be the preferred way to connect. I'm not sure how to execute that within a trigger.
Is even using a Trigger to Stored Proc the best way?
What about Transaction Replication - which I've never attempted - is that a better way?
Just looking for some guidance here so I don't needlessly burn time down a path that isn't recommended or simply won't work.
Thanks as always for your input,
Peter|||I would never suggest using triggers that reference objects outside of their own database. It's a recipe for disaster.
A better strategy would be to have the trigger load your data into a staging table, and then have a job scheduled once per minute to transfer the data from the staging table to the remote server. This is very robust if you can handle latency of up to one minute (scheduled jobs can be set to run once per minute).
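To make the staging-table idea concrete, here is a minimal sketch using SQLite in-memory databases purely for illustration (the thread is about SQL Server, where you'd write this as T-SQL triggers; the `orders`/`orders_staging` table and trigger names are invented). A trigger on the source table records each change's key and operation type in a staging table, which a scheduled job would later drain:

```python
import sqlite3

a = sqlite3.connect(":memory:")  # stands in for DB "A"
a.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
-- staging table: records the key plus the kind of change
CREATE TABLE orders_staging (order_id INTEGER, op TEXT);
CREATE TRIGGER trg_orders_ins AFTER INSERT ON orders
BEGIN INSERT INTO orders_staging VALUES (NEW.id, 'I'); END;
CREATE TRIGGER trg_orders_upd AFTER UPDATE ON orders
BEGIN INSERT INTO orders_staging VALUES (NEW.id, 'U'); END;
CREATE TRIGGER trg_orders_del AFTER DELETE ON orders
BEGIN INSERT INTO orders_staging VALUES (OLD.id, 'D'); END;
""")

# normal application activity; the triggers do the bookkeeping
a.execute("INSERT INTO orders VALUES (1, 9.99)")
a.execute("UPDATE orders SET amount = 19.99 WHERE id = 1")
a.execute("DELETE FROM orders WHERE id = 1")
print(a.execute("SELECT order_id, op FROM orders_staging").fetchall())
# [(1, 'I'), (1, 'U'), (1, 'D')]
```

Note the triggers touch only local objects, so they can't fail because of the remote server.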
If you require zero latency, then I would look into replication as a solution.|||Thanks Blindman:
I did some BOL checking and replication sounds good, but the table the data needs to go into has different field names and is in use in another application. The target DB happens to be on the same server.
What about have a trigger call a stored procedure and pass the fields in question along with the key to the SP?
The SP would make the connection to the other DB and do the UID.
You remain helpful,
Peter|||I still don't like it. What you are trying to avoid is having your trigger fail and system "A" come to a crashing halt when system "B" goes down or the connection fails. Your solution needs to be strong enough to fail safely, or you are just doubling or tripling the possible points of failure.
The advantage of the staging tables is that, if the connection to system "B" is lost then system "A" continues merrily on, loading data into the staging tables. When system "B" comes back online the data in the staging tables is automatically transferred.
No downtime for system "A".
No data-loss for system "B".
No late-night hours getting both systems back in synch for "DBA".
If you can stand up to a 1 minute latency, this is the way to go.
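A toy demonstration of that fail-safe behavior (SQLite again, with invented table names and a simulated outage flag): when the transfer to system "B" raises, the staged rows simply stay put, system "A" keeps writing, and the next scheduled run drains the backlog.

```python
import sqlite3

a = sqlite3.connect(":memory:")  # system "A"
b = sqlite3.connect(":memory:")  # system "B"
a.execute("CREATE TABLE staging (order_id INTEGER)")
b.execute("CREATE TABLE mirror (order_id INTEGER)")

def transfer(b_online: bool) -> None:
    """The scheduled job: ship staged rows to B, then clear the staging table."""
    if not b_online:
        raise ConnectionError("system B unreachable")
    rows = a.execute("SELECT order_id FROM staging").fetchall()
    b.executemany("INSERT INTO mirror VALUES (?)", rows)
    a.execute("DELETE FROM staging")

a.execute("INSERT INTO staging VALUES (1)")
try:
    transfer(b_online=False)                 # B is down: nothing is lost
except ConnectionError:
    pass
a.execute("INSERT INTO staging VALUES (2)")  # A keeps running meanwhile
transfer(b_online=True)                      # B back up: backlog drains
print(b.execute("SELECT order_id FROM mirror").fetchall())
# [(1,), (2,)]
```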
If you can't handle the latency, then replicate to duplicate tables in system "B" and put the triggers on THEM.|||Blindman:
I can stand a latency of even up to 5 minutes.
My concerns:
1. The table I'm feeding updates to has a different name, naming scheme and field types. I'll need to manipulate the data I'm "pushing" to Server B regardless of the approach I take.
2. Given #1, I don't know whether replication can handle that. Perhaps "Transactional Replication" isn't the proper type of replication for this.
3. I'm totally new to Replication and it doesn't look trivial via BOL.
Any additional help here is appreciated.
I'm even considering having the application update the server B tables directly. I don't like that for a number of reasons, including that if someone updates the back-end directly (unlikely - but possible), it wouldn't update server B.
Peter|||The staging table method should handle all your concerns. I would recommend transforming the data when it is shuttled from the staging tables to the destination database, rather than when it is inserted into the staging tables, as you want your triggers to be as streamlined as possible.