Friday, February 24, 2012

Browse Replication commands

Hi,
We are trying to set up two-way replication from STL to China. We
know that the network is very unreliable and the bandwidth is poor.
In order to decide whether this is feasible, I need to provide information
about how much data we will be replicating every hour, and how big a
transaction can be in KB.
In transactional replication, when we issue an update or delete
statement on a table that affects 100 rows, will the change be
replicated as 100 commands on the subscriber, or as one statement
that affects 100 rows? I would like to know whether we will be sending
100 statements over the network or just one.
When I look at the distribution agent history, it tells me the number of
commands it transferred, the rate in transactions/sec, latency, etc. Is
there a way to know the size of a transaction in KB?
To get a good estimate of the size of the data that would be
replicated, can I rely on the size of the T-log backups generated every day?
The database we plan to replicate is 17 GB, so an initial
snapshot would take a while. Is there an easy way to sync up the
database in case it goes out of sync (e.g., restore the database and
start replication)?
Thanks for your help.
GG
T-logs tend to be much larger than the bandwidth consumed by transactional
replication; however, this depends on many factors, such as what sort of
index maintenance you are doing (index operations are fully logged but
are not replicated).
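One way to sanity-check the T-log estimate discussed here is to look at recent log backup sizes recorded in msdb. This is a sketch, not an exact bandwidth figure, since the log also contains non-replicated activity such as index maintenance:

```sql
-- Sketch: daily log-backup volume per database over the last week.
-- msdb.dbo.backupset records every backup; type = 'L' means log backup.
SELECT  database_name,
        CONVERT(date, backup_finish_date)      AS backup_day,
        SUM(backup_size) / 1024 / 1024         AS log_backup_mb
FROM    msdb.dbo.backupset
WHERE   type = 'L'
  AND   backup_finish_date >= DATEADD(day, -7, GETDATE())
GROUP BY database_name, CONVERT(date, backup_finish_date)
ORDER BY database_name, backup_day;
```

Treat the result as an upper bound on replicated volume, for the reasons above.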
I would look at replicating the execution of stored procedures, as this will
transform the 100 singletons into a single command.
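Publishing procedure execution is done by adding the procedure as an article with `@type = N'proc exec'`. A minimal sketch; the publication and procedure names here are placeholders, not from the thread:

```sql
-- Sketch: publish the *execution* of a stored procedure so the
-- subscriber replays one EXEC instead of 100 singleton commands.
-- 'MyPub' and 'usp_ArchiveOrders' are hypothetical names.
EXEC sp_addarticle
    @publication   = N'MyPub',
    @article       = N'usp_ArchiveOrders',
    @source_object = N'usp_ArchiveOrders',
    @type          = N'proc exec';
```

Note this assumes the procedure produces the same result when executed at the subscriber, so it is best suited to deterministic set-based operations.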
And yes, 100 commands will go across the network. There is no way I know of
to get an exact idea of the bandwidth requirements.
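There is no built-in per-transaction KB counter, but one rough approximation is to measure the size of the commands stored in the distribution database. This is a sketch only: it measures the stored command bytes, not the exact bytes on the wire, and assumes the default distribution database name:

```sql
-- Rough sketch: approximate per-transaction command volume held in
-- the distribution database (stored size, not wire size).
USE distribution;
SELECT  t.xact_seqno,
        COUNT(*)                   AS command_count,
        SUM(DATALENGTH(c.command)) AS approx_bytes
FROM    dbo.MSrepl_transactions AS t
JOIN    dbo.MSrepl_commands     AS c
          ON c.publisher_database_id = t.publisher_database_id
         AND c.xact_seqno           = t.xact_seqno
GROUP BY t.xact_seqno
ORDER BY approx_bytes DESC;
```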
The best way to deploy the snapshot is from a backup. Change all the
identity columns, constraints, and triggers to NOT FOR REPLICATION on the
subscriber.
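Initializing from a backup is done when creating the subscription, assuming the publication allows it (`@allow_initialize_from_backup`). A sketch with placeholder server, database, and path names:

```sql
-- Sketch: create the subscription from a restored backup instead of
-- applying a 17 GB snapshot. All names and the backup path are
-- hypothetical placeholders.
EXEC sp_addsubscription
    @publication      = N'MyPub',
    @subscriber       = N'ChinaSubscriber',
    @destination_db   = N'MyDB',
    @sync_type        = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'D:\backups\MyDB.bak';
```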
The best way to resync is to set your history retention to one week or so, and
your minimum transaction retention to two days. That way, if your subscriber fails, you
can restore from backup (yesterday's) and all the missing transactions will
be automatically backfilled!
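The retention settings above map to distribution database properties (values are in hours). A minimal sketch, assuming the default distribution database name:

```sql
-- Sketch: keep commands in the distribution database long enough to
-- backfill a subscriber restored from yesterday's backup.
EXEC sp_changedistributiondb
    @database = N'distribution',
    @property = N'min_distretention',
    @value    = 48;    -- retain at least 2 days of transactions
EXEC sp_changedistributiondb
    @database = N'distribution',
    @property = N'history_retention',
    @value    = 168;   -- 1 week of agent history
```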
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
I am sorry! What does index optimizations mean? We do use indexes
extensively, as we are replicating an OLTP system. We will be
replicating the whole database, and we will set the NOT FOR REPLICATION
option on triggers and identity columns. Since we are replicating the
entire database, do you think T-log activity should reflect the size of
the replicated data?
Yes, we will set it up to use stored procs instead of T-SQL statements. How
would they transform 100 singletons into one command? I thought they were
efficient because they are cached and the parameters are passed in as
binary values.
When you said restore from the backup if the subscriber goes out of sync,
did you mean the subscriber database backup or the publisher's? If it
is the subscriber's backup, that would be good; that way we don't have
to mail a copy of our backup every time something goes out of sync.
This looks like it might be better than having to compress snapshot files
and send them to the other location.
One question, though: how does it know which transactions to apply to the
subscriber? My subscriber could have gone out of sync at, say,
10:00 AM, and if I then restore the subscriber from the previous
night's backup, how does it know from when to start applying the
transactions? Does it read the T-logs to find the LSN or something?
Thanks a lot for your reply.
GG
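For context on the backfill question: the distribution agent does not read the subscriber's T-logs. It tracks a delivery watermark (the last applied transaction sequence number) in a table inside the subscriber database, so a restored backup carries its own resume point. A sketch of inspecting it, assuming a standard transactional subscription:

```sql
-- Sketch (run at the subscriber): the distribution agent records the
-- last delivered transaction sequence number per subscription here,
-- and resumes from this watermark after a restore.
SELECT  publisher,
        publisher_db,
        publication,
        transaction_timestamp   -- last delivered xact_seqno
FROM    dbo.MSreplication_subscriptions;
```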
