Monday, February 13, 2012

Add a field

I need to add a rowguid to 300+ tables. Is there a way to script it?
Thanks
Jeff|||ALTER TABLE ADD ....
There is no built-in automated way that I know of, but you can use a cursor
to get the list of table names, build the ALTER statement dynamically, and
print it out with a GO in between. Just curious as to why you want to add
one to all 300 tables?
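A minimal sketch of that cursor approach, assuming the new column is a uniqueidentifier named rowguid with a NEWID() default (both assumptions - adjust to whatever the replication setup actually needs), and printing the script rather than executing it so it can be reviewed first:

```sql
-- Sketch: emit one ALTER TABLE per user table, separated by GO.
-- Column name/type are assumptions; review the PRINT output before running it.
DECLARE @TableName nvarchar(300)

DECLARE tbl_cursor CURSOR FOR
    SELECT QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'

OPEN tbl_cursor
FETCH NEXT FROM tbl_cursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'ALTER TABLE ' + @TableName
        + ' ADD rowguid uniqueidentifier ROWGUIDCOL NOT NULL DEFAULT NEWID()'
    PRINT 'GO'
    FETCH NEXT FROM tbl_cursor INTO @TableName
END
CLOSE tbl_cursor
DEALLOCATE tbl_cursor
```

Paste the printed output into Query Analyzer and run it in batches.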
--
Andrew J. Kelly SQL MVP
"Jeff Cichocki" <jeffc@.belgioioso.com> wrote in message
news:ui31cHiEGHA.648@.TK2MSFTNGP14.phx.gbl...
>I need to add a rowguid to 300+ tables. Is there a way to script it?
> Thanks
> Jeff
>|||Read this: http://www.databasejournal.com/features/mssql/article.php/1490661
Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457
--
"Jeff Cichocki" <jeffc@.belgioioso.com> wrote in message
news:ui31cHiEGHA.648@.TK2MSFTNGP14.phx.gbl...
>I need to add a rowguid to 300+ tables. Is there a way to script it?
> Thanks
> Jeff
>|||loop thru information_schema.tables
select @.TableName = table_name from information_schema.tables
use dynamic sql (a table name can't be a variable, so build the string and EXEC it)
alter table <your table> add rowguid uniqueidentifier ROWGUIDCOL default newid()
etc etc
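Spelled out, that loop might look like the following. This is a hedged sketch: the column name rowguid and its uniqueidentifier ROWGUIDCOL type are assumptions (note that for replication the column cannot be an int identity), it assumes dbo-owned tables, and the ALTER has to go through EXEC since T-SQL won't accept a variable where a table name goes:

```sql
-- Sketch: walk INFORMATION_SCHEMA.TABLES without a cursor and run each
-- generated ALTER via dynamic SQL. Column name/type are assumptions.
DECLARE @TableName sysname, @sql nvarchar(4000)

SELECT @TableName = MIN(TABLE_NAME)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'

WHILE @TableName IS NOT NULL
BEGIN
    SET @sql = N'ALTER TABLE ' + QUOTENAME(@TableName)
             + N' ADD rowguid uniqueidentifier ROWGUIDCOL NOT NULL DEFAULT NEWID()'
    EXEC (@sql)

    -- advance to the next table name alphabetically
    SELECT @TableName = MIN(TABLE_NAME)
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_NAME > @TableName
END
```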
http://sqlservercode.blogspot.com/|||"Jeff Cichocki" <jeffc@.belgioioso.com> wrote in message
news:ui31cHiEGHA.648@.TK2MSFTNGP14.phx.gbl...
>I need to add a rowguid to 300+ tables. Is there a way to script it?
> Thanks
> Jeff
To add to Andrew's answer, there is an undocumented stored procedure called
sp_MSforeachtable that you could use.
Google it for more information.
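For the record, the call can be as short as this; sp_MSforeachtable substitutes each table's name for the ? placeholder. It is undocumented, so behavior can change between versions, and the column name and type here are assumptions:

```sql
-- Undocumented procedure: test on a copy of the database first.
-- '?' is replaced with each user table's two-part name.
EXEC sp_MSforeachtable
    'ALTER TABLE ? ADD rowguid uniqueidentifier ROWGUIDCOL NOT NULL DEFAULT NEWID()'
```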
Here's one link:
http://www.databasejournal.com/features/mssql/article.php/1490661|||"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:%23s5n4MiEGHA.3896@.TK2MSFTNGP10.phx.gbl...
> Read this:
> http://www.databasejournal.com/features/mssql/article.php/1490661
Gee Adam, the same link that I gave in my answer.
It's the first one that Google returned, right. ;-)|||I am a pretty new user, so maybe you guys will know a better way than what I
am thinking. I need to replicate two databases. The databases are exactly
the same on the two servers. When I run the wizard to set it up, it asks if
it should copy the schema and data or if the other database has a copy.
Since I already have the database on both servers, the wizard does not
update the schema of either server or add the rowguid to all of the tables.
Since this is the case, I thought I needed to add the field to all of the
tables. There are 300+ tables on each server to set up. I am hoping that
there is a way to add a rowguid to them all through a script.
I don't have a lot of experience writing SQL. I understand enough to be
dangerous, but lack the experience to be good. Any hand holding you guys
can give would be greatly appreciated.
Thanks
Jeff
"SQL" <denis.gobo@.gmail.com> wrote in message
news:1136484776.370433.268780@.f14g2000cwb.googlegroups.com...
> loop thru information_schema.tables
> select @.TableName = table_name from information_schema.tables
> use dynamic sql
> alter table @.TableName add rowguid int identity
> etc etc
> http://sqlservercode.blogspot.com/
>|||Yes :)
But I usually link to this one, which I think is better:
http://www.databasejournal.com/features/mssql/article.php/3441031
Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457
--
"Raymond D'Anjou" <rdanjou@.canatradeNOSPAM.com> wrote in message
news:OEZYiUiEGHA.1088@.TK2MSFTNGP10.phx.gbl...
> "Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
> news:%23s5n4MiEGHA.3896@.TK2MSFTNGP10.phx.gbl...
>> Read this:
>> http://www.databasejournal.com/features/mssql/article.php/1490661
> Gee Adam, the same link that I gave in my answer.
> It's the first one that Google returned, right. ;-)
Just my $0.02, but I would prefer catching the output and executing it
in batches rather than doing this operation on the tables
automatically. Sure it works with the cursor and with the undocumented
procedure, but for me it would be preferable to do this:
select 'ALTER TABLE ' + TABLE_SCHEMA + '.' + TABLE_NAME + ' ADD YOURCOLUMNNAME DATATYPE'
from information_schema.tables
WHERE TABLE_TYPE = 'BASE TABLE'
AND ObjectProperty(OBJECT_ID(TABLE_SCHEMA + '.' + TABLE_NAME), N'IsMSShipped') = 0
HTH, Jens Suessmeyer.|||"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:%23SMfiYiEGHA.140@.TK2MSFTNGP12.phx.gbl...
> Yes :)
> But I usually link to this one, which I think is better:
> http://www.databasejournal.com/features/mssql/article.php/3441031
Yup, much better... but harder on the eyes.
Had to change the Text Size in my browser to at least "Larger" to read it.
I'm getting older, these eyes aren't as good as they used to be.
Mind you, a lot of things aren't as good as they used to be. :-(|||Jeff,
the simplest method is to let the replication configuration add the rowguid
itself on the publisher and subsequently overwrite the tables on the
subscriber during initialization. The problem with trying to populate it
yourself on both the publisher and subscriber is that you'll have different
guid values. You could do an update on each table to copy over the
publisher's guid values, but this seems like unnecessary work. If you have
very low bandwidth, then create the snapshot, zip it up, FTP it over, unzip
then do a nosync initialization using an alternative snapshot location.
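For reference, the nosync initialization Paul mentions boils down to adding the subscription with no snapshot applied. This is a hedged sketch with made-up publication, server, and database names; it assumes schema and data already match on both sides, and that the alternative snapshot location was configured on the publication:

```sql
-- Hypothetical names throughout. @sync_type = 'none' means no snapshot is
-- generated or applied; the subscriber is assumed to already have the data.
-- The alternative snapshot folder itself is set on the publication, e.g.
-- sp_addpublication ... @alt_snapshot_folder = N'\\SECONDARY\snapshots'
exec sp_addsubscription
    @publication    = N'MyPublication',
    @subscriber     = N'SECONDARY',
    @destination_db = N'MyDatabase',
    @sync_type      = N'none'
```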
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)|||Which server is the best server to have as the distributor? The primary
server does all of the work. The secondary server is there only in the
event of failover. After the primary server comes back online, everything
automatically switches back to him.
If I make the secondary server the distributor, can I start him out with a
blank database and then let the replication pull it all back to itself? If
not, any suggestions as to which way to go?
Thanks
Jeff
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:%23akIfQjEGHA.716@.TK2MSFTNGP09.phx.gbl...
> Jeff,
> the simplest method is to let the replication configuration add the
> rowguid itself on the publisher and subsequently overwrite the tables on
> the subscriber during initialization. The problem with trying to populate
> it yourself on both the publisher and subscriber is that you'll have
> different guid values. You could do an update on each table to copy over
> the publisher's guid values, but this seems like unnecessary work. If you
> have very low bandwidth, then create the snapshot, zip it up, FTP it over,
> unzip then do a nosync initialization using an alternative snapshot
> location.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>|||Jeff,
in the case of merge, there isn't a huge amount of work done on the
distributor, while there's a lot more done for transactional (log reader,
distribution agent and cleanup agent hit the MSrepl_commands table). So,
usually the publisher is also designated as the distributor to save on
expense, but for very large throughputs in transactional replication, a
separate server is sometimes needed.
Merge replication is bidirectional by default, and can handle offline access
(autonomy). So, if you have a publisher/distributor setup (both on the same
box) and the publisher goes down, you can use the subscriber until the
publisher is fixed and then synchronize and this'll upload all the
subscriber changes to the publisher's database. This can also be achieved
using queued updating subscribers, where the publisher -> subscriber
transaction transmission rate is significantly faster. There are
restrictions in the case of BLOB datatypes though - you can't update or
insert them on the subscriber, but I believe this is not the case with the
new varchar(max) etc. columns in SQL Server 2005.
So, to summarise, I'd use a publisher/distributor setup, and
without BLOBs: queued updating transactional replication
with BLOBs: merge, or try out queued updating transactional replication using
SQL Server 2005.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)|||Are you sure you want to use replication for failover/DR planning? Only
what you designate gets replicated, not everything.
Have you looked into log shipping...that doesn't require modifying the
database...
--
Kevin Hill
3NF Consulting
www.3nf-inc.com/NewsGroups.htm
"Jeff Cichocki" <jeffc@.belgioioso.com> wrote in message
news:eMwCvgjEGHA.2040@.TK2MSFTNGP14.phx.gbl...
> Which server is the best server to have as the distributor? The primary
> server does all of the work. The secondary server is there only in the
> event of failover. After the primary server comes back online, everything
> automatically switches back to him.
> If I make the secondary server the distributor, can I start him out with a
> blank database and then let the replication pull it all back to itself?
> If not, any suggestions as to which way to go?
> Thanks
> Jeff
>
> "Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
> news:%23akIfQjEGHA.716@.TK2MSFTNGP09.phx.gbl...
>> Jeff,
>> the simplest method is to let the replication configuration add the
>> rowguid itself on the publisher and subsequently overwrite the tables on
>> the subscriber during initialization. The problem with trying to populate
>> it yourself on both the publisher and subscriber is that you'll have
>> different guid values. You could do an update on each table to copy over
>> the publisher's guid values, but this seems like unnecessary work. If you
>> have very low bandwidth, then create the snapshot, zip it up, FTP it
>> over, unzip then do a nosync initialization using an alternative snapshot
>> location.
>> Cheers,
>> Paul Ibison SQL Server MVP, www.replicationanswers.com
>> (recommended sql server 2000 replication book:
>> http://www.nwsu.com/0974973602p.html)
>>
>|||Mike,
part of jeff's requirements is for the standby server to be useable and to
be able to pass control back to the primary server after being repaired, so
I have suggested queued updating subscribers. This will also prevent having
to sort out the lack of identity columns on the subscriber.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)|||Since I am not a replication (or SQL Server) expert by any means, I am still
a little confused as to how to do these different setups.
My ultimate goal is to have both servers containing the exact same data at
all times. The application of these servers is in a cheese factory. The
Federal government has very particular requirements for electronic data
collection. The app that runs on the servers has the ability to collect and
store data on both servers at the same time. The problem with the app
though, is that if one of the servers goes down, then they are out of sync.
The app will not catch the other server up on what was missed. I have both
servers working perfectly on every other aspect of having a redundant
failover server. Since the servers won't catch each other up, I think my
best choice is to only have one server collect that data and have SQL Server
do the data replication. My thought is to run replication in real time so
that neither server misses a beat.
Is there a way to accomplish this without using replication? Would log
shipping happen in real time or does it have to run on a schedule? Of
course, in the event that I need to take a server down for maintenance, it
needs to be told what happened while it was gone.
Thanks
Jeff
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:%23WJQ9bqEGHA.740@.TK2MSFTNGP12.phx.gbl...
> Mike,
> part of jeff's requirements is for the standby server to be useable and to
> be able to pass control back to the primary server after being repaired,
> so I have suggested queued updating subscribers. This will also prevent
> having to sort out the lack of identity columns on the subscriber.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>|||Jeff,
log-shipping and transactional replication will both have latency. On my
systems this can often be minimised in the case of transactional replication
to < 10 secs, but the data will usually be out of sync. You might be
interested in investigating synchronous database mirroring, or clustering to
ensure high-availability.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)|||Log shipping can be as little as 3 - 5 minutes of latency. Clustering is
only going to protect you from hardware failure.
Any replication method which sends data in both directions will require a
schema change; there is absolutely no way around this.
Database Mirroring in 2005 is the only technology which exists which will
allow you to maintain 2 databases synchronized with each other, in real
time, without requiring a schema change. BUT, the mirror database is
inaccessible and you will incur a performance penalty in your applications
when you do this.
--
Mike
Mentor
Solid Quality Learning
http://www.solidqualitylearning.com
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:eZw9LhsEGHA.3384@.TK2MSFTNGP12.phx.gbl...
> Jeff,
> log-shipping and transactional replication will both have latency. On my
> systems this can often be minimised in the case of transactional
> replication to < 10 secs, but the data will usually be out of sync. You
> might be interested in investigating synchronous database mirroring, or
> clustering to ensure high-availability.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
>|||Bi-directional transactional replication requires no schema changes.
--
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Michael Hotek" <mike@.solidqualitylearning.com> wrote in message
news:uJnQlhtEGHA.3200@.tk2msftngp13.phx.gbl...
> Log shipping can be as little as 3 - 5 minutes of latency. Clustering is
> only going to protect you from hardware failure.
> Any replication method which sends data in both directions will require a
> schema change, there is absolutely no way around this.
> Database Mirroring in 2005 is the only technology which exists which will
> allow you to maintain 2 databases synchronized with each other, in real
> time, without requiring a schema change. BUT, the mirror database is
> inaccessible and you will incur a performance penalty in your applications
> when you do this.
> --
> Mike
> Mentor
> Solid Quality Learning
> http://www.solidqualitylearning.com
>
> "Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
> news:eZw9LhsEGHA.3384@.TK2MSFTNGP12.phx.gbl...
>> Jeff,
>> log-shipping and transactional replication will both have latency. On my
>> systems this can often be minimised in the case of transactional
>> replication to < 10 secs, but the data will usually be out of sync. You
>> might be interested in investigating synchronous database mirroring, or
>> clustering to ensure high-availability.
>> Cheers,
>> Paul Ibison SQL Server MVP, www.replicationanswers.com
>> (recommended sql server 2000 replication book:
>> http://www.nwsu.com/0974973602p.html)
>>
>>
>|||Hilary,
I would suggest that you exit these newsgroups right now. "No schema change"
is about as false a statement as you can ever possibly get. It requires
that you enable the @.loopback_detection parameter and the loopback detection
algorithm uses the additional column to determine the originator of the
change. The MVP program has sunk really low when it has members who
knowingly post blatantly false answers.
The create table statement that I used to initially generate the table is:
create table two_way_test1
(pkcol INTEGER PRIMARY KEY NOT NULL,
intcol INTEGER IDENTITY(1,1) NOT FOR REPLICATION,
charcol CHAR(100))
The table structure after the snapshot completed was as follows:
USE [test1]
GO
/****** Object: Table [dbo].[two_way_test1] Script Date: 01/08/2006 20:43:26
******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[two_way_test1](
[pkcol] [int] NOT NULL,
[intcol] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
[charcol] [char](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[msrepl_tran_version] [uniqueidentifier] NOT NULL CONSTRAINT
[MSrepl_tran_version_default_F6FB929A_624E_4666_919D_08145657CCE8_2073058421]
DEFAULT (newid()),
PRIMARY KEY CLUSTERED
([pkcol] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
USE [test1]
GO
ALTER TABLE [dbo].[two_way_test1] WITH NOCHECK ADD CONSTRAINT [checkprimcol]
CHECK NOT FOR REPLICATION (([pkcol]>=(1) AND [pkcol]<=(1000)))
GO
ALTER TABLE [dbo].[two_way_test1] CHECK CONSTRAINT [checkprimcol]
Maybe you can explain how adding a column to a table doesn't qualify as
modifying the schema?
http://support.microsoft.com/default.aspx?scid=820675
Here is the code I used to implement bi-directional, transactional
replication. (Straight out of the KB article.)
use master
go
create database test1
go
create database test2
go
use test1
go
create table two_way_test1
(
pkcol INTEGER PRIMARY KEY NOT NULL,
intcol INTEGER IDENTITY(1,1) NOT FOR REPLICATION,
charcol CHAR(100)
)
use test2
go
create table two_way_test2
(
pkcol INTEGER PRIMARY KEY NOT NULL,
intcol INTEGER IDENTITY(1000000000,1) NOT FOR REPLICATION,
charcol CHAR(100)
)
go
-- Constraint to enforce a range of values between 1 and 1000 in database
test1
use test1
go
alter table
two_way_test1
with nocheck
add constraint
checkprimcol check NOT FOR REPLICATION
(
pkcol BETWEEN 1 and 1000
)
go
use test2
go
-- Constraint to enforce a range of values between 1001 and 2000 in the
database test2
alter table
two_way_test2
with nocheck
add constraint
checkprimcol check NOT FOR REPLICATION
(
pkcol BETWEEN 1001 and 2000
)
go
use test1
go
-- INSERT Stored Procedure
create procedure sp_ins_two_way_test1
@.pkcol int,
@.intcol int,
@.charcol char(100),
@.timestampcol timestamp,
@.rowidcol uniqueidentifier
as
insert into two_way_test1
(
pkcol,
intcol,
charcol
)
values
(
@.pkcol,
@.intcol,
@.charcol
)
go
--UPDATE Stored Procedure
create procedure sp_upd_two_way_test1
@.pkcol int,
@.intcol int,
@.charcol char(100),
@.timestampcol timestamp,
@.rowidcol uniqueidentifier,
@.old_pkcol int
as
declare @.x int
declare @.y int
declare @.z char(100)
select
@.x=pkcol,
@.y=intcol,
@.z=charcol
from
two_way_test1
where
pkcol = @.pkcol
delete
two_way_test1
where
pkcol=@.pkcol
insert into two_way_test1
(
pkcol,
intcol,
charcol
)
values
(
case isnull(@.pkcol,0) when 0 then @.x else @.pkcol end,
case isnull(@.intcol,0) when 0 then @.y else @.intcol end,
case isnull(@.charcol,'N') when 'N' then @.z else @.charcol end
)
go
-- DELETE Stored Procedure
create procedure sp_del_two_way_test1
@.old_pkcol int
as
delete
two_way_test1
where
pkcol = @.old_pkcol
go
use test2
go
-- INSERT Stored Procedure
create procedure sp_ins_two_way_test2
@.pkcol int,
@.intcol int,
@.charcol char(100),
@.timestampcol timestamp,
@.rowidcol uniqueidentifier
as
insert into two_way_test2
(
pkcol,
intcol,
charcol
)
values
(
@.pkcol,
@.intcol,
@.charcol
)
go
--UPDATE Stored Procedure
create procedure sp_upd_two_way_test2
@.pkcol int,
@.intcol int,
@.charcol char(100),
@.timestampcol timestamp,
@.rowidcol uniqueidentifier,
@.old_pkcol int
as
declare @.x int
declare @.y int
declare @.z char(100)
select
@.x=pkcol,
@.y=intcol,
@.z=charcol
from
two_way_test2
where
pkcol = @.pkcol
delete
two_way_test2
where
pkcol=@.pkcol
insert into two_way_test2
(
pkcol,
intcol,
charcol
)
values
(
case isnull(@.pkcol,0) when 0 then @.x else @.pkcol end,
case isnull(@.intcol,0) when 0 then @.y else @.intcol end,
case isnull(@.charcol,'N') when 'N' then @.z else @.charcol end
)
go
-- DELETE Stored Procedure
create procedure sp_del_two_way_test2
@.old_pkcol int
as
delete
two_way_test2
where
pkcol = @.old_pkcol
go
use master
go
exec sp_replicationdboption N'test1', N'publish', true
go
exec sp_replicationdboption N'test2', N'publish', true
go
--In the database test1.
use test1
go
-- Adding the transactional publication.
exec sp_addpublication
@.publication = N'two_way_pub_test1',
@.restricted = N'false',
@.sync_method = N'native',
@.repl_freq = N'continuous',
@.description = N'Transactional publication of database test1.',
@.status = N'active',
@.allow_push = N'true',
@.allow_pull = N'true',
@.allow_anonymous = N'false',
@.enabled_for_internet = N'false',
@.independent_agent = N'false',
@.immediate_sync = N'false',
@.allow_sync_tran = N'true',
@.autogen_sync_procs = N'true',
@.retention = 72
go
exec sp_addpublication_snapshot
@.publication = N'two_way_pub_test1',
@.frequency_type = 4,
@.frequency_interval = 1,
@.frequency_relative_interval = 0,
@.frequency_recurrence_factor = 1,
@.frequency_subday = 1,
@.frequency_subday_interval = 0,
@.active_start_date = 0,
@.active_end_date = 0,
@.active_start_time_of_day = 233000,
@.active_end_time_of_day = 0
go
-- Adding the transactional articles.
exec sp_addarticle
@.publication = N'two_way_pub_test1',
@.article = N'two_way_test1',
@.source_owner = N'dbo',
@.source_object = N'two_way_test1',
@.destination_table = N'two_way_test1',
@.type = N'logbased',
@.creation_script = null,
@.description = null,
@.pre_creation_cmd = N'drop',
@.schema_option = 0x00000000000000F1,
@.status = 16,
@.vertical_partition = N'false',
@.ins_cmd = N'CALL sp_ins_two_way_test2',
@.del_cmd = N'CALL sp_del_two_way_test2',
@.upd_cmd = N'CALL sp_upd_two_way_test2',
@.filter = null,
@.sync_object = null,
@.identityrangemanagementoption = 'manual'
go
-- In the database test2
use test2
go
-- Adding the transactional publication.
exec sp_addpublication
@.publication = N'two_way_pub_test2',
@.restricted = N'false',
@.sync_method = N'native',
@.repl_freq = N'continuous',
@.description = N'Transactional publication of database test2',
@.status = N'active',
@.allow_push = N'true',
@.allow_pull = N'true',
@.allow_anonymous = N'false',
@.enabled_for_internet = N'false',
@.independent_agent = N'false',
@.immediate_sync = N'false',
@.allow_sync_tran = N'true',
@.autogen_sync_procs = N'true',
@.retention = 72
go
exec sp_addpublication_snapshot
@.publication = N'two_way_pub_test2',
@.frequency_type = 4,
@.frequency_interval = 1,
@.frequency_relative_interval = 0,
@.frequency_recurrence_factor = 1,
@.frequency_subday = 1,
@.frequency_subday_interval = 0,
@.active_start_date = 0,
@.active_end_date = 0,
@.active_start_time_of_day = 233000,
@.active_end_time_of_day = 0
go
-- Adding the transactional articles.
exec sp_addarticle
@.publication = N'two_way_pub_test2',
@.article = N'two_way_test2',
@.source_owner = N'dbo',
@.source_object = N'two_way_test2',
@.destination_table = N'two_way_test2',
@.type = N'logbased',
@.creation_script = null,
@.description = null,
@.pre_creation_cmd = N'drop',
@.schema_option = 0x00000000000000F1,
@.status = 16,
@.vertical_partition = N'false',
@.ins_cmd = N'CALL sp_ins_two_way_test1',
@.del_cmd = N'CALL sp_del_two_way_test1',
@.upd_cmd = N'CALL sp_upd_two_way_test1',
@.filter = null,
@.sync_object = null,
@.identityrangemanagementoption = 'manual'
go
use test1
go
exec sp_addsubscription
@.publication = N'two_way_pub_test1',
@.article = N'all',
@.subscriber = 'WAKKO',
@.destination_db = N'test2',
@.sync_type = N'none',
@.status = N'active',
@.update_mode = N'sync tran',
@.loopback_detection = 'true'
go
-- Adding the transactional subscription in test2.
use test2
go
exec sp_addsubscription
@.publication = N'two_way_pub_test2',
@.article = N'all',
@.subscriber = 'WAKKO',
@.destination_db = N'test1',
@.sync_type = N'none',
@.status = N'active',
@.update_mode = N'sync tran',
@.loopback_detection = 'true'
go
Mike
Mentor
Solid Quality Learning
http://www.solidqualitylearning.com
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:%238PEHtKFGHA.516@.TK2MSFTNGP15.phx.gbl...
> Bi-directional transactional replication requires no schema changes.
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
> "Michael Hotek" <mike@.solidqualitylearning.com> wrote in message
> news:uJnQlhtEGHA.3200@.tk2msftngp13.phx.gbl...
>> Log shipping can be as little as 3 - 5 minutes of latency. Clustering is
>> only going to protect you from hardware failure.
>> Any replication method which sends data in both directions will require a
>> schema change, there is absolutely no way around this.
>> Database Mirroring in 2005 is the only technology which exists which will
>> allow you to maintain 2 databases synchronized with each other, in real
>> time, without requiring a schema change. BUT, the mirror database is
>> inaccessible and you will incur a performance penalty in your
>> applications when you do this.
>> --
>> Mike
>> Mentor
>> Solid Quality Learning
>> http://www.solidqualitylearning.com
>>
>> "Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
>> news:eZw9LhsEGHA.3384@.TK2MSFTNGP12.phx.gbl...
>> Jeff,
>> log-shipping and transactional replication will both have latency. On my
>> systems this can often be minimised in the case of transactional
>> replication to < 10 secs, but the data will usually be out of sync. You
>> might be interested in investigating synchronous database mirroring, or
>> clustering to ensure high-availability.
>> Cheers,
>> Paul Ibison SQL Server MVP, www.replicationanswers.com
>> (recommended sql server 2000 replication book:
>> http://www.nwsu.com/0974973602p.html)
>>
>>
>>
>|||Your repro is for an immediate updating publication, which does require
schema modifications. Right-click on the publication and select Properties:
on SQL 2000 the Updateable tab will clearly say Immediate; on SQL 2005 the
Subscription Options tab will say Allow Immediate Updating
Subscriptions - true. The subscription is not an immediate updating
subscription, however. This KB article is misleading and I will file a doc
bug on it.
Notice that before the snapshot runs your table has
create table two_way_test1
(pkcol INTEGER PRIMARY KEY NOT NULL,
intcol INTEGER IDENTITY(1,1) NOT FOR REPLICATION,
charcol CHAR(100))
a total of 3 columns. After it has run it has
CREATE TABLE [dbo].[two_way_test1](
[pkcol] [int] NOT NULL,
[intcol] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
[charcol] [char](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[msrepl_tran_version] [uniqueidentifier] NOT NULL CONSTRAINT
[MSrepl_tran_version_default_F6FB929A_624E_4666_919D_08145657CCE8_2073058421]
DEFAULT (newid()),
PRIMARY KEY CLUSTERED ([pkcol] ASC) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
4 columns.
Please check the tables after the several inserts, updates and deletes have
flowed. You will notice that there are no additional columns.
I am sorry you feel this way about the MVP program. I always strive for
accuracy, and normally repro everything before I make a post.
The below is tested on SQL 2000 and 2005.
Note that for SQL 2005 the proc names will have to be of the form
'sp_MSins_dbotbl_IssueTracker_I', instead of the truncated versions that
they are for SQL 2000.
create database bidi1
GO
create database bidi2
GO
USE BIDI1
go
CREATE TABLE [dbo].[tbl_IssueTracker_IssueCategories] (
[category_id] [int] IDENTITY (10000, 1) NOT FOR REPLICATION NOT NULL ,
[name] [varchar] (100) NOT NULL ,
[parent_category_id] [int] NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tbl_IssueTracker_IssueCategories] WITH NOCHECK ADD
CONSTRAINT [PK_tbl_IssueTracker_IssueCategories] PRIMARY KEY CLUSTERED
([category_id]) ON [PRIMARY]
GO
sp_replicationdboption 'bidi1','publish','true'
GO
sp_addpublication 'bidi1',@.status='active'
GO
sp_addpublication_snapshot 'bidi1'
GO
sp_addarticle 'bidi1','tbl_IssueTracker_IssueCategories','tbl_IssueTracker_IssueCategories'
GO
sp_addsubscription 'bidi1', @.SUBSCRIBER=@.@.servername, @.destination_db='bidi2',
@.loopback_detection = 'true', @.sync_type= none
GO
if exists (select * from sysobjects where type = 'P' and name =
'sp_MSins_tbl_IssueTracker_Issu')
drop proc [sp_MSins_tbl_IssueTracker_Issu]
go
create procedure [sp_MSins_tbl_IssueTracker_Issu] @.c1 int, @.c2 varchar(100), @.c3 int
AS
BEGIN
insert into [tbl_IssueTracker_IssueCategories]
([category_id], [name], [parent_category_id])
values (@.c1, @.c2, @.c3)
END
go
if exists (select * from sysobjects where type = 'P' and name =
'sp_MSupd_tbl_IssueTracker_Issu')
drop proc [sp_MSupd_tbl_IssueTracker_Issu]
go
create procedure [sp_MSupd_tbl_IssueTracker_Issu] @.c1 int, @.c2 varchar(100),
@.c3 int, @.pkc1 int
as
if @.c1 = @.pkc1
begin
update [tbl_IssueTracker_IssueCategories] set
[name] = @.c2,
[parent_category_id] = @.c3
where [category_id] = @.pkc1
end
else
begin
update [tbl_IssueTracker_IssueCategories] set
--[category_id] = @.c1,
[name] = @.c2,
[parent_category_id] = @.c3
where [category_id] = @.pkc1
end
go
if exists (select * from sysobjects where type = 'P' and name =
'sp_MSdel_tbl_IssueTracker_Issu')
drop proc [sp_MSdel_tbl_IssueTracker_Issu]
go
create procedure [sp_MSdel_tbl_IssueTracker_Issu] @.pkc1 int
as
delete [tbl_IssueTracker_IssueCategories]
where [category_id] = @.pkc1
go
use bidi2
GO
CREATE TABLE [dbo].[tbl_IssueTracker_IssueCategories] (
[category_id] [int] IDENTITY (20000, 1) NOT FOR REPLICATION NOT NULL ,
[name] [varchar] (100) NOT NULL ,
[parent_category_id] [int] NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tbl_IssueTracker_IssueCategories] WITH NOCHECK ADD
CONSTRAINT [PK_tbl_IssueTracker_IssueCategories] PRIMARY KEY CLUSTERED
([category_id]) ON [PRIMARY]
GO
sp_replicationdboption 'bidi2','publish','true'
GO
sp_addpublication 'bidi1',@.status='active'
GO
sp_addpublication_snapshot 'bidi1'
GO
sp_addarticle 'bidi1','tbl_IssueTracker_IssueCategories','tbl_IssueTracker_IssueCategories'
GO
sp_addsubscription 'bidi1', @.SUBSCRIBER=@.@.servername, @.destination_db='bidi1',
@.loopback_detection = 'true', @.sync_type= none
GO
if exists (select * from sysobjects where type = 'P' and name =
'sp_MSins_tbl_IssueTracker_Issu')
drop proc [sp_MSins_tbl_IssueTracker_Issu]
go
create procedure [sp_MSins_tbl_IssueTracker_Issu] @.c1 int, @.c2 varchar(100), @.c3 int
AS
BEGIN
insert into [tbl_IssueTracker_IssueCategories]
([category_id], [name], [parent_category_id])
values (@.c1, @.c2, @.c3)
END
go
if exists (select * from sysobjects where type = 'P' and name =
'sp_MSupd_tbl_IssueTracker_Issu')
drop proc [sp_MSupd_tbl_IssueTracker_Issu]
go
create procedure [sp_MSupd_tbl_IssueTracker_Issu] @.c1 int, @.c2 varchar(100),
@.c3 int, @.pkc1 int
as
if @.c1 = @.pkc1
begin
update [tbl_IssueTracker_IssueCategories] set
[name] = @.c2,
[parent_category_id] = @.c3
where [category_id] = @.pkc1
end
else
begin
update [tbl_IssueTracker_IssueCategories] set
--[category_id] = @.c1,
[name] = @.c2,
[parent_category_id] = @.c3
where [category_id] = @.pkc1
end
go
if exists (select * from sysobjects where type = 'P' and name =
'sp_MSdel_tbl_IssueTracker_Issu')
drop proc [sp_MSdel_tbl_IssueTracker_Issu]
go
create procedure [sp_MSdel_tbl_IssueTracker_Issu] @.pkc1 int
as
delete [tbl_IssueTracker_IssueCategories]
where [category_id] = @.pkc1
go
use bidi1
GO
insert into tbl_IssueTracker_IssueCategories (name, parent_category_id) values ('test',2)
insert into bidi2.dbo.tbl_IssueTracker_IssueCategories (name, parent_category_id) values ('test',2)
insert into tbl_IssueTracker_IssueCategories (name, parent_category_id) values ('test',2)
insert into bidi2.dbo.tbl_IssueTracker_IssueCategories (name, parent_category_id) values ('test',2)
insert into tbl_IssueTracker_IssueCategories (name, parent_category_id) values ('test',2)
insert into bidi2.dbo.tbl_IssueTracker_IssueCategories (name, parent_category_id) values ('test',2)
GO
--note here that there are 3 columns still
select * from tbl_IssueTracker_IssueCategories
select * from bidi2.dbo.tbl_IssueTracker_IssueCategories
GO
update tbl_IssueTracker_IssueCategories set name='publisher' where category_id>20000
update bidi2.dbo.tbl_IssueTracker_IssueCategories set name='subscriber' where category_id<20000
GO
select * from tbl_IssueTracker_IssueCategories
select * from bidi2.dbo.tbl_IssueTracker_IssueCategories
GO
delete from tbl_IssueTracker_IssueCategories where name='subscriber'
delete from bidi2.dbo.tbl_IssueTracker_IssueCategories where name='publisher'
GO
select * from tbl_IssueTracker_IssueCategories
select * from bidi2.dbo.tbl_IssueTracker_IssueCategories
GO
--
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Michael Hotek" <mike@.solidqualitylearning.com> wrote in message
news:uTc%23afMFGHA.1736@.TK2MSFTNGP14.phx.gbl...
> [quoted text snipped]
procedure sp_del_two_way_test1>> @.old_pkcol int>> as>> delete>> two_way_test1>> where>> pkcol = @.old_pkcol>> go>> use test2>> go>> -- INSERT Stored Procedure>> create procedure sp_ins_two_way_test2>> @.pkcol int,>> @.intcol int,>> @.charcol char(100),>> @.timestampcol timestamp,>> @.rowidcol uniqueidentifier>> as>> insert into two_way_test2>> (>> pkcol,>> intcol,>> charcol>> )>> values>> (>> @.pkcol,>> @.intcol,>> @.charcol>> )>> go>> --UPDATE Stored Procedure>> create procedure sp_upd_two_way_test2>> @.pkcol int,>> @.intcol int,>> @.charcol char(100),>> @.timestampcol timestamp,>> @.rowidcol uniqueidentifier,>> @.old_pkcol int>> as>> declare @.x int>> declare @.y int>> declare @.z char(100)>> select>> @.x=pkcol,>> @.y=intcol,>> @.z=charcol>> from>> two_way_test2>> where>> pkcol = @.pkcol>> delete>> two_way_test2>> where>> pkcol=@.pkcol>> insert into two_way_test2>> (>> pkcol,>> intcol,>> charcol>> )>> values>> (>> case isnull(@.pkcol,0) when 0 then @.x else @.pkcol end,>> case isnull(@.intcol,0) when 0 then @.y else @.intcol end,>> case isnull(@.charcol,'N') when 'N' then @.z else @.charcol end>> )>> go>> -- DELETE Stored Procedure>> create procedure sp_del_two_way_test2>> @.old_pkcol int>> as>> delete>> two_way_test2>> where>> pkcol = @.old_pkcol>> go>> use master>> go>> exec sp_replicationdboption N'test1', N'publish', true>> go>> exec sp_replicationdboption N'test2', N'publish', true>> go>> --In the database test1.>> use test1>> go>> -- Adding the transactional publication.>> exec sp_addpublication>> @.publication = N'two_way_pub_test1',>> @.restricted = N'false',>> @.sync_method = N'native',>> @.repl_freq = N'continuous',>> @.description = N'Transactional publication of database test1.',>> @.status = N'active',>> @.allow_push = N'true',>> @.allow_pull = N'true',>> @.allow_anonymous = N'false',>> @.enabled_for_internet = N'false',>> @.independent_agent = N'false',>> @.immediate_sync = N'false',>> @.allow_sync_tran = N'true',>> @.autogen_sync_procs = 
N'true',>> @.retention = 72>> go>> exec sp_addpublication_snapshot>> @.publication = N'two_way_pub_test1',>> @.frequency_type = 4,>> @.frequency_interval = 1,>> @.frequency_relative_interval = 0,>> @.frequency_recurrence_factor = 1,>> @.frequency_subday = 1,>> @.frequency_subday_interval = 0,>> @.active_start_date = 0,>> @.active_end_date = 0,>> @.active_start_time_of_day = 233000,>> @.active_end_time_of_day = 0>> go>> -- Adding the transactional articles.>> exec sp_addarticle>> @.publication = N'two_way_pub_test1',>> @.article = N'two_way_test1',>> @.source_owner = N'dbo',>> @.source_object = N'two_way_test1',>> @.destination_table = N'two_way_test1',>> @.type = N'logbased',>> @.creation_script = null,>> @.description = null,>> @.pre_creation_cmd = N'drop',>> @.schema_option = 0x00000000000000F1,>> @.status = 16,>> @.vertical_partition = N'false',>> @.ins_cmd = N'CALL sp_ins_two_way_test2',>> @.del_cmd = N'CALL sp_del_two_way_test2',>> @.upd_cmd = N'CALL sp_upd_two_way_test2',>> @.filter = null,>> @.sync_object = null,>> @.identityrangemanagementoption = 'manual'>> go>> -- In the database test2>> use test2>> go>> -- Adding the transactional publication.>> exec sp_addpublication>> @.publication = N'two_way_pub_test2',>> @.restricted = N'false',>> @.sync_method = N'native',>> @.repl_freq = N'continuous',>> @.description = N'Transactional publication of database test2',>> @.status = N'active',>> @.allow_push = N'true',>> @.allow_pull = N'true',>> @.allow_anonymous = N'false',>> @.enabled_for_internet = N'false',>> @.independent_agent = N'false',>> @.immediate_sync = N'false',>> @.allow_sync_tran = N'true',>> @.autogen_sync_procs = N'true',>> @.retention = 72>> go>> exec sp_addpublication_snapshot>> @.publication = N'two_way_pub_test2',>> @.frequency_type = 4,>> @.frequency_interval = 1,>> @.frequency_relative_interval = 0,>> @.frequency_recurrence_factor = 1,>> @.frequency_subday = 1,>> @.frequency_subday_interval = 0,>> @.active_start_date = 0,>> @.active_end_date = 
0,>> @.active_start_time_of_day = 233000,>> @.active_end_time_of_day = 0>> go>> -- Adding the transactional articles.>> exec sp_addarticle>> @.publication = N'two_way_pub_test2',>> @.article = N'two_way_test2',>> @.source_owner = N'dbo',>> @.source_object = N'two_way_test2',>> @.destination_table = N'two_way_test2',>> @.type = N'logbased',>> @.creation_script = null,>> @.description = null,>> @.pre_creation_cmd = N'drop',>> @.schema_option = 0x00000000000000F1,>> @.status = 16,>> @.vertical_partition = N'false',>> @.ins_cmd = N'CALL sp_ins_two_way_test1',>> @.del_cmd = N'CALL sp_del_two_way_test1',>> @.upd_cmd = N'CALL sp_upd_two_way_test1',>> @.filter = null,>> @.sync_object = null,>> @.identityrangemanagementoption = 'manual'>> go>> use test1>> go>> exec sp_addsubscription>> @.publication = N'two_way_pub_test1',>> @.article = N'all',>> @.subscriber = 'WAKKO',>> @.destination_db = N'test2',>> @.sync_type = N'none',>> @.status = N'active',>> @.update_mode = N'sync tran',>> @.loopback_detection = 'true'>> go>> -- Adding the transactional subscription in test2.>> use test2>> go>> exec sp_addsubscription>> @.publication = N'two_way_pub_test2',>> @.article = N'all',>> @.subscriber = 'WAKKO',>> @.destination_db = N'test1',>> @.sync_type = N'none',>> @.status = N'active',>> @.update_mode = N'sync tran',>> @.loopback_detection = 'true'>> go>> --> Mike> Mentor> Solid Quality Learning> http://www.solidqualitylearning.com>> "Hilary Cotter" <hilary.cotter@.gmail.com> wrote in messagenews:%238PEHtKFGHA.516@.TK2MSFTNGP15.phx.gbl...>> Bi-directional transactional replication requires no schema changes.>> -->> Hilary Cotter>> Looking for a SQL Server replication book?>> http://www.nwsu.com/0974973602.html>> Looking for a FAQ on Indexing Services/SQL FTS>> http://www.indexserverfaq.com>> "Michael Hotek" <mike@.solidqualitylearning.com> wrote in messagenews:uJnQlhtEGHA.3200@.tk2msftngp13.phx.gbl...>> Log shipping can be as little as 3 - 5 minutes of latency. 
Clusteringis only going to protect you from hardware failure.>> Any replication method which sends data in both directions will requirea schema change, there is absolutely no way around this.>> Database Mirroring in 2005 is the only technology which exists whichwill allow you to maintain 2 databases synchronized with each other, in realtime, without requiring a schema change. BUT, the mirror database isinaccessible and you will incur a performance penalty in your applicationswhen you do this.>> -->> Mike>> Mentor>> Solid Quality Learning>> http://www.solidqualitylearning.com>>> "Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in messagenews:eZw9LhsEGHA.3384@.TK2MSFTNGP12.phx.gbl...>> Jeff,>> log-shipping and transactional replication will both have latency. Onmy systems this can often be minimised in the case of transactionalreplication to < 10 secs, but the data will usually be out of sync. Youmight be interested in investigating synchronous database mirroring, orclustering to ensure high-availability.>> Cheers,>> Paul Ibison SQL Server MVP, www.replicationanswers.com>> (recommended sql server 2000 replication book:>> http://www.nwsu.com/0974973602p.html)>>>>>>>>>|||ouch, this got mangled in the text editor. I have attached it as a script.
Also find a copy at http://www.indexserverfaq.com/bidireproscript.sql
Your repro is for an immediate updating publication, which does require
schema modifications. Right-click the publication and select Properties: on
SQL 2000 the Updatable tab will clearly say Immediate, and on SQL 2005 the
Subscription Options tab will show Allow Immediate Updating
Subscriptions - True. The subscription is not an immediate updating
subscription, however. This KB article is misleading and I will file a doc
bug on it.
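One quick way to tell the two designs apart is to look at the immediate-updating flag on the publication itself. This is my own sketch, not part of the original posts; run it in each publication database and substitute your publication name:

```sql
-- syspublications lives in the publication database.
-- allow_sync_tran = 1 indicates the publication was created to support
-- immediate updating subscribers -- the variant that makes the snapshot
-- add the msrepl_tran_version column to published tables.
select name, allow_sync_tran, allow_queued_tran
from syspublications
where name = 'two_way_pub_test1'  -- name taken from the repro; adjust as needed
```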
Notice that before the snapshot runs your table has
create table two_way_test1
(pkcol INTEGER PRIMARY KEY NOT NULL,
intcol INTEGER IDENTITY(1,1) NOT FOR REPLICATION,
charcol CHAR(100))
a total of 3 columns. After it has run it has
CREATE TABLE [dbo].[two_way_test1](
[pkcol] [int] NOT NULL,
[intcol] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
[charcol] [char](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[msrepl_tran_version] [uniqueidentifier] NOT NULL CONSTRAINT
[MSrepl_tran_version_default_F6FB929A_624E_4666_919D_08145657CCE8_2073058421] DEFAULT (newid()),
PRIMARY KEY CLUSTERED ([pkcol] ASC) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO

4 columns.

Please check the tables after the several inserts, updates and deletes have flowed. You will notice that there are no additional columns.

I am sorry you feel this way about the MVP program. I always strive for accuracy, and normally repro everything before I make a post.

The attached is tested on SQL 2000 and 2005. Note that for SQL 2005 the proc names will have to be of the form 'sp_MSins_dbotbl_IssueTracker_I', instead of the truncated versions that they are for SQL 2000.

--
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com

"Hilary Cotter" <hilary.cotter@gmail.com> wrote in message news:upE6T6NFGHA.2648@TK2MSFTNGP11.phx.gbl...
> Your repro is for an immediate updating publication, which does require
> schema modifications. Right-click the publication and select Properties: on
> SQL 2000 the Updatable tab will clearly say Immediate, and on SQL 2005 the
> Subscription Options tab will show Allow Immediate Updating
> Subscriptions - True. The subscription is not an immediate updating
> subscription, however. This KB article is misleading and I will file a doc
> bug on it.
>
> Notice that before the snapshot runs your table has
>
> create table two_way_test1
> (pkcol INTEGER PRIMARY KEY NOT NULL,
> intcol INTEGER IDENTITY(1,1) NOT FOR REPLICATION,
> charcol CHAR(100))
>
> a total of 3 columns.
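For anyone reproducing the two setups, a simple way to confirm which variant you ended up with (my own sketch, not part of the thread) is to look for the tracking column directly:

```sql
-- Run in each database after the snapshot has been applied.
-- An immediate updating publication adds msrepl_tran_version to each
-- published table; the loopback-detection setup should return no rows.
select table_name, column_name
from INFORMATION_SCHEMA.COLUMNS
where column_name = 'msrepl_tran_version'
```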
[uuencoded attachment snipped: bidireproscript.sql]|||I let my emotions get in the way due to a series of incorrect posts and
posted messages while I was still angry. That won't happen again. If I
offended anyone, I apologize.
EVERYTHING that I post to any venue, public or private, represents my own
views and never represents the views of any other person or entity.
--
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Michael Hotek" <mike@.solidqualitylearning.com> wrote in message
news:uTc%23afMFGHA.1736@.TK2MSFTNGP14.phx.gbl...
> Hilary,
> I would suggest that you exit these newsgroups right now. No schema
> change is about as false a statement as you can possibly make. It
> requires that you enable the @loopback_detection parameter, and the
> loopback detection algorithm uses the additional column to determine the
> originator of the change. The MVP program has sunk really low when it has
> members who knowingly post blatantly false answers.
> The create table statement that I used to initially generate the table is:
> create table two_way_test1
> (pkcol INTEGER PRIMARY KEY NOT NULL,
> intcol INTEGER IDENTITY(1,1) NOT FOR REPLICATION,
> charcol CHAR(100))
>
> The table structure after the snapshot completed was as follows:
> USE [test1]
> GO
> /****** Object: Table [dbo].[two_way_test1] Script Date: 01/08/2006
> 20:43:26 ******/
> SET ANSI_NULLS ON
> GO
> SET QUOTED_IDENTIFIER ON
> GO
> SET ANSI_PADDING ON
> GO
> CREATE TABLE [dbo].[two_way_test1](
> [pkcol] [int] NOT NULL,
> [intcol] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
> [charcol] [char](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
> [msrepl_tran_version] [uniqueidentifier] NOT NULL CONSTRAINT
> [MSrepl_tran_version_default_F6FB929A_624E_4666_919D_08145657CCE8_2073058421]
> DEFAULT (newid()),
> PRIMARY KEY CLUSTERED
> ([pkcol] ASC
> )WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]) ON [PRIMARY]
> GO
> SET ANSI_PADDING OFF
> GO
> USE [test1]
> GO
> ALTER TABLE [dbo].[two_way_test1] WITH NOCHECK ADD CONSTRAINT
> [checkprimcol] CHECK NOT FOR REPLICATION (([pkcol]>=(1) AND
> [pkcol]<=(1000)))
> GO
> ALTER TABLE [dbo].[two_way_test1] CHECK CONSTRAINT [checkprimcol]
> Maybe you can explain how adding a column to a table doesn't qualify as
> modifying the schema?
> http://support.microsoft.com/default.aspx?scid=820675
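To make the quoted point concrete, you can see the extra column for yourself. This is a sketch, assuming SQL Server 2005 catalog views and the test1 database from the script that follows:

```sql
use test1
go
-- List the columns of the published table. After the snapshot for an
-- immediate-updating ("sync tran") subscription has run, an
-- msrepl_tran_version uniqueidentifier column appears in this list.
select c.name as column_name, t.name as type_name
from sys.columns c
join sys.types t on t.user_type_id = c.user_type_id
where c.object_id = object_id(N'dbo.two_way_test1')
order by c.column_id
go
```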
> Here is the code I used to implement bi-directional, transactional
> replication. (Straight out of the KB article.)
> use master
> go
> create database test1
> go
> create database test2
> go
> use test1
> go
> create table two_way_test1
> (
> pkcol INTEGER PRIMARY KEY NOT NULL,
> intcol INTEGER IDENTITY(1,1) NOT FOR REPLICATION,
> charcol CHAR(100)
> )
> use test2
> go
> create table two_way_test2
> (
> pkcol INTEGER PRIMARY KEY NOT NULL,
> intcol INTEGER IDENTITY(1000000000,1) NOT FOR REPLICATION,
> charcol CHAR(100)
> )
> go
> -- Constraint to enforce a range of values between 1 and 1000 in database
> test1
> use test1
> go
> alter table
> two_way_test1
> with nocheck
> add constraint
> checkprimcol check NOT FOR REPLICATION
> (
> pkcol BETWEEN 1 and 1000
> )
> go
>
> use test2
> go
> -- Constraint to enforce a range of values between 1001 and 2000 in the
> database test2
> alter table
> two_way_test2
> with nocheck
> add constraint
> checkprimcol check NOT FOR REPLICATION
> (
> pkcol BETWEEN 1001 and 2000
> )
> go
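The complementary CHECK constraints give each server its own key range, so the two sides can never generate colliding primary keys. A quick smoke test, assuming the test1/test2 databases created above (the column lists skip the IDENTITY columns):

```sql
use test1
go
-- In range for this side (1-1000): succeeds.
insert into two_way_test1 (pkcol, charcol) values (1, 'local row')
go
-- Out of range for this side: fails the checkprimcol constraint,
-- since 1001-2000 belongs to test2. NOT FOR REPLICATION means the
-- check is skipped only for the replication agents, not for users.
insert into two_way_test1 (pkcol, charcol) values (1500, 'wrong side')
go
```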
> use test1
> go
> -- INSERT Stored Procedure
> create procedure sp_ins_two_way_test1
> @pkcol int,
> @intcol int,
> @charcol char(100),
> @timestampcol timestamp,
> @rowidcol uniqueidentifier
> as
> insert into two_way_test1
> (
> pkcol,
> intcol,
> charcol
> )
> values
> (
> @pkcol,
> @intcol,
> @charcol
> )
> go
> --UPDATE Stored Procedure
> create procedure sp_upd_two_way_test1
> @pkcol int,
> @intcol int,
> @charcol char(100),
> @timestampcol timestamp,
> @rowidcol uniqueidentifier,
> @old_pkcol int
> as
> declare @x int
> declare @y int
> declare @z char(100)
> select
> @x=pkcol,
> @y=intcol,
> @z=charcol
> from
> two_way_test1
> where
> pkcol = @pkcol
>
>
> delete
> two_way_test1
> where
> pkcol=@pkcol
>
>
> insert into two_way_test1
> (
> pkcol,
> intcol,
> charcol
> )
> values
> (
> case isnull(@pkcol,0) when 0 then @x else @pkcol end,
> case isnull(@intcol,0) when 0 then @y else @intcol end,
> case isnull(@charcol,'N') when 'N' then @z else @charcol end
> )
> go
> -- DELETE Stored Procedure
> create procedure sp_del_two_way_test1
> @old_pkcol int
> as
> delete
> two_way_test1
> where
> pkcol = @old_pkcol
> go
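The delete-and-reinsert pattern in sp_upd_two_way_test1 above exists so that an UPDATE still applies cleanly when the primary key itself changes; note, though, that the `case isnull(...)` logic treats 0 and 'N' as "no new value", which silently misbehaves if a real column value happens to be 0 or 'N'. Where applications never update the primary key, a plain UPDATE is simpler. A hypothetical variant (not part of the KB article; the procedure name is made up):

```sql
-- Hypothetical replacement for sp_upd_two_way_test1, assuming pkcol is
-- never changed by applications. intcol is IDENTITY, so only charcol moves.
create procedure sp_upd_two_way_test1_simple
    @pkcol int,
    @intcol int,
    @charcol char(100),
    @timestampcol timestamp,
    @rowidcol uniqueidentifier,
    @old_pkcol int
as
update two_way_test1
set charcol = @charcol
where pkcol = @old_pkcol
go
```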
> use test2
> go
> -- INSERT Stored Procedure
> create procedure sp_ins_two_way_test2
> @pkcol int,
> @intcol int,
> @charcol char(100),
> @timestampcol timestamp,
> @rowidcol uniqueidentifier
> as
> insert into two_way_test2
> (
> pkcol,
> intcol,
> charcol
> )
> values
> (
> @pkcol,
> @intcol,
> @charcol
> )
> go
> --UPDATE Stored Procedure
> create procedure sp_upd_two_way_test2
> @pkcol int,
> @intcol int,
> @charcol char(100),
> @timestampcol timestamp,
> @rowidcol uniqueidentifier,
> @old_pkcol int
> as
> declare @x int
> declare @y int
> declare @z char(100)
> select
> @x=pkcol,
> @y=intcol,
> @z=charcol
> from
> two_way_test2
> where
> pkcol = @pkcol
>
>
> delete
> two_way_test2
> where
> pkcol=@pkcol
>
>
> insert into two_way_test2
> (
> pkcol,
> intcol,
> charcol
> )
> values
> (
> case isnull(@pkcol,0) when 0 then @x else @pkcol end,
> case isnull(@intcol,0) when 0 then @y else @intcol end,
> case isnull(@charcol,'N') when 'N' then @z else @charcol end
> )
> go
>
> -- DELETE Stored Procedure
> create procedure sp_del_two_way_test2
> @old_pkcol int
> as
> delete
> two_way_test2
> where
> pkcol = @old_pkcol
> go
> use master
> go
> exec sp_replicationdboption N'test1', N'publish', true
> go
> exec sp_replicationdboption N'test2', N'publish', true
> go
> --In the database test1.
> use test1
> go
> -- Adding the transactional publication.
> exec sp_addpublication
> @publication = N'two_way_pub_test1',
> @restricted = N'false',
> @sync_method = N'native',
> @repl_freq = N'continuous',
> @description = N'Transactional publication of database test1.',
> @status = N'active',
> @allow_push = N'true',
> @allow_pull = N'true',
> @allow_anonymous = N'false',
> @enabled_for_internet = N'false',
> @independent_agent = N'false',
> @immediate_sync = N'false',
> @allow_sync_tran = N'true',
> @autogen_sync_procs = N'true',
> @retention = 72
> go
> exec sp_addpublication_snapshot
> @publication = N'two_way_pub_test1',
> @frequency_type = 4,
> @frequency_interval = 1,
> @frequency_relative_interval = 0,
> @frequency_recurrence_factor = 1,
> @frequency_subday = 1,
> @frequency_subday_interval = 0,
> @active_start_date = 0,
> @active_end_date = 0,
> @active_start_time_of_day = 233000,
> @active_end_time_of_day = 0
> go
> -- Adding the transactional articles.
> exec sp_addarticle
> @publication = N'two_way_pub_test1',
> @article = N'two_way_test1',
> @source_owner = N'dbo',
> @source_object = N'two_way_test1',
> @destination_table = N'two_way_test1',
> @type = N'logbased',
> @creation_script = null,
> @description = null,
> @pre_creation_cmd = N'drop',
> @schema_option = 0x00000000000000F1,
> @status = 16,
> @vertical_partition = N'false',
> @ins_cmd = N'CALL sp_ins_two_way_test2',
> @del_cmd = N'CALL sp_del_two_way_test2',
> @upd_cmd = N'CALL sp_upd_two_way_test2',
> @filter = null,
> @sync_object = null,
> @identityrangemanagementoption = 'manual'
> go
> -- In the database test2
> use test2
> go
> -- Adding the transactional publication.
> exec sp_addpublication
> @publication = N'two_way_pub_test2',
> @restricted = N'false',
> @sync_method = N'native',
> @repl_freq = N'continuous',
> @description = N'Transactional publication of database test2',
> @status = N'active',
> @allow_push = N'true',
> @allow_pull = N'true',
> @allow_anonymous = N'false',
> @enabled_for_internet = N'false',
> @independent_agent = N'false',
> @immediate_sync = N'false',
> @allow_sync_tran = N'true',
> @autogen_sync_procs = N'true',
> @retention = 72
> go
> exec sp_addpublication_snapshot
> @publication = N'two_way_pub_test2',
> @frequency_type = 4,
> @frequency_interval = 1,
> @frequency_relative_interval = 0,
> @frequency_recurrence_factor = 1,
> @frequency_subday = 1,
> @frequency_subday_interval = 0,
> @active_start_date = 0,
> @active_end_date = 0,
> @active_start_time_of_day = 233000,
> @active_end_time_of_day = 0
> go
> -- Adding the transactional articles.
> exec sp_addarticle
> @publication = N'two_way_pub_test2',
> @article = N'two_way_test2',
> @source_owner = N'dbo',
> @source_object = N'two_way_test2',
> @destination_table = N'two_way_test2',
> @type = N'logbased',
> @creation_script = null,
> @description = null,
> @pre_creation_cmd = N'drop',
> @schema_option = 0x00000000000000F1,
> @status = 16,
> @vertical_partition = N'false',
> @ins_cmd = N'CALL sp_ins_two_way_test1',
> @del_cmd = N'CALL sp_del_two_way_test1',
> @upd_cmd = N'CALL sp_upd_two_way_test1',
> @filter = null,
> @sync_object = null,
> @identityrangemanagementoption = 'manual'
> go
> use test1
> go
> exec sp_addsubscription
> @publication = N'two_way_pub_test1',
> @article = N'all',
> @subscriber = 'WAKKO',
> @destination_db = N'test2',
> @sync_type = N'none',
> @status = N'active',
> @update_mode = N'sync tran',
> @loopback_detection = 'true'
> go
> -- Adding the transactional subscription in test2.
> use test2
> go
> exec sp_addsubscription
> @publication = N'two_way_pub_test2',
> @article = N'all',
> @subscriber = 'WAKKO',
> @destination_db = N'test1',
> @sync_type = N'none',
> @status = N'active',
> @update_mode = N'sync tran',
> @loopback_detection = 'true'
> go
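With both publications and subscriptions in place, a simple end-to-end check is to insert one row on each side and confirm the rows converge in both tables once the distribution agents have run. A sketch, assuming the test1/test2 setup above (column lists skip the IDENTITY columns):

```sql
use test1
go
insert into two_way_test1 (pkcol, charcol) values (1, 'from test1')
go
use test2
go
insert into two_way_test2 (pkcol, charcol) values (1001, 'from test2')
go
-- After the distribution agents deliver in both directions, each of these
-- should return the same two rows (pkcol 1 and 1001):
select pkcol, charcol from test1.dbo.two_way_test1
select pkcol, charcol from test2.dbo.two_way_test2
go
```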
>
> --
> Mike
> Mentor
> Solid Quality Learning
> http://www.solidqualitylearning.com
>
> "Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
> news:%238PEHtKFGHA.516@.TK2MSFTNGP15.phx.gbl...
>> Bi-directional transactional replication requires no schema changes.
>> --
>> Hilary Cotter
>> Looking for a SQL Server replication book?
>> http://www.nwsu.com/0974973602.html
>> Looking for a FAQ on Indexing Services/SQL FTS
>> http://www.indexserverfaq.com
>> "Michael Hotek" <mike@.solidqualitylearning.com> wrote in message
>> news:uJnQlhtEGHA.3200@.tk2msftngp13.phx.gbl...
>> Log shipping can be as little as 3 - 5 minutes of latency. Clustering
>> is only going to protect you from hardware failure.
>> Any replication method which sends data in both directions will require
>> a schema change, there is absolutely no way around this.
>> Database Mirroring in 2005 is the only technology which exists which
>> will allow you to maintain 2 databases synchronized with each other, in
>> real time, without requiring a schema change. BUT, the mirror database
>> is inaccessible and you will incur a performance penalty in your
>> applications when you do this.
>> --
>> Mike
>> Mentor
>> Solid Quality Learning
>> http://www.solidqualitylearning.com
>>
>> "Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
>> news:eZw9LhsEGHA.3384@.TK2MSFTNGP12.phx.gbl...
>> Jeff,
>> log-shipping and transactional replication will both have latency. On
>> my systems this can often be minimised in the case of transactional
>> replication to < 10 secs, but the data will usually be out of sync. You
>> might be interested in investigating synchronous database mirroring, or
>> clustering to ensure high-availability.
>> Cheers,
>> Paul Ibison SQL Server MVP, www.replicationanswers.com
>> (recommended sql server 2000 replication book:
>> http://www.nwsu.com/0974973602p.html)
>>
>>
>>
>>
>|||Thank you for your apology.
--
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Z" <z@.z.com> wrote in message news:OzP5mg$FGHA.3700@.TK2MSFTNGP15.phx.gbl...
>I let my emotions get in the way due to a series of incorrect posts and
>posted messages while I was still angry. That won't happen again. If I
>offended anyone, I apologize.
>
> EVERYTHING that I post to any venue, public or private represent my own
> views and never represent the views of any other person or entity.
> --
> Mike
> http://www.solidqualitylearning.com
> Disclaimer: This communication is an original work and represents my sole
> views on the subject. It does not represent the views of any other person
> or entity either by inference or direct reference.
> "Michael Hotek" <mike@.solidqualitylearning.com> wrote in message
> news:uTc%23afMFGHA.1736@.TK2MSFTNGP14.phx.gbl...
>> Hilary,
>> I would suggest that you exit these newsgroups right now. No schema
>> change is about as false of a statement as you can ever possibly get.
>> [remainder of quoted post snipped - the full text appears earlier in
>> this thread]
>>
>|||Mike,
just looking at this thread again, and I notice that Jeff is talking about
several (admittedly incompatible) technologies, but I think that you may
have missed this part: "Since the servers won't catch each other up, then I
think my best choice is to only have one server collect that data and have
SQL Server do the data replication". So, for this scenario, Jeff needs
one-directional replication only. I was thinking that in this latter case his
aim is to provide HA, so all the methods I mentioned could be of relevance:
Log shipping,
TR
Mirroring
Clustering
The latter definitely contradicts Jeff's original thinking, but his latter
aim seems more directed towards HA.
Cheers,
Paul Ibison|||I didn't miss it. It's central to the thinking. That's one of the
real tricks in HA planning: handling the failback. Each option has pros and
cons, which is why I teach a 5-day class that barely scratches the
surface.
When dealing with applications like this one, I tend to lean more heavily to
replication based on experience with working on several like this. The
reason for that is because you essentially have a machine feeding data to a
SQL Server. When the SQL Server can't log the data, you impact the
operation of the machine. Clustering hits it directly, because the server
becomes unavailable during the failover and the machine has to then
reconnect (being offline during this process). Log Shipping requires manual
intervention to fail over and also takes the machine offline during failover.
If you are running SQL Server 2005, database mirroring is very promising
because of the redirection and the speed.
But none of those can beat a replication method which is configured to send
data in both directions for this particular case. The reasons are as
follows:
1. You can immediately switch the machine logging from one instance to another
in the event of an outage
2. You can immediately switch it back without losing processing
3. The replication architecture can resynchronize itself
4. You get hardware redundancy along with data redundancy
Of course, there are several other factors in here such as planning the
location of distribution (for transactional based methods) as well as
planning for potentially lost transactions (committed but not replicated).
You also have to evaluate potential synch issues as well. There really is
no straightforward answer.
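[Editor's note: not from the thread — a hypothetical T-SQL sketch of the
publication side of the setup Mike describes. Database, publication, and
article names are placeholders; only the documented parameters of the
standard replication procedures are used.]

```sql
-- Hypothetical sketch: create a transactional publication that permits
-- queued updating subscribers, so changes made at the standby while the
-- primary is down are queued and applied back automatically on reconnect.
USE LoggingDB
EXEC sp_replicationdboption
    @dbname  = N'LoggingDB',
    @optname = N'publish',
    @value   = N'true'

EXEC sp_addpublication
    @publication       = N'LoggingPub',
    @repl_freq         = N'continuous',
    @allow_queued_tran = N'true',     -- enable queued updating subscribers
    @conflict_policy   = N'pub wins'  -- primary wins on conflicting changes

EXEC sp_addarticle
    @publication   = N'LoggingPub',
    @article       = N'MachineLog',
    @source_object = N'MachineLog'
```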
--
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:ujS5bHEGGHA.2036@.TK2MSFTNGP14.phx.gbl...
> [snip quoted post]
>|||OK - I realize that any answer to this type of query will be somewhat
simplistic. It sounds like you're leaning towards queued updating
subscribers in this scenario, which I would probably also implement
(depending on the presence of BLOBs, the ability to change the schema, etc.).
Cheers,
Paul Ibison
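[Editor's note: Paul's BLOB caveat can be checked up front. A quick query
against the standard INFORMATION_SCHEMA views, run in the publication
database, lists the legacy BLOB-typed columns that are problematic for
queued updating subscribers:]

```sql
-- List columns whose types can block or complicate queued updating.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('text', 'ntext', 'image')
ORDER BY TABLE_NAME, COLUMN_NAME
```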
"Michael Hotek" <mike@.solidqualitylearning.com> wrote in message
news:eD6mm4HGGHA.1388@.TK2MSFTNGP11.phx.gbl...
>[snip quoted post]
>|||Paul and myself also do tailored hard core replication training. Paul is in
the UK, and I am in the States.
--
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Michael Hotek" <mike@.solidqualitylearning.com> wrote in message
news:eD6mm4HGGHA.1388@.TK2MSFTNGP11.phx.gbl...
>[snip quoted post]
>|||It has something to do with answering this question, how? There are dozens
of people who do replication training, but that wasn't the question.
--
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:OiicsJKGGHA.2696@.TK2MSFTNGP14.phx.gbl...
> [snip quoted post]
>|||Leaning toward one of two different options: transactional with queued
updating or bi-directional transactional. In this case, both would provide
a way for the application to continue to log changes while the primary was
offline. Since the replication infrastructure is already set up to send
changes back, when the primary comes back online, it will be automatically
re-synched by the replication engine. The user can then decide when to
switch the logging from the application back to the primary.
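[Editor's note: not from the thread — a hypothetical sketch of the
subscription side for the first of Mike's two options, transactional
replication with a queued updating subscriber. Server and database names
are placeholders, and a full setup would also need the subscription
agents configured; this shows only the key parameter.]

```sql
-- Subscribe with queued updating, so the standby can keep accepting
-- writes during an outage and resynchronize automatically afterwards.
EXEC sp_addsubscription
    @publication       = N'LoggingPub',
    @subscriber        = N'STANDBYSRV',
    @destination_db    = N'LoggingDB',
    @subscription_type = N'push',
    @update_mode       = N'queued tran'  -- queue subscriber changes for failback
```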
--
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:uL5OQQIGGHA.3856@.TK2MSFTNGP12.phx.gbl...
> [snip quoted post]
