This was an interesting exercise where we moved objects from the VAR layer to the BUS layer for one of the live implementations. The challenge was that we wanted to retain all the data and all the references.
1. To begin with, we had two installations: one was the dev instance, where the objects were in the BUS layer, and the other was the live instance, where the objects were in the VAR layer.
2. We know that the ID sequence of the objects starts from 30000 onwards in the VAR layer, while it starts from 20000 onwards in the BUS layer. So once we restored the BUS layer (AOD), the existing object IDs in the 30,000 series became irrelevant. We therefore deleted all entries in the SQLDICTIONARY table with table IDs between 30000 and 40000, backed up the records from the dev environment where the table ID was between 20000 and 30000, and restored them on the live instance (a rough sketch of this cleanup appears after the script in step 4).
3. We also know that when a table name is longer than 30 characters, the synchronize routine replaces the last 4 characters of the SQL table name with the object ID. So we manually looked for such objects and changed the table ID suffix in their names from the 30000 series to the 20000 series (the sketch after the script includes a query to find them).
4. Once we did the above, we started getting a duplicate-record error while entering data into most of the tables. It was not long before we realized that the SYSTEMSEQUENCES table holds the last used record ID and is used to generate the record ID for each new record. Since the object IDs had changed, a corresponding entry with the new object IDs was required in SYSTEMSEQUENCES. I wrote a script that generates the right insert statements:
select 'insert into systemsequences '
     + '( ID, NEXTVAL, MINVAL, MAXVAL, CYCLE, NAME, TABID, DATAAREAID, RECVERSION, RECID ) '
     + 'select -1, isnull( max( recid ) + 1, 1 ), 1, 9223372036854775807, 0, ''SEQNO'', '
     + cast( SD.TableID as varchar ) + ', ''dat'', 1, -1'
     + ' from ' + SD.SQLName
from sqldictionary SD
left join systemsequences SS
    on SS.tabid = SD.tableID
where SS.Name is null
  and SD.tableid between 20000 and 30000
  and SD.fieldid = 0
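For completeness, here is a rough sketch of the SQLDICTIONARY cleanup from steps 2 and 3. It is not the exact script we ran: LIVEAX, DEVAX and the linked server DEVSRV are placeholder names for the live database, the dev database and a linked server pointing at the dev box, and it assumes the dictionary schema is identical on both sides.

-- Step 2a: remove the obsolete 30000-series (VAR layer) entries on live
delete from LIVEAX.dbo.SQLDICTIONARY
where TABLEID between 30000 and 40000;

-- Step 2b: copy the 20000-series (BUS layer) entries across from dev
-- (DEVSRV is a placeholder linked server pointing at the dev instance)
insert into LIVEAX.dbo.SQLDICTIONARY
select *
from DEVSRV.DEVAX.dbo.SQLDICTIONARY
where TABLEID between 20000 and 30000;

-- Step 3: list tables whose SQL name was truncated and suffixed with an old
-- 30000-series object ID, so the suffix can be changed to the 20000 series by hand
select TABLEID, NAME, SQLNAME
from LIVEAX.dbo.SQLDICTIONARY
where FIELDID = 0
  and SQLNAME like '%3[0-9][0-9][0-9][0-9]';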
Thursday, September 09, 2010
Remote scan during linked table Update
I faced this issue while building an interface from a SQL Server instance to a remote MySQL database. We had a set of insert and update statements to synchronize a set of tables; however, the update statements were taking forever to execute. On analyzing the execution plan, we found that a remote scan was being performed for each update, which slowed everything down: for each single-row update, a remote scan over 6,500 rows was being performed. On investigating further, I found that this is documented behaviour, and what MSDN says is:
For linked server DELETEs or UPDATEs, SQL Server retrieves data from the table, performs any filtering that is necessary, and then performs the deletes or updates through the OLEDB rowset. This processing can result in a round-trip to the remote server for each row that is to be deleted or updated
SQL Server 2000 adds the ability to send a DELETE or UPDATE to a linked server as a single SQL statement; however, this feature only covers linked servers to another SQL Server 2000 or SQL Server 7.0 instance
Please refer to the Knowledge Base article:
http://support.microsoft.com/kb/309182
So the only option for updating without this performance overhead on the remote server is to use stored procedures for inserts and updates, so that the actual filtering and update are performed on the remote server rather than in a rowset on the local server.
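As a rough illustration of the difference, here is a hedged sketch. REMOTEMYSQL, the tables and the procedure upd_customer_status are made-up names, the exact four-part naming depends on the OLE DB/ODBC provider used for the MySQL linked server, and EXEC ... AT requires the linked server's 'rpc out' option to be enabled.

-- Slow: distributed UPDATE against the linked table. SQL Server pulls the
-- remote rows into a local rowset, filters them, then applies the changes
-- through the provider one row at a time (the Remote Scan per row).
update r
set    r.status = s.status
from   REMOTEMYSQL...customer_status as r
join   dbo.customer_status_staging as s
  on   s.customer_id = r.customer_id;

-- Faster: push the work to the remote server. Each change goes across as a
-- single pass-through call to a procedure defined on the MySQL side
-- (upd_customer_status is hypothetical and performs the filtered update there).
declare @id int, @status varchar(20);

declare cur cursor local fast_forward for
    select customer_id, status from dbo.customer_status_staging;

open cur;
fetch next from cur into @id, @status;
while @@fetch_status = 0
begin
    exec ('call upd_customer_status(?, ?)', @id, @status) at REMOTEMYSQL;
    fetch next from cur into @id, @status;
end
close cur;
deallocate cur;

A set-based variant (staging the whole batch remotely and firing one remote update through EXEC ... AT) would avoid the cursor entirely; the point is simply that the filtering and the writes happen on the remote side.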