The problem arose when we tried to restore a live production database into a test environment.
Historically DAX had its own users; when it was integrated with the Windows security model, the Windows user's SID (a unique identifier for a user or a group in Windows) was used to identify the user.
SIDs are specific to a particular domain, so the same user will have different SIDs in two different domains.
The problem is that when the DAX database is moved from the production environment to a test environment, its security tables still carry the production SIDs, which differ from the SIDs of the accounts used in the test environment.
The solution is as below:
* find out the SID of the user you will use to run the DAX client after the DB migration
o you can use the whoami tool shipped with Windows - just run whoami /user in a console and you'll see a long string like S-1-5-21-15052... next to your login;
o if you use WinXP or an earlier Windows version, you'll have to install the Windows XP Support Tools to get the tool, and as far as I recall it takes slightly different command-line parameters - you'll need the /sid option instead of /user
* run a tool like MS SQL Server Management Studio and connect to the restored DAX database
* navigate to the UserInfo table, open it in a grid, and update the SID value for the "admin" user with the one you found earlier
* start the DAX client - you should be able to log in as admin now
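The grid edit above can also be done with a plain UPDATE statement; a minimal sketch, assuming the standard USERINFO table layout with ID and SID columns - verify the column names against your schema before running it:

```sql
-- Hypothetical sketch: replace the SID with the value reported by whoami /user.
-- Table and column names assume the standard DAX USERINFO layout.
UPDATE USERINFO
SET    SID = 'S-1-5-21-15052...'   -- SID of your account in the test domain
WHERE  ID  = 'admin';
```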
This blog is dedicated to all my technical learnings and findings. As they say, use all the brains you have and borrow all the brains you can; this is my share of lending my learnings to all of you out there. I should acknowledge that parts of these posts are reproduced from my web browsing, mainly because having it all in one place is far more convenient.
Sunday, June 13, 2010
Thursday, May 27, 2010
SQL Server sp_execute
We commonly come across this statement when profiling SQL Server. I have wondered a lot about what it is, and I found some explanation for it in Books Online (BOL).
The ODBC API defines prepared execution as a way to reduce the parsing and compiling overhead associated with repeatedly executing a Transact-SQL statement. The application builds a character string containing an SQL statement and then executes it in two stages. It calls SQLPrepare once to have the statement parsed and compiled into an execution plan by the database engine. It then calls SQLExecute for each execution of the prepared execution plan. This saves the parsing and compiling overhead on each execution. Prepared execution is commonly used by applications to repeatedly execute the same, parameterized SQL statement.
Thus sp_execute is a system stored procedure used with "prepared" statements from a client. The number you see is an internal handle to the execution plan on the server. The values following that number are the parameters for a particular invocation of the prepared statement.
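The pattern described above can be reproduced by hand; a minimal sketch (the table and parameter names are made up for illustration):

```sql
-- Prepare once: parse and compile the parameterized statement into a plan.
DECLARE @handle int;
EXEC sp_prepare @handle OUTPUT,
     N'@p1 int',
     N'SELECT * FROM SalesOrders WHERE OrderId = @p1';

-- Execute many times: only the handle and parameter values are sent.
EXEC sp_execute @handle, 42;
EXEC sp_execute @handle, 43;

-- Release the plan handle when done.
EXEC sp_unprepare @handle;
```

These are the sp_prepare/sp_execute pairs that show up in a Profiler trace.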
Now, when we see sp_execute in the trace and we need to know the text behind it, the first thing we can do is check whether there is an sp_prepare with the same number earlier in the same session. If there is, include sql_handle in the trace and look up the text for the cache entry using a statement like the following:
select t.text
from sys.dm_exec_requests r
cross apply sys.dm_exec_sql_text(r.sql_handle) t
where r.session_id = [spid]
You can also enable the Prepare SQL event in Profiler to capture the handle of the execution plan.
I ran into a case where a prepared SQL statement was taking much longer to execute, while the same SQL ran fast when I executed it in Management Studio. So what could make an unprepared SQL statement faster than a prepared one?
I thought the statistics might be stale, so I created statistics for the table with the FULLSCAN option, and that really helped. I am still not convinced why; I will have to study the execution plans in both cases and spot the difference, because I believe different execution plans are being used.
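The statistics fix above amounts to something like this (the table name is a placeholder):

```sql
-- Rebuild statistics from a full scan of the table rather than a sample.
-- dbo.MyTable is a placeholder for the table in question.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
```

One plausible (though unconfirmed) explanation for the difference is parameter sniffing: the prepared plan was compiled for one parameter value and then reused for others, while the ad-hoc statement in Management Studio got a fresh plan each time.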
Saturday, May 22, 2010
Bring a Standby Database online
When transaction log shipping is set up, a copy of the transaction database is maintained in read-only, standby mode on a pre-configured server.
If for any reason the primary server fails and the standby server needs to be promoted, use the following command to make the database available:
RESTORE DATABASE [databasename] WITH RECOVERY
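If there are copied log backups that have not yet been restored, they can be applied first; a minimal sketch with placeholder database, path, and file names:

```sql
-- Apply any remaining log backups without recovering yet (names are placeholders).
RESTORE LOG [MyDatabase]
FROM DISK = N'\\backupshare\MyDatabase_tail.trn'
WITH NORECOVERY;

-- Then bring the standby database online for read/write use.
RESTORE DATABASE [MyDatabase] WITH RECOVERY;
```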
Dynamics AX Application file types
There are lots of files with different extensions in the Application\Appl\Std folder. I got these details from Harish Mohanbabu's blog - quite handy information.
Yes, there are quite a lot of files in the Axapta Application\Appl\Standard folder. Please note that all these files will be updated whenever a new version is released. The most important of them are -
1. .aod - Acronym for Application Object Data file. Each application object layer is saved in a separate file called "Ax<layer>.aod". For example, Axsys.aod for the SYS layer, Axusr.aod for the USR layer and so on.
2. Label files: If we look at our label files in the AOT, we see four different extensions for our labels
· ALI Axapta Label Index
· ALC Axapta Label Comments
· ALT Axapta Label Temporary Store
· ALD Axapta Label Dictionary
The ALD file is readable/editable with a text editor. Generally speaking you only need the ALD file. When the AOS is restarted, the ALI and ALC files will be generated on the fly (or updated when the timestamp of the ALD file is newer than the timestamp of the ALC or ALI file).
Next, a developer creates new labels. These labels are first stored in the ALT file, not yet in the ALD file. When the last AOS stops, AX updates the ALD file this way: it copies the ALD file to an ALB file, then stores the changes from the ALT file in the ALB file, and finally the ALB file is placed back as the ALD file and the ALB file is deleted. (Hint: make the ALD file read-only and you will see this for yourself.)
When your AOS has crashed, the changes are not stored in the ALD file. Even when you start and stop the AOS again, the file is not updated. To solve this, start an AX client and search for the label in the label editor. Then stop your client and the AOS. Now the label file is updated.
3. .udb - Acronym for Axapta User Database. As the acronym indicates, Axapta stores its user details in this file. This file is also responsible for allocating session IDs to users. If this file is deleted, Axapta will regenerate it whenever the system is started.
Tip: Sometimes in Axapta 3-tier installations, the online users form can return false information. For example, let us assume 5 users are currently online, but the form shows 7 or 8 users online. Or sometimes, though you have enough licenses, you might receive a "maximum users reached" error message from Axapta.
In such cases the solution is to delete the "axdat.udb" file. Before deleting it, make sure that all users have exited and the AOS is properly shut down. Once this is done and the file is deleted, Axapta will regenerate the "Axdat.udb" file as mentioned above. Please note that you might never come across this problem, but to be on the safe side, particularly on 3-tier installations, it is advisable to delete the "axdat.udb" file regularly.
But in some special conditions, the online users form may be empty in Axapta ver 2.5. This can happen if your license has an expiry date; even if your license has not expired, you might still come across this condition. In such cases, go to the Technet website and search for the keywords "The online user form is empty". You will come across an export file. Import that export file into your installation; that will solve the problem.
4. .ktd - Acronym for Kernel Text Data.
Tip: Sometimes when you upgrade your installation (say, for example, you have installed a new SP), you might not be able to see the new features of the installed SP after the upgrade is complete.
The solution is to manually copy the .ktd files from the Client\Bin\ directory to the Axapta Application\Bin directory. The reason: when a Service Pack is installed, for some reason the files in the Axapta Application\Bin directory are not updated; only the files in Client\Bin are updated.
5. .aoi - Acronym for Application Object Index. As the name indicates this is an index file for the Application objects.
Tip: Sometimes you may get funny error messages. Though you might have enough hard disk space, the following error might pop up -
Error in file: ...\standard\axapd.aoi while reading in record ...
Error code: 38 = hard disk is full
For all the error messages involving axapd.aoi, the solution would be deleting the "axapd.aoi" file from Axapta Application\Appl\Standard directory. When you restart the system again, Axapta would rebuild this index file.
If the problem persists even after deleting the file, then check whether "Open application files in exclusive mode" is enabled (Axapta Configuration Utility --> Under General Tab). If enabled, then disable it. This should solve the problem.
6. What is the "Master.aoc" file and how can I create it?
aoc stands for Axapta Object Cache. This functionality was introduced in Axapta ver 2.5, mainly intended for the best performance possible.
When running a 3-tier Navision Axapta 2.5 client, different objects are cached as we use the application. All objects are cached in memory on the client machine. When the client is shut down, the cached objects are written to disk, in a file with the extension .aoc (Axapta Object Cache). The next time a user starts the client, the executable ax32.exe checks for the .aoc file and, if one exists, it is loaded into memory. This gives us a performance gain, as we do not have to cache these objects again.
How to create a preconfigured cache file?
This is very simple. We configure a 3-tier client and go through the parts of the application all clients will use. Ex - General Ledger, Sales Order and Accounts Receivable. Doing this will create an .aoc file, with the following naming convention - "ax_AOS [Instance name]@[Hostname]_[username].aoc". In my case, I got a file like this -
ax_Annai@Himalaya_Harish.aoc. This file, if used as the preconfigured cache file, must be renamed to "master.aoc". Please note that this naming convention of the .aoc file would be different if you have configured an Object Server Cluster. (Ex - ax[cluster name][username].aoc)
When installing the Navision Axapta 2.5 client software, the setup.exe program will look for the master.aoc file in the directory where setup.exe is located. After completing the installation of the client, the setup.exe program will, if a master.aoc file exists, copy this file to the \Log directory, located in the default directory structure of the installed client.
When a user starts the client software for the first time, the executable ax32.exe will look for a cache file that belongs to the user who is currently logged in. If one exists, this cache file is loaded into memory. If none exists, ax32.exe will look in the \Log directory for the master.aoc file. If one exists, the objects in this cache file are loaded into memory.
Friday, May 21, 2010
Dynamics AX unpack type mismatch
I had this strange problem where, when a form loaded, it gave an error on the unpack method stating a type mismatch. The code was fine and I was not able to find any issues with it, until I realized that I had done an XPO import recently - which means the pack was done on the previous version, and after the import the system was trying an unpack on the new version.
In this new version a new parameter had been added which was not packed, so I had to clear the cached data, and the issue was resolved.
The cached data can be cleared from the Tools menu.
Wednesday, May 19, 2010
Trigger to Stop Database Drop
CREATE TRIGGER [ddl_trig_Prevent_Drop_Database]
ON ALL SERVER
FOR DROP_DATABASE
AS
--log attempt to drop database
DECLARE @db VARCHAR(209)
SET @db = (SELECT 'Database Dropped Attempted by ' + CONVERT(nvarchar(100), ORIGINAL_LOGIN()) +
' executing command: '+ EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','VARCHAR(229)'))
RAISERROR(@db, 16, 1) WITH LOG
--prevent drop database
ROLLBACK
GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
--turn on trigger
ENABLE TRIGGER [ddl_trig_Prevent_Drop_Database] ON ALL SERVER
--test trigger
CREATE DATABASE test1
DROP DATABASE test1
Msg 50000, Level 16, State 1, Procedure ddl_trig_Prevent_Drop_Database, Line 11
Database Dropped Attempted by TestSQLUser executing command: DROP DATABASE test1
Msg 3609, Level 16, State 2, Line 1
The transaction ended in the trigger. The batch has been aborted.
--turn off trigger
DISABLE TRIGGER [ddl_trig_Prevent_Drop_Database] ON ALL SERVER
GO
/****** Object: DdlTrigger [ddl_trig_Prevent_Drop_Database] Script Date: 01/11/2010 19:22:28 ******/
IF EXISTS (SELECT * FROM master.sys.server_triggers WHERE parent_class_desc = 'SERVER' AND name = N'ddl_trig_Prevent_Drop_Database')
DROP TRIGGER [ddl_trig_Prevent_Drop_Database] ON ALL SERVER
GO
--cleanup current errorlog
sp_cycle_errorlog
List of Actively running commands by SPID in SQL Server
The following code snippet lists all actively running commands by SPID:
select r.session_id, t.text
from sys.dm_exec_requests r
cross apply sys.dm_exec_sql_text(r.sql_handle) t
SQL Server Transaction Log
DBCC LOG(database_name [,{0|1|2|3|4}])
0 - Basic Log Information (default)
1 - Lengthy Info
2 - Very Lengthy Info
3 - Detailed
4 - Full
Example:
DBCC log (MY_DB, 4)
And it displays the following transaction log information:
* Current LSN
* Operation (string starts with LOP_XXXXX)
* Context (string starts with LCX_XXXX)
* Transaction ID
* Tag Bits (Hex)
* Log Record Fixed Length (int)
* Log Record Length (int)
* Previous LSN
* Flag Bits (Hex)
* Description (string)
* Log Record (Hex)
There is another command which can be used to read the transaction log: ::fn_dblog. I found an interesting hack by Paul S. Randal on how to find which user dropped an object using the transaction log.
SELECT [Transaction Id], [Begin Time], [UID], [SPID]
FROM ::fn_dblog (NULL, NULL)
WHERE [Transaction Name] = 'DROPOBJ'
The (NULL, NULL) is the starting LSN and ending LSN to process - NULL means process everything available.
Now, this only shows us that a table was dropped, not which table it was. There's no way to get the name of the table that was dropped, only the object ID - so you'll need to have some other way to determine what the table ID is if there are multiple table drops and only one of them is malignant.
For SQL Server 2000, the code to find which object ID we're talking about is as follows (dropping the Transaction Id into the WHERE clause):
SELECT DISTINCT [Object Name] FROM ::fn_dblog (NULL, NULL)
WHERE [Transaction Id] = '0000:000000e0'
AND [Context] = 'LCX_IAM';
GO
Object Name
--------------------
(2009058193)
The object ID in parentheses is the ID of the table that was dropped.
For SQL Server 2005 and 2008, the code is as follows (with the same Transaction Id substitution):
SELECT TOP (1) [Lock Information] FROM ::fn_dblog (NULL, NULL)
WHERE [Transaction Id] = '0000:00000587'
AND [Lock Information] LIKE '%SCH_M OBJECT%';
GO
Lock Information
--------------------------------------------
ACQUIRE_LOCK_SCH_M OBJECT: 8:2073058421:0
The 8:2073058421 is the database ID and object ID of the table that was dropped.
PS: If you find that you don't get enough info from ::fn_dblog, try turning on trace flag 2537. It allows the function to look at *all* available log, not just the active log.
Install DAX without Domain Controller
The DAX installer compares the current user's domain with the local machine's NetBIOS name. If they match, the installer assumes it has been run under a local user account and throws an error:
"You are logged on with a local computer account. You must be logged on with a domain account to run Microsoft Dynamics AX Setup."
To avoid this check, change the local machine's NetBIOS name in the registry (HKLM\System\CurrentControlSet\Control\ComputerName\ActiveComputerName) AND after that change the UserDnsDomain environment variable to the same value before running the setup.
You can run cmd.exe, set the environment variable, and run the setup from the command line so that it can "see" the changed value. After installation, change ActiveComputerName in the registry back to its original value.
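From an elevated command prompt, the steps above might look roughly like this (a hypothetical Windows cmd sketch; MYFAKEDOMAIN is a made-up value, and you should note the original ComputerName value before changing it):

```
rem Hypothetical sketch for an elevated Windows command prompt.
rem MYFAKEDOMAIN is a made-up value; back up the original name first.
reg add "HKLM\System\CurrentControlSet\Control\ComputerName\ActiveComputerName" /v ComputerName /t REG_SZ /d MYFAKEDOMAIN /f
set USERDNSDOMAIN=MYFAKEDOMAIN
setup.exe
rem Afterwards, restore the original ComputerName value the same way.
```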
Saturday, May 15, 2010
Transaction log not truncating
I landed in this strange problem where I was not able to truncate the log of a database. I tried BACKUP LOG and then DBCC SHRINKFILE a number of times with no luck.
Then I remembered that the database was being replicated using transactional log shipping, so maybe there was an issue there. I started to look for any open transactions which might be holding up the transaction log, using the command:
DBCC OPENTRAN(db_name)
Transaction information for database 'BATEELNAV5'.
Replicated Transaction Information:
Oldest distributed LSN : (0:0:0)
Oldest non-distributed LSN : (28941:199:1)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
This message confirmed that there was a replication transaction holding up the log. I got rid of it using:
use [database_name]
go
sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
go
I was then able to truncate the log successfully.
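After sp_repldone released the pending replicated transactions, the truncation itself was the usual pair of commands; a sketch with placeholder database, path, and logical file names:

```sql
-- Back up (and thereby truncate) the log, then shrink the log file.
-- Database name, backup path, and logical file name are placeholders.
BACKUP LOG [MyDatabase] TO DISK = N'D:\Backup\MyDatabase_log.trn';
DBCC SHRINKFILE (MyDatabase_log, 100);  -- target size in MB
```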
Monday, April 19, 2010
AX 2009 Document Management FilePath
We are doing this project where the attachments must be stored in a folder on the HDD. That is a standard configurable parameter in AX; the complication is that in this case we expect the attachment volume to be huge, so much so that it will exceed the storage capacity of the server.
Well, in the modern world no one changes the system to accommodate such a scenario, because the easiest option is to add storage. In this case, however, the storage on the server cannot be expanded any further, and once the disk fills up the only options are to get a new server with more HDD slots or to move the files onto a different server.
AX allows us to configure a folder path for each file type, but the issue is that it only stores the file name in the transaction tables; the path is maintained in the parameter table. This means that if the path is changed at a later date, AX will no longer be able to trace the old attachments, because they were created under the old path.
I found where my attachments for a customer were being stored and realized that the path was being read from the parameters table, with the full file name being built in a method on the DocuRef table called completeFilename.
select DV.FileName, DV.FileType, DR.* from docuref DR
inner join docuvalue DV
on DR.ValueRecId = DV.RecId
where RefTableId = 77
and PartyId = 191
So the solution in this case is to store the file path with each file when the attachment is made, so that the system remembers multiple paths for the files.
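A minimal sketch of the idea in SQL terms, assuming a new (hypothetical) FilePath column is added to docuvalue and backfilled from the current parameter setting before the parameter is repointed:

```sql
-- Hypothetical: FilePath is a new column added to docuvalue.
-- Backfill existing rows with the current parameter path so old
-- attachments stay reachable after the parameter is changed.
UPDATE DV
SET    DV.FilePath = 'D:\AxArchive\'   -- current path from the parameter table
FROM   docuvalue DV
WHERE  DV.FilePath IS NULL
   OR  DV.FilePath = ''
```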
Saturday, April 10, 2010
Create Clustered Index Script
I had this requirement where we identified that a number of tables in Axapta were missing a clustered index, which was causing performance issues. Microsoft recommended creating clustered indexes on all these tables; this is what we did.
--list of all tables missing a clustered index
select SO.name, SI.*
from sysobjects SO
left join sysindexes SI
on SI.id = SO.id
and SI.indid = 1
where SI.indid is null
and SO.type = 'U'
--script to generate the script
select 'create clustered index [IDX_' + SO.name + '] on [' + SO.name + '] (recid, dataareaid )'
from sysobjects SO
left join sysindexes SI
on SI.id = SO.id
and SI.indid = 1
where SI.indid is null
and SO.type = 'U'
Another important query lists all the columns in an index, in this case the clustered index:
--list of columns in the clustered index
select OBJECT_NAME(SI.id) TableName, SI.name IndexName, SIK.indid IndexId, SC.name ColName
from sysindexes SI
inner join sysindexkeys SIK
on SI.id = SIK.id
and SI.indid = SIK.indid
inner join syscolumns SC
on SC.id = SIK.id
and SC.colid = SIK.colid
where SIK.id = OBJECT_ID( 'ContactMapping' )
and SIK.indid = 1
order by SIK.indid, SC.colorder
Thursday, April 08, 2010
Space used by each Table in SQL
SQL 2005 Version
================
BEGIN try
DECLARE @table_name VARCHAR(500) ;
DECLARE @schema_name VARCHAR(500) ;
DECLARE @tab1 TABLE(
tablename VARCHAR (500) collate database_default
, schemaname VARCHAR(500) collate database_default
);
DECLARE @temp_table TABLE (
tablename sysname
, row_count INT
, reserved VARCHAR(50) collate database_default
, data VARCHAR(50) collate database_default
, index_size VARCHAR(50) collate database_default
, unused VARCHAR(50) collate database_default
);
INSERT INTO @tab1
SELECT t1.name
, t2.name
FROM sys.tables t1
INNER JOIN sys.schemas t2 ON ( t1.schema_id = t2.schema_id );
DECLARE c1 CURSOR FOR
SELECT t2.name + '.' + t1.name
FROM sys.tables t1
INNER JOIN sys.schemas t2 ON ( t1.schema_id = t2.schema_id );
OPEN c1;
FETCH NEXT FROM c1 INTO @table_name;
WHILE @@FETCH_STATUS = 0
BEGIN
SET @table_name = REPLACE(@table_name, '[','');
SET @table_name = REPLACE(@table_name, ']','');
-- make sure the object exists before calling sp_spaceused
IF EXISTS(SELECT OBJECT_ID FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(@table_name))
BEGIN
INSERT INTO @temp_table EXEC sp_spaceused @table_name, false ;
END
FETCH NEXT FROM c1 INTO @table_name;
END;
CLOSE c1;
DEALLOCATE c1;
SELECT t1.*
, t2.schemaname
FROM @temp_table t1
INNER JOIN @tab1 t2 ON (t1.tablename = t2.tablename )
ORDER BY schemaname,tablename;
END try
BEGIN catch
SELECT -100 AS l1
, ERROR_NUMBER() AS tablename
, ERROR_SEVERITY() AS row_count
, ERROR_STATE() AS reserved
, ERROR_MESSAGE() AS data
, 1 AS index_size, 1 AS unused, 1 AS schemaname
END catch
Previous Versions of SQL
========================
DECLARE @table_name VARCHAR(500)
DECLARE @schema_name VARCHAR(500)
DECLARE @tab1 TABLE(
tablename VARCHAR (500) collate database_default
,schemaname VARCHAR(500) collate database_default
)
CREATE TABLE #temp_Table (
tablename sysname
,row_count INT
,reserved VARCHAR(50) collate database_default
,data VARCHAR(50) collate database_default
,index_size VARCHAR(50) collate database_default
,unused VARCHAR(50) collate database_default
)
INSERT INTO @tab1
SELECT Table_Name, Table_Schema
FROM information_schema.tables
WHERE TABLE_TYPE = 'BASE TABLE'
DECLARE c1 CURSOR FOR
SELECT Table_Schema + '.' + Table_Name
FROM information_schema.tables t1
WHERE TABLE_TYPE = 'BASE TABLE'
OPEN c1
FETCH NEXT FROM c1 INTO @table_name
WHILE @@FETCH_STATUS = 0
BEGIN
SET @table_name = REPLACE(@table_name, '[','');
SET @table_name = REPLACE(@table_name, ']','');
-- make sure the object exists before calling sp_spaceused
IF EXISTS(SELECT id FROM sysobjects WHERE id = OBJECT_ID(@table_name))
BEGIN
INSERT INTO #temp_Table EXEC sp_spaceused @table_name, false;
END
FETCH NEXT FROM c1 INTO @table_name
END
CLOSE c1
DEALLOCATE c1
SELECT t1.*
,t2.schemaname
FROM #temp_Table t1
INNER JOIN @tab1 t2 ON (t1.tablename = t2.tablename )
ORDER BY schemaname,t1.tablename;
DROP TABLE #temp_Table
Wednesday, April 07, 2010
Inconsistencies in G/L Entry posting
"The transaction cannot be completed because it will cause inconsistencies in the G/L Entry table. Check where and how the CONSISTENT function is used in the transaction to find the reason for the error. Contact your system manager if you need assistance." Parts of the program mark tables as inconsistent during very comprehensive tasks, such as posting. This prevents data from being updated incorrectly.
1. Rounding errors are one of the common causes of an inconsistent line.
To find the transaction causing the issue, there is a solution from Rashed Amini, courtesy of mibuso:
1. Create the following codeunit:
OBJECT Codeunit 50000 Single Instance CU
{
OBJECT-PROPERTIES
{
Date=10/11/07;
Time=[ 2:50:02 PM];
Modified=Yes;
Version List=MOD01;
}
PROPERTIES
{
SingleInstance=Yes;
OnRun=BEGIN
IF NOT StoreToTemp THEN BEGIN
StoreToTemp := TRUE;
END ELSE
FORM.RUNMODAL(0,TempGLEntry);
END;
}
CODE
{
VAR
TempGLEntry@1000000000 : TEMPORARY Record 17;
StoreToTemp@1000000001 : Boolean;
PROCEDURE InsertGL@1000000000(GLEntry@1000000000 : Record 17);
BEGIN
IF StoreToTemp THEN BEGIN
TempGLEntry := GLEntry;
IF NOT TempGLEntry.INSERT THEN BEGIN
TempGLEntry.DELETEALL;
TempGLEntry.INSERT;
END;
END;
END;
BEGIN
END.
}
}
2. In codeunit 12, add the following code to the FinishCodeunit function:
FinishCodeunit()
WITH GenJnlLine DO BEGIN
IF GLEntryTmp.FIND('-') THEN BEGIN
REPEAT
GLEntry := GLEntryTmp;
IF GLSetup."Additional Reporting Currency" = '' THEN BEGIN
GLEntry."Additional-Currency Amount" := 0;
GLEntry."Add.-Currency Debit Amount" := 0;
GLEntry."Add.-Currency Credit Amount" := 0;
END;
GLEntry.INSERT;
//MOD01 Start
SingleCU.InsertGL(GLEntry);
//MOD01 End
IF NOT InsertFAAllocDim(GLEntry."Entry No.") THEN
3. Once you've made the changes, run the SingleInstanceCU once.
4. Then do whatever you do to get the consistency error.
5. Then run the SingleInstanceCU again.
6. You'll see a list of G/L lines showing why the transaction is not balanced.
Saturday, April 03, 2010
CONTEXT_INFO Connection wide variable
The need for a connection-wide variable at the database level arose when I was creating an interface between Navision and MSCRM.
The requirement was that when an item was created in Navision it was pushed to CRM, and when any changes were made in CRM they were reflected back in Navision. We used triggers and things were fine, but because the interface was two-way, the trigger had to ignore updates made during the sync process; only entries made by a user on the front end should be captured for sync.
This is what we did. The procedure that updates the CRM item table sets CONTEXT_INFO as shown below:
--context_info is used in the target tables triggers to ignore updates
--else it results in endless updates
declare @CONTEXT_INFO varbinary(128)
select @CONTEXT_INFO = CAST ('SynchronizeNAV' as varbinary(128) )
set CONTEXT_INFO @CONTEXT_INFO
In the trigger on the item table in CRM we checked whether the update/insert was being made by a user or by the sync procedure, as below:
declare @sourceProc varchar(20)
select @sourceProc = cast ( context_info() as varchar)
set @sourceProc = ltrim( rtrim( @sourceProc ) )
if ( @sourceProc = 'SynchronizeNAV' )
return
One thing to ensure while using CONTEXT_INFO is that once you are done it should be reset, else it will keep the set value for the life of the connection. Thus at the end of the process we use (SET CONTEXT_INFO does not accept NULL, so a zero value is used):
set context_info 0x0
Friday, April 02, 2010
Thursday, April 01, 2010
Get a list of Objects modified after a given date from AOT
1. Go to the AOT - expand "System Documentation" - expand "Tables" - open "UtilElements" (or "UtilIdElements") in the table browser.
2. Right-click the "UtilLevel" field - select Filter By Field - type "usr" - press OK.
3. Right-click the "RecordType" field - select Filter By Field - type "SharedProject" - press OK.
4. Right-click the "CreatedDate" field - select Filter By Field - type "01-Jan-2010.."
Oracle Linked Server
Following are the steps involved in creating a linked server and working with stored procedures on the Oracle Database
1. Install the Oracle Client software on the server
2. Create a linked server as follows
EXEC sp_addlinkedserver
'OracleLinkedServer', 'Oracle',
'MSDAORA', 'OracleServer'
The name of the linked server is OracleLinkedServer.
The second parameter, the product name (Oracle), is optional. The third parameter specifies the OLE DB provider: MSDAORA is the name of the Microsoft OLE DB Provider for Oracle.
The fourth parameter is the data source name of the Oracle server. This is the TNS name of the Oracle server, which should respond when pinged with the TNSPING utility (installed along with the Oracle client software).
3. Add Login information for the linked server
EXEC sp_addlinkedsrvlogin
'OracleLinkedServer', false,
'SQLuser', 'OracleUser',
'OraclePwd'
The first parameter, OracleLinkedServer, specifies the name of the linked server you created.
The second parameter determines the login to be used on the remote system. A value of true indicates that the current SQL Server login will be used to connect to the linked server; this requires that the logins on the two database servers match, which is typically not the case. A value of false means you'll supply the remote login.
The third parameter specifies the name of a SQL Server login that this remote login maps to. A value of NULL indicates that this remote login will be used for all connections to the linked Oracle server. If the Oracle system uses Windows authentication, you can use the keyword domain\ to specify a Windows login.
The fourth and fifth parameters supply login and password values for the Oracle system.
4. To be able to use stored procedures on Oracle, the RPC options must be enabled on the linked server, as follows:
sp_serveroption 'ORALINK', 'rpc out', 'true'
sp_serveroption 'ORALINK', 'rpc', 'true'
5. To access the tables on a linked server, use the four-part naming syntax: linked_server_name.catalog_name.schema_name.table_name.
For example, to query the sample Oracle Scott database, you'd enter the statement
SELECT * FROM OracleLinkedServer..SCOTT.EMP
(Note the double dots after the server name: the catalog/database name is not required and the default is used, because in Oracle each database is treated as a server with its own TNS name.)
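When the four-part syntax gives trouble, OPENQUERY is an alternative worth knowing: it ships the query text to Oracle as-is, so Oracle does the filtering instead of SQL Server pulling the whole table across:

```sql
-- Pass-through query: the text inside OPENQUERY is executed by Oracle
-- itself, which is often faster than four-part names for large tables.
SELECT * FROM OPENQUERY(OracleLinkedServer, 'SELECT * FROM SCOTT.EMP')
```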
6. Test a stored procedure:
declare @lstatus varchar(240)
set @lstatus = ''
execute ( 'BEGIN JNC_RMS_API.CHECK_CONNECTIVITY( ? ); END;', @lstatus output) at ORALINK
print @lstatus
Please ensure that strings being passed as output parameters are initialized with a default value (set @lstatus = ''), otherwise the procedure calls do not execute.
Thursday, March 25, 2010
AX Workflow Error
1- Open cmd as administrator (right-click, Run as administrator).
2- Go to the folder C:\inetpub\AdminScripts and run: cscript adsutil.vbs get W3SVC/APPPOOLS/Enable32BitAppOnWin64
3- If this node is True, set it to False: cscript adsutil.vbs set W3SVC/APPPOOLS/Enable32BitAppOnWin64 False
4- Restart IIS (iisreset) and retry the installation.
Wednesday, November 11, 2009
Keyword Options for X++ select Statements
firstfast - Fetches the first selected record faster than the remaining selected records.
firstonly - Returns only the first selected record.
forupdate - Selects records for updating.
nofetch - Specifies that the Dynamics AX runtime should not execute the statement immediately because the records are required only by some other operation.
forceplaceholders - Forces the Dynamics AX runtime to generate a query with placeholder field constraints, e.g. select * from myTable where myField1=?. Database query plans are reused when this option is specified. This is the default for select statements that do not join table records. Cannot be used with the forceliterals keyword.
forceliterals - Forces the Dynamics AX runtime to generate a query with literal field constraints, e.g. select * from myTable where myField1='value'. Database query plans are not reused when this option is specified. Cannot be used with the forceplaceholders keyword.
forceselectorder - Forces the Microsoft SQL Server query processor to access tables in the order in which they are specified in the query. (No effect on Oracle.)
forcenestedloop - Forces the SQL Server query processor to use a nested-loop algorithm for table join operations. Other join algorithms, such as hash-join and merge-join, are therefore not considered by the query processor.
reverse - Returns records in reverse of the select order.