Sunday, August 30, 2020

Getting rid of Unidentified Network

 I had a bad network experience for a few days: the network was slow on my machine and I was losing the connection a number of times a day.

Luckily, I found a quick fix for the problem. Open an elevated Command Prompt and enter the following command:

C:\>netsh winsock reset

After running this command, restart your computer and hopefully the problem will be resolved.

Friday, May 15, 2020

X++ SysOperation Framework

Firstly, what is a framework:
To understand frameworks we first need to understand libraries. Libraries are a bunch of code that is pre-written and packaged to save our time. When we need to do a task, we just call the appropriate library and it does the job for us. We don’t need to know the details of how the functions inside the libraries work, we just need to know how to call them.

Frameworks are like libraries in that they make our job easier, but we can't call a framework the way we call a library. To use a framework we first have to learn it: the framework gives us a structure in which to place and call our code, and not the other way round.

In simple terms, a framework is to structure what a library is to code: using a library we reuse code, and using a framework we reuse a class structure.

When we work with X++, there is a set of framework classes that is used all over X++ development.

SysOperation framework:  
The SysOperation framework is used whenever a user interface triggers a certain piece of functionality. It is quite close to the MVC pattern and works on similar principles of segregating code to remove dependencies.

The Model : Data contract
This is the model class from the MVC pattern, in which we define the attributes we need for our operation, commonly set as parameters by the user in a dialog. A regular class is identified as a SysOperation data contract class by adding the DataContractAttribute attribute to its declaration.

Additionally, if we want a set of helper methods to be available to us, we can also extend the SysOperationDataContractBase base class. With this class, we can define how our basic dialog will look to the user: we can define labels, groups, sizes and types of the parameters.
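As a rough sketch (the class and member names here are illustrative, not from a real project), a minimal data contract could look like this:

```xpp
// Hypothetical data contract: the class and parameter names are illustrative.
[DataContractAttribute]
class TutDemoDataContract extends SysOperationDataContractBase
{
    TransDate   fromDate;
    CustAccount custAccount;

    // Each parameter is exposed through a parm method tagged as a data member.
    [DataMemberAttribute, SysOperationLabelAttribute(literalStr("From date"))]
    public TransDate parmFromDate(TransDate _fromDate = fromDate)
    {
        fromDate = _fromDate;
        return fromDate;
    }

    [DataMemberAttribute]
    public CustAccount parmCustAccount(CustAccount _custAccount = custAccount)
    {
        custAccount = _custAccount;
        return custAccount;
    }
}
```

The framework builds a default dialog from these data members, using the labels and types declared on the parm methods.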

The View : UI Builder
This is an optional class and is the view part of the MVC pattern. Generally, AX creates the dialog for us with a standard view; however, if we are not happy with the standard view or want to extend it, we use the UI builder class.

The Controller : Controller 
The controller orchestrates the whole operation. It holds information about the operation, such as whether it should show a progress form, whether it should show the dialog, and its execution mode (asynchronous or not). To create a controller class, you extend SysOperationServiceController.

Service
While using MVC, we have to understand that not everything fits it perfectly, and as per OOP principles we have to keep the dependencies between the classes minimal. Technically, one could put the business logic in the controller, but what if the same business logic has to be used outside the controller, without any user interaction? Hence it is a good idea to keep the business logic outside the controller, and that is why we have the service classes.

The service class stores the business logic. To create a service class, we extend the SysOperationServiceBase class. When constructing your controller, you indicate which class holds the operation that the controller will trigger.
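Putting the pieces together, a minimal service and controller pair might look like the sketch below. The class names are again illustrative; only the base classes and the SysOperation APIs (classStr, methodStr, startOperation) are the real framework pieces:

```xpp
// Hypothetical service: holds the business logic, independent of any UI.
class TutDemoService extends SysOperationServiceBase
{
    public void process(TutDemoDataContract _contract)
    {
        // Business logic goes here; the contract carries the user's parameters.
        info(strFmt("Processing from %1", _contract.parmFromDate()));
    }
}

// Hypothetical controller: points at the service method to run.
class TutDemoController extends SysOperationServiceController
{
    public static void main(Args _args)
    {
        TutDemoController controller = new TutDemoController(
            classStr(TutDemoService),
            methodStr(TutDemoService, process),
            SysOperationExecutionMode::Synchronous);

        controller.parmDialogCaption("Demo operation");
        controller.startOperation(); // shows the dialog, then runs the service
    }
}
```

Because the business logic lives in TutDemoService, it can also be called directly, without the controller or any dialog.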

Monday, April 27, 2020

Check for Localization

Localization needs can break our existing code. Sometimes it is required to consider the localized configuration for a given region and then take some actions accordingly.

Given below is an example where we are expected to check whether the current legal entity is the localized legal entity for India.

Use the macro below in the declaration section of the object:
#ISOCountryRegionCodes

Now the macro #isoIN is available and can be used as follows:
SysCountryRegionCode::isLegalEntityInCountryRegion([#isoIN]);
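For instance, a sketch of branching on the India localization inside a method (the class and method names are illustrative):

```xpp
// Hypothetical class showing the localization check in context.
class TutLocalizationDemo
{
    #ISOCountryRegionCodes   // makes #isoIN (and the other ISO codes) available

    public void post()
    {
        if (SysCountryRegionCode::isLegalEntityInCountryRegion([#isoIN]))
        {
            // India-specific handling, e.g. local tax calculations
        }
        else
        {
            // default handling for all other legal entities
        }
    }
}
```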

Tuesday, April 07, 2020

Dimension Tables

Step 1: Let's take a simple scenario of creating 2 dimensions or attributes.
  1. D1_Location
  2. D2_Department


Step 2: These attributes would then have values
    1.1  DXB
    1.2  IND

    2.1  SALES
    2.2  OPS
    2.3  ADMIN

Step 3: These dimensions can be combined to create attribute sets. A set decides the dimensions involved and their sequence.
    3.1  SET1: In this set D1_Location dimension is first and D2_Department dimension is second in sequence.
          3.1.1  D1_Location
          3.1.2  D2_Department

    3.2   SET2: In this set the D2_Department dimension is first and D1_Location is second in sequence. 
          3.2.1  D2_Department
          3.2.2  D1_Location


Step 4: Based on the sets defined above a combination of attribute values could be created
  4.1    SET1
     4.1.1   DXB+SALES
     4.1.2   DXB+OPS
     4.1.3   DXB+ADMIN


  4.2  SET2
    4.2.1    IND+SALES
    4.2.2    IND+OPS
    4.2.3    IND+ADMIN


Firstly, dimensions are of two types:
Lookup dimensions: These are lookups to an existing master in AX. The dimensions are stored in two tables: DimensionAttribute (for the dimension name) and DimensionAttributeValue (for the dimension values; the EntityInstance field in this table is the relation to the original value table).
Custom dimensions: These are custom-defined values and do not exist elsewhere within AX. They are stored in two tables: FinancialTagCategory (for the dimension name) and DimensionFinancialTag (for the dimension values).


When the above structure has to be stored in AX tables, it is divided into two parts: part 1 stores the schema and part 2 stores the values.

Part 1 : the details about the dimensions are stored in
  1. DimensionAttribute : this table is the dimension master (D1_Location and D2_Department). Each dimension has one record in this table. (Step 1)
  2. DimensionAttributeSet : this table maintains the dimension sets. (Step 3)
  3. DimensionAttributeSetItem : this table is the child table of DimensionAttributeSet and stores the individual attributes in a set. (Steps 3.1.1 to 3.2.2)

 Part 2 : The *Value counterparts for the above dimensions are:
  1. DimensionAttributeValue : the individual values DXB, IND, SALES, OPS, ADMIN (Steps 1.1 to 2.3). If the values are looked up from another table, the EntityInstance field in this table stores the RecId of the actual value from the original table when the dimension is created.
  2. DimensionAttributeValueSet : the value combinations corresponding to each set. A hash value is generated for each combination of values (a hash is a numeric equivalent of a string).
  3. DimensionAttributeValueSetItem : the individual values for each attribute of the set.
  4. FinancialTagCategory : this table stores the records of custom financial dimensions.
  5. DimensionFinancialTag : this table stores the custom financial dimension values.

The combination of the ledger account with the dimension attributes is stored in a further set of tables referred to as the ValueGroup tables. The nomenclature is justified, as a value group is a group created to store values (amounts).
  1. DimensionAttributeValueCombination : stores the combination of the ledger account and the dimension attributes
  2. DimensionAttributeValueGroup : stores the dimension groups
  3. DimensionAttributeValueGroupCombination : stores the relation between DimensionAttributeValueGroup and DimensionAttributeValueCombination
  4. DimensionAttributeLevelValue : stores the dimension values of a ledger dimension

Consider the following SQL statement, which returns each combination ID, attribute name and attribute value:

select DAVSI.DimensionAttributeValueSet, DA.Name, DAVSI.DisplayValue
from DimensionAttributeValueSetItem DAVSI
inner join DimensionAttributeValue DAV
    on DAV.RecID = DAVSI.DimensionAttributeValue
inner join DimensionAttribute DA
    on DA.RecID = DAV.DimensionAttribute


Consider the SQL statement below, which returns each combination ID for a ledger dimension, with the attribute name and attribute value:

select DAVGI.DimensionAttributeValueCombination, DA.Name, DALV.DisplayValue
from dimensionAttributeValueGroupCombination DAVGI
inner join dimensionAttributeLevelValue  DALV
    on DALV.DimensionAttributeValueGroup = DAVGI.DimensionAttributeValueGroup
inner join dimensionAttributeValue DAV
    on DAV.RECID = DALV.DimensionAttributeValue
inner join DimensionAttribute DA
    on DA.RecID = DAV.DimensionAttribute
order by DAVGI.DimensionAttributeValueCombination, DALV.Ordinal


Monday, April 06, 2020

Partially disable dimensionDefaultingController

The requirement was to restrict the dimension selection on the dimension defaulting controller based on certain business rules.

I had a requirement where the default dimensions on the employee master had to be restricted to allow entry for only a subset of the total dimensions. The need was to allow selection of only D1_Division and D3_ConsGroup on the employee master and disable the rest for data entry.


The code for this has to be written in the active() method of the relevant data source on the form.


    DimensionAttributeSetStorage    dimAttrSetStorage;
    DimensionAttribute              dimAttribute;
    DimensionEnumeration            dimEnumeration;

    int ret;

    ret = super();

    dimensionDefaultingController.activated();    
    
    //The dimension controller to be locked to allow only certain dimensions to be entered.
    dimAttrSetStorage = new DimensionAttributeSetStorage();
    // D1_Division
    dimAttribute = DimensionAttribute::findByName('D1_Division');
    if(dimAttribute)
    {
        dimAttrSetStorage.addItem( dimAttribute.RecId, dimAttribute.HashKey, NoYes::Yes );
    }
    // D3_ConsGroup
    dimAttribute = DimensionAttribute::findByName('D3_ConsGroup');
    if(dimAttribute)
    {
        dimAttrSetStorage.addItem( dimAttribute.RecId, dimAttribute.HashKey, NoYes::Yes );
    }

    dimEnumeration = dimAttrSetStorage.save();
    dimensionDefaultingController.setEditability( true, dimEnumeration );


Wednesday, March 11, 2020

Use mapped network drive in SQL Server

To use a mapped drive in SQL Server, make sure that the mapping is done using the xp_cmdshell procedure.

Before the extended procedure can be used, it has to be enabled as shown below:

EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO

EXEC sp_configure 'xp_cmdshell',1
GO
RECONFIGURE
GO

Thereafter, map the drive so that SQL Server recognizes it:

EXEC XP_CMDSHELL 'net use Z: \\192.168.100.36\nansql'

Tuesday, October 22, 2019

AX2012 cross company query challenge and workaround

I had an encounter with cross-company queries in AX2012, and listed below are the findings. When we issue the crosscompany clause, a few important things happen:

1. The SQL query at the backend is converted into a cross join for most of the joins.
2. The SQL query at the backend automatically gets conditions on the partitions and dataAreaIds, to ensure that the cross joins don't mix up data across companies.

Take the example of a select statement as below:

SELECT crosscompany count(RecId) FROM TSTimesheetLine
join tsTimesheetTable
where TSTimesheetTable.TimesheetNbr == TSTimesheetLine.TimesheetNbr
    && tsTimesheetLine.ProjId != 'Z905'
join TSTimesheetLineWeek
where TSTimesheetLineWeek.TSTimesheetLine == TSTimesheetLine.RecId
&& ( ( ( TsTimesheetLineWeek.Hours[0] + TsTimesheetLineWeek.Hours[1] + TsTimesheetLineWeek.Hours[2] + TsTimesheetLineWeek.Hours[3] + TsTimesheetLineWeek.Hours[4] + TsTimesheetLineWeek.Hours[5] + TsTimesheetLineWeek.Hours[6] + TsTimesheetLineWeek.Hours[7] ) > 0 ) )
notExists join supportHrsView  
where supportHrsView.TsTimesheetLineRef == TSTimesheetLine.RecId 

Now, my requirement is to match the notExists join condition across dataAreaIds, as supportHrsView is a shared table for me. Basically, I don't want the compiler to apply a dataAreaId condition on this table.

This statement is translated to SQL as follows:
SELECT COUNT(T1.RECID) 
FROM TSTIMESHEETLINE T1 
CROSS JOIN TSTIMESHEETTABLE T2 
CROSS JOIN TSTIMESHEETLINEWEEK T3 
WHERE (T1.PARTITION=@P1) 
AND ((T2.PARTITION=@P2) AND ((T2.TIMESHEETNBR=T1.TIMESHEETNBR AND (T2.DATAAREAID = T1.DATAAREAID) AND (T2.PARTITION = T1.PARTITION)) AND (T1.PROJID<>@P3))) 
AND ((T3.PARTITION=@P4) AND ((T3.TSTIMESHEETLINE=T1.RECID AND (T3.DATAAREAID = T1.DATAAREAID) AND (T3.PARTITION = T1.PARTITION)) AND ((((((((T3.HOURS+T3.HOURS)+T3.HOURS2_)+T3.HOURS3_)+T3.HOURS4_)+T3.HOURS5_)+T3.HOURS6_)+T3.HOURS7_)>@P5))) 
AND NOT (EXISTS 
(
SELECT 'x' FROM AFZSUPPORTHRSVIEW T4 
WHERE (
(T4.PARTITION=@P6) 
AND (T4.TSTIMESHEETLINEREF=T1.RECID AND (T4.DATAAREAID = T1.DATAAREAID) AND (T4.PARTITION = T1.PARTITION))
)
)

)

Please note the following facts:
1. All the joins in the query are translated to cross joins.
2. There are four sets of data, aliased T1, T2, T3 and T4.

As we know, a cross join results in the Cartesian product of the two tables, which can produce a huge result set. Hence the compiler ensures that data is not mixed up between the different sets by enforcing partition and dataAreaId conditions, even though they are not explicitly provided in the select statement:
1. A partition condition is applied for each of the result sets T1 to T4.
2. A dataAreaId condition is applied for each result set, where the dataAreaId of T1 is applied on T2, T3 and T4.


Coming back to my requirement: I want a way to ensure that the dataAreaId condition applied on T4 (T4.DATAAREAID = T1.DATAAREAID) is skipped. The workaround is to apply an operator on the join with supportHrsView, so I change the condition in the select query as follows:
where supportHrsView.TsTimesheetLineRef == TSTimesheetLine.RecId + 0

Now the compiler skips the forced dataAreaId and partition join, and the SQL query issued to SQL Server is as follows:

SELECT COUNT(T1.RECID) 
FROM TSTIMESHEETLINE T1 
CROSS JOIN TSTIMESHEETTABLE T2 
CROSS JOIN TSTIMESHEETLINEWEEK T3 
WHERE (T1.PARTITION=@P1) 
AND ((T2.PARTITION=@P2) AND ((T2.TIMESHEETNBR=T1.TIMESHEETNBR AND (T2.DATAAREAID = T1.DATAAREAID) AND (T2.PARTITION = T1.PARTITION)) AND (T1.PROJID<>@P3))) 
AND ((T3.PARTITION=@P4) AND ((T3.TSTIMESHEETLINE=T1.RECID AND (T3.DATAAREAID = T1.DATAAREAID) AND (T3.PARTITION = T1.PARTITION)) AND ((((((((T3.HOURS+T3.HOURS)+T3.HOURS2_)+T3.HOURS3_)+T3.HOURS4_)+T3.HOURS5_)+T3.HOURS6_)+T3.HOURS7_)>@P5))) 
AND NOT (EXISTS 
(
SELECT 'x' 
FROM AFZSUPPORTHRSVIEW T4 
WHERE ((T4.PARTITION=@P6) AND (T4.TSTIMESHEETLINEREF=(T1.RECID+@P7)))
)
)

Please note that the condition T4.DATAAREAID = T1.DATAAREAID is no longer applied, and we get the desired results.

Saturday, September 21, 2019

Merge Queries

I had a requirement where two queries, created using the Dynamics query framework classes, had to be merged together. My business case was as follows.

Business case: A report had to be run for a selected number of employees (query 1). Within this selected set of employees, certain data was required for a further, finer selection of employees (query 2). Query 2 was a subset of the employees in query 1, such as managers and part-time employees.

It was required that query 2 be appended to the original query 1 and the filters copied, so that the results could be achieved.

if (filterQuery != null)
{
    // Loop over all the data sources in the source query and look for a common
    // data source in the target query. If a common data source is found, merge
    // the ranges. If not, look for the parent; if a common parent is found,
    // add the data source below the right parent and merge the ranges.
    for (int ctr = 1; ctr <= filterQuery.dataSourceCount(); ctr++)
    {
        // Check if a common data source/table exists between the two queries.
        qdbCurrentSource = filterQuery.dataSourceNo(ctr);
        qdbCommon        = finalQuery.dataSourceTable(qdbCurrentSource.table());

        if (!qdbCommon) // if a common table is not found then look for a parent
        {
            parentTable = qdbCurrentSource.parentDataSource().file();

            if (parentTable)
            {
                qdbCommonParent = finalQuery.dataSourceTable(parentTable);
                if (qdbCommonParent) // if the parent is found then add the current data source under it
                {
                    qdbCommon = qdbCommonParent.addDataSource(qdbCurrentSource.table());
                    qdbCommon.fetchMode(QueryFetchMode::One2One); // IMPORTANT: without this the query can get separated

                    for (int intLinkCtr = 1; intLinkCtr <= filterQuery.dataSourceNo(ctr).linkCount(); intLinkCtr++)
                    {
                        link = filterQuery.dataSourceNo(ctr).link(intLinkCtr);
                        if (link.relatedField() == 0)
                        {
                            qdbCommon.relations(true); // this only works between the parent and the current data source
                        }
                        else
                        {
                            qdbCommon.joinMode(filterQuery.dataSourceNo(ctr).joinMode());
                            qdbCommon.addLink(link.field(), link.relatedField());
                        }
                    } // link counter

                    SysQuery::mergeRanges(finalQuery, filterQuery, ctr, false, true);
                    SysQuery::mergeFilters(filterQuery, finalQuery, ctr, true, false);
                } // common parent
            }

            if (!qdbCommon)
            {
                qdbCommon = finalQuery.addDataSource(filterQuery.dataSourceNo(ctr).table());
                qdbCommon.relations(true);
            }
        }
        else
        {
            SysQuery::mergeRanges(finalQuery, filterQuery, ctr, false, true);
            SysQuery::mergeFilters(filterQuery, finalQuery, ctr, true, false);
        }
    }

    SysQuery::copyDynalinks(finalQuery, filterQuery);
} // filterQuery != null

Tuesday, November 27, 2018

Performance Monitor counters for SQL

SQL Server works with objects and counters, with each object comprising one or more counters. For example, the SQL Server Locks object has counters called Number of Deadlocks/sec or Lock Timeouts/sec.

Access Methods – Full scans/sec: higher numbers (> 1 or 2) may mean you are not using indexes and resorting to table scans instead.

Buffer Manager – Buffer Cache hit ratio: This is the percentage of requests serviced by data cache. When cache is properly used, this should be over 90%. The counter can be improved by adding more RAM.

Memory Manager – Target Server Memory (KB): indicates how much memory SQL Server “wants”. If this is the same as the SQL Server: Memory Manager — Total Server Memory (KB) counter, then you know SQL Server has all the memory it needs.

Memory Manager — Total Server Memory (KB): indicates how much memory SQL Server is actually using. If this is the same as SQL Server: Memory Manager — Target Server Memory (KB), then SQL Server has all the memory it wants. If it is smaller, then SQL Server could benefit from more memory.

Locks – Average Wait Time: This counter shows the average time needed to acquire a lock. This value needs to be as low as possible. If unusually high, you may need to look for processes blocking other processes. You may also need to examine your users’ T-SQL statements, and check for any other I/O bottlenecks.
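The same counters can also be read from inside SQL Server through the sys.dm_os_performance_counters DMV, without opening Performance Monitor. A sketch (the instance prefix in object_name varies by installation, e.g. 'SQLServer:' vs 'MSSQL$INSTANCE:'):

```sql
-- Read the counters discussed above from the DMV.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Full Scans/sec',
                       'Buffer cache hit ratio',
                       'Target Server Memory (KB)',
                       'Total Server Memory (KB)',
                       'Average Wait Time (ms)');
```

Note that ratio counters such as Buffer cache hit ratio have to be divided by their corresponding "base" counter to get a meaningful percentage.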

Monday, November 19, 2018

Configuring the Python Environment

Please follow the link below to install Python on Windows:

https://matthewhorne.me/how-to-install-python-and-pip-on-windows-10/

Once Python is installed, we will be faced with the requirement of managing Python libraries. Python ships with its own package manager, called pip.

The above link also has details on downloading the script for installing pip. The script file is called get-pip.py and should be executed using the Python command line to update the pip installer.

To install NumPy, use the pip installer.
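The commands themselves (pip is invoked through the Python launcher here, so the right interpreter's pip is used):

```shell
# Confirm pip is available, then install NumPy.
python -m pip --version
python -m pip install numpy
```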


Thursday, September 20, 2018

Basic Docker Commands

An image is like a class, and a container is like an object of that class.


To delete a container
=====================
docker ps -a //The command docker ps only displays running containers. To see all containers, including the stopped ones, use the -a parameter:
docker rm -f <container> //remove a container, forcing removal if it is running
docker system prune //clean up any resources — images, containers, volumes, and networks — that are dangling (not associated with a container):


How to see the log of a container
=================================
docker logs bcsandbox //every time the container starts, it logs some important information, like the admin password


Get a list of images
====================
docker images or docker image ls


Run an executable in a docker container (test is the container)
====================================
docker exec -it test cmd //-it stands for interactive, with a terminal attached


Spin a container from an image (or create an object from a class)
==============================
docker run [options] image[:tag] [command] [args]
eg:
docker run -e accept_eula=Y 53ee8b0703ad //53ee8b0703ad is the ID of the image


-e stands for an environment variable

There are certain environment variables and options that are used by the Navision images:
--hostname (-h) specifies the hostname of the container. This is the name you will use to connect to the containers web client and the name you can ping. If you do not specify the hostname, the first 12 characters of the container Id will be used as the hostname.
--name specifies the name of the container. The name is used when referring to the container with other docker commands. If the name is not specified, a random name will be generated using a verb and a name (like friendly_freddy)
--memory (-m) specifies the max. amount of memory the container can use. The default for this option depends on how you are running the container. When you run Windows Server containers there is no implicit memory limit and the container can basically use all memory available to the host. When you run Hyper-V containers, the default max. memory is 1 GB.
--volume (-v) specifies a folder from the host you want to share with the container and the path inside the container, where you want to mount this folder. --volume c:\myfolder:c:\run\my shares the existing c:\myfolder on the host to the container and the content is in c:\run\my.
--restart specifies the restart options for the container.


Command:
docker run -e accept_eula=Y --name bcsandbox -h NAVBC -m 4G -e useSSL=N  -v c:\myfolder:c:\run\my --restart always -e exitonerror=N -e locale=en-us microsoft/bcsandbox

To start the container using my license file placed in c:\myfolder:
docker run -e accept_eula=Y --name bcdev -h DEVBC -m 4G -e useSSL=N  -v c:\myfolder:c:\run\my -e licensefile=c:\run\my\alfazance.flf --restart always -e exitonerror=N -e locale=en-us microsoft/bcsandbox


Elaboration:
This accepts the EULA and runs the microsoft/bcsandbox container with 4 GB of memory, http (not https), the restart option set to always, the locale set to en-US, and (in the second command) the license file located in c:\myfolder on the host computer.



//Publishing the Docker images on the Network
1. Stop any running containers -> docker stop bcsandbox
2. Stop the Docker service -> Stop-Service Docker
3. Remove the current container network -> Get-ContainerNetwork | Remove-ContainerNetwork -Force
4. Modify the daemon.json file -> '{"bridge":"none"}' | Set-Content C:\ProgramData\docker\config\daemon.json
5. Create a transparent network (use one of the two commands) -> docker network create -d transparent tlan
New-ContainerNetwork -Name tlan -SubnetPrefix 192.168.0.0/24 -GatewayAddress 192.168.0.1 -Mode Transparent -DNSServers 192.168.0.164,91.74.74.74
6. Start-Service Docker or restart the docker service
7. Run a container with the new adapter created ->
docker run --network tlan --ip 192.168.0.58 --name devcontainer -e accept_eula=Y -h BCDEV -e username=admin -e password=pass@word1 -e useSSL=N  -v c:\myfolder:c:\run\my -e licensefile=c:\run\my\alfazance.flf --restart always -e exitonerror=N -e locale=en-us microsoft/bcsandbox


Create a container using navcontainerhelper
===========================================
new-navcontainer -accept_eula -includeCSide -containerName test -licenseFile c:\myfolder\alfazance.flf -imageName microsoft/dynamics-nav:devpreview
new-navcontainer -accept_eula -includeCSide -containerName test -licenseFile c:\myfolder\alfazance.flf -auth NavUserPassword -imageName microsoft/bcsandbox


commit container as a new image
===============================
docker commit -m "lan adapter changed" navcontainer navbcuaeloc/ver1
docker run -d navbcuaeloc/ver1


Fixing the Error = HNS failed with error : Element not found.
=============================================================
link https://github.com/docker/for-win/issues/750

stop-service hns
stop-service docker
del 'C:\ProgramData\Microsoft\Windows\hns\hns.data'
start-service hns
start-service docker

C:\myfolder\\CleanupContainerHostNetworking.ps1 -Cleanup -ForceDeleteAllSwitches
Restart-Computer -Force




docker run --network tlan --ip 192.168.0.59 --name bcangshudev -e accept_eula=Y -h BCANGSHU -e username=admin -e password=pass@word1 -e useSSL=N  -v c:\myfolder:c:\run\my -e licensefile=c:\run\my\alfazance.flf --restart always -e exitonerror=N -e locale=en-us microsoft/bcsandbox
docker run --network tlan --ip 192.168.0.60 --name bckirtidev -e accept_eula=Y -h BCKIRTI -e username=admin -e password=pass@word1 -e useSSL=N  -v c:\myfolder:c:\run\my -e licensefile=c:\run\my\alfazance.flf --restart always -e exitonerror=N -e locale=en-us microsoft/bcsandbox

New-NavContainer -containerName bcsan -accept_eula -alwaysPull -assignPremiumPlan -auth NavUserPassword -doNotExportObjectsToText -enableSymbolLoading -imageName microsoft/bcsandbox -includeCSide -memoryLimit 3G -shortcuts Desktop -updateHosts



CONVERT OBJECT TO AL
====================
Get-Command -Module NavContainerHelper
Export-NavContainerObject -containerName devcontainer -objectsFolder c:\myfolder -filter 'Type=Report;Id=206'
Convert-Txt2Al -containerName devcontainer -myDeltaFolder c:\myfolder -myAlFolder c:\myfolder\al -startId 70140931

Git Basics

Git is a local version control tool. GitHub is an online, centralized hosting service for Git repositories.

Getting started
  • Once you have installed git you can use the GitBash console to configure the local git. 
  • git config --global user.name "abc@abc.com"
  • git config --global user.email "abc@abc.com"
  • git config --global core.editor "code --wait --new-window"  
The global settings saved as done above are stored in a plain-text config file (.gitconfig in the user profile directory) and can be viewed using
  • git config --edit --global 
Connecting to GitHub might require the user name and password to be authenticated each time. This can be avoided by using an SSH connection to GitHub. To generate an SSH key, open the command prompt and type the following command:
  • ssh-keygen -t rsa -b 4096 -C "yourmail@domain.com"
Before the next step, please ensure that the Windows service for SSH ("OpenSSH Authentication Agent") is running. Also ensure that you are in the %userprofile% directory before you issue the next command:
  • cd %userprofile%
  • ssh-add ./.ssh/id_rsa : this command adds the identity created to the running SSH agent
  • cd .ssh
  • type id_rsa.pub
Go to GitHub -> Profile -> Settings and add the contents of id_rsa.pub to the SSH keys section, giving it a description of which computer it was added from. To test whether the SSH connection is working fine, use
  • ssh -T git@github.com


Concept
======
Git is a distributed source code management tool. In centralized version control tools, the entire history is on the central server and only the latest (checked-in) copy is in the local directory. In Git, the entire history of all versions and check-ins is in every local copy as well.
  • Branches : These are features being worked on. A branch is created locally and when finalized is pushed. One user can be working on multiple features at the same time and hence can have multiple branches open at the same time. 
  • Push : process of sending finalized changes to the GitHub 
  • Pull : process of pulling the latest copies from GitHub 
  • File states
    • Committed : stored locally
    • Modified: changed but not committed 
    • Staged : marked for the next commit/snapshot.
  • File areas
    • Working directory: editing areas
    • Staging : files waiting for the commit
    • .git repo : committed files
  • Repositories : a repository is a folder for your project integrated with Git.

Git process outline:
=============
  • git init : creates a local repository
  • git clone: clone a repository into a local directory and create a remote branch
  • git add : add a file into the staging area
  • git commit: commit the changes to the local repository
  • git push origin master: push the changes from the local master to the origin (remote repo)
  • git pull origin master : fetch the latest changes from the remote master branch and merge them into the local branch
Branches:
=======
In a nutshell, a branch is a parallel copy of the original source code which can at some point be merged back into the original source. In traditional systems this was done by hosting a copy of the source as a separate repository.

Git is based on snapshots (not incremental diffs), and hence it is easier for Git to implement this concept. Each commit creates a snapshot in which the changed files are recorded; if a file has not changed, it remains a pointer to the version from an earlier commit. Hence a branch, too, is simply a pointer to one of the commits.

The master is the default branch and it always points to the last commit made. It also automatically moves forward when a commit is made. 

Branches should be used to work on a feature and should only be merged into the master once the feature is completed and tested. To avoid conflicts, branches should be logically arranged so that the same object does not appear in multiple branches; this eases merge operations.

  • git branch <branchName>: This command is used to create a new local branch. 
  • git branch -a : This command will give a list of all the local and remote branches. 
  • git branch -vv : This command will give the local branches and their corresponding upstream branches on the remote repo.
  • git checkout <branchName> : At any given point there can be multiple branches, and the active branch can be switched using the checkout command (git checkout -b <branchName> creates the branch if it does not exist).
  • git checkout -- . :this command can be used to discard all the unstaged changes and revert to the last committed stage on the current branch.
  • git push -u origin <newbranch> : This command pushes the branch created locally to GitHub; make sure the branch name is the same locally and on the remote.
  • git fetch : This command is used to fetch any branch from GitHub to the local repository. 
  • git pull : fetches changes from the remote and merges them into the current branch. (A pull request, by contrast, is a request from the owner of a branch to the owners of the master branch (collaborators) to merge the changes.)
  • git log --all  --oneline --decorate --graph: This command can be used to look at all the commit log and branches there are. 
  • git rebase : If you don't want a bunch of commits in a pull request, you may need to use git rebase to "squash" the commit history.
Remote repo
========= 
Remote repositories are the ones located on GitHub. The origin keyword is an alias that is created by default for the remote repository if a name is not specified when we clone it. Alternatively, if we initialized a local repo (not cloned), we can link it to a remote repo using:
  • git remote add origin https://github.com/santoshkmrsingh/abc.git

we can also have multiple remote alias created (this would be logical when we are using forks (copies) of the same repository)
  • git remote add afzMobile https://github.com/Alfazance/mobile.git

Given the above two commands, we have two remote aliases created. These can be seen using:
  • git remote : lists the remote aliases.
  • git remote -v : list the full details of the remote links
A remote-tracking branch tells us what the current branch looks like at origin. Remote-tracking branches cannot be worked on directly; we always work with local branches.
  • origin/master : refers to the master branch on the remote repository. 
  • git merge origin/master: merges the current branch with the remote repository 
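A quick way to see fetch and remote-tracking branches in action without touching GitHub is to simulate origin with a second local repository. The paths and file names below are made up for the sketch:

```shell
#!/bin/sh
# Sketch: git fetch updates the remote-tracking branch (origin/<branch>)
# without touching local work; git merge then brings those commits in.
# "origin" is simulated by a plain local repository.
set -e
work=$(mktemp -d)

git init -q "$work/upstream"
cd "$work/upstream"
git config user.email "demo@example.com"
git config user.name  "Demo User"
echo one > file.txt
git add file.txt
git commit -qm "first commit"
branch=$(git symbolic-ref --short HEAD)   # master or main, depending on git version

git clone -q "$work/upstream" "$work/local"

# a new commit lands on the "remote" after we cloned
echo two >> file.txt
git commit -qam "second commit"

cd "$work/local"
git fetch -q                      # updates origin/<branch> only
git merge -q "origin/$branch"     # fast-forwards the local branch
```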
pull and push
==========
Pull and push work from the local branch to a corresponding branch on the remote repository. It is hence important that, before a pull or a push is used, the local repo and branch have a corresponding link to a remote repo and branch. A push always works from the current local branch to a branch on the remote repo, so before a push is issued make sure the correct branch is checked out. 

To link the current branch to one on the remote, you can use the following command: 
  • git branch --set-upstream-to origin/<branch> : links a local branch to a branch on origin.
Once the link is established, the following commands can be used to push the local commits to the remote repo:
  • git push : pushes the contents of the current branch to the corresponding branch on the remote server.
  • git push -u origin <branch> : pushes the contents of the current branch to the branch on the remote repo, creating the branch there if it does not exist, and recording the upstream link.
    Note: by default the branch name on the local repo should match the branch name on the remote; you cannot push from one local branch name to a different name on the remote without an explicit refspec. 
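A sketch of the upstream linking described above, with a local bare repository standing in for GitHub (paths and the branch handling are illustrative only):

```shell
#!/bin/sh
# Sketch: push a local branch to a remote and record the upstream link.
# The "remote" is a local bare repository standing in for GitHub.
set -e
work=$(mktemp -d)
git init -q --bare "$work/remote.git"

git init -q "$work/local"
cd "$work/local"
git config user.email "demo@example.com"
git config user.name  "Demo User"
git remote add origin "$work/remote.git"

echo hello > file.txt
git add file.txt
git commit -qm "first commit"
branch=$(git symbolic-ref --short HEAD)

# -u pushes and records the upstream link in one step; alternatively,
# git branch --set-upstream-to origin/<branch> does the linking alone.
git push -q -u origin "$branch"
git branch -vv                    # now shows [origin/<branch>] next to the branch
```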

Pull request:
A pull request could more accurately be called a merge request. Once the local contents have been pushed to the remote server, a pull request can be used to merge these changes into the master branch. 

For better control we can configure branch protection rules (in the repository settings on GitHub), for example requiring that merges into master happen only through pull requests and not through a directly pushed local merge. 


Merge:
=====
Merging combines the changes of one branch into another. A conflict arises when more than one user has changed the same line of the same file. Merges can happen locally or on the server. 

If a branch's local copy is behind the one on the server, the remote changes should be merged locally (using a pull) before the branch can be pushed up. For the commands below, assume that the master branch is checked out:
  • git merge feature: this command will merge the feature branch into the current branch (master in this case) along with all the commits of the feature branch.
  • git merge --squash feature : this will merge the feature branch into the current branch (master) without the commit history of the feature branch. This should be followed by a commit on the master branch: git commit -m "squashed commit with features".
  • git rebase master: If the current branch has a base in the master, the rebase will bring the latest commit from the master into the current feature branch and reapply all the local commits in the feature branch on top of it. Thus we will have a new base with the latest changes from master. 
  • git reset --hard <commitid>: moves the current branch back to <commitid> and discards all the commits after it; this can be used to undo unwanted commits.  
  • git stash : the stash commands can be used to park and restore changes in the current working tree. git stash reverts all the current changes and returns to the last clean working-directory state; each time it is used, the changes are pushed onto the stash list. 
  • git stash push -m "comment" : this can be used to push a change to the stash with a custom message.
  • git stash list : this can be used to list all the changes that are pushed to the stash.
  • git stash apply : reapplies the changes that are parked in the stash. If there are a number of stashes on the stack, we can choose which one to apply from the list using its index. 
  • git stash drop <index>: this can be used to drop a given checkpoint from the stack.
  • git stash clear : can be used to clear the stack maintained for stash.
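The stash commands above can be sketched end to end in a throwaway repository (the directory and file names are illustrative):

```shell
#!/bin/sh
# Sketch: park an uncommitted change with stash, inspect the list,
# then reapply it. Run in a throwaway repository.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q .
git config user.email "demo@example.com"
git config user.name  "Demo User"
echo clean > file.txt
git add file.txt
git commit -qm "clean state"

echo dirty >> file.txt              # an uncommitted change
git stash push -m "half-done work"  # working tree is clean again
git stash list                      # the parked change, with our message

git stash apply                     # reapply; unlike pop, the entry stays listed
```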


Fork:
====
A fork is just a copy of a repository. Changes to the fork do not impact the original repo, although the fork remains internally linked to it. All the other commands for interacting with a fork are the same as with any GitHub repository. 

Forks are very useful when it comes to accepting public contributions to your repository. One would not want outside changes made directly to the original repository, so contributors instead get full access to their own forked copies. Changes flow from the forked repository back to the seed repository through a pull request, which a developer with write access to the seed repository can merge. 

While working with a scenario like this, the need for multiple named remote repositories becomes evident.

$ git remote -v
> origin    https://github.com/YOUR_USERNAME/YOUR_FORK.git (fetch)
> origin    https://github.com/YOUR_USERNAME/YOUR_FORK.git (push)
> upstream  https://github.com/ORIGINAL_OWNER/ORIGINAL_REPOSITORY.git (fetch)
> upstream  https://github.com/ORIGINAL_OWNER/ORIGINAL_REPOSITORY.git (push)

In the example above, origin refers to the forked repository created on GitHub and upstream is another remote link added to the original seed repository. The additional remote link is created using:

$ git remote add upstream  https://github.com/ORIGINAL_OWNER/ORIGINAL_REPOSITORY.git

If you are hoping to contribute back to the original repository, you can send a request to the original author to pull your fork into their repository by submitting a pull request.
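The fork workflow can be sketched with two local repositories standing in for the seed repo and the fork on GitHub (all paths and names below are made up):

```shell
#!/bin/sh
# Sketch: a fork with two named remotes -- origin (the fork's source) and
# upstream (the seed repo) -- simulated with local repositories.
set -e
work=$(mktemp -d)

git init -q "$work/seed"
cd "$work/seed"
git config user.email "demo@example.com"
git config user.name  "Demo User"
echo base > file.txt
git add file.txt
git commit -qm "seed commit"
branch=$(git symbolic-ref --short HEAD)

git clone -q "$work/seed" "$work/fork"     # "forking" = copying the seed repo
cd "$work/fork"
git remote add upstream "$work/seed"       # second named remote
git remote -v                              # lists origin and upstream

# meanwhile the seed repository moves on...
cd "$work/seed"
echo update >> file.txt
git commit -qam "seed moves on"

# ...and the fork picks the change up through the upstream remote
cd "$work/fork"
git fetch -q upstream
git merge -q "upstream/$branch"
```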

NOTES
=====
1. HEAD refers to the latest commit on the currently checked-out branch

2. When prompted for a merge message (vim opens by default):
press "i" to enter insert mode,
write your merge message,
press "esc",
write ":wq",
then press enter.

3. To create a file that Windows Explorer cannot (e.g. a filename starting with a period), use touch:
touch .gitignore

4. To see a history of commits
git log --oneline --decorate --graph --all

5. Edit the .gitignore file
That can be done using the vim editor.
Simply type vi [path-to-file]\file and you will open the vim terminal editor.
Press a to enter insert mode
Edit the file
Press Esc to leave insert mode and return to command mode
Press/type : to indicate you want to type a command
Type and enter wq to save the changes and exit


If you want to delete all your commit history but keep the code in its current state, please use the following:

Check out a new orphan branch
git checkout --orphan latest_branch

Add all the files
git add -A

Commit the changes
git commit -am "commit message"

Delete the branch
git branch -D master

Rename the current branch to master
git branch -m master

Finally, force update your repository
git push -f origin master

Wednesday, September 19, 2018

Outlook Duplicate Calendar Entries

Use the following command to reset the calendar navigation pane:

outlook.exe /resetnavpane


Wednesday, July 11, 2018

AX2012 Sharepoint EP Portal performance


The performance of the Enterprise Portal is highly dependent on the caching service shown in the image below. 

Try restarting the service to restore the performance. 



Tuesday, May 15, 2018

Move AX database between environments

If the Active Directory domains are different, then the sysadmin role mapping with Active Directory will need to be updated:

select SID, Networkdomain, networkalias from userinfo
where networkalias = ''


Find the SID of the user to be mapped in the new Active Directory:

Whoami /user

Copy the new SID for the desired user and update it in the userinfo table:

update userinfo set SID='', Networkdomain = '', networkalias = '' where id = 'admin'


Sunday, May 13, 2018

Windows 10 Task Bar Problems

Had a strange problem today when search in the taskbar stopped working. The taskbar was also opening in a disabled state and I had to open it multiple times to enable it again.


Follow the steps below to restore the taskbar settings using a PowerShell script:

1. Find C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
2. Right-click and Run as Administrator
3. Copy this line:
Get-AppXPackage -AllUsers | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml"}

Monday, January 29, 2018

Power BI D365 Integration

Every external application that needs access to an Azure resource should be assigned the necessary permissions. The application registration process takes care of this.

To view the app registrations, go to Services and find App Registrations.


Click on App Registrations and set the filter to All Apps.
Then search for the name of the application as used when the app was registered on the Power BI portal (in this case 365-hcpm-demo Power BI Integration).


Once you click on the app you will be able to see the details


From here one should be able to view and manage most of the properties of the application.

Note: The secret key is only visible once, when it is created, and is hidden thereafter; if you have forgotten your key, delete the existing one and create a new one.

The Name, Object ID and other properties are visible on the Properties tab page.

The Reply URLs tab page is used to alter the redirection page. 

Tuesday, January 02, 2018

Save RDP credentials

To be able to use saved credentials

1. Open Group Policy Editor via cmd -> gpedit.msc

2. Navigate to Local Computer Policy\Computer Configuration\Administrative Templates\System\Credentials Delegation\
3. Open the setting Allow Delegating Saved Credentials with NTLM-only Server Authentication and set it to Enabled.

4. Click the Show... button and, in the Show Contents window, add the value TERMSRV/*. Close all windows by pressing OK.

5. Run cmd and enter the gpupdate command to update your policy.

Sunday, November 19, 2017

Cannot create a record in Workflow tracking status table (WorkflowTrackingStatusTable). Correlation ID: {70B17AE2-AF1A-47A9-937F-39F206AF91DE}, Pending. The record already exists.

Encountered this issue after the system recovered from the RecId max-out in the workflow tables. Certain transactions were submitted and updated as In review, but the workflow was not triggered for them; the workflow bar was missing, and re-submitting the workflows produced the error.

Solution :

  1. Shut down all the AOS instances.
  2. Bring up any one AOS and synchronize the database.
  3. Re-submit the workflow for the failed records, e.g.:

    TSTimesheetTable            timesheets;
    WorkflowTrackingStatusTable trackingstatus;

    // Select pending timesheets that have no workflow tracking record
    while select timesheets
        where timesheets.ApprovalStatus == TSAppStatus::Pending
        //&& timesheets.TimesheetNbr == 'CCM-175040'
    notexists join trackingstatus
        where trackingstatus.ContextTableId == tableNum(TSTimesheetTable) // 4627
           && trackingstatus.ContextRecId == timesheets.RecId
    {
        /*
        ttsBegin;
        timesheets.ApprovalStatus = TSAppStatus::Create;
        timesheets.update();
        ttsCommit;
        */
        Workflow::activateFromWorkflowType("TSDocumentTemplate",
                                           timesheets.RecId,
                                           "Resubmitted due to error",
                                           false,
                                           timesheets.createdBy);

        info(timesheets.TimesheetNbr);
    }

AX 2012-Cannot create a record in Workflow tracking status table

Got the message in the workflow. The messages can be seen in the EventViewer and can also be seen on screen if the workflow is triggered using the tutorial form.

The reason for this error is that the RecId counters of the tables involved and the SystemSequences table are out of sync, which means the RecId that the system tries to generate for a new record is a duplicate of one generated in the past.

The reason why this mismatch occurs could be anything and is not known to me; what is important at this point is to get out of the situation. With the little R&D that I did, I realized the following facts.


  1. The RecIds are not generated in real time from the SystemSequences table. In reality each AOS caches a block of RecIds and uses them. 
  2. Due to the above, the max RecId that is visible directly from the SQL database can differ from the one that actually exists in the AOS. Hence, when correcting the NextVal in the SystemSequences table, it is important that the value is fetched through the AOS (table browser) and not from the database directly (using SQL scripts).

Solution : 
  1. Check the maximum value of the RecId column of the table causing the error, in this case WorkflowTrackingStatusTable. 
  2. Make sure you read the RecId through the AOS and not from the SQL table, as there might be records that are not yet committed to the database. 
  3. Sort the table by RecId in descending order and copy the maximum RecId as reported by the AOS. 
  4. Update the SystemSequences table, adding 250 to this value to be on the safe side and avoid cached values. 
  5. Compile the table giving the error on each of the AOS instances so that the caches are cleared.