A typical D365 Finance and Operations implementation project requires multiple tier-1 developer environments. Each developer needs their own, and additional ones are often used for prototyping, testing, and housing golden configurations. The accumulated cost can be substantial. At the same time, users often complain that the performance of the default-sized VMs is not great and increase the size for a better experience.
Cost and performance are hard to optimize together; it seems to be an either-or situation. I have asked myself how an optimum can be found. Personally, I define the optimum as a very fast user experience while I am actually using the VM, at a very small additional cost. Since I noticed that many struggle with the same issue, I am sharing the two approaches I use to manage it. Note that both require administrative privileges for the VM in the Azure portal.
Set up a daily Auto-Shutdown
I set up the VM to be shut down automatically every night. I get an email about it 30 minutes in advance so I can postpone the shutdown if I am still using the VM. It is a way to keep the running time of the VM down in case I forget to shut it down manually. With that setting, the VM has to be explicitly started again, so I often have the VM off for a few days, which saves cost. Below is the setting in the Azure portal.
I usually start and stop my VMs manually.
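If you prefer to script the same auto-shutdown setting instead of using the portal, below is a minimal sketch using the Az PowerShell module. It creates the schedule as a Microsoft.DevTestLab/schedules resource; the resource group, VM name, shutdown time, time zone, and email address are placeholders you would replace with your own values.
# Sketch: configure the nightly auto-shutdown schedule for a VM (placeholder values).
$rg     = "my-resource-group"
$vmName = "my-d365-devbox"
$vm     = Get-AzVM -ResourceGroupName $rg -Name $vmName
$properties = @{
    status               = "Enabled"
    taskType             = "ComputeVmShutdownTask"
    dailyRecurrence      = @{ time = "2200" }   # 10 PM in the time zone below
    timeZoneId           = "W. Europe Standard Time"
    notificationSettings = @{ status = "Enabled"; timeInMinutes = 30; emailRecipient = "me@example.com" }
    targetResourceId     = $vm.Id
}
New-AzResource -ResourceGroupName $rg `
    -ResourceType "Microsoft.DevTestLab/schedules" `
    -ResourceName "shutdown-computevm-$vmName" `
    -Location $vm.Location `
    -Properties $properties -Force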
Scale VM and disk sizes up and down
Consider the time a VM is actually running versus the time it is stopped (when you do not need it). In my case, the running time is a small percentage. Because of that, I can justify scaling up the size, at additional cost, for the few times I use it.
There is a lot of public documentation about VM and disk sizes and which workloads they are better suited for. I am not repeating it here, but just want to share my approach. I encourage you to experiment and share your comments.
Important note: Disks incur cost even when the VM is stopped. It is not enough to stop a VM; it must also be scaled down. Make sure to understand this!
My Low-cost config for a stopped VM (cheap)
VM size (VM/Settings/Size): Standard B8ms (8 vcpus, 32 GiB memory)
Disk type for all disks (each disk/Settings/Size + performance): Standard HDD LRS
My high-performance config for a running VM (very fast)
VM size (VM/Settings/Size): Standard DS13_v2 (8 vcpus, 56 GiB memory)
Disk type for all disks (each disk/Settings/Size + performance): Premium SSD LRS
PowerShell scripts to scale up and down
I am sharing two scripts that take care of the scaling. You can run them right in the Azure portal's Cloud Shell. Just configure your resource group name and VM name, save the scripts, and upload them to the Cloud Shell's home directory.
When I want to use the VM, I run the scale-up script; when the script finishes, the VM is started.
When I am done using the VM, I run the scale-down script (which also stops the VM).
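For reference, here is a minimal sketch of what the scale-up script can look like with the Az PowerShell module. The resource group, VM name, and target sizes are placeholders; the scale-down counterpart is the mirror image (Stop-AzVM, the B8ms size, and Standard_LRS disks).
# Sketch of a scale-up script for the Azure Cloud Shell (Az module, placeholder names).
$rg     = "my-resource-group"
$vmName = "my-d365-devbox"

# Resize the VM to the high-performance size.
$vm = Get-AzVM -ResourceGroupName $rg -Name $vmName
$vm.HardwareProfile.VmSize = "Standard_DS13_v2"
Update-AzVM -ResourceGroupName $rg -VM $vm

# Switch all of the VM's managed disks to Premium SSD (only possible while the VM is deallocated).
$diskUpdate = New-AzDiskUpdateConfig -SkuName "Premium_LRS"
Get-AzDisk -ResourceGroupName $rg | Where-Object { $_.ManagedBy -eq $vm.Id } | ForEach-Object {
    Update-AzDisk -ResourceGroupName $rg -DiskName $_.Name -DiskUpdate $diskUpdate
}

# Start the VM once everything is resized.
Start-AzVM -ResourceGroupName $rg -Name $vmName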
There are occasions when a customer, partner, consultant, or support engineer needs to look at the low-level Dynamics 365 Finance & Operations telemetry data. Use cases include troubleshooting errors, performance investigations, or simply gaining a better understanding of how the platform works. Authorized users can access telemetry data via the Environment monitoring section of the LCS portal, filter it in a few different ways, and display it in the portal's raw logs section. A data grid can be used to inspect the log entries. LCS does not allow for more sophisticated pivoting, so users can turn to Excel for that purpose; the telemetry data can also be downloaded in CSV format.
However, Excel is not the optimal tool for advanced querying of this data. A tool designed for exactly this purpose is Azure Data Explorer. It provides the Kusto query language, which is optimized for high-performance data analytics. Answering questions like "how often has a certain process run, how long did it take in 90% of the cases, how often per hour did a certain action occur over the course of a day" becomes a lot easier and can be backed up with powerful graphics as well.
Here are examples of what the graphics can look like:
A lesser-known feature of Azure Data Explorer is that it supports ingestion of CSV files. We can use it to upload and stage our CSV data files so they can be queried with the Kusto language. If you have not set up an Azure Data Explorer cluster yet, follow these steps.
Steps to upload to Azure Data Explorer
Run your query on LCS raw logs page
Important: adjust the time interval or filter to get to the right data (the export in the next step is limited to 5000 rows)
Export the grid to Excel
Open the file in Excel and save it without making any changes (this seems to fix a formatting issue)
In your Azure Data Explorer, right-click the cluster in the tree view, select "Ingest new data", and then on the next page "Ingest data from a local file"
Pick your cluster, name a new table for the data to be imported into, select up to 10 CSV files to import, and select CSV format. Hit Next a few times until your data is imported.
Use the Query tile to write a Kusto query against your data.
To learn more about the Kusto query language, go here.
Sample queries
Modern POS
All errors
Sometimes it's nice to get an idea of what all the errors are. The text fields that describe errors are not used consistently, so it is better to use the EventIds and map them to the correct errors. They can be looked up in these two Commerce SDK files (but using the code snippet below may give you all of them already):
A next step would be to go through each of these error types and look at them more closely. A few of them could indicate quality issues that affect the user experience.
In many cases, these errors can be fixed by cleaning up the extended code, adding proper SQL indexes, or investigating better approaches to the problem.
In some cases, these errors could indicate problems with out-of-box experience (OOBE) or deployment. Please open a support request to get these fixed by Microsoft.
// Note: use selection "All error events" to only get the errors
FNOErrors24h
| summarize count() by formName, targetName, errorLabel
| order by count_ desc
formName, targetName, and errorLabel are all good candidates to look up in Azure DevOps to find the code location. That gives more insight into the areas where these errors occur.
Example: The @SYS18885 label is thrown from the ReqPlanData class. Now I can focus on figuring out why that is the case (is master planning not configured correctly?)
Slow SELECT queries
// use selection "slow queries" in LCS diagnostics
SlowQueries
| extend MainTableName = extract("FROM\\s([A-Z0-9]+)\\s", 1, statement)
| where statement contains "SELECT"
Slow inserts
// use selection "slow queries" in LCS diagnostics
SlowQueries
| extend MainTableName = extract("INSERT INTO ([A-Z0-9]+)\\s", 1, statement)
| where statement contains "INSERT"
Slow deletes
// use selection "slow queries" in LCS diagnostics
SlowQueries
| extend MainTableName = extract("DELETE FROM ([A-Z0-9]+)\\s", 1, statement)
| where statement contains "DELETE FROM"
Batch jobs performance
// use selection "All logs" and add query "where TaskName Equals BatchFinishedOneTask" in LCS diagnostics
BatchFinishedOneTask
| summarize count(), sum(durationInMilliSeconds), percentiles(durationInMilliSeconds, 75, 90) by className
| order by sum_durationInMilliSeconds desc
Have you deployed Microsoft Dynamics 365 F&O environments or packages, moved databases, etc. using Microsoft Dynamics Lifecycle Services (LCS) before? If you have, you know that all of these operations take time, and in order to see how far they have progressed, you need to refresh the page. You may also be working on something else and forget to refresh the LCS page once in a while.
A simple solution that works well for me is the Google Chrome extension "Auto-Refresh", which allows for automatic and configurable tab page refreshes. With it, you can have the browser sit in a corner of your screen, and when the LCS operation is done you will see it without having to refresh the page.
Steps:
Install the "Auto-Refresh" Google Chrome extension
Allow incognito browser sessions to use the extension too (I often impersonate other users and use incognito for that)
Click the extension's icon and configure your refresh time. Be reasonable; a good refresh interval is every 5 minutes
Now work on something else useful until you see that the LCS operation is done
There are times when a deployment fails and even a retry does not help. In these cases, a service ticket should be opened with Microsoft.
However, there are cases when this is not feasible or helpful. For example:
it's a tier-1 development environment and you caused the issue, or
you cannot wait and need to get it done very fast, or
you moved the database but did not run the Retail Re-provisioning tool and the Retail deployment now fails (and you do not care because it's not a Retail project).
In these and other cases, it may be OK to just step over the failing step and let the deployment finish (in non-production environments).
The following steps can be used to work around the failure. Again, this is almost "hack" territory, but sometimes it is needed…
Find the step number that failed. LCS should tell you. Say for a moment, it’s step 43.
Wait till the deployment is in “Failed” state.
Log in to the VM where the error occurred. It can also be found on the LCS portal.
Find the current runbook.xml. It's under C:\RunbookOutput and will be the most recently changed file.
Open it in your favorite XML editor (e.g., Notepad++) and find the step with that number (search for ">43<")
Change the step's state from "Failed" to "Completed".
Save the file and resume the deployment from LCS.
Note, sometimes I have found that marking this step is not enough. If that is the case, you can also edit the PowerShell file that the step calls into and essentially comment out all the code. The PowerShell file will be in the service directory under AOSService\DeployablePackages.
Again, this is a hack, but sometimes desperate times call for desperate measures.
An often-requested Dynamics Retail channel feature is support for store credit cards. The retailer wants to encourage the use of a store credit card, so a customer who pays with it should get a percentage discount.
Ideally, we should be able to configure the discount percentage, the products it does or does not apply to, and certain ordering rules for the discount (apply on top of other discounts, or replace other discounts if better, etc.). The retailer should be able to configure all of this in Headquarters (AX) without changes in the channel.
This discount applies only during or right before payment; unlike other discounts, it cannot be applied during simple add-to-cart operations.
Modelling the discount as an affiliation
The pricing or discount engine is a complex piece of code. Even though it supports extensibility features, I would rather not go there. Is there an easier way to do that? Yes, there is. Read on…
We do not want to apply the associated discounts automatically, but rather only when a special store credit card is used. We can use affiliations and affiliation price groups to achieve that. The only thing we need to "code" is the application of the affiliation to the cart at the right moment (right when the customer checks out with a store credit card).
A nice benefit of not needing pricing extensions is that we can use the Price simulator in HQ to verify that our discount rules are correct.
In order to try this out, we need to create a new discount for the products we want, with the right percentage, associate it with a price group, and add that price group to an affiliation.
As soon as we save, we can try it out in the Price simulator. Below, you can see that the store credit card discount is applied to all products, and other discounts are still honored (because the StoreCreditCardDiscountAffiliation was set). As soon as the affiliation is removed from the General section, the discounts disappear.
Applying the store credit card discount in POS
As mentioned above, we only need to apply the correct affiliation to the cart before checkout, and we are done. That functionality already exists in POS today. So, to simply try this out, I did not need to make any code changes (POS transaction screen -> Discounts -> Add affiliation):
Ideally, this affiliation should only be added right before the store credit card is used. Therefore, I opted to add a new button next to the "Pay Card" button that does all the work, so the cashier does not need to go through the "Add Affiliation" operation. See the steps below for details.
It would be even better to detect that the credit card is a store credit card and only then apply the affiliation. However, it turns out this is not easily doable today, as the payment view in POS does not support it. Hence, I opted for the slightly more manual option where the cashier has to choose the button.
Here are the steps to add the button to POS:
In HQ, add a new operation (e.g., StoreCreditCardDiscountAffiliation, 5000)
In the layout designer, edit the button grid, add the button, and associate it with the new operation
Run these jobs: 1050 for Affiliation price groups, 1020 for discount, 1090 for the button grid changes
Under POS.Extensions, create a new project StoreCardDiscountExtension with a new operation mapped to the same id created earlier, and in it simply add the affiliation to the cart:
public executeAsync(request: StoreCardDiscountOperationRequest): Promise<ClientEntities.ICancelableDataResult<StoreCardDiscountOperationResponse>> {
    let addAffiliationRequest: CartOperations.AddAffiliationOperationRequest =
        new CartOperations.AddAffiliationOperationRequest(["StoreCreditCardDiscountAffiliation"], [], request.correlationId);

    return this.context.runtime.executeAsync(addAffiliationRequest)
        .then((
            result: ClientEntities.ICancelableDataResult<CartOperations.AddAffiliationOperationResponse>
        ): ClientEntities.ICancelableDataResult<StoreCardDiscountOperationResponse> => {
            return <ClientEntities.ICancelableDataResult<StoreCardDiscountOperationResponse>>{
                canceled: result.canceled,
                data: new StoreCardDiscountOperationResponse()
            };
        });
}
Applying the store credit card discount in online channel
The same discount logic as above is executed automatically, as we use built-in concepts around discounts (as long as RetailServer is being called). However, just as in POS, we need to add the affiliation to the cart at the right moment. In eCommerce, we can accomplish this by updating the “AffiliationLines” property of the Cart object.
Since the eCommerce payment process is a wizard with multiple steps, we do not need to add a new UI action to apply the discount affiliation. We can handle this better by detecting the use of a store credit card by its number and then applying the affiliation automatically (from the UI or a third-party eCommerce wrapper). The customer should see the updated cart while moving to the next payment step.
In some cases it is very useful to see how a web site would perform under certain network conditions that are worse than what you have currently. You can use this tool to simulate it: https://jagt.github.io/clumsy/. As an example, if you want to simulate what a network latency of 200ms looks like while using POS, run it on the same computer as POS and configure it with 200ms outbound.
Even if you are on a good network, with this tool you can evaluate what it would look like for someone with higher latency (or other network conditions).
A real certificate should be used to code sign the ModernPOS packages. That will allow for more security and simpler deployments (as no self-signed certs will need to be installed before installing MPOS).
Since the real certificate is usually not accessible by all developers in the team (and should not be), non-official builds will keep using a simple self-signed cert, i.e. the Contoso certificate shipped with the Retail Sdk.
Additionally, recent releases of MPOS, RetailServer, and Dynamics AX use AAD app ids for proper authorization. A one-time setup in AAD and AX is needed. With the proper setup, we configure AX to only allow access from ModernPOS clients that claim the correct app id combination.
Note that there is a direct relationship between the signing certificate, the reply address of the final built ModernPOS, and the AAD app id.
With these recent changes, non-production environments can be configured to allow access from both real-cert and self-signed ModernPOS clients. The actual production environment, however, should only allow access by MPOS clients with real certs; self-signed certs should not be allowed.
An automatic benefit of this is that a developer cannot accidentally connect to production during the development phase.
Note that all the steps below have to be carried out once per tenant, not per environment.
Detailed steps
Ideally, the following steps should be carried out after a dev environment is fully working with a self-signed cert. That ensures that AX is correctly set up (workers, stores, registers, etc.).
1. Build server setup
The first step is to install a real certificate on the build server so that the Retail Sdk build can use it.
Note that the steps below require administrative access to the build server. Therefore, a cloud-hosted build server is required (a Microsoft-hosted build server cannot be used because it does not grant administrative privileges). This is unfortunate, and Microsoft is working on a solution.
On the build machine, find the Windows service for the build and change the user account it runs under to Administrator. It is usually set to “NT Authority”.
On the build machine, install your company's appx signing certificate into the build account's user certificate store. A certificate password may be needed to do this. Mark the certificate as not exportable. Make a note of the certificate's thumbprint; we will need it later.
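For reference, the certificate import into the build account's user store can be scripted with the PKI module. This is only a sketch with a placeholder path; run it as the build service account, and note that omitting the -Exportable switch keeps the private key non-exportable.
# Import the appx signing certificate into the current user's certificate store (placeholder path).
$password = Read-Host -Prompt "Certificate password" -AsSecureString
$cert = Import-PfxCertificate -FilePath "C:\Temp\MyCompanyAppxSigningCert.pfx" `
    -CertStoreLocation Cert:\CurrentUser\My `
    -Password $password
# Note the thumbprint; it is needed for customization.settings / global.props later.
$cert.Thumbprint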
2. Initial Retail Sdk setup
We want to use the real certificate in Release builds (official) but keep using the Contoso signing cert for dev purposes in Debug builds. Other strategies can be used, but switching on the build flavor has worked well for some. The RetailSdk allows us to provide either a path to a certificate or a certificate thumbprint. We will use the first approach for the self-signed test certificate and the second approach for the real certificate. We can use simple MSBuild logic to switch between the two. Below is an example of how to do that. This change would have to be made to the RetailSdk's customization.settings or, better, to a global.props file sitting next to it with all customer values. Use the thumbprint from the step above.
<!-- Release builds pick up the secure cert from the build server with named thumbprint, Debug builds use the test cert part of the Retail Sdk -->
<ModernPOSPackageCertificateKeyFile Condition="'$(ModernPOSPackageCertificateKeyFile)' == '' and '$(Configuration)' != 'Release'">$(MSBuildThisFileDirectory)\ModernPOSAppxSigningCert-Contoso.pfx</ModernPOSPackageCertificateKeyFile>
<ModernPOSPackageCertificateThumbprint Condition="'$(ModernPOSPackageCertificateThumbprint)' == '' and '$(Configuration)' == 'Release'">22aa3bdca99b70d4ca44d0c51d23a07e06fcfc61</ModernPOSPackageCertificateThumbprint>
The final changes to the dllhost.exe.config you had to make in step 3 should be added to the RetailSdk\Assets\dllhost.exe.config file. That way, any new and officially built ModernPOS will automatically have the right app id information configured.
Note: For development purposes, the older app id information must be used. This can easily be done by leaving the original settings in the config file, but commented out. Switching between the production and dev ModernPOS is then as easy as un-commenting the developer values before activating MPOS.
Note: Implemented with Dynamics 365 version 7.2.11792.62628, or App Update 4 (should work with many other versions). Sample code can be downloaded at the end of this blog.
Imagine there is some additional business logic that should be executed during “AddToCart” in order to cancel the operation and show a dialog in POS (both ModernPOS or CloudPOS). It could be that an external system has some additional information about item availability, or it could be a credit check that the customer on the transaction failed. Whatever the actual business logic may be, our code extension shall meet these goals:
If a certain condition is true, do not persist the new cart item,
show a well-formed, localized error message in POS about the issue, and
keep the existing POS view open, with unchanged data, after the dialog is closed.
This can be accomplished with a very small extension in CRT and without any changes in POS. The CRT request in charge of saving the cart data is “SaveCartVersionedDataRequest”. All we need to do is to augment the CRT request with a pre-trigger that will give us the opportunity to “cancel” before saving the cart. Steps:
Create a simple new CRT trigger and implement the interface methods
class MyAddToCartLineValidationTrigger : IRequestTrigger
{
    public IEnumerable<Type> SupportedRequestTypes
    {
        get
        {
            return new[] {
                typeof(SaveCartVersionedDataRequest),
            };
        }
    }

    public void OnExecuted(Request request, Response response)
    {
    }

    public void OnExecuting(Request request)
    {
        // Add business logic to validate any data on the cart or transaction.
        // If you need to invalidate/cancel the "AddToCart" because the business rules
        // call for it, throw a CommerceException with a localized message.
        if (invalidCart)
        {
            throw new CommerceException("Microsoft_Dynamics_Commerce_30104", "Custom error")
            {
                LocalizedMessage = string.Format("The item with Id {0} is not allowed to be added at this time.", disallowedItemId),
                LocalizedMessageParameters = new object[] { disallowedItemId }
            };
        }
    }
}
Add your changes to the Retail Sdk and edit the commerceruntime.exe.config, global.props, or Customization.settings files accordingly
There are two ways to find out the version information. Either use LCS or look at individual files on the box (in case the VM is not hosted on LCS).
In the LCS case, browse to the environment in question and follow the “View detailed version information” link. The following information will be available:
There are a few wikis at https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/index?toc=dynamics365/unified-operations/fin-and-ops/toc.json which help with specific dev ALM and deployment topics. From my own experience and from questions of others, I found that it is difficult to pull all this information together into a single process. This write-up hopefully helps somebody who wants to update follow the process without errors. It will certainly help me next time I need to take a hotfix, since I can just follow a simple cheat sheet. So let's get started.
There are 3 different update types. 1) A platform update is fully backwards compatible with the application; it is binary in nature and can simply be deployed. 2) An X++ application hotfix ships fixes in X++ source code that can be integrated into one's own code (code merged), somewhat like a customization. And finally, 3) a binary hotfix is an application update for other tools, binaries, and Retail source code. The latter is cumulative, so you always get the latest. For that reason, use a good naming convention for the packages you download (and upload to the Asset library), as this helps later when you need to bring multiple environments to the same hotfix level.
Recommended (best) practices:
Download to and upload from the cloud development box. That way these files get transferred much faster and your intranet is not used.
If you have multiple development boxes, the steps below should be carried out on one of them. Once everything looks good, the other development boxes should get the changes by syncing VSTS (not via deployable package).
Some binary hotfixes depend on X++ hotfixes. Deploy these first (either by deployable package or VSTS), then tackle the binary update.
The LCS Asset library has a feature that allows for package merging. If you use it, you can decrease the overall deployment time. Unfortunately, merging does not work if the
The installed platform version, X++ hotfix level (KB numbers), and binary hotfix level (build version) can be looked up on the LCS detailed version information page. See here for details.
There is no easy way to infer the KB number of a binary hotfix from the build version. One way to deal with this is to include the build version and download date in the package name. Then you can tell with high likelihood that a KB older than the binary package date is included, while a KB with a newer date is likely not included in the package.
The picture below shows the overall flow of the process. Refer to it while reviewing the following sections.
Note that there are 4 different deployment packages in total that could be deployed (purple boxes). See the details below on how to get them.
Note: For the following steps, the integration of the Retail Sdk's build with the Dynamics 365 for Operations build is assumed. If you do not have this set up, follow Retail Sdk and Dynamics 365 build definition.
Platform updates
Platform updates are the easiest. There is no code merge required and no VSTS needed; you just move the package to the Asset library and deploy it. See more information at FAQ monthly platform updates.
X++ application hotfixes
X++ hotfixes are not cumulative, so there may be some code merging needed, if you pick multiple. Visual Studio and the Dynamics Operations VS addin will help with that.
Download the hotfix
Ideally, you pick just the hotfix you need and are done with it. However, I have found it better in the long run to take all hotfixes. It incurs slightly more testing up front, but less testing when updating to the next major version. Additionally, issues you simply have not encountered yet will be fixed before you ever see them.
Log on to LCS on the development box, choose the right environment, and hit the X++ hotfix tile
Select all and hit the Add button
Download package
Select all and download
Name it with meaningful data so you can identify the package later. I use the environment name, the date of download, and the fact that this is an X++ update rather than a binary update (e.g., NewX++HotfixesForSpringEnv170903)
Unblock and unzip the package (see the sketch below)
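For those scripting this step, here is a small PowerShell sketch; the file names are placeholders.
# Unblock the downloaded package and extract it (file names are placeholders).
$zip = "C:\Temp\Downloads\NewXppHotfixesForSpringEnv170903.zip"
Unblock-File -Path $zip
Expand-Archive -Path $zip -DestinationPath "C:\Temp\Downloads\NewXppHotfixesForSpringEnv170903" -Force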
Back up PackagesLocalDirectory
I got burned by a failed hotfix application after my dev box became corrupted, and I could not tell for sure which files had been touched… Since then, I do a simple robocopy backup (see the command after these steps) so I can roll back if something happens.
Open a cmd.exe window with elevated privileges (runas admin)
Change directory into the parent of your PackagesLocalDirectory folder (here K:\AosService, might be in J: also)
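The backup itself can be a single robocopy call; the destination folder below is just an example.
robocopy K:\AosService\PackagesLocalDirectory K:\Backup\PackagesLocalDirectory /E /MT:16 /R:1 /W:1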
I have had issues with the VS addin in the past, so I always use the command-line version. Additionally, it is important to understand the -prepare option. Use it! Otherwise, code merges you may need to do later will be hard.
Update:
Below is a batch script that I add to each Metadata folder (and to VSTS). Just update the environment variables and un-comment the rem lines one at a time: first the -prepare line, then the -install line. Here are the contents of the script:
setlocal
set HotfixPackageBundlePath=C:\Temp\Downloads\AllX++HotfixesTill06192018\HotfixPackageBundle.axscdppkg
set PLD=k:\AosService\PackagesLocalDirectory
set TFSUri=https://xxxx.visualstudio.com/defaultcollection
rem bin\SCDPBundleInstall.exe -prepare -packagepath=%HotfixPackageBundlePath% -metadatastorepath=%PLD% -tfsworkspacepath=%PLD% -tfsprojecturi=%TFSUri%
rem bin\SCDPBundleInstall.exe -install -packagepath=%HotfixPackageBundlePath% -metadatastorepath=%PLD% -tfsworkspacepath=%PLD% -tfsprojecturi=%TFSUri%
endlocal
Save it with a name like UpdateAppHotfixes.cmd. Run it from an elevated cmd console while the current directory is PackagesLocalDirectory.
Open Visual Studio and make sure you are logged in with the same account that will be used to access VSTS. If you are not sure, log out and log back in. All we want is a valid authentication token so the steps below will succeed.
Close all VS instances
Open a cmd.exe window with elevated privileges (runas admin)
Change directory into PackagesLocalDirectory\bin folder (here K:\AosService\PackagesLocalDirectory\Bin, might be in J: also)
Once the command has finished, check for conflicts: open Visual Studio and select Dynamics 365/Add-ins/Create project from conflicts
If there are conflicts, you need to resolve them
Do a full build: Dynamics 365/Build models/Packages-select all/Options-use default plus select sync database and then hit the Build button
When the build succeeds without errors, submit the changed files with a meaningful changeset name
Binary hotfixes
Binary hotfixes are cumulative. You need to pick one of them, and you will get the latest. If Retail channel components are not customized, then there is no code merge needed.
Download the binary hotfix
Log on to LCS on the development box and choose the right environment
Click the download binaries button
Name it with meaningful data so you can identify the package later. I use the date of download (e.g., AllBinary72UpdatesLatestPlatform170903)
Unblock the zip file and then unzip it
Upload the zipped package to LCS’s Asset library
Apply the binary hotfix
Use the LCS environment’s Maintain menu to deploy this package.
Only in case of Retail channel customizations: Update the Retail Sdk mirror branch
In order to effectively do code merges, it is suggested to use 2 branches. For more details, check Retail Sdk Overview (at the end of the wiki page).
Ideally, the Retail Sdk branch would be hosted in the same VSTS project, in parallel to the Trunk folder.
In order to update it:
Make sure the mirror branch/folder is fully synced to latest version.
Close all but one Visual Studio instances
In a first Windows Explorer window, find the new Retail Sdk which we will use to update the mirror. On a brand new environment, find it in the service drive (K:\ or J:\) under “Retail Sdk”. If this is a binary hotfix, unzip the hotfix package as you downloaded it, and find the SDK in the RetailSDK\Code folder.
In a second Windows Explorer window open the location of the outdated mirror Retail Sdk branch/folder (where it is mapped from VSTS to local folder)
Delete all files in the outdated mirror Retail Sdk branch/folder (open in the second Windows Explorer Window)
Copy and paste all files from the new Retail Sdk into the folder you just cleaned (copy from second to first Windows Explorer window)
(Optional) If you have any doubt whether the shipped Retail Sdk has a build error, carry out these steps to verify:
(Optional) Make a temporary copy of the new Retail Sdk (from the hotfix) to any other place of your choice
(Optional) Open a Visual Studio 2015 msbuild command prompt and change directory to the temporary location
(Optional) Type “msbuild” and hit Enter (if this shows any build errors, please open a support request or bug as the shipped Retail Sdk should build without errors)
Deleting all files in the mirror branch in Windows Explorer and then adding the new Retail Sdk back ensures that removed files are properly removed from source control.
In “Source Control Explorer”, right click the mirror branch, “Add items to Folder…”, Add all folders from the same source location back. Make sure there are no “excluded items”, and hit Finish.
Make sure there are no files from the mirror branch listed under “Team Explorer”, “Pending Changes”, “Excluded Changes” and “Detected”. If there are, promote them to the “Included Changes”
Check In the changes.
Only in case of Retail channel customizations: Code merge the Retail Sdk customization branch
Make sure you do not have any changed files in the customization branch before you start. If this is difficult to accomplish, create a new client mapping, get the customization branch into a different folder or machine, and do the merge there. Do not start merging if you have open files.
In Source Control Explorer, right-click the mirror branch and select "Branching and Merging", then Merge
Make sure that the source is the mirror branch and destination is your customization branch
Hit Next and Finish
Resolve any possible merge conflicts
Watch closely that all “Included files” are the correct files. These should only be the merged files, or updated files in the mirror
Watch closely that all “Excluded files” only include generated files. Do not promote them
Only in case of Retail channel customizations: Test local Retail Sdk customization build and submit to VSTS
Before checking in these changes, let's make sure that everything builds fine. Open a Visual Studio MSBuild developer command window and type "msbuild" at the root of the Retail Sdk customization branch. Once everything builds fine, submit the changes with a meaningful changeset name.
Run build on build machine
Inspect the submitted changes in the VSTS code branch. In the example below, I see 2 check-ins for the X++ hotfixes, one other code change, one to update the Retail Sdk mirror branch, and one to code merge the Retail Sdk customization branch.
Upload the AX and Retail deployable packages to LCS
Find the packages in VSTS and upload them to LCS.
Deploy AX and Retail deployable packages
Deploy the packages from the LCS asset library (in the image below the 3rd and 4th). The RetailDeployablePackage is only needed in case of Retail channel customizations.
Once the deployment succeeds, you should see the tile count go down.
Retail only: Update channel components
Follow the wiki about how to deploy the store components (Modern POS, Modern POS Offline, Hardware station, Retail Store Scale Unit)
Below are some code samples for reading and writing data in the channel database. This does not necessarily mean that the data needs to go to D365 HQ; it may, either via a CDX pull job or via a Real-time Service call, but the information here covers only simple reads and writes in the channel db.
Prerequisites:
Dynamics 365 for Operations (1611)
KB3214687
KB3214679
First, create the SQL table in the [ax] schema and a view in the [crt] schema:
CREATE TABLE [ax].[MY_EXTENSION](
[DATAAREAID] [nvarchar](4) NOT NULL,
[RECID] [bigint] NOT NULL,
[COL1] [int] NOT NULL,
[COL2] [nvarchar](100) NOT NULL,
[COL3] [bit] NOT NULL,
[ACCOUNTNUM] [nvarchar](20) NOT NULL)
. . .
END
GO
CREATE VIEW [crt].[MY_EXTENSIONVIEW] AS
(
SELECT ACCOUNTNUM, DATAAREAID, COL1, COL2, COL3 FROM [ax].[MY_EXTENSION]
)
GO
GRANT SELECT ON [crt].[MY_EXTENSIONVIEW] TO [UsersRole];
GO
The sample shown adds 3 columns that are populated from 3 extension properties.
Grant the right permissions to the table and view. This is the list of supported SQL roles:
DataSyncUsersRole
Used by CDX process account
PublishersRole
Used by publishing process account (eCommerce)
ReportUsersRole
Used by reporting user account
UsersRole
Used by runtime user (RetailServer)
Next, create a sproc for updating:
CREATE PROCEDURE [crt].[MY_UPDATEEXTENSIONPROPERTIES]
@TVP_EXTENSIONPROPERTIESTABLETYPE [crt].EXTENSIONPROPERTIESTABLETYPE READONLY
AS
BEGIN
DECLARE @nvc_DataAreaId NVARCHAR(4);
DECLARE @recId bigint;
DECLARE @accountNum [nvarchar](20);
DECLARE @Col1Value int = 0;
DECLARE @Col2Value nvarchar(100) = '';
DECLARE @Col3Value bit = 0;
SELECT DISTINCT TOP 1 @recId = tp.PARENTRECID, @nvc_DataAreaId = ct.DATAAREAID, @accountNum = ct.ACCOUNTNUM
FROM @TVP_EXTENSIONPROPERTIESTABLETYPE tp
JOIN [ax].CUSTTABLE ct on ct.RECID = tp.PARENTRECID
WHERE tp.PARENTRECID <> 0
SELECT @Col1Value = COALESCE(tp.PROPERTYVALUE, 0)
FROM @TVP_EXTENSIONPROPERTIESTABLETYPE tp
where tp.PARENTRECID <> 0 and tp.PROPERTYNAME = 'COL1'
SELECT @Col2Value = COALESCE(tp.PROPERTYVALUE, '')
FROM @TVP_EXTENSIONPROPERTIESTABLETYPE tp
where tp.PARENTRECID <> 0 and tp.PROPERTYNAME = 'COL2'
SELECT @Col3Value = CAST(CASE WHEN tp.PROPERTYVALUE = 'True' THEN 1 ELSE 0 END AS BIT)
FROM @TVP_EXTENSIONPROPERTIESTABLETYPE tp
where tp.PARENTRECID <> 0 and tp.PROPERTYNAME = 'COL3'
MERGE INTO [ax].[MY_EXTENSION] dest
USING (SELECT @accountNum as ACCOUNTNUM) as source on dest.ACCOUNTNUM = source.ACCOUNTNUM
WHEN matched then
update set dest.COL1 = @Col1Value, dest.COL2 = @Col2Value, dest.COL3 = @Col3Value
when not matched then
INSERT (RECID, ACCOUNTNUM, DATAAREAID, COL1, COL2, COL3)
VALUES (@recId, @accountNum, @nvc_DataAreaId, @Col1Value, @Col2Value, @Col3Value);
END
GO
GRANT EXECUTE ON [crt].[MY_UPDATEEXTENSIONPROPERTIES] TO [UsersRole];
GO
Note the MERGE INTO statement. It allows us to either create or update a record.
For reading in the CRT, especially for extension properties, use SqlPagedQuery, SqlServerDatabaseContext, and ExtensionsEntity:
var query = new SqlPagedQuery(QueryResultSettings.SingleRecord)
{
Select = new ColumnSet(new string[] { "COL1", "COL2", "COL3" }),
From = "MY_EXTENSIONVIEW",
Where = "ACCOUNTNUM = @accountNum AND DATAAREAID = @dataAreaId",
};
query.Parameters["@accountNum"] = customer.AccountNumber;
query.Parameters["@dataAreaId"] = request.RequestContext.GetChannelConfiguration().InventLocationDataAreaId;
using (var databaseContext = new SqlServerDatabaseContext(request))
{
ExtensionsEntity extensions = databaseContext.ReadEntity<ExtensionsEntity>(query).FirstOrDefault();
if (extensions != null)
{
var col1 = extensions.GetProperty("COL1");
if (col1 != null)
{
customer.SetProperty("COL1", col1);
}
else
{
customer.SetProperty("COL1", 0);
}
. . .
}
else
{
customer.SetProperty("COL1", 0);
}
}
In order to write the data to the db, use this code to call the sproc shown above:
using (var databaseContext = new SqlServerDatabaseContext(r))
using (var transactionScope = new TransactionScope())
{
if (!r.Customer.ExtensionProperties.IsNullOrEmpty())
{
ParameterSet parameters = new ParameterSet();
parameters["@TVP_EXTENSIONPROPERTIESTABLETYPE"] = new ExtensionPropertiesTableType(r.Customer.RecordId, r.Customer.ExtensionProperties).DataTable;
databaseContext.ExecuteStoredProcedureNonQuery("MY_UPDATEEXTENSIONPROPERTIES", parameters);
}
transactionScope.Complete();
}
In case you want to write other data to the database (not extension properties on entities), build the SqlParameters one by one and match them in your sproc. For reading, it may be easier to just treat the data as extension properties and then convert it to whatever you want, or you could create your own entity and query for that.
In order to save cost or time, it may be practical for partners/customers to host their own version of the VHD that is based on the official downloadable VHD by Microsoft. One scenario could be that the Contoso demo data is good enough for development, but some additional data setup, hotfix application or code customizations may be needed. These steps could be carried out once by one person, and then that VHD could be re-used. Some partners may be on a monthly cadence to “rev” their dev environments. There are two options to do this:
Reuse the VHD and host it locally in HyperV or similar virtualization technologies
Reuse the VHD and host it in Azure
Option 1) is straightforward, and many will opt for it. There are some cases where it is useful to host in Azure, though, mostly for simpler sharing of a VM or because an appropriate host for the VM is not available (e.g., the laptop is not powerful enough). Here is a step-by-step guide that worked for me to bring the VHD up to Azure so I can spin up a new instance in a relatively short time:
Download the VHD from https://connect.microsoft.com/ and unpack it
(optional) prepare the VHD with data, binary fixes or customizations
Upload the VHD to your Azure subscription (a sketch of the upload cmdlet follows these steps). If you have not done so already, you need to install the Windows Azure SDK. If you have not done so already, you also need to create a management certificate for Azure on the local machine and upload it to Azure (basically, this grants access to the Azure subscription). Then follow this: https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-classic-createupload-vhd. I did not have to run sysprep on the VHD downloaded from Microsoft. I think this step is needed if you carry out step 2).
When the steps are done, you should see the new VHD ready to be used as a template for creating new VMs.
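As mentioned in the upload step above, the upload itself boils down to one cmdlet from the classic (ASM) Azure PowerShell module used in the linked guide. This is only a sketch; the storage URL and local path are placeholders.
# Upload the prepared VHD to Azure blob storage (classic/ASM module, placeholder values).
Add-AzureVhd -Destination "https://mystorageaccount.blob.core.windows.net/vhds/d365-devbox.vhd" `
             -LocalFilePath "C:\VHDs\FinandOps-Contoso.vhd"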
Using the POS screen layout designer is tedious work, and very likely you do not want to do it in every environment over and over again (production, sandbox, dev, etc.). The solution to this (and similar tasks) is to use the DIXF framework to import and export entities. For this particular task of moving POS screen layouts, here are the steps to follow:
Configure the data source
Enter the “Data management” workspace
Click on “Configure data source”
Select “CSV-Unicode” and edit
Set the “text qualifier” to ~ (tilde)
Save the data source
Export
Configure the data source as above
Start a new export project with target data source set to “CSV-Unicode”
One solution is to look at all EventLog entries for anything “Dynamics”. Here is how to do it:
Open the EventViewer.
Custom Views/Create new custom view.
Select Event levels you want to see.
Select the Event logs you want to see. Here, make sure you select Applications and Services Logs/Microsoft/Dynamics
Hit OK and call it “Dynamics”.
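If you prefer PowerShell over the Event Viewer UI, the same logs can be queried with Get-WinEvent. The log-name wildcard below is an assumption; list the logs first to see what exists on your box.
# List all Dynamics-related event logs.
Get-WinEvent -ListLog "*Dynamics*" | Format-Table LogName, RecordCount

# Read the newest 50 errors (Level 2) across those logs.
Get-WinEvent -FilterHashtable @{ LogName = "*Dynamics*"; Level = 2 } -MaxEvents 50 |
    Format-Table TimeCreated, ProviderName, Id, Message -AutoSize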
This video demonstrates how to store application settings securely and manage them in AX. These settings are needed for AX business logic and CommerceRuntime business logic. The data is fetched via a RetailTransactionServiceEx call. The CommerceRuntime service takes care of calling the RTS and caching the result for a configurable period. The video also shows how to test this by exposing it via RetailServer and using the RetailServer TestClient.
This video shows the steps involved to code merge a new Retail Sdk into your current customization branch. In this case, I am updating my AX 7 RTW Retail Sdk (with customizations) to the AX 7 Update 1 Sdk. Same steps apply if you snap to a hotfix or to another update. It is a good practice to use a mirror branch as it makes code merges much easier. The update of the mirror branch is a prerequisite for this step, and was shown in a separate video.
This video shows the steps involved to update your VSTS-based Retail Sdk mirror branch with a newer build. In this case, I am updating my AX 7 RTW Retail Sdk to the AX 7 Update 1 Sdk. Same steps apply if you snap to a hotfix or to another update. It is a good practice to use a mirror branch as it makes code merges much easier. The code merge (as the next step) will be shown in a separate video.
Final note: If you take the Sdk from a new development VM, it may be installed on the C:\ or J:\ drive as shown in the video. If, however, you take it from a hotfix, the Retail Sdk can be found in the 'Code' subfolder:
If you are following https://ax.help.dynamics.com/en/wiki/recurring-integrations/ to set up your recurring imports, and you are trying to use a data package with multiple entities, you may notice that the server reports a "preprocessing failure". This is a bug. Until it is fixed, a workaround is to over-layer the ApplicationFoundation:
If you are following https://ax.help.dynamics.com/en/wiki/recurring-integrations/ to set up your recurring exports, you may notice that the download URL returned by the dequeue call is wrong. This may be caused by the load balancer in a production environment. In any case, the client code that fetches the data can use simple code to solve the issue. In C#, it would look like this:
var newDownloadLocation = new UriBuilder(dataMessage.DownloadLocation)
{
Scheme = Uri.UriSchemeHttps,
Port = -1,
};
Once the download URL is “fixed up”, download with a normal GET request.