Dynamics 365 Commerce: Azure Data Explorer dashboard to investigate Commerce Scale Unit performance

Overview

Operations Insights for Dynamics 365 Commerce is now generally available and provides a wealth of telemetry. After you enable and configure it, every RetailServer API call and SQL query is logged to the customer’s Azure Application Insights account, no matter whether it originates from out-of-box or extension code. The extent of the information is comparable to what is available to Microsoft’s product and support teams, including the execution time of these operations in milliseconds. This is a treasure trove of information for customers and implementation partners to better understand what is happening inside the CSU. Azure provides different tools to “look” into this data, all based on the Kusto query language ((TODO: add link)). These are well documented.

While querying the data yourself is useful in many cases, it lacks a high-level view. While working on a Commerce implementation project recently, I built an Azure Data Explorer dashboard to provide this high-level view. After receiving great feedback from my team, and since it took a few iterations to get the Kusto queries right, I was able to “re-create” the dashboard for another Commerce implementation in under 15 minutes. I am providing a step-by-step guide on how to do that so others can use it in their implementations as well.

See a screenshot of the dashboard below:
(TODO: add link)

If you have any feedback, corrections or additional implementation ideas please let me know.

Note: Only CSU telemetry is currently supported, but POS, HardwareStation and even the Dynamics 365 Finance and SCM traces are planned for the future.

Creation

First, enable Operations Insights in F&SCM and verify that the telemetry data is flowing (TODO: add link).

Add the cluster

Go to https://dataexplorer.azure.com/clusters/ and select Add Cluster. Then configure the dialog as below:

The connection URI should have the following format:

https://ade.applicationinsights.io/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<application-insights-name>

Note that the URI above needs to be edited with the correct subscription Id, resource group name, and Application Insights name.

Add the dashboard and data source

  • Create a new Azure Data Explorer dashboard at https://dataexplorer.azure.com/dashboards
  • On the new dashboard, click the top right down-caret, pick Data Sources and click Create new data source
  • Enter the same URI that you used for the cluster above and give it a descriptive name. Note: Make sure the URI is correct; copying and pasting into this dialog is a little fiddly
  • Click Connect and select the data source. If the data source is not found, the URI is incorrect.
  • Save the dashboard before continuing.

Create the [@] Parameters

  • Click the [@] Parameters link
  • A “Time range” parameter will already be pre-configured; keep it as is
  • Create a new parameter “ScaleUnit”. See details below
  • Create a new parameter “HttpStatus”. See details below
  • Create a new parameter “RS Api”. See details below
  • Create a new parameter “Sql”. See details below
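
For illustration, each parameter is typically backed by a small source query over the telemetry. Below is a minimal Kusto sketch for the “RS Api” and “HttpStatus” parameters, assuming the standard Application Insights requests table; the “ScaleUnit” parameter works the same way, keyed off whatever customDimensions field your CSU telemetry emits (check your own traces for the exact field name):

// Source query for the "RS Api" parameter: distinct RetailServer operation names.
requests
| distinct name
| order by name asc

// Source query for the "HttpStatus" parameter: distinct HTTP result codes.
requests
| distinct resultCode
| order by resultCode asc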

At this point, we are done with the parameters. The dashboard should show these new query parameters:

Create the first tile

Strategic memory caching in Commerce SDK CRT extensions (including sample code)

Introduction

The store-side business logic for Dynamics 365 Commerce is often extended with custom code. There are no limits to what a CRT extension can do: it can process existing data from the database, process data from extended tables, or call external systems. Some of these changes to the out-of-box experience (OOBE) may impact performance negatively. Assuming the custom code is already optimized, meaning proper database indexes are applied, there may be cases when caching can lead to additional performance improvements. Note that there are ways to cache to the database, but in my experience memory caching is easier to implement and maintain. I have come across a few cases where caching was useful:

  1. Calculation/fetching of local data is expensive. The data is needed often enough that the processing impacts the experience, and it is not needed in real time (some delay is acceptable).
  2. Calculation/fetching of local data from the database. The data is needed very frequently, so even 50 ms database calls may add up. The data is not needed in real time.
  3. Calling an external API (i.e. Real-time transaction service or other services). The data is needed frequently and does not need to be real time, or changes rarely (i.e. secret keys).
  4. Fetching data that is always static for a store, customer, cart, product, etc. but still expensive to fetch.

Cases 1 and 2 are similar to any data that can only be edited in the back office and needs to be synced to the channel. Because of that, such data is by definition not real time and could be cached further if needed.

Focusing on the cases where the data is not real time, or truly external, will usually already give good gains to the user experience.

There are some cases where data can also be written in the channel. These are usually a little trickier to cache. Think about saving customer data and still seeing the old data after refreshing. In some cases, these issues can be overcome, for example when the cache key is based not just on a simple key but also on other data that changes (i.e. the cart, etc.).

Another important note: since we are caching in the memory of a web server and there are multiple load-balanced servers, there will be multiple caches. This does not present a large issue, as requests from a single user usually get routed to the same server.

Memory caching use cases

The different use cases discussed below (including proper logging as described here, and the memory cache implementation) are available for download.

Feel free to take a look or use them in your project at your own risk. Let me know of any bugs.

Case 1: Response caching inside own handler

You can override a request handler and have the new handler take care of caching for all responses of the original handler. In some scenarios, this is an advantage over caching at the caller. GetUserDefinedSecretStringValueRealtimeRequest is a good example of something to cache in the handler rather than at every caller. We can assume that the secrets can safely be cached for a certain time.
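
As a rough illustration of the pattern (a minimal sketch, not the downloadable sample: the helper name, the TTL, and the way the original handler is invoked are all placeholders), the cache lookup can be factored into a small helper around System.Runtime.Caching.MemoryCache:

using System;
using System.Runtime.Caching;

// Minimal sketch of an in-memory cache helper; assumes the CRT process can
// use System.Runtime.Caching. Name and TTL choices are illustrative only.
internal static class CacheHelper
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns the cached value for 'key', or computes it, caches it for 'ttl', and returns it.
    public static T GetOrAdd<T>(string key, Func<T> valueFactory, TimeSpan ttl)
    {
        object cached = Cache.Get(key);
        if (cached != null)
        {
            return (T)cached;
        }

        T value = valueFactory();
        if (value != null)
        {
            Cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
        }

        return value;
    }
}

// Inside the overriding handler, the original handler's execution would be wrapped like:
// var response = CacheHelper.GetOrAdd(
//     "Secret_" + secretName,                  // cache key; 'secretName' is illustrative
//     () => ExecuteOriginalHandler(request),   // however your SDK version invokes the base handler
//     TimeSpan.FromMinutes(10));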

Case 2: Response caching outside own handler

In this case, the caching is controlled at the caller, so different callers can use different caching behaviors. If the same caching code and behavior would have to be duplicated in multiple places, consider case 1 instead.

Case 3: Simple value caching outside own handler

This case is similar to case 2: the caching occurs at the caller. However, we may not always have a “proper” Response to cache but a simple value, as sketched below.
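
Under the same assumptions as the sketch in case 1 (the helper and all names are illustrative, not the downloadable sample), caching a simple value at the caller could look like:

// Cache a plain computed value, keyed per store; 'storeId' and
// LoadStoreHoursFromDatabase are hypothetical placeholders.
string storeHours = CacheHelper.GetOrAdd(
    "StoreHours_" + storeId,
    () => LoadStoreHoursFromDatabase(storeId),   // the expensive lookup being avoided
    TimeSpan.FromMinutes(30));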

Maximizing diagnostics capabilities with Application Insights for Dynamics 365 Commerce CSU

Operations Insights, Applications Insights and custom logger to maximize results for Dynamics 365 Commerce CSU code.

Anyone who has troubleshot a production environment with diagnostics traces in order to understand why a certain operation did something unexpected knows that it can be quite difficult. Anyone who then found that not enough traces were available will agree that it would have been great to have spent the time up front to do it right…

So, if you have not yet, let’s do it right this time.

There are two logging-related topics I want to discuss first with regard to tracing in the Commerce CSU.

Operations Insights

Enabling Operations Insights will allow you to see almost as much as Microsoft can. You can check on slow queries, long-running operations, and errors, count how many checkouts you had, and so on. The following screenshot is from a dashboard that uses the Operations Insights data from a performance environment. It is very easy to see what the slowest SQL statements are.

A simple performance dashboard that uses Operational/Application Insights

Details about how to use the data are a topic for another blog.

Extension code instrumentation

Instrument your extension code for Application Insights. When a customer or partner extends the Commerce functionality, Microsoft recommends properly instrumenting the extension code with logging statements. The documentation includes a simple class that can be used for logging. Once that is done and the code is deployed, it will emit traces that look like this:

A custom trace in Application Insights

I have highlighted the customDimensions property in red. It is an important piece of information, as it gives us the capability to put anything into the trace for later querying.

Improving the logger class

Any out-of-the-box trace that comes from the CSU carries a lot of data inside customDimensions. The ones that are very useful to filter or correlate on while querying Application Insights (with the Kusto language) are highlighted below in yellow:

out-of-box experience (OOBE) traces and its customDimensions property

The ActivityId allows you to query for all traces that occurred as part of one request.

AppSessionId ties together all data from one terminal.

UserSessionId is unique for each logged-in user.

There are many more that are pretty self-explanatory.
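
For example, once you have the ActivityId of a failing request, a small Kusto query (sketched here against the standard Application Insights traces table; the id value is a placeholder) pulls up everything that happened in that request:

traces
| where tostring(customDimensions.ActivityId) == "<activity-id-of-the-request>"
| order by timestamp asc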

I improved the simple logging class that Microsoft provides so that customDimensions holds more meaningful information. In the screenshot below you can see that many of the out-of-box experience (OOBE) properties are now included in the custom trace.

Once we instrument the code (in places where it would aid troubleshooting), we have a great chance to find issues just by looking into the Application Insights store. The code to write a trace is simple:

// Requires Microsoft.ApplicationInsights.DataContracts (SeverityLevel)
// and System.Globalization (CultureInfo).
var logger = Logger.GetLogger(request.RequestContext);
var trace = logger.CreateTraceWithSessionContext(SeverityLevel.Information);

// Anything added to Properties ends up in customDimensions for later querying.
trace.Properties.Add("executionTimeMilliseconds", stopwatch.ElapsedMilliseconds.ToString(CultureInfo.InvariantCulture));
trace.Properties.Add("EventId", "100000");
trace.Message = "Some message";
logger.TrackTrace(trace);
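
Once such traces are flowing, they can be pulled back out with a small Kusto query; here is a sketch against the standard Application Insights traces table, keyed off the EventId and executionTimeMilliseconds properties set above:

traces
| where tostring(customDimensions.EventId) == "100000"
| project timestamp, message, executionTimeMs = tostring(customDimensions.executionTimeMilliseconds)
| order by timestamp desc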

Feel free to take a look or use the logger for your project at your own risk. Let me know if you find any bugs. It is ready to go for a Dynamics 365 CRT extension project based on the latest Commerce SDK.

I am looking forward to a future release when Microsoft supports sending traces from Dynamics 365 Finance and SCM to Application Insights as well.

Thanks.

Use Azure Data Explorer to query LCS Raw information logs

Note: This is now also part of the official documentation on docs.microsoft.com: Use Azure Data Explorer to query raw information logs – Finance & Operations | Dynamics 365 | Microsoft Docs. I will continue to maintain the samples at the end of this blog.

There are occasions when a customer, partner, consultant, or support engineer needs to look at the low-level Dynamics 365 Finance & Operations telemetry data. Use cases include troubleshooting errors, performance-related investigations, or just gaining some additional understanding of how the platform works. Telemetry data can be accessed by authorized users via the Environment monitoring part of the LCS portal, filtered in a few different ways, and displayed inside the LCS portal’s raw logs section. A data grid can be used to inspect the log entries. LCS does not allow for more sophisticated pivoting, so users turn to Excel for that purpose; the telemetry data can also be downloaded in CSV format.

However, Excel is not the optimal tool for advanced querying of this data. A tool designed for exactly this purpose is Azure Data Explorer. It provides the Kusto query language, which is optimized for high-performance data analytics. Answering questions like “how often has a certain process taken place, how long has it taken in 90% of cases, how often per hour has a certain action taken place over the course of a day” becomes a lot easier and can be backed up with powerful graphics as well.

Here are examples of what the graphics could look like:

A lesser-known feature of Azure Data Explorer is that it supports ingestion of CSV files. We can use it to upload and stage our CSV data files so they can be queried with the Kusto language. If you have not set up an Azure Data Explorer cluster yet, follow these steps.

Steps to upload to Azure Data Explorer

  • Run your query on the LCS raw logs page
  • Important: Adjust the time interval or filter to get to the right data (the row limit for the export in the next step is 5000)
  • Export the grid to Excel
  • Open the file in Excel and save it without making any changes (this seems to fix a formatting issue)
  • In your Azure Data Explorer, right-click the cluster in the tree view, select “Ingest new data”, and then on the next page “Ingest data from a local file”
  • Pick your cluster, name a new table for the data to be imported into, select up to 10 CSV files to import, and select CSV format. Hit Next a few times until your data is imported.
  • Use the Query tile to write a Kusto query against your data.

To learn more about the Kusto query language, go here.

Sample queries

Modern POS

All errors

Sometimes it’s nice to get an idea of what all the errors are. The text fields that describe errors are not used consistently, so it’s better to use the EventIds and map them to the correct error names. They can be looked up in these two Commerce SDK files (though the code snippet below may already give you all of them):

  • Pos.RetailLogger.js
  • Microsoft.Dynamics.Retail.Diagnostics.Sinks.man
// Note: Filter EventSeverity == Error before exporting/ingesting 
MPosErrors24h
| extend EventName = case(
EventId == 53231, "runtimeInterceptorFailed",
EventId == 48434, "posPaymentTerminalAuthorizeRefundActivityFailed",
EventId == 48347, "posAuthorizeOrRefundPaymentFailed",
EventId == 48104, "posTenderPaymentOperationFailed",
EventId == 46802, "extensibilityFrameworkExecuteRuntimeRequestFailed",
EventId == 44322, "viewModelCartProcessTextFailed",
EventId == 42112, "viewModelCartProcessTextFailed",
EventId == 42106, "RetailProxyExtensionAdapterManagerMethodNotFound",
EventId == 40450, "posInitiatedHardwareStationRequestFailed",
EventId == 40409, "peripheralsBarcodeScannerEnableFailed",
EventId == 40325, "operationSetQuantityOptionsValidationFailed",
EventId == 40255, "RetailOperationFailed",
EventId == 40209, "coreRetailOperationOnExecutingFailed",
EventId == 40191, "modelManagersChainedRequestExecutionFailed",
EventId == 40170, "modelManagersErrorParserHardwareStationError",
EventId == 40150, "modelManagersChainedRequestFactoryExecuteOfflineLogonRequestUnAvailable",
EventId == 40104, "ModelManagersCommerceRuntimeRequestError",
EventId == 40101, "ModelManagersRetailServerRequestError",
EventId == 7505 , "PaymentConnectorLogErrors",
EventId == 7503 , "PaymentConnectorLogException",
EventId == 7014 , "HardwareStationActionFailure",
EventId == 1001 , "CrtExecuteRequestErrorFailure",
strcat("*************** Unknown EventId: ", EventId))
| where EventSeverity == "Error"
| summarize count() by EventId, EventName
| order by count_ desc

Now it’s easier to decide where to look next, i.e. what more detailed queries to issue for specific errors…

Errors returned by RetailServer

// Note: Filter EventSeverity == Error before exporting/ingesting 
MPosErrors24h
| extend EventName = case(
EventId == 53231, "runtimeInterceptorFailed",
EventId == 48434, "posPaymentTerminalAuthorizeRefundActivityFailed",
EventId == 48347, "posAuthorizeOrRefundPaymentFailed",
EventId == 48104, "posTenderPaymentOperationFailed",
EventId == 46802, "extensibilityFrameworkExecuteRuntimeRequestFailed",
EventId == 44322, "viewModelCartProcessTextFailed",
EventId == 42112, "viewModelCartProcessTextFailed",
EventId == 42106, "RetailProxyExtensionAdapterManagerMethodNotFound",
EventId == 40450, "posInitiatedHardwareStationRequestFailed",
EventId == 40409, "peripheralsBarcodeScannerEnableFailed",
EventId == 40325, "operationSetQuantityOptionsValidationFailed",
EventId == 40255, "RetailOperationFailed",
EventId == 40209, "coreRetailOperationOnExecutingFailed",
EventId == 40191, "modelManagersChainedRequestExecutionFailed",
EventId == 40170, "modelManagersErrorParserHardwareStationError",
EventId == 40150, "modelManagersChainedRequestFactoryExecuteOfflineLogonRequestUnAvailable",
EventId == 40104, "ModelManagersCommerceRuntimeRequestError",
EventId == 40101, "ModelManagersRetailServerRequestError",
EventId == 7505 , "PaymentConnectorLogErrors",
EventId == 7503 , "PaymentConnectorLogException",
EventId == 7014 , "HardwareStationActionFailure",
EventId == 1001 , "CrtExecuteRequestErrorFailure",
strcat("*************** Unknown EventId: ", EventId))
| where EventSeverity == "Error"
| where EventId == 40101
| summarize count() by requestAction, error
| order by count_ desc

CSU

All errors

Similar to the MPOS errors, we can get these from the Commerce SDK (Microsoft.Dynamics.Retail.Diagnostics.Sinks.man):

// Note: Filter EventSeverity == Error before exporting/ingesting 
CSUErrors24h
| extend EventName = case(
EventId == 1017, "CrtSlowHandlerExecution",
EventId == 1005, "CrtExecuteRequestWarningFailure",
EventId == 5055, "RetailServerRequestWarningFailure",
EventId == 60208, "SyncLibraryMergeDataIntoTableWarning",
EventId == 60125, "ProcessDeleteRequestStart",
EventId == 2617, "CrtWorkflowUserAuthenticationRequestHandlerFailure",
EventId == 2486, "CrtServicesEmployeePasswordDoesNotMatch",
EventId == 5106, "RetailServerSecretRetrievalWarning",
EventId == 2503, "CurrencyServiceGetSupportedChannelCurrenciesNotFound",
EventId == 6943, "RtsClientLibraryApiExecutionWarning",
EventId == 3007, "CrtTransactionServiceClientRtsExecutionWarning",
EventId == 2413, "CrtServicesSalesOrderTransactionServiceMarkReturnedItemsFailure",
EventId == 2483, "CrtServicesLocalLogonFailedDueToIncorrectStaffId",
strcat("*************** Unknown EventId: ", EventId))
| summarize count() by EventId, EventName
| order by count_ desc

A next step would be to go through each of these error types and look at them more closely. A few of them could indicate quality issues that affect the user experience.

In many cases, these errors can be fixed by cleaning up the extension code, adding proper SQL indexes, or rethinking the problem approach.

In some cases, these errors could indicate problems with the out-of-box experience (OOBE) or deployment. Please open a support request to get these fixed by Microsoft.

Slow CRT handlers by request type

// Note: Filter EventSeverity == Error before exporting/ingesting 
CSUErrors24h
| extend EventName = case(
EventId == 1017, "CrtSlowHandlerExecution",
EventId == 1005, "CrtExecuteRequestWarningFailure",
EventId == 5055, "RetailServerRequestWarningFailure",
EventId == 60208, "SyncLibraryMergeDataIntoTableWarning",
EventId == 60125, "ProcessDeleteRequestStart",
EventId == 2617, "CrtWorkflowUserAuthenticationRequestHandlerFailure",
EventId == 2486, "CrtServicesEmployeePasswordDoesNotMatch",
EventId == 5106, "RetailServerSecretRetrievalWarning",
EventId == 2503, "CurrencyServiceGetSupportedChannelCurrenciesNotFound",
EventId == 6943, "RtsClientLibraryApiExecutionWarning",
EventId == 3007, "CrtTransactionServiceClientRtsExecutionWarning",
EventId == 2413, "CrtServicesSalesOrderTransactionServiceMarkReturnedItemsFailure",
EventId == 2483, "CrtServicesLocalLogonFailedDueToIncorrectStaffId",
strcat("*************** Unknown EventId: ", EventId))
| where EventId == 1017
| summarize count(), sum(inclusiveExecutionTime), percentiles(inclusiveExecutionTime, 75, 90) by handlerAssemblyName, requestType
| order by sum_inclusiveExecutionTime desc

F&O

All errors

// Note: use selection "All error events" to only get the errors
FNOErrors24h
| summarize count() by formName, targetName, errorLabel
| order by count_ desc

formName, targetName, and errorLabel are all good candidates to look up in Azure DevOps to find the code location. That gives more insight into the areas where these errors occur.

Example: The @SYS18885 error is thrown from the ReqPlanData class. Now I can focus on figuring out why that is the case (is Master planning not configured right?).
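
A follow-up query for a specific label, reusing the table and columns from above, could then look like this:

FNOErrors24h
| where errorLabel == "@SYS18885"
| summarize count() by formName, targetName
| order by count_ desc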

Slow SELECT queries

// use selection "slow queries" in LCS diagnostics
SlowQueries
| extend MainTableName = extract("FROM\\s([A-Z0-9]+)\\s", 1, statement)
| where statement contains "SELECT"
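
The extracted MainTableName becomes useful for aggregation; for example, this sketch ranks tables by the number of slow SELECTs (the same pattern works for the INSERT and DELETE variants below):

SlowQueries
| extend MainTableName = extract("FROM\\s([A-Z0-9]+)\\s", 1, statement)
| where statement contains "SELECT"
| summarize count() by MainTableName
| order by count_ desc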

Slow inserts

// use selection "slow queries" in LCS diagnostics
SlowQueries
| extend MainTableName = extract("INSERT INTO ([A-Z0-9]+)\\s", 1, statement)
| where statement contains "INSERT"

Slow deletes

// use selection "slow queries" in LCS diagnostics
SlowQueries
| extend MainTableName = extract("DELETE FROM ([A-Z0-9]+)\\s", 1, statement)
| where statement contains "DELETE FROM"

Batch jobs performance

// use selection "All logs" and add query "where TaskName Equals BatchFinishedOneTask" in LCS diagnostics
BatchFinishedOneTask
| summarize count(), sum(durationInMilliSeconds), percentiles(durationInMilliSeconds, 75, 90) by className
| order by sum_durationInMilliSeconds desc

Configuring CommerceRuntime extensions properly

The CommerceRuntime is the business logic for Retail POS and other channels. Here are a couple of good rules for configuring it correctly.

  1. In both of the commerceRuntime.*.ext config files, be specific about which handlers you want. Do not use a wildcard inclusion for the whole assembly; if you do, you have no control over which handlers are enabled or disabled. The example below should help.
  2. Even though you could write code that figures out whether the runtime context is online or offline, it’s not a good practice. It’s better to use the config file to control that. See the example below.
  3. To enable different functionality for online and offline scenarios, you have multiple options:
    1. If you do not want/need offline mode at all, either disable the feature for that register OR use the installer that does not include the offline part.
    2. If you want just the built-in offline features (fewer features than online), you can have an empty CommerceRuntime.MPOSOffline.Ext.config file.
    3. If you want what you get in 3.2 plus a few custom-built features, include some of the entries in the CommerceRuntime.MPOSOffline.Ext.config file.
    4. If you want what you get in 3.2 plus all of your custom-built features, have the same entries in CommerceRuntime.MPOSOffline.Ext.config and CommerceRuntime.Ext.config.

Notice that the config files below are almost the same; the one difference is that the online version has three more handlers (ProductAvailability- and PurchaseOrder-related). Those are based on RTS calls, which cannot be made in offline mode.

CommerceRuntime.Ext.config:

<?xml version="1.0" encoding="utf-8"?>
<commerceRuntimeExtensions>
  <composition>
    <!--New Handlers-->
    <add source="type" value="FOO.Commerce.Runtime.Extensions.Receipts.CustomReceiptsRequestHandler, FOO.Commerce.Runtime.Extensions.Receipts" />
    <add source="type" value="FOO.Commerce.Runtime.Extensions.Customer.CreateCustomerRequestHandler, FOO.Commerce.Runtime.Extensions.Customer" />
    <add source="type" value="FOO.Commerce.Runtime.Extensions.Customer.UpdateCustomerRequestHandler, FOO.Commerce.Runtime.Extensions.Customer" />
    <add source="type" value="FOO.Commerce.Runtime.Extensions.ProductAvailability.SaveCartRequestHandler, FOO.Commerce.Runtime.Extensions.ProductAvailability" />
    <add source="type" value="FOO.Commerce.Runtime.Extensions.ProductAvailability.ValidateCartForCheckoutRequestHandler, FOO.Commerce.Runtime.Extensions.ProductAvailability" />
    <add source="type" value="FOO.Commerce.Runtime.Extensions.PurchaseOrder.SavePurchaseOrderRealtimeRequestHandler, FOO.Commerce.Runtime.Extensions.PurchaseOrder" />
  
    <!--Extended Handlers-->
    <add source="type" value="FOO.Commerce.Runtime.Extensions.StoreWalkin.StoreHourWalkinDataService, FOO.Commerce.Runtime.Extensions.StoreWalkin" />

    <!--Extended Triggers-->
    <add source="type" value="FOO.Commerce.Runtime.Extensions.ReturnPolicy.GetSalesOrderDetailsByTransactionIdServiceTrigger, FOO.Commerce.Runtime.Extensions.ReturnPolicy" />
  </composition>
</commerceRuntimeExtensions>

CommerceRuntime.MPOSOffline.Ext.config:

<?xml version="1.0" encoding="utf-8"?>
<commerceRuntimeExtensions>
  <composition>
    <!--New Handlers-->
    <add source="type" value="FOO.Commerce.Runtime.Extensions.Receipts.CustomReceiptsRequestHandler, FOO.Commerce.Runtime.Extensions.Receipts" />
    <add source="type" value="FOO.Commerce.Runtime.Extensions.Customer.CreateCustomerRequestHandler, FOO.Commerce.Runtime.Extensions.Customer" />
    <add source="type" value="FOO.Commerce.Runtime.Extensions.Customer.UpdateCustomerRequestHandler, FOO.Commerce.Runtime.Extensions.Customer" />
  
    <!--Extended Handlers-->
    <add source="type" value="FOO.Commerce.Runtime.Extensions.StoreWalkin.StoreHourWalkinDataService, FOO.Commerce.Runtime.Extensions.StoreWalkin" />

    <!--Extended Triggers-->
    <add source="type" value="FOO.Commerce.Runtime.Extensions.ReturnPolicy.GetSalesOrderDetailsByTransactionIdServiceTrigger, FOO.Commerce.Runtime.Extensions.ReturnPolicy" />
  </composition>
</commerceRuntimeExtensions>

Implementing Retail Store Credit Card discounts


This post originally appeared on Implementing Retail Store Credit Card discounts.

An often-requested Dynamics 365 Retail channel feature is support for store credit cards. The retailer wants to encourage the use of a store credit card, so customers who use it should get a discount percentage.

Ideally, we should be able to configure the discount percentage, the products it does or does not apply to, and certain other ordering rules for the discount (apply on top of other discounts, or replace other discounts if better, etc.). The retailer should be able to configure all of this in Headquarters (AX) without changes in the channel.

Note that this discount applies only during or right before payment; unlike other discounts, it cannot be applied during simple add-to-cart operations.

Modelling the discount as an affiliation

The pricing/discount engine is a complex piece of code. Even though it supports extensibility, I would rather not go there. Is there an easier way? Yes, there is. Read on…

We do not want to apply the associated discounts automatically, but only when a special store credit card is used. We can use affiliations and affiliation price groups to achieve that. The only thing we need to “code” is the application of the affiliation to the cart at the right moment (right when the customer checks out with a store credit card).

A nice benefit of not needing pricing extensions is that we can use the Price simulator in HQ to verify that our discount rules are correct.

To try this out, we need to create a new discount for the products we want, with the right percentage, associate it with a price group, and add it to an affiliation.

As soon as we save, we can try it out in the Price Simulator. Below, you can see that the store credit card discount is applied to all products, and other discounts are honored (because the StoreCreditCardDiscountAffiliation was set). As soon as the affiliation is removed from the General section, the discounts disappear.

Applying the store credit card discount in POS

As mentioned above, we only need to apply the correct affiliation to the cart before checkout, and we are done. That functionality already exists in POS today, so to simply try this out, I did not need to make any code changes (POS transaction screen -> Discounts -> Add affiliation):

Ideally, this affiliation should only be added right before the store credit card is used. Therefore I opted for adding a new button next to the “Pay Card” button that does all the work, so the cashier does not need to go through the “Add Affiliation” operation. See the steps below for details.

It would be even better to “detect” that the credit card is a store credit card and only then apply the affiliation. However, it turns out that this is not easily doable today, as the payment view in POS does not support it. Hence, I opted for the slightly more manual option where the cashier has to choose the button.

Here are the steps to add the button to POS:

    • In HQ, add a new operation (i.e. StoreCreditCardDiscountAffiliation, 5000)
    • In the layout designer, edit the button grid, add the button, and associate it with the new operation

    • Run these jobs: 1050 for affiliation price groups, 1020 for the discount, 1090 for the button grid changes
    • In the POS.Extensions project, create a new extension project, StoreCardDiscountExtension, with a new operation mapped to the same Id created earlier; its handler just adds the affiliation to the cart:

public executeAsync(request: StoreCardDiscountOperationRequest<StoreCardDiscountOperationResponse>): Promise<ClientEntities.ICancelableDataResult<StoreCardDiscountOperationResponse>> {

    // Apply the affiliation that carries the store credit card discount to the cart.
    let addAffiliationRequest: CartOperations.AddAffiliationOperationRequest<CartOperations.AddAffiliationOperationResponse> =
        new CartOperations.AddAffiliationOperationRequest(["StoreCreditCardDiscountAffiliation"], [], request.correlationId);

    return this.context.runtime.executeAsync(addAffiliationRequest)
        .then((
            result: ClientEntities.ICancelableDataResult<CartOperations.AddAffiliationOperationResponse>
        ): ClientEntities.ICancelableDataResult<StoreCardDiscountOperationResponse> => {

            // Propagate cancellation and wrap the result in this operation's response type.
            return <ClientEntities.ICancelableDataResult<StoreCardDiscountOperationResponse>>{
                canceled: result.canceled,
                data: new StoreCardDiscountOperationResponse()
            };
        });
}

The POS extension code is attached below in zip format.
StoreCardDiscountPosExtensions

Applying the store credit card discount in online channel

The same discount logic as above is executed automatically, since we use built-in discount concepts (as long as RetailServer is being called). However, just as in POS, we need to add the affiliation to the cart at the right moment. In eCommerce, we can accomplish this by updating the AffiliationLines property of the Cart object.

Since in eCommerce the payment process is a wizard with multiple steps, we do not need to add a new UI action to apply the discount affiliation. We can handle this better by detecting the use of a store credit card by its number and then applying the affiliation automatically (from the UI or a 3rd-party e-commerce wrapper). The customer then sees the updated cart while moving to the next payment step.

Retail Channel performance investigations

This post originally appeared on https://dynamicsnotes.com/retail-channel-performance-investigations/.

There may be a time during a Retail project when you hear someone say: “Doing X in POS is slow. What’s going on?”. If so, my hope is that this happens during the implementation phase and not in a live production environment.

What do we do to find out what is wrong? Is it a customization to the CRT business logic, latency of an external call, general latency between POS and the server, or a Microsoft bug?

The following steps will hopefully help to give an answer to these questions.

Latency between POS and RetailServer

RetailServer API timings as seen from POS

Simulating network “issues”

“Looking inside” RetailServer/CRT/SQL with Commerce Runtime Analyzer

Profiling Channel database SQL queries

Some things to remember:

RetailServer can be either Microsoft-hosted or RSSU-hosted. This investigation is agnostic to the hosting model, but you need to make sure you are testing against the right RetailServer (especially if you use both Microsoft-hosted and RSSU-hosted at the same time).

A Microsoft-hosted RetailServer in production cannot be accessed, so some of the steps below cannot be carried out in a production environment. It is advised to carry them out on a lower-tier environment.

An RSSU-hosted RetailServer, even in production, is under the management of the customer, so the steps below can be carried out there, but with care.

Sometimes the slowness only occurs when multiple POS terminals are used at the same time. If that is the case, you can still use the steps below, but you will have to reproduce the issue by running multiple POS instances.

Profiling Channel database SQL queries


This post originally appeared on https://dynamicsnotes.com/profiling-channel-database-sql-queries/.

The information below is not particular to the Retail channel or Dynamics 365 F&O databases. It can be used on any SQL database.

Monitoring performance by using the Query Store

Find poorly performing queries, missing indexes, and changes in query performance with the Query Store. More information can be found here:

Monitoring performance by using the Query Store

Monitoring performance by using the SQL Profiler

SQL Profiler can be used to inspect, trace, and measure all SQL access. Compared to the Query Store, SQL Profiler provides data that is “less prepared”. However, in some cases that is exactly what’s needed. Follow these steps:

  1. Start SQL Server Profiler and connect to the Channel database to be profiled
  2. Start a new trace with default configuration (and hit Run)
  3. Rerun the scenario for which you want to profile the SQL access
  4. Stop the trace
  5. In SQL Server Profiler, hit File/Save As/Trace Table and save the trace data to a new table, i.e. into tempdb.ChannelDbTraces
  6. Using SQL Server Management Studio, query the trace table
SELECT [RowNumber],
       [EventClass],
       IIF((CONVERT(NVARCHAR(MAX), [TextData]) LIKE '% [[]ext%')
            OR (CONVERT(NVARCHAR(MAX), [TextData]) LIKE '% ext%'), 'ext', '') AS ExtSchema,
       [TextData],
       [CPU],
       [Reads],
       [Writes],
       [Duration],
       [SPID],
       [StartTime],
       [EndTime]
FROM   [tempdb].[dbo].[ChannelDbTraces]
WHERE  LoginName = 'axretailruntimeuser'
       AND Duration IS NOT NULL
       AND (TextData IS NOT NULL
            AND CONVERT(NVARCHAR(MAX), [TextData]) <> 'exec sp_reset_connection')
       AND IIF((CONVERT(NVARCHAR(MAX), [TextData]) LIKE '% [[]ext%')
                OR (CONVERT(NVARCHAR(MAX), [TextData]) LIKE '% ext%'), 'ext', '') = 'ext'
ORDER  BY Duration DESC

Useful query scenarios:

  • Longest SQL queries: add the order clause "order by Duration desc". By the way, the Duration column's unit is microseconds.
  • Include only extended SQL queries: add the filter clause "and IIF((convert(nvarchar(max), [TextData]) like '% [[]ext%') or (convert(nvarchar(max), [TextData]) like '% ext%'), 'ext', '') = 'ext'" (in 7.2 and later, any custom SQL has to be implemented in the [ext] database schema).
  • Find queries that are called multiple times and order by the number of calls and execution times. This may help to find out whether caching could be applied to optimize Microsoft or custom code:
    SELECT count(CONVERT(NVARCHAR(max), [textdata])) CountOfExecutes,
           CONVERT(NVARCHAR(max), [textdata])  as SQLStatement, sum(duration) SUMOfDuration, avg(duration) AVGOfDuration
    FROM   [tempdb].[dbo].[ChannelDbTraces] 
    WHERE  loginname = 'axretailruntimeuser' 
           AND duration IS NOT NULL 
           AND ( textdata IS NOT NULL 
                 AND CONVERT(NVARCHAR(max), [textdata]) <> 
                     'exec sp_reset_connection' ) 
    group by CONVERT(NVARCHAR(max), [textdata])
    order by count(CONVERT(NVARCHAR(max), [textdata])) desc, sum(duration) desc
    

Looking at these SQL traces can reveal issues with inadequate caching. Some CRT base entities are cached, so you will only see a single call (per RetailServer instance); but if you extend an entity, you need to make sure you implement caching as well, if it meets the requirements. For example, fetching an item is cached in the CRT base implementation, so you should see that call only once. But if you extended the item with an extension table AND did not add any caching, you will see many calls with the same query during POS operations that require an item lookup. That would indicate that caching should be considered. The caching of an extended CRT entity will be covered in a different blog.

The trace below was taken by adding a single item to a cart (unsorted, so the shown order is the actual execution order). Notice the row marked in red. This is the GetItem call of the CRT, followed by the fetch of the item’s extended data. We can clearly see that both of these are only called once, as expected. If this item is added again within a short period of time, we should not see these SQL traces again.

I can see quite a few similar SQL calls that fetch pricing data, just for adding a single item to a cart. I am not sure why that is; possibly this is something that could be optimized by Microsoft. As a partner/customer, you have full control over the SQL logic in the [ext] schema; any other issues should be submitted to Microsoft.

Since the data is in the form of a SQL table, it can easily be shared with support or engineers to help with troubleshooting. Simply right-click the results header in Management Studio, choose Save Results As, pick a CSV file, and you are good.

“Looking inside” RetailServer/CRT/SQL with Commerce Runtime Analyzer

This post originally appeared on https://dynamicsnotes.com/looking-inside-retailserver-crt-sql-with-commerce-runtime-analyzer/.

The Retail product team built a tool that can be used to trace RetailServer calls, CRT request executions, and SQL calls. It shows call-stack-like information based on event traces, similar to the Dynamics 365 Trace Parser. The tool can be used to find slow handlers and slow SQL, or just to understand what is happening as part of a call.

As usual, a picture says more than 100 words. See my investigation of the slow performance of the UpdateCartLines RetailServer call. From that call stack, I can learn that this flow includes a lot of internal steps: get identity, get cart, get products, get units of measure, calculate reason codes, calculate prices, calculate discounts, calculate sales tax, calculate totals, save versioned cart, final saving of the transaction in SQL, and much more. This is VERY valuable information and can be obtained with the Commerce Runtime Analyzer WITHOUT reflecting the source code.


I notice that the CalculateCartRequiredReasonCodesServiceRequest takes 2021 ms. When I add up the included child timings, a full 2 s is missing. This is in my own customization (for this sample, I added a 2 s delay in an extended handler).

Another scenario is to find out what data the CRT logic is using. I ran a product search and the tool lets me know that the data for the 62 result rows comes from [crt].[GetProductsSearchResultsByText_V2].

The tool is not perfect. At this time, it does not seem to show custom handlers properly. If the Retail team gets enough “requests”, I am sure this would be an easy addition.

In order to use this tool, follow these steps:

1. Ask me for a link to CommercePerfAnalyzer.zip
2. Unzip and run CommercePerfAnalyzer.exe
3. Exercise your scenario
4. Wait a few seconds
5. Inspect the output of the tool

Simulating network “issues”

This post originally appeared on https://dynamicsnotes.com/simulating-network-issues/.

In some cases it is very useful to see how a web site would perform under network conditions worse than what you currently have. You can use this tool to simulate them: https://jagt.github.io/clumsy/. For example, if you want to simulate what a network latency of 200 ms feels like in POS, run the tool on the same computer as POS and configure it with 200 ms outbound lag.

Even if you are on a good network, with this tool you can evaluate what it would look like for someone with higher latency (or other network conditions).

RetailServer API timings as seen from POS

This post originally appeared on https://dynamicsnotes.com/retailserver-api-timings-as-seen-from-pos/.

Some APIs may “appear” slower than others. To find out, just hit F12, whether you are running in the browser or in ModernPOS, to launch the developer tools (ModernPOS may require enabling Developer Mode in Windows).

Running the CloudPOS logon sequence on the same computer that I measured the latency above from, I get these timings:

As you can see, the quickest API calls all take around 60 ms, which makes sense, as our measurements above showed an ever-present latency of 41 ms.

Depending on which scenario is slow, focus on it and find out which exact RetailServer call(s) are the culprits. Then move on to the next step to find out why.

By the way, I am not too worried about the time a login takes, as it is usually not a very frequent operation. This may be different for customers that configure a logoff after each transaction or after a certain time. In general, I would focus on the top 5 customer scenarios.

Latency between POS and RetailServer

This post originally appeared on https://dynamicsnotes.com/latency-between-pos-and-retailserver/.

There is always going to be a physical separation between the server (RetailServer) and the client (ModernPOS or a CloudPOS browser instance). If this physical distance is large, every RetailServer call incurs a larger additional latency. There is no code change that can fix this; a redeployment of cloud or client components is needed to lower the value.

Here is how to measure the latency.

  1. Find out which Azure data center the RetailServer is hosted in. Use the LCS environment page for the specific environment or ask your IT administrator. For example, my RetailServer is hosted in the Azure region “West Central US”.
  2. On the same computer as your store terminal, open a browser window and browse to http://www.azurespeed.com/. Check the correct Azure region, or a few in the same area. The web page will tell you the latency from your computer to the data center the RetailServer is hosted in. It is very important to use the same computer as the store terminal, as customers sometimes use a dedicated network for the store terminals, and a computer in the back office may be on a different network.

In my example, you can see I get a latency of about 41 ms (for every RetailServer call). That is a very good value (for this particular single store, it may have been even better to use “West US 2” for a faster response). Of course, the choice should be balanced across all the stores that use this RetailServer instance.

If you want to investigate further where a latency issue occurs, you can use other networking utilities, for example tracert (http://www.mediacollege.com/internet/troubleshooter/traceroute.html). It lists all the routers a packet passes through until it reaches its destination, or fails to and is discarded. In addition, it tells you how long each ‘hop’ from router to router takes.

Note that Azure does not seem to allow ICMP packets through, but tracert and ping can still be used to get a sense of what the issues are outside of the data center.

Beefing up ModernPOS security with a real app package signing certificate and AAD authentication

Overview

A real certificate should be used to code-sign the ModernPOS packages. That allows for more security and simpler deployments (as no self-signed certs need to be installed before installing MPOS).

Since the real certificate is usually not accessible to all developers on the team (and should not be), non-official builds keep using a simple self-signed cert, i.e. the Contoso certificate shipped with the Retail Sdk.

Additionally, recent releases of MPOS, RetailServer, and Dynamics AX use AAD app ids for proper authorization. A one-time setup in AAD and AX is needed. With the proper setup, we configure AX to only allow access from ModernPOS clients that claim the correct app id combination.

Note that there is a direct relation between the signing certificate, the reply address of the final built ModernPOS, and the AAD app id.

With these recent changes, non-production environments can be configured to allow access from both real and self-signed ModernPOS clients. The actual production environment, however, should only allow access by MPOS clients with real certificates; self-signed certs should not be allowed.

An automatic benefit is that a developer cannot accidentally connect to production during the development phase.

Note that all the steps below have to be carried out once per tenant, not per environment.

Detailed steps

Ideally, the following steps should be carried out after a dev environment is fully working with a self-signed cert. That ensures that AX is correctly setup (workers, stores, registers, etc.).

1. Build server setup

The first step is to install a real certificate on the build server so that the Retail Sdk build can use it.

Note that the steps below require administrative access to the build server. Therefore a cloud-hosted build server is required (a Microsoft-hosted build server cannot be used, as it does not provide administrative privileges). This is unfortunate, and Microsoft is working on a solution.

2. Initial Retail Sdk setup

We want to use the real certificate in Release builds (official) but keep using the Contoso signing cert for dev purposes in Debug builds. Other strategies can be used, but switching on the build flavor has worked well. The RetailSdk allows us to provide either a path to a certificate or a certificate thumbprint. We will use the first approach for the self-signed test certificate and the second approach for the real certificate. We can use simple MSBuild logic to switch between the two; below is an example. This change would have to be made to the RetailSdk’s customization.settings or, better, to a global.props file sitting next to it with all customer values. Use the thumbprint from the step above.

<!-- Release builds pick up the secure cert from the build server with named thumbprint, Debug builds use the test cert part of the Retail Sdk -->
<ModernPOSPackageCertificateKeyFile Condition="'$(ModernPOSPackageCertificateKeyFile)' == '' and '$(Configuration)' != 'Release'">$(MSBuildThisFileDirectory)\ModernPOSAppxSigningCert-Contoso.pfx</ModernPOSPackageCertificateKeyFile>
<ModernPOSPackageCertificateThumbprint Condition="'$(ModernPOSPackageCertificateThumbprint)' == '' and '$(Configuration)' == 'Release'">22aa3bdca99b70d4ca44d0c51d23a07e06fcfc61</ModernPOSPackageCertificateThumbprint>

3. AAD setup

  • Carry out an official build in order to get a properly signed ModernPOS appx/installer
  • Install the fully signed ModernPOS on a client machine and point it to a RetailServer/AX environment of your choice (a dev environment, for example)
  • Follow the steps in this blog to create the 2 app ids: https://community.dynamics.com/ax/b/axforretail/archive/2017/11/05/mpos-aad-changes-in-monthly-update-4
  • Activate ModernPOS

4. Final Retail Sdk setup

The final changes to dllhost.exe.config that you had to make in step 3 should be added to the RetailSdk\Assets\dllhost.exe.config file. That way, any new, officially built ModernPOS will automatically have the right app id information configured.

Note: For development purposes, the older app id information must be used. This can easily be done by leaving the original settings in the config file, but commented out. Switching between the production and dev ModernPOS is then as easy as un-commenting the developer values before activating MPOS, as sketched below.
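
As an illustration of that switching pattern (the key name below is purely a placeholder; keep whatever keys the blog referenced in step 3 had you configure), the dllhost.exe.config fragment could look like:

<appSettings>
  <!-- Production AAD app id (active in official builds); key name is a placeholder. -->
  <add key="AadClientAppId" value="PRODUCTION-APP-ID-GUID" />

  <!-- Dev/self-signed app id: keep commented out; un-comment before activating a dev ModernPOS.
  <add key="AadClientAppId" value="DEV-APP-ID-GUID" />
  -->
</appSettings>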

Inject business or validation logic during AddToCart (AddCartLines)

Note: Implemented with Dynamics 365 version 7.2.11792.62628, or App Update 4 (should work with many other versions). Sample code can be downloaded at the end of this blog.

Imagine there is some additional business logic that should be executed during “AddToCart”, possibly canceling the operation and showing a dialog in POS (both ModernPOS and CloudPOS). It could be that an external system has additional information about item availability, or that the customer on the transaction failed a credit check. Whatever the actual business logic may be, our code extension shall meet these goals:

  • If a certain condition is true, do not persist the new cart item,
  • show a well-formed, localized error message in POS about the issue, and
  • keep the existing POS view open, with unchanged data, after the dialog is closed.

This can be accomplished with a very small extension in CRT and without any changes in POS. The CRT request in charge of saving the cart data is SaveCartVersionedDataRequest. All we need to do is augment this CRT request with a pre-trigger that gives us the opportunity to “cancel” before the cart is saved. Steps:

    1. Create a simple new CRT trigger and implement the interface methods. In OnExecuting, add business logic to validate any data on the cart or transaction; if you need to invalidate/cancel the “AddToCart” because the business rules call for it, throw a CommerceException with a localized message, as shown in the snippet below.

    class MyAddToCartLineValidationTrigger : IRequestTrigger
    {
        public IEnumerable<Type> SupportedRequestTypes
        {
            get
            {
                return new[] {
                    typeof(SaveCartVersionedDataRequest),
                };
            }
        }

        public void OnExecuted(Request request, Response response)
        {
        }

        public void OnExecuting(Request request)
        {
            // 'invalidCart' and 'disallowedItemId' come from your validation logic.
            if (invalidCart)
            {
                throw new CommerceException("Microsoft_Dynamics_Commerce_30104", "Custom error")
                {
                    LocalizedMessage = string.Format("The item with Id {0} is not allowed to be added at this time.", disallowedItemId),
                    LocalizedMessageParameters = new object[] { disallowedItemId }
                };
            }
        }
    }

    2. Add your changes to the Retail Sdk: edit the commerceruntime.exe.config, global.props, or Customization.settings files as needed

Extensions.MyAddToCartLineValidationTriggerForBlog

Overriding CommerceRuntime tax calculation with a 3rd party implementation

Note: Implemented with Dynamics 365 version 7.2.* (but likely working fine with lower or higher versions). Sample code can be downloaded at the end of this blog.

This post explains how to override tax calculation in the CRT in Dynamics 365 for Operations (and similar products). It is a simple example of how to integrate calls to external tax providers and how to embed the returned values back into the CRT. The code is of non-production quality; its main purpose is to show the concepts.

High-level overview

In this example, I demonstrate how to short-circuit all CRT tax calculations (by implementing my own IRequestHandler) and forward the individual line items on the cart, in one large payload, to a call to an external system. The external system calculates a 20% tax rate on each item (a simplification). Our new implementation of the tax service then takes the returned values and saves the information on the transaction as expected by the CRT and CDX frameworks. A rough skeleton of such an override is sketched below.
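
This sketch shows the classic CRT IRequestHandler shape; request/response member details may differ slightly between SDK versions, and the external call is only indicated by a comment:

// Replaces the built-in tax calculation with a call to an external system.
public class ExternalTaxCalculationServiceHandler : IRequestHandler
{
    public IEnumerable<Type> SupportedRequestTypes
    {
        get { return new[] { typeof(CalculateTaxServiceRequest) }; }
    }

    public Response Execute(Request request)
    {
        var taxRequest = (CalculateTaxServiceRequest)request;
        SalesTransaction transaction = taxRequest.Transaction;

        // Call the external tax provider and attach the returned tax lines
        // to the transaction (see the step list below).
        return new CalculateTaxServiceResponse(transaction);
    }
}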

The identifier of the product in the external system is modeled to be different from the ItemId or ProductId in Dynamics. Here, I showcase the use of the product attribute framework to add an attribute called “ExternalTaxId”. As part of the newly implemented code, we must fetch the value of this attribute and forward it to the external provider. As an optimization, the fetched attribute is cached, which should keep the number of calls to a minimum (per IIS process). There are other approaches that could be taken, but this seemed the simplest.

Finally, imagine that we have “additional” data coming from the external tax provider to be saved per sales tax line and to be brought back to Dynamics HQ. This shows the use of a table extension in X++, an extension table in the channel database and the inclusion of this data in the transaction upload job (P-job).
When all work is finished, we will see the 20% tax rate in the POS clients:

After running the P-0001 job, we can also see the data in AX:

Details

There are three tax-related CRT request handlers that can be overridden:

  • CalculateTaxServiceRequest: Re-calculates taxes for individual items and the whole transaction
  • AssignTaxCodesServiceRequest: Fetches and assigns sales tax codes. Since we do not calculate the taxes locally, this is not really needed. However, there is no negative impact in leaving that code untouched.
  • PopulateTaxRatesRequest: Since we override CalculateTaxServiceRequest, there is no need for this handler. We just provide our own empty implementation.

In the overridden CalculateTaxServiceRequest handler, we carry out these steps:

  • Use the passed-in SalesTransaction, loop through all active sales line items, and get the external product id (from the product attributes). For the steps on how to set up product attributes, see this blog post
  • Cache the external product id for later calls (see blog post for a more detailed discussion)
  • Pass the sales line information and external product id to the external tax service
  • The dummy tax service returns a single tax code per sales line item with 20% tax, plus some other string value that must be stored on the tax line
  • The CRT tax service builds the right data structure with tax lines and calculates the final total taxes
  • The CRT tax service adds the additional string value as an extension property on each tax line (sketched below)
  • The CRT tax service returns the SalesTransaction
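
As a sketch of the last few steps (names like externalResult are placeholders, and the SetProperty extension-property helper is assumed to be available on CRT entities):

// For each active sales line, attach one tax line with the flat 20% rate
// returned by the dummy service, and stash the extra string for later saving.
foreach (SalesLine salesLine in transaction.ActiveSalesLines)
{
    var externalResult = externalResultsByLineNumber[salesLine.LineNumber]; // placeholder lookup

    TaxLine taxLine = new TaxLine
    {
        TaxCode = externalResult.TaxCode,
        Percentage = 20m,
        Amount = salesLine.NetAmount * 0.20m,
    };

    // Extension property that the data service later writes to [ext].RETAILTRANSACTIONTAXTRANS.
    taxLine.SetProperty("ADDLSTRING1", externalResult.AdditionalString);
    salesLine.TaxLines.Add(taxLine);
}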

In addition, we need to hook into the transaction save process. Therefore, we implement a post trigger for SaveSalesTransactionDataRequest. More details in this blog post.

The full source code can be found in Extensions.MyTaxServiceBlog.

Extending a transactional Retail Channel table with additional data (HQ X++ table extension, CDX, extension table, CRT data service)

Note: Implemented with Dynamics 365 version 7.2.11792.62628, or App Update 4 (may not work with lower versions). Sample code can be downloaded at the end of this blog.

A frequent customization need is to store additional transactional data in the channel database in order to synchronize it back to HQ. There is a list of tables that hold this data, i.e. the RetailTransactionTable, RetailTransactionSalesTrans, and RetailTransactionTaxTrans tables, and a few others.

In this example, I am going to show how to extend the RetailTransactionTaxTrans table, how to write to it from a CRT data service, how to create a view to sync it back to AX, and how to configure CDX. The single string that the tax line is extended with is passed in by means of a CRT extension property. This sample is part of a larger sample that overrides the CRT’s tax calculation.

Let’s work on the pieces from the bottom up.

HQ table extension (X++)

Build an X++ table extension for the RetailTransactionTaxTrans table using the AOT and Visual Studio. Before you do that, create a new Dynamics 365 package and model. In the screenshot below you can see that I created a package called “Extensibility” and a model called “Extensibility”, and added a new string column called ADDLSTRING1.

Make sure to build and database-sync, and when all looks good, add the new package to VSTS.

Channel database extension (SQL)

Since we want to get the data from the channel to HQ, we must store it initially in the channel database (or even in the ModernPOSOffline db). In order to do that, we must develop some SQL for the channel database.

The correct way to do this is to create any new SQL in the “[ext]” db schema and NOT in any pre-existing schemas, i.e. [ax], [dbo], [crt]. You can reference Microsoft’s schemas, but be careful not to take too many dependencies. SQL objects cannot be considered a real interface, so the more assumptions your SQL makes about Microsoft’s SQL, the higher the probability that you will need to fix something someday when taking an update. In the SQL shown here, all the new SQL objects live in the [ext] schema.

New table

We need to store the new column in a new table. Since, in this sample, we want to store one more string value, the new table needs at least this new column, plus all the columns that allow us to reference the corresponding record in the original (base) table. The correct approach is to add all PK columns from the original table to the new table as well. In addition, it is good practice (but not required) to add columns for creation and modification time.
Also, make sure that the PK on the new table is the same as on the original one. This is needed so CDX can find the data correctly.
Lastly, we need to grant CRUD privileges on the table to the DataSyncUsersRole (CDX) and insert privileges to the UsersRole (CRT process):

    -- Create the table only if it does not exist yet (pairs with the END below)
    IF OBJECT_ID('[ext].[RETAILTRANSACTIONTAXTRANS]') IS NULL
    BEGIN
    CREATE TABLE [ext].[RETAILTRANSACTIONTAXTRANS](
		[CHANNEL] [bigint] NOT NULL,
		[SALELINENUM] [numeric](32, 16) NOT NULL,
		[STOREID] [nvarchar](10) NOT NULL,
		[TAXCODE] [nvarchar](10) NOT NULL,
		[TERMINALID] [nvarchar](10) NOT NULL,
		[TRANSACTIONID] [nvarchar](44) NOT NULL,
		[DATAAREAID] [nvarchar](4) NOT NULL,
		[CREATEDDATETIME] [datetime] NOT NULL,
		[MODIFIEDDATETIME] [datetime] NOT NULL,
		[ROWVERSION] [timestamp] NOT NULL,
		[ADDLSTRING1] [nvarchar](200) NOT NULL DEFAULT (''),
		 CONSTRAINT [I_EXT_RETAILTRANSACTIONTAXTRANS_PK] PRIMARY KEY NONCLUSTERED 
		(
			[CHANNEL] ASC,
			[TERMINALID] ASC,
			[STOREID] ASC,
			[TRANSACTIONID] ASC,
			[SALELINENUM] ASC,
			[TAXCODE] ASC,
			[DATAAREAID] ASC
		)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
		) ON [PRIMARY]

END
GO

GRANT SELECT,INSERT,UPDATE,DELETE ON [ext].[RETAILTRANSACTIONTAXTRANS] TO [DataSyncUsersRole]
GRANT INSERT ON [ext].[RETAILTRANSACTIONTAXTRANS] TO [UsersRole]
GO

New view

The view should expose exactly the same columns as the original table, plus the new column. As explained above, both tables share the same PK, so we use those columns to LEFT JOIN the two.
Grant the SELECT privilege to the DataSyncUsersRole (CDX).

CREATE VIEW [ext].[RETAILTRANSACTIONTAXTRANSEXTVIEW] AS
(
	SELECT
	tt.[AMOUNT],
	tt.[CHANNEL],
	tt.[ISINCLUDEDINPRICE],
	tt.[REPLICATIONCOUNTERFROMORIGIN],
	tt.[SALELINENUM],
	tt.[STOREID],
	tt.[TAXCODE],
	tt.[TERMINALID],
	tt.[TRANSACTIONID],
	tt.[DATAAREAID],
	tt.[ROWVERSION],
	exttt.[ADDLSTRING1],
	exttt.MODIFIEDDATETIME,
	exttt.CREATEDDATETIME
	 from [ax].RetailTransactionTaxTrans tt
	LEFT JOIN [ext].[RETAILTRANSACTIONTAXTRANS] exttt
	ON 
		tt.[CHANNEL] = exttt.[CHANNEL] AND
		tt.[TERMINALID] = exttt.[TERMINALID] AND
		tt.[STOREID] = exttt.[STOREID] AND
		tt.[TRANSACTIONID] = exttt.[TRANSACTIONID] AND
		tt.[SALELINENUM] = exttt.[SALELINENUM] AND
		tt.[TAXCODE] = exttt.[TAXCODE] AND
		tt.[DATAAREAID] = exttt.[DATAAREAID]
)
GO
GRANT SELECT ON [ext].[RETAILTRANSACTIONTAXTRANSEXTVIEW] TO [DataSyncUsersRole]
GO

New table type (TVP)

A table-valued type is a convenient way to "pass around a data table". Eventually, we want to call a stored procedure and pass in a whole data table in order to be more efficient. The alternative would be to pass individual parameters for each column and call the stored procedure multiple times. In this sample, since we are creating tax lines, we will usually produce a few rows per call (5 sales items with 2 different tax codes each would produce 10 rows). I created the TVP by copying the original one and expanding it with the new column.

CREATE TYPE [crt].[EXTRETAILTRANSACTIONTAXTRANSTABLETYPE] AS TABLE(
	[DATAAREAID] [nvarchar](4) NULL,
	[SALELINENUM] [numeric](32, 16) NULL,
	[STOREID] [nvarchar](10) NULL,
	[TAXCODE] [nvarchar](10) NULL,
	[TERMINALID] [nvarchar](10) NULL,
	[TRANSACTIONID] [nvarchar](44) NULL,
	[ADDLSTRING1] [nvarchar](200) NOT NULL)
GO
GRANT EXECUTE ON TYPE::[crt].[EXTRETAILTRANSACTIONTAXTRANSTABLETYPE] TO [UsersRole];
GO

New stored procedure (sproc)

The sproc is supplied with the TVP and the channel Id; it will be called by our CRT data service below.

CREATE PROCEDURE [ext].[INSERTRETAILTRANSACTIONTAXTRANS]
			   (@bi_ChannelId		bigint,
			   @tvp_ExtRetailTransactionTaxTrans		[crt].[EXTRETAILTRANSACTIONTAXTRANSTABLETYPE] READONLY)
AS
BEGIN
...
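	-- The actual body ships with the sample download; what follows is a minimal,
	-- assumed sketch: copy the TVP rows into the extension table, stamping the
	-- channel id and the timestamps.
	INSERT INTO [ext].[RETAILTRANSACTIONTAXTRANS]
		([CHANNEL], [SALELINENUM], [STOREID], [TAXCODE], [TERMINALID],
		 [TRANSACTIONID], [DATAAREAID], [CREATEDDATETIME], [MODIFIEDDATETIME], [ADDLSTRING1])
	SELECT
		@bi_ChannelId, tvp.[SALELINENUM], tvp.[STOREID], tvp.[TAXCODE], tvp.[TERMINALID],
		tvp.[TRANSACTIONID], tvp.[DATAAREAID], GETUTCDATE(), GETUTCDATE(), tvp.[ADDLSTRING1]
	FROM @tvp_ExtRetailTransactionTaxTrans tvp;

	RETURN @@ERROR;
END
GO

-- The CRT process needs execute rights on the new sproc (assumed, matching the
-- table grants above).
GRANT EXECUTE ON [ext].[INSERTRETAILTRANSACTIONTAXTRANS] TO [UsersRole];
GO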

CRT data service to write to the new table

Each CRT service (for data or other purposes) requires a Request class. In this example, we want to store additional tax line data from the transaction, so we pass the SalesTransaction instance to our service via the Request.

[DataContract]
public sealed class InsertTaxTransExtensionDataRequest : DataRequest
{
    public InsertTaxTransExtensionDataRequest(SalesTransaction transaction)
    {
        this.Transaction = transaction;
    }

    [DataMember]
    [Required]
    public SalesTransaction Transaction { get; private set; }
}
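
The request is handled by a CRT data service class. Here is a minimal skeleton (the class name and structure are mine; the full implementation is in the download below), which also shows where the DataTable code in the next snippet lives:

public sealed class InsertTaxTransExtensionDataService : IRequestHandler
{
    // This data service handles exactly one request type.
    public IEnumerable<Type> SupportedRequestTypes
    {
        get { return new[] { typeof(InsertTaxTransExtensionDataRequest) }; }
    }

    public Response Execute(Request request)
    {
        ThrowIf.Null(request, "request");

        var insertRequest = (InsertTaxTransExtensionDataRequest)request;
        SalesTransaction tx = insertRequest.Transaction;

        // ... build the DataTable and call the sproc as shown below ...

        return new NullResponse();
    }
}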

The data service must find the data we want to store and build a DataTable with multiple DataRows that correspond to the TVP in SQL. Make sure the columns are in the same order. Then we use the DatabaseContext class to call the sproc, passing in the expected parameters.

using (DataTable taxTable = new DataTable("EXTRETAILTRANSACTIONTAXTRANSTABLETYPE"))
{
    taxTable.Columns.Add(DataAreaIdColumn, typeof(string)).DefaultValue = string.Empty;
    taxTable.Columns.Add(SaleLineNumColumn, typeof(decimal)).DefaultValue = 0m;
    taxTable.Columns.Add(StoreIdColumn, typeof(string)).DefaultValue = string.Empty;
    taxTable.Columns.Add(TaxCodeColumn, typeof(string)).DefaultValue = string.Empty;
    taxTable.Columns.Add(TerminalIdColumn, typeof(string)).DefaultValue = string.Empty;
    taxTable.Columns.Add(TransactionIdColumn, typeof(string)).DefaultValue = string.Empty;
    taxTable.Columns.Add(ADDLSTRING1Column, typeof(string)).DefaultValue = string.Empty;

    foreach (var line in tx.ActiveSalesLines)
    {
        foreach (var taxItem in line.TaxLines)
        {
            DataRow row = taxTable.NewRow();
            row[DataAreaIdColumn] = request.RequestContext.GetChannelConfiguration().InventLocationDataAreaId;
            row[SaleLineNumColumn] = line.LineNumber;
            row[StoreIdColumn] = tx.StoreId;
            row[TaxCodeColumn] = taxItem.TaxCode;
            row[TerminalIdColumn] = tx.TerminalId;
            row[TransactionIdColumn] = tx.Id;
            object oAddlString = taxItem.GetProperty("AdditionalString1");
            string addlString = string.Empty;
            if (oAddlString != null)
            {
                addlString = (string)oAddlString;
            }

            row[ADDLSTRING1Column] = addlString;
            taxTable.Rows.Add(row);
        }
    }

    ParameterSet parameters = new ParameterSet();
    parameters[ChannelIdVariableName] = request.RequestContext.GetPrincipal().ChannelId;
    parameters[TVPVariableName] = taxTable;

    int errorCode;
    using (var databaseContext = new SqlServerDatabaseContext(request))
    {
        errorCode = databaseContext.ExecuteStoredProcedureNonQuery("[ext].INSERTRETAILTRANSACTIONTAXTRANS", parameters);
    }
}

Finally, register this CRT data service in commerceruntime.ext.config (RetailServer) and commerceruntime.ext.offline.config (MPOS offline):
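
The registration is one line in the composition section of both config files. Below is a sketch with a placeholder assembly name (use your extension assembly):

<commerceRuntimeExtensions>
  <composition>
    <!-- Placeholder name: registers the request handlers found in the extension assembly. -->
    <add source="assembly" value="MyCompany.Commerce.Runtime.TaxTransExtension" />
  </composition>
</commerceRuntimeExtensions>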

Implement trigger in the appropriate place to call CRT data service

For our use case, it is perfect to piggyback onto the SaveSalesTransactionDataRequest CRT request, so we provide a post-trigger implementation that calls our new data service. In the trigger, we initialize the request to our new data service and let the CRT take care of routing the call.

class SaveSalesTransactionDataRequestTrigger : IRequestTrigger
{
    public IEnumerable<Type> SupportedRequestTypes
    {
        get
        {
            return new[] { typeof(SaveSalesTransactionDataRequest) };
        }
    }

    public void OnExecuted(Request request, Response response)
    {
        ThrowIf.Null(request, "request");
        ThrowIf.Null(response, "response");

        var newRequest = new InsertTaxTransExtensionDataRequest(((SaveSalesTransactionDataRequest)request).SalesTransaction);
        request.RequestContext.Execute<NullResponse>(newRequest);
    }

    public void OnExecuting(Request request)
    {
    }
}

Do not forget to register the trigger in commerceruntime.ext.config (RetailServer) and commerceruntime.ext.offline.config (MPOS offline), in the same way as shown above.

Configure CDX

New AX7 schema table

Create a new AX7 schema table called “ext.RetailTransactionTaxTransEXTView” in the “Retail Channel schema” page, under the AX7 schema. Make sure it has the same columns as the original table “ax.RETAILTRANSACTIONTAXTRANS”, plus the new ADDLSTRING1 column.

Switch scheduler subjob to use view instead of original table

In “Scheduler subjobs”, find the subjob called “RetailTransactionTaxTrans”, change its table to the newly created “ext.RetailTransactionTaxTransEXTView”, and add the new ADDLSTRING1 column to the field mapping.

Finally, use “Generate queries” in the “Retail channel schema” page to initialize CDX.

Resource file to automatically configure CDX after deployment

Brand new feature, to be continued at a future time…

Testing

Run through your scenario in the POS client to save a transaction. Then look in the database before and after to verify that the extension table gets populated and that the data is synced back to HQ by the P-0001 job.
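
A quick spot check in the channel database could look like this (minimal sketch; the transaction id is a placeholder):

-- The view should show the ADDLSTRING1 column populated for the new transaction.
SELECT [TRANSACTIONID], [TAXCODE], [AMOUNT], [ADDLSTRING1]
FROM [ext].[RETAILTRANSACTIONTAXTRANSEXTVIEW]
WHERE [TRANSACTIONID] = '<transaction id>';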


Extensions.MyTaxTransExtensionDataServiceBlog

Simple local caching CommerceRuntime service (LocalCacheService)




Note: This can be implemented with any Dynamics 365 or older CommerceRuntime versions. Sample code can be downloaded at the end of this blog.

In some extensibility scenarios it may be useful to cache data in CRT memory. This could be data that is frequently reused and does not change often, especially if it is expensive to calculate or to fetch from the database. There are multiple examples of cases where we may need such a service.

One example could be pieces of secure information from HQ (AX) that we do not want to store in the channel database (e.g. the SecureAppSettings sample).

Another example may be the fetching of product attributes in multiple extension dlls. In some places the attribute values are already available; in others they are not, so they would have to be re-queried (e.g. SimpleProduct). One could of course extend the CRT further to pass the queried product attribute down to where it is needed later (via extension properties), but I have found that this can be more trouble than it is worth. Even further, if multiple RetailServer requests are involved, extension properties are not automatically persisted, so we would have to find another place to save this state.

In these and other examples, a simple cache makes the coding simpler. Just make sure the data that gets cached does not change very frequently. One thing to note is that this cache lives in the memory of the CRT host (RetailServer or MPOS offline), so in a production environment there will be multiple cache instances, one for each host process. In many cases, this may not matter much. In other cases, it may be better to use a distributed cache instead (a topic for a future blog).

This simple local cache solution is based on the .NET MemoryCache object. No SQL is needed. A client can (try to) fetch data with a single line of code:

var getFromCacheRequest = new GetFromLocalCacheRequest(cacheKeyExternalProductTaxId);
object cachedValueExternalProductTaxId = context.Runtime.Execute<GetFromLocalCacheResponse>(getFromCacheRequest, context).Value;
if (cachedValueExternalProductTaxId != null)
{
   // cache hit: use the value and skip the expensive lookup
}

and can push a new item onto the cache with this code:

// cache the productId-to-externalProductTaxId mapping for 30 minutes
var saveToCacheRequest = new SaveToLocalCacheRequest(cacheKeyExternalProductTaxId, externalProductTaxId, 30 * 60);
context.Runtime.Execute<NullResponse>(saveToCacheRequest, context);
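
For reference, the request contracts are tiny. Below is a minimal sketch of the fetch request (the Key member matches the service code that follows; the save request additionally carries Value and CacheLifeInSeconds, and the full code is in the download):

[DataContract]
public sealed class GetFromLocalCacheRequest : Request
{
    public GetFromLocalCacheRequest(string key)
    {
        this.Key = key;
    }

    // The cache key under which the value was stored.
    [DataMember]
    public string Key { get; private set; }
}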

As usual for CRT services, the LocalCacheService implements IRequestHandler. Below is some of the crucial code for reference. If you want to look at all of the code (or reuse it in your projects), download the zipped-up version below.

// 'cache' below is a System.Runtime.Caching.MemoryCache instance held by the service.
public Response Execute(Request request)
{
    Type requestType = request.GetType();
    Response response;
    if (requestType == typeof(GetFromLocalCacheRequest))
    {
        var cacheKey = ((GetFromLocalCacheRequest)request).Key;
        var cacheValue = cache.Get(cacheKey);
        string logMessage = string.Empty;
        if (cacheValue != null)
        {
            logMessage = "GetFromLocalCacheRequest successfully fetched item from cache for key '{0}'.";
        }
        else
        {
            logMessage = "GetFromLocalCacheRequest could not find item in cache for key '{0}'.";
        }

        RetailLogger.Log.ExtendedInformationalEvent(logMessage, cacheKey);
        response = new GetFromLocalCacheResponse(cacheValue);
    }
    else if (requestType == typeof(SaveToLocalCacheRequest))
    {
        var cacheKey = ((SaveToLocalCacheRequest)request).Key;
        var cacheValue = ((SaveToLocalCacheRequest)request).Value;
        var cacheLifeInSeconds = ((SaveToLocalCacheRequest)request).CacheLifeInSeconds;
        cache.Set(cacheKey, cacheValue, DateTimeOffset.Now.AddSeconds(cacheLifeInSeconds));
        RetailLogger.Log.ExtendedInformationalEvent("SaveToLocalCacheRequest saved item for key '{0}'", cacheKey);
        response = new NullResponse();
    }
    else
    {
        throw new NotSupportedException(string.Format(CultureInfo.InvariantCulture, "Request '{0}' is not supported.", request.GetType()));
    }

    return response;
}

Extensions.MyLocalCacheServiceBlog

Product attribute usage in the channel




Note: Implemented with Dynamics 365 version 7.2.* (but likely working fine with lower or higher versions). Sample code can be downloaded at the end of this blog.

This post explains how to set up product attributes in Dynamics 365 for Operations (and similar products) and how to use them in channel code (in a CRT service).

Before we can see any new values in POS or write code against them, some setup steps are needed.

Setup

Adding a new product attribute

Steps for adding a new product attribute (e.g. ExternalTaxId) to a category of products (e.g. Gloves and Scarves in the fashion hierarchy) for the Houston channel:

  1. Create a new type under “Attribute type” (e.g. ExternalTaxId)
  2. Create a new attribute (e.g. ExternalTaxId) under “Attributes” with the attribute type created in #1. In the same form, make sure to open the Filter settings and save once, even if you are not making any changes
  3. Add the created attribute to a new attribute group under “Attribute groups” (e.g. ExternalTaxId)
  4. Go to “Channel navigation categories” and select the navigation hierarchy (fashion navigation hierarchy) for the specific category (Gloves and Scarves). Under the “Product attribute groups” tab, add the new attribute GROUP.
  5. Go to “Channel categories and product attributes”. Set attribute metadata for channel = Houston. Find the attribute (ExternalTaxId), set “Include attribute” to Yes, and save.
  6. Publish the channel updates on the same form.

Saving values for product attributes

Browse to a product that is in the same category for which we added the new attribute (“Products by category”) and use “Setup/Product attributes” to save values.

Update the channel with changes

Go to the distribution schedule and run the distribution jobs 1040, 1070 and 1150.

Usage

You can now use MPOS for the respective channel to view the new value:

If you want to use the attribute in CRT code, you can simply query for it:

var channelId = context.Runtime.CurrentPrincipal.ChannelId;
int catalogId = 0;
var settings = QueryResultSettings.AllRecords;
var request = new GetProductAttributeValuesServiceRequest(channelId, catalogId, productId, settings);
var response = context.Runtime.Execute<GetProductAttributeValuesServiceResponse>(request, context);
var externalProductTaxIdAttributeValue = response.AttributeValues.FirstOrDefault(av => av.Name == "ExternalTaxId");

string externalTaxId = null;
if (externalProductTaxIdAttributeValue != null)
{
    externalTaxId = externalProductTaxIdAttributeValue.TextValue;
}

Inquiring version information of a Dynamics 365 for Finance and Operations deployment

There are two ways to find out the version information: either use LCS, or look at individual files on the box (in case the VM is not hosted in LCS).

In the LCS case, browse to the environment in question and follow the “View detailed version information” link. The following information will be available:

    • Platform update version (blue)
    • Binary hotfixes (red)
    • Application/X++ hotfixes (black)
    • Microsoft module versions (incl. Retail)
    • Custom module versions