Azure Application Insights vs. [ICS] Monitoring Tool: What’s Best for Your Business? (Part 2)
By Yuliana Voronova, Senior Functional Consultant, Industry Consulting Service (ICS)
In Part 1 of this blog series, we explained why effective batch job monitoring is critical for maintaining operational stability in Microsoft Dynamics ERP systems and introduced two key solutions:
- Azure Application Insights – Microsoft’s cloud service for monitoring the status of batch jobs;
- [ICS] Monitoring Tool – a specialized solution designed for proactive control and automation of batch job monitoring.
By comparing their functionality, we concluded that while Azure Application Insights provides a solid foundation for monitoring batch processes, it lacks advanced features like root cause analysis, customizable alerts, and business logic-driven monitoring – capabilities that [ICS] Monitoring Tool delivers.
Now, let’s see how both tools perform in practice. Concrete cases make it easy to see the strengths of each solution, understand where [ICS] Monitoring Tool is most effective and where Azure Application Insights is sufficient, and quickly assess the value for your business. Below we present four real-world business cases demonstrating how these tools help accelerate issue resolution, reduce risks, and improve efficiency.
Business Cases
Case 1: Issues with Transfer Order Release
Problem:
A company set up a batch job for releasing transfer orders to the warehouse. Occasionally, some orders failed to release. The batch job completed successfully with no visible errors, so business users only noticed the issue later, when supply processes were disrupted, and could not understand why.
Azure Application Insights:
Azure Application Insights only recorded the fact that the batch job was running regularly and successfully. No errors or failures were visible. Business users had to manually check infologs in the ERP system to find the root cause of errors and troubleshoot them themselves, which significantly increased resolution time.
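For illustration, the sketch below shows roughly what such a manual check against Application Insights can look like, assuming batch telemetry is exported there. The application ID, API key, event name, and custom dimensions are placeholders – the actual telemetry schema depends on how the environment is configured – and the point is simply that the query returns run status, with none of the infolog detail.

```python
# A minimal sketch of a manual check against Application Insights, assuming
# batch telemetry is exported there. The application ID, API key, event name,
# and custom dimensions are placeholders -- the real schema depends on how
# telemetry is configured for the environment.
import requests

APP_ID = "<application-insights-app-id>"   # placeholder
API_KEY = "<api-key>"                      # placeholder

# Hypothetical query: batch job runs over the last 24 hours and their status.
KQL = """
customEvents
| where timestamp > ago(24h)
| where name == 'BatchJobFinished'   // assumed event name
| project timestamp,
          jobName = tostring(customDimensions['JobName']),
          status  = tostring(customDimensions['Status'])
"""

resp = requests.get(
    f"https://api.applicationinsights.io/v1/apps/{APP_ID}/query",
    params={"query": KQL},
    headers={"x-api-key": API_KEY},
    timeout=30,
)
resp.raise_for_status()

# The result only tells us that runs finished -- it carries none of the
# infolog detail about individual transfer orders that failed to release.
table = resp.json()["tables"][0]
for timestamp, job_name, status in table["rows"]:
    print(timestamp, job_name, status)
```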
[ICS] Monitoring Tool:
Following the implementation of [ICS] Monitoring Tool, it became apparent that each time the batch job was successfully completed in the ERP system, an infolog was generated containing a list of transfer orders that had not been released to the warehouse and the reason for the failure.
Analysis of error statistics revealed that the problem was in the start time of the batch job – it was executed at a time when some warehouse transactions had not yet been completed and the orders remained in the 'Reserved physical' status. Because of this, they were not included in the release.
Business users adjusted the batch job execution schedule, and the problem was resolved.
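To illustrate the idea (this is not the tool’s actual code), the sketch below shows the kind of error-statistics grouping that makes such a pattern visible: infolog entries are bucketed by failure reason and by the hour the batch run started, so a cluster of 'Reserved physical' failures around one start time stands out immediately. The InfologEntry structure is hypothetical.

```python
# A simplified illustration (not the [ICS] Monitoring Tool code) of grouping
# infolog errors by failure reason and by batch start hour. The InfologEntry
# structure is hypothetical.
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class InfologEntry:
    run_started: datetime
    transfer_order: str
    reason: str                       # e.g. "Order in 'Reserved physical' status"

def summarize(entries: list[InfologEntry]) -> None:
    by_reason = Counter(e.reason for e in entries)
    by_hour = Counter(e.run_started.strftime("%H:00") for e in entries)

    print("Failures by reason:")
    for reason, count in by_reason.most_common():
        print(f"  {reason}: {count}")

    print("Failures by batch start hour:")
    for hour, count in sorted(by_hour.items()):
        print(f"  {hour}: {count}")

# If almost every failure carries the 'Reserved physical' reason and clusters
# around one start hour, rescheduling the batch job is the obvious fix.
```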
| | Azure Application Insights | [ICS] Monitoring Tool |
|---|---|---|
| Issue Detection | No errors were detected during the batch job execution. No visible failures were identified. | Identified infologs with errors after each completion of the batch job. |
| Root Cause | Not detected automatically. Users had to manually search for infologs in D365 and analyze errors. | Analysis of error statistics showed that the batch job was launched too early, when some of the warehouse transactions were still open and the orders had 'Reserved physical' status, which prevented their release. |
| Alerts | No notifications – only manual investigation. | Automatic notifications were received about errors in the batch job. |
| Conclusion | The problem was identified and resolved by manually analysing batch jobs in the system. This resulted in significant labour costs and time spent on resolving the issue. | The problem was quickly identified by [ICS] Monitoring Tool, and users were notified. Root cause analysis was accelerated thanks to the statistics built into the solution. Labour costs and resolution time were both significantly reduced. |

Case 2: Data Sync Delay with Mobile App
Problem:
A company noticed that, periodically, data in the external mobile application fell out of sync with the ERP system, leaving the app with data that was up to an hour out of date. The reason was unclear – data synchronisation ran without errors, but with a delay.
Azure Application Insights:
Azure Application Insights showed that batch jobs for data synchronisation had been performed correctly and without errors. Only by manually analysing the heatmap did the team notice that the batch job took too long to execute (45+ minutes). This led to data desynchronisation between the mobile application and the ERP system.
[ICS] Monitoring Tool:
The Monitoring Tool flagged the anomaly and informed users that every day at 21:00 the batch job for data synchronisation was running abnormally long – 45+ minutes instead of the usual <5 minutes – leading to data desynchronisation between the mobile application and the ERP system.
It also revealed a dependency: at 20:55, another batch job was launched to update InventTable, which updated all records in the table. Consequently, the data synchronisation batch job had to process a significantly larger amount of data, resulting in long processing times.
Based on this information, the support team suggested amending the logic of the batch job for updating InventTable so that it only updates new records instead of the entire table, which significantly sped up synchronisation.
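As a rough illustration of the kind of duration check behind such an alert (the thresholds and logic here are ours for illustration, not the tool’s actual settings), each run can be compared with a baseline built from recent history and flagged when it exceeds that baseline by a wide margin:

```python
# An illustrative duration check: flag a run as abnormal when it exceeds the
# median of recent runs by a wide margin. The factor and minimum threshold are
# arbitrary example values, not the tool's actual settings.
from statistics import median

def is_abnormally_long(history_min: list[float],
                       latest_min: float,
                       factor: float = 3.0,
                       floor_min: float = 10.0) -> bool:
    """Return True if the latest run is abnormally long versus recent history."""
    baseline = median(history_min) if history_min else 0.0
    threshold = max(baseline * factor, floor_min)
    return latest_min > threshold

# Recent sync runs took a few minutes; the 21:00 run took 45+ minutes.
history = [3.5, 4.0, 2.8, 3.9, 4.2]
print(is_abnormally_long(history, latest_min=47.0))   # True -> raise an alert
```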
| | Azure Application Insights | [ICS] Monitoring Tool |
|---|---|---|
| Issue Detection | Batch jobs for data synchronisation were recorded as completed correctly. Heatmap analysis showed that the batch job was taking longer than usual to complete. | Identified that the execution time increased from <5 to 45+ minutes. |
| Root Cause | Heatmap analysis showed which batch jobs were executed before synchronisation, which helped the support team identify the problem. | Analysis of running batch jobs helped the support team identify the problem. |
| Alerts | No notifications – manual monitoring and heatmap analysis were required. | Automatic notifications were received about the long execution of the batch job for synchronisation. |
| Conclusion | Additional heatmap analysis and manual verification of dependencies were required. | Alerts about the long execution time of the batch job drew the support team’s attention to the problem. Error occurrence statistics helped identify dependent batch jobs so that their execution logic could be corrected. |

Case 3: Overdue Customer Report Batch Job
Problem:
A company has set up a batch job to update the data in the consolidated overdue customer report. It runs daily at 00:01 and usually takes around 30 minutes to complete. On one occasion, however, it took over 5 hours, and the new data did not appear in the report. The long execution time blocked the launch of other regular batch jobs, posing a risk to critical business processes.
Azure Application Insights:
In Azure Application Insights, the batch job continued to be displayed as ‘Executing’ and was deemed to be running correctly. No errors or failures were recorded, so no notifications were received, and the search for the problem only began after user complaints.
To identify the true cause, users had to manually analyse the system logs and investigate SQL locks.
[ICS] Monitoring Tool:
The Monitoring Tool detected that a batch job was taking an abnormally long time to execute and immediately sent notifications:
- about exceeding the permissible execution time for a batch job;
- about delays in the execution of subsequent batch jobs.
Thanks to the collected diagnostics, the support team quickly identified the SQL locks that had caused the process to hang. Once the locks were removed, the batch job finished correctly and the report was updated.
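For teams who want to reproduce this kind of diagnosis manually, the sketch below shows one way to list blocking sessions using standard SQL Server system views. It assumes an environment where direct database access is possible (for example, a sandbox); the connection string is a placeholder.

```python
# A sketch of listing blocking sessions via SQL Server system views, assuming
# direct database access is possible (e.g. a sandbox environment). Connection
# details are placeholders.
import pyodbc

BLOCKING_QUERY = """
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>;DATABASE=<database>;UID=<user>;PWD=<password>"  # placeholders
)
for row in conn.cursor().execute(BLOCKING_QUERY):
    print(f"session {row.session_id} blocked by {row.blocking_session_id} "
          f"({row.wait_type}, {row.wait_time} ms)")
```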
| | Azure Application Insights | [ICS] Monitoring Tool |
|---|---|---|
| Issue Detection | No errors were detected during the batch job execution. No visible failures were identified. | Recorded both an abnormally long execution time for the batch job (>5 hours instead of the usual ~30 minutes) and delays to the execution of other batch jobs. |
| Root Cause | The cause was not identified directly. Manual analysis of SQL locks was required. | Identified SQL locks, which caused the batch job to hang. |
| Alerts | No notifications – manual analysis was required. | Notifications were sent automatically about the exceeded execution time and about delays in subsequent batch jobs. |
| Conclusion | Following a manual analysis, the support team removed the SQL locks, the batch job was completed, and the data was updated. However, the lack of timely notification resulted in significant labour and time costs, as well as temporary system downtime. | Thanks to the timely notification, the support team removed the SQL locks in time, the report was updated, and critical processes were not impacted. |

Case 4: Uncontrolled Batch Job Impacting Performance
Problem:
One of the users launched a batch job in Execution mode without notifying his colleagues. This task turned out to be ‘heavy’ and took almost two days to complete, gradually consuming all available resources. Consequently, other batch jobs began to experience considerable delays, resulting in performance degradation and risks to business processes.
Azure Application Insights:
From Azure Application Insights’ perspective, the batch job was running ‘correctly’ – no errors or failures were recorded. During a manual analysis of the heatmap, specialists noticed that one process had been running for an abnormally long time. However, because this batch job was not initially included in the monitoring pool, the problem was identified too late, after widespread delays had already begun and system performance had deteriorated.
[ICS] Monitoring Tool:
Even though this batch job had not been added to the list of monitored batch jobs, the Monitoring Tool automatically detected its abnormally long execution time. Within a few hours, the system sent a warning about a potential problem. This allowed the support team to intervene in time and prevent critical failures from escalating across the entire system.
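Conceptually, such a safety net amounts to checking every executing batch job against a global runtime ceiling, regardless of whether the job is on the explicitly monitored list. The sketch below illustrates the idea; it is not the tool’s implementation, and the four-hour ceiling is an arbitrary example.

```python
# An illustrative safety net (not the tool's implementation): flag any executing
# batch job that exceeds a global runtime ceiling, whether or not it is on the
# explicitly monitored list. The four-hour ceiling is an arbitrary example.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class BatchRun:
    job_name: str
    started: datetime
    monitored: bool

GLOBAL_CEILING = timedelta(hours=4)

def runs_needing_attention(running: list[BatchRun], now: datetime) -> list[BatchRun]:
    """Flag every executing job past the ceiling, monitored or not."""
    return [r for r in running if now - r.started > GLOBAL_CEILING]

now = datetime.now(timezone.utc)
running = [
    BatchRun("Nightly sync", now - timedelta(minutes=20), monitored=True),
    BatchRun("Ad-hoc heavy job", now - timedelta(hours=30), monitored=False),
]
for run in runs_needing_attention(running, now):
    print(f"Warning: '{run.job_name}' has been executing for more than {GLOBAL_CEILING}.")
```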
| | Azure Application Insights | [ICS] Monitoring Tool |
|---|---|---|
| Issue Detection | The batch job ran for almost two days without any errors – the process was displayed as ‘Executing’ and deemed to be running correctly. | The Monitoring Tool detected an abnormally long batch job execution time, even though the job was not in the pool of monitored jobs, and sent a warning. |
| Root Cause | The cause was not identified automatically; manual analysis of all batch jobs and their durations was required to identify the batch job causing the delays. | The system showed that one batch job was consuming resources, causing delays in other batch jobs. |
| Alerts | No notifications – manual investigation and heatmap analysis were required. | Automatic notifications about the long execution time and delays in subsequent batch jobs. |
| Conclusion | The problem was noticed too late, after the performance issues had already started to occur. | The support team intervened promptly based on the alert from the Monitoring Tool, preventing failures and maintaining process stability. |
Summary
The above cases demonstrate that both tools are valuable for batch job monitoring, but Azure Application Insights only provides basic oversight: it lacks timely alerts and often requires manual investigation of problems.
[ICS] Monitoring Tool goes one step further. From hidden errors in transfer order releases to prolonged batch executions and uncontrolled batch jobs impacting performance, our solution consistently enabled faster detection, automated notifications, and actionable insights – saving time, reducing risks, minimizing downtime, and giving our clients complete control over batch processes.
Want to see it in action for your processes? Then contact us for a live demo.