The stated mission of Silveus Insurance Group (SIG) is to enrich the lives of American farmers by helping them manage risk. Silveus insurance agents use proprietary software developed at SIG to help farmers choose the most beneficial crop insurance available to protect them against crop loss. Factors driving potential crop and revenue loss range from declining agricultural commodity prices to natural disasters such as hail or drought. The software developed by SIG is used by more than 150 agents and gives them a distinct advantage over non-Silveus insurance agents.
Case Study: Application Insights for Desktop Applications
SIG wanted to understand how agents interact with its two main custom desktop sales applications, and also wanted visibility into the health and status of those applications.
Typically, the development team would not know that something was wrong with a production release until the helpdesk received a phone call or email from an angry user. The applications did produce error logs, but these were saved on agents’ laptops and were inaccessible unless a developer received them by email. The software was programmed to periodically try to email the log files to the development team, but because most agents work in remote parts of the country with little or no internet access, those emails arrived very infrequently.
Additionally, it was difficult to prioritize new features and bug fixes because the development team had no insight into how users utilized the software on a day-to-day basis. There was no sense of which areas of the application were most essential or most heavily used.
iTrellis created two software packages hosted on a private NuGet feed that could easily be plugged into SIG’s desktop applications – one for logging and one for telemetry. These packages leverage Application Insights in the Microsoft Azure cloud, solving the problem of retrieving the logs from the agents’ machines while converting them into a format easily accessible to the development team.
First, iTrellis built a telemetry library. Application Insights is geared toward web-based applications, which are very simple to plug in and start instrumenting through packages provided by Microsoft. For desktop Windows Presentation Foundation (WPF) applications, however, no such packages are available. Using the Application Insights software development kit (SDK), iTrellis built a custom library that can be packaged and plugged into any desktop WPF application in just a few simple steps. The telemetry library also supports offline syncing of data: when the user is online, data is pushed to Azure upon closing the application; when the user is offline, data is stored on disk until the user is back online and starts the application, at which point it is uploaded to Azure.
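The offline-sync behavior described above can be sketched as a small buffer-and-flush queue. This is a minimal Python illustration of the pattern, not SIG’s actual .NET library; the class and parameter names (`OfflineTelemetryQueue`, `send_fn`, `is_online_fn`) are hypothetical stand-ins for the Application Insights client calls.

```python
import json
import os


class OfflineTelemetryQueue:
    """Sketch of offline syncing: events are buffered to local disk,
    and the buffer is drained to the cloud backend only when online."""

    def __init__(self, storage_path, send_fn, is_online_fn):
        self.storage_path = storage_path   # local file used as the offline buffer
        self.send_fn = send_fn             # uploads one event (assumed backend call)
        self.is_online_fn = is_online_fn   # connectivity check

    def track_event(self, name, properties=None):
        event = {"name": name, "properties": properties or {}}
        # Always append to the local buffer first, so nothing is lost offline.
        with open(self.storage_path, "a") as f:
            f.write(json.dumps(event) + "\n")

    def flush(self):
        """Called on application start or close: upload buffered events if online."""
        if not self.is_online_fn() or not os.path.exists(self.storage_path):
            return 0
        with open(self.storage_path) as f:
            events = [json.loads(line) for line in f if line.strip()]
        for event in events:
            self.send_fn(event)
        os.remove(self.storage_path)  # buffer drained
        return len(events)
```

The key design point mirrored from the case study is that writes always land on disk first; upload is a separate step gated on connectivity, so an agent working offline for days loses no data.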
Next, iTrellis developed a library for logging. This library saves the logs to the agents’ machines but also uses the telemetry client to periodically transmit them to the Application Insights instance in Azure. Because the telemetry client supports offline syncing, logs written while the user is offline are kept on disk and uploaded once the user reconnects.
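The dual-destination logging just described can be sketched as follows. Again this is an illustrative Python sketch, not the actual .NET package: `AgentLogger` and `telemetry_track_fn` are hypothetical names, with the latter standing in for a TrackTrace-style call on the telemetry client.

```python
import datetime


class AgentLogger:
    """Sketch: every log line is written to the agent's local disk, and the
    same entry is handed to a telemetry client for upload (the client is
    assumed to buffer entries itself when the agent is offline)."""

    def __init__(self, log_path, telemetry_track_fn):
        self.log_path = log_path
        self.telemetry_track_fn = telemetry_track_fn

    def log(self, level, message):
        timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        line = f"{timestamp} [{level}] {message}"
        # The local copy survives regardless of connectivity.
        with open(self.log_path, "a") as f:
            f.write(line + "\n")
        # Forward a structured copy to telemetry for querying in the cloud.
        self.telemetry_track_fn({"level": level, "message": message})
```

Keeping the on-disk log while also forwarding a structured copy preserves the old troubleshooting path (logs on the laptop) while adding the new queryable one (Application Insights).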
With this new wealth of data in Application Insights, iTrellis set up alerts for events such as spikes in exceptions or longer-than-average file load times. This gave the development team immediate feedback when users were having trouble, before the phone started ringing. Because the logs could now be queried, iTrellis could also proactively look for patterns in the errors and generate a daily report of the errors that had been observed, giving the developers great insight into what was happening within the application.
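A daily error report of the kind described boils down to grouping queried error entries by type and ordering by frequency. The sketch below shows that aggregation step in Python over plain log lines; the `[ERROR] ExceptionType: message` line format is an assumption for illustration, not SIG’s actual log format (in practice the equivalent grouping would be done with an Application Insights query).

```python
from collections import Counter


def daily_error_report(log_lines):
    """Group error entries by error type and order by frequency,
    so the most common failures surface at the top of the report."""
    counts = Counter()
    for line in log_lines:
        if "[ERROR]" in line:
            # Assumed format: "... [ERROR] SomeExceptionType: message"
            error_type = line.split("[ERROR]", 1)[1].strip().split(":", 1)[0]
            counts[error_type] += 1
    return counts.most_common()
```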
The telemetry data was equally valuable. iTrellis now had a better understanding of which features the agents used most, which helped tremendously in prioritizing new features and bug fixes by clarifying the impact of changes and errors. iTrellis also gained insight into user flow through the system and could look for patterns to improve the user experience.
Case Study: Development Cadence
The client’s engineering team had been developing features ad hoc from agent requests with minimal planning, which resulted in sporadic releases of features and functionality that had not been fully vetted and tested. This produced quite a few defects reported by agents, and the bugs were resolved in a haphazard fashion. There was no set methodology for triaging, replicating, and deploying fixes to a test environment to be verified and validated before pushing to production. This model produced many unhappy agents, not to mention stressed-out developers, particularly during seasonally busy sales periods.
The solution involved introducing an agile discipline to the development team built around two-week sprints. In addition to pre-planning sessions, the first seven days of each sprint were devoted to coding and development and the last three days were spent testing. Other planning activities included consulting the product owners for longer-term epic and feature planning as part of overall release planning, along with shorter-term feature and bug-fix planning for the immediately upcoming sprint. As part of the estimation process, developers were consulted to size the work items accordingly before including them in the work effort of any sprint. Automated and manual testing processes were deployed to ensure fewer bugs were released, and production bugs that were uncovered were evaluated and prioritized before any work to resolve them began. Each sprint ended with a demonstration to the product owners and a retrospective among the developers, which allowed new working software to be released after every sprint.
As a result of these combined efforts, the development team became more effective and less reactive, while product owners and end users could better anticipate when features and functionality would be released. That, coupled with fewer bugs being accidentally deployed, created a much more efficient, structured, and transparent software development process for SIG.
Case Study: UI Testing for Desktop WPF Applications
One of SIG’s custom applications was developed in-house by a series of developers over several years. It started out relatively small, with only a few features, and over time grew to include many more. The application’s underlying architecture also evolved during that period to include several different patterns, many of which were not easily unit-testable. As a result, the application was more vulnerable to bugs whenever the code changed, and for that same reason, refactoring to improve testability or otherwise improve the overall architecture was a very risky endeavor.
The overall solution was a multi-pronged approach, with the first step being the introduction of an automated UI testing framework built on Windows Application Driver (WinAppDriver), a Selenium-like automation framework for Windows desktop applications. The framework used the page object model pattern to simplify writing tests: components of the user interface were modeled in code in a way that reduced the amount of code needed for each test, making each test easier to read as well as easier to create and maintain.
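The page object model pattern described above can be sketched as follows. This is an illustrative Python sketch rather than SIG’s actual test code (which targeted WinAppDriver from .NET): `QuotePage`, the element identifiers, and `FakeDriver` are all hypothetical, with `FakeDriver` standing in for a real WinAppDriver/Selenium session.

```python
class QuotePage:
    """Page object for a hypothetical 'quote' screen: tests call
    intent-level methods, and only this class knows the UI element
    identifiers, so locator changes are fixed in one place."""

    def __init__(self, driver):
        self.driver = driver  # WinAppDriver/Selenium-style session

    def enter_acreage(self, acres):
        self.driver.find_element("AcreageTextBox").send_keys(str(acres))
        return self  # fluent style keeps test bodies short and readable

    def submit(self):
        self.driver.find_element("SubmitButton").click()
        return self


class FakeElement:
    """Recording stand-in for a UI element, used only for this sketch."""

    def __init__(self, log, name):
        self.log, self.name = log, name

    def send_keys(self, text):
        self.log.append((self.name, "send_keys", text))

    def click(self):
        self.log.append((self.name, "click", None))


class FakeDriver:
    """Recording stand-in for a real automation session."""

    def __init__(self):
        self.log = []

    def find_element(self, automation_id):
        return FakeElement(self.log, automation_id)
```

A test then reads as a sequence of user intentions, e.g. `QuotePage(driver).enter_acreage(120).submit()`, with no element lookups in the test body itself.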
Adding these tests increased SIG’s overall confidence when releasing new versions of the application. Prior to each release, the automated UI tests were executed to confirm that existing behavior still worked correctly, which led to a significant decrease in the number of bugs released to production. Furthermore, it gave the team the confidence needed to refactor the legacy software without introducing additional defects or breaking existing features.