When it comes to speed and data management, it can be difficult to determine which areas of an application need improvement or updating. There are many sides to the technology, design, and solutions that work to maintain accuracy as demand fluctuates. In this post, I'll compare data access speeds to determine the best strategy based on the results. Processing sensitive information throughout a large framework requires not only speed but also capacity and flexibility within the architecture. To better understand how to improve data processing for a client's application, I ran a benchmark comparing the different data access methods available for reaching the client's databases.
The following data access methods were tested:

- Direct stored procedure (SP) execution from SSMS
- A console app using a direct database connection
- The client's Web API
I also show what it takes to time direct SP execution from SSMS. Below you'll find the SQL script for getting direct SP execution times. I kept the execution log for all ten executions, all of them against the client's production DB and API. Here is a summary of the results:
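The original script is not reproduced here; a minimal sketch of how SP execution time can be captured in SSMS might look like the following (the SP name and parameter are placeholders, not the client's actual objects). `SET STATISTICS TIME ON` is an alternative when per-statement CPU and elapsed times are enough.

```sql
-- Hypothetical sketch: the SP name and parameter below are placeholders.
DECLARE @start DATETIME2 = SYSDATETIME();

EXEC dbo.usp_GetCustomerData @CustomerId = 42;  -- placeholder stored procedure

DECLARE @elapsed_ms INT = DATEDIFF(MILLISECOND, @start, SYSDATETIME());
PRINT CONCAT('Elapsed: ', @elapsed_ms, ' ms');
```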
- During the first execution, the Web API was fastest. At that point it was already taking advantage of connection pooling, because the Web API service had been running beforehand. A truly fair comparison would have required restarting the Web API service, which was not done.
- After the first execution, the Console App also took advantage of connection pooling, so the time needed to create the data connection was minimized. At that point the Web API was no longer the fastest but rather the slowest.
- Results were very different when the Console App executed only one call per run; that would be like comparing only the first call several times. But any app can take advantage of connection pooling, which is what happens from the second execution on.
- To be honest, I was expecting a much bigger difference. The Web API is certainly not as slow as I initially thought: average data access was still in the range of 600-800 milliseconds, which is good enough even when using the Web API as the only data access layer. That said, the difference can add up when an application executes many database calls in a burst. That pattern should always be avoided by keeping batch processing as close to the DB server as possible, with the fastest SP available (though caching strategies will always be faster).
- I would not suggest a dramatic architecture change based on this performance difference. It's much more likely that a detected bottleneck comes from a poorly implemented design rather than an architectural flaw.
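To illustrate the connection-pooling effect described above, a benchmark loop can time repeated calls from a console app. This is a minimal sketch, not the actual benchmark code: the connection string and SP name are placeholders, and it assumes the `Microsoft.Data.SqlClient` package. The first `Open()` pays the full cost of establishing the connection; subsequent iterations reuse a pooled connection, which is why every run after the first is faster.

```csharp
// Hypothetical sketch: connection string and SP name are placeholders.
using System;
using System.Data;
using System.Diagnostics;
using Microsoft.Data.SqlClient;

class PoolingBenchmark
{
    static void Main()
    {
        const string connStr = "Server=...;Database=...;Integrated Security=true;";

        for (int i = 1; i <= 10; i++)
        {
            var sw = Stopwatch.StartNew();
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("dbo.usp_SomeProc", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                conn.Open();          // run 1: full connection cost;
                                      // runs 2+: pooled connection is reused
                cmd.ExecuteNonQuery();
            }
            Console.WriteLine($"Run {i}: {sw.ElapsedMilliseconds} ms");
        }
    }
}
```

Restarting the process (or calling `SqlConnection.ClearAllPools()`) between runs would reproduce the "first call" cost every time, which is what a fully fair comparison against a freshly started Web API would require.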
With these facts and analysis points in mind, I would suggest keeping the client's production project implemented with a direct DB connection. It looks like there is no real difference in performance between the available data access mechanisms for .NET apps. When weighing execution and run-time speeds before scaling an operation, this kind of data access benchmark can provide many insights, along with answers you weren't even looking for.
This was the code used to execute the benchmark: https://bitbucket.org/aristidescc/data-access-benchmark-gfy