PostsAboutGames
All posts tagged with .net core

Bubble Sort Benchmark: C++ vs .NET Core vs Node

June 12, 2020 - Søren Alsbjerg Hørup

We recently hit a bottleneck in a visualization application that we are working on, specifically in the backend of the visualization app. The app consists of an ASP.NET Core backend, a CosmosDB for hot storage, Blob storage for cold storage, and a React frontend to visualize both the hot and cold data.

We have the non-functional requirement (NFR) that data must be visible within 5 seconds (easy to test, hard to achieve). Our backend collects the data from both hot and cold storage and exposes it to the React frontend through a REST API. This works fine, but the NFR of 5 seconds is not achievable with this approach, since the frontend will “hang” / “freeze” with a spinner until the data is ready, which can easily take 20-30 seconds.

I had the idea of streaming data from the backend to the frontend and letting the frontend do the processing of the data instead of the backend. While it would still take 20-30 seconds before the data was 100% visible in the frontend, we could at least achieve the 5 second NFR, since we are able to show the data as it becomes ready, which would greatly enhance the user experience.

The team was not fond of my idea, due to “JavaScript not being fast enough to process the data”. While this might have been true in the nineties, it is no longer true with the V8 engine powering the JavaScript of today. I decided to convince the team through a quick experimental benchmark where I would implement bubble sort in C#/.NET Core and JavaScript/Node and compare the results. Oh, and just for kicks, I did an implementation in C++ using Clang as the compiler.

The source code of the benchmark can be found on my GitHub: BubblesortBenchmark

For the benchmark, 50,000 elements are generated and bubble-sorted 10 times to avoid measuring “cold start”.

For the C++ benchmark, the clang compiler is used both with and without the -fsanitize=address flag. This flag introduces bounds checking, which makes the C++ runtime more comparable to that of C# and Node. The -O flag is used in both cases to optimize the compiled code. Two bubble sort implementations have been done for C++, one using std::vector and another using a raw array.

For the C# benchmark, .NET Core 3.1 has been used with the --configuration=Release flag to produce an optimized binary. Also for C#, two implementations have been done, one using List and another using a C# array.

For the JavaScript benchmark, a single implementation has been done using JavaScript arrays. Node v12 is used as the runtime.
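For reference, the array-based C# implementation is essentially a textbook bubble sort. A minimal sketch along these lines (the actual code is in the repository and may differ in details):

// Bubble sort an int array in place: O(n^2) comparisons and array accesses.
static void BubbleSort(int[] data)
{
   for (int i = 0; i < data.Length - 1; i++)
   {
      for (int j = 0; j < data.Length - 1 - i; j++)
      {
         if (data[j] > data[j + 1])
         {
            // Swap adjacent elements that are out of order.
            int tmp = data[j];
            data[j] = data[j + 1];
            data[j + 1] = tmp;
         }
      }
   }
}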

Now for the results!
Taken directly from the shell:

> clang -O -fsanitize=address main.cpp && a.exe 
Creating library a.lib and object a.exp 
30156ms to sort 50000 elements 10 times (Vector) 
22360ms to sort 50000 elements 10 times (Array)

> clang -O main.cpp && a.exe 
9984ms to sort 50000 elements 10 times (Vector) 
8859ms to sort 50000 elements 10 times (Array)

> dotnet run --configuration=Release 
28766ms to sort 50000 elements 10 times (List) 
14687ms to sort 50000 elements 10 times (Array)

> node index.js 
29131ms to sort 50000 elements 10 times

And visualized as bar graphs (lower is better):

[Bar chart of the benchmark results]

The results are not that surprising.

C++ without the address sanitizer outperforms all other implementations. The reason is simple: the program does not check for out-of-bounds access when indexing the array and therefore saves one or more instructions per array access, which for bubble sort amounts to O(n^2) accesses. std::vector and the raw array are also comparable in performance, since very little overhead is introduced by the std::vector abstraction.

C# .NET Core using List is nearly two times slower than C# .NET Core using arrays. This could be attributed to the fact that an array of integers in .NET is guaranteed to be a contiguous block of integers accessed directly, whereas the List indexer adds an extra layer of bounds checking and method-call indirection on top of its internal array.

JavaScript’s approach to arrays is similar to that of C# lists, which is also evident in the benchmark: the C# List and JavaScript Array timings are nearly identical! Comparing these with the C++ std::vector implementation compiled with the address sanitizer, all three “list-like” implementations perform roughly the same.

One surprising aspect is that the C# .NET Core array implementation outperforms all C++ implementations guarded by the address sanitizer flag. I believe this is due to the just-in-time compilation nature of .NET Core: the JIT might deduce that no range checks are needed and thus save several instructions.

All in all, a nice little benchmark that has convinced the team that we can surely do the data processing in JavaScript if needed.

A next step could be to look into the JavaScript implementation and see if it can be improved, e.g. by using Object.seal to increase performance. Another aspect could be a similar implementation in Rust, which I would expect to perform about the same as the C++ array implementation with the address sanitizer enabled.

Hangfire - .NET background processing made easy

April 29, 2020 - Søren Alsbjerg Hørup

The cloud project I am currently working on had the requirement that we needed to ingest, process, and write several gigs of data into a CosmosDB every 15 minutes.

For the processing part, we needed something that could scale, since the amount of data was proportional to the number of customers we have hooked up to the system.

Since the project consisted mainly of C# .NET Core developers, the initial processing was done in C# using async operations. This worked well but was not really scalable - someone on the team suggested using Hangfire.io for the processing, which turned out to be a great fit for our use case. (Wish it was my idea, but it was not…)

Hangfire is an open source .NET Core library which manages distributed background jobs. It does this by starting a server inside the application, to which jobs can be submitted. Job types include: fire and forget, delayed jobs, recurring jobs, and continuations.

Hangfire uses a database to persist information and metadata about the jobs. In our case, we simply use an Azure SQL server. Multiple instances of the application hosting the Hangfire server help with the processing of the jobs.
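To illustrate how little ceremony this requires, here is a sketch (not our actual code) assuming the Hangfire.AspNetCore and Hangfire.SqlServer packages: the server is registered in Startup, and jobs of the various types are submitted through static helpers.

using System;
using Hangfire;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
   public Startup(IConfiguration configuration) => Configuration = configuration;
   public IConfiguration Configuration { get; }

   public void ConfigureServices(IServiceCollection services)
   {
      // Persist job state in SQL Server (an Azure SQL database in our case).
      services.AddHangfire(config =>
         config.UseSqlServerStorage(Configuration.GetConnectionString("Hangfire")));

      // Host a Hangfire server inside this application instance.
      services.AddHangfireServer();
   }
}

public static class Jobs
{
   public static void SubmitExamples()
   {
      // Fire and forget.
      BackgroundJob.Enqueue(() => Console.WriteLine("Process customer data"));

      // Delayed.
      BackgroundJob.Schedule(() => Console.WriteLine("Run later"), TimeSpan.FromMinutes(15));

      // Recurring, every 15 minutes.
      RecurringJob.AddOrUpdate("ingest", () => Console.WriteLine("Ingest"), "*/15 * * * *");

      // Continuation of another job.
      var parentId = BackgroundJob.Enqueue(() => Console.WriteLine("Parent job"));
      BackgroundJob.ContinueJobWith(parentId, () => Console.WriteLine("Runs after the parent"));
   }
}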

This architecture makes it possible to e.g. submit a job for each connected customer, which is then processed by one or more nodes. If resources become a problem, we scale the application horizontally to include more instances - which can even be done automatically depending on CPU load or another metric. In our case, we use Kubernetes for that.

What I really like about Hangfire is the fact that one can simply start with one instance of the application hosting the Hangfire server, and scale up later if needed.

Oh! and Hangfire comes with its own integrated dashboard that allows one to see submitted jobs. Neat!

Although we are not yet in production, my gut feeling is good on this one. Highly recommended!

Accessing The Microsoft Graph API using .NET

January 03, 2020 - Søren Alsbjerg Hørup

I recently needed to fetch all users contained within a Microsoft Azure Active Directory tenant from within a trusted application running on a Windows 10 computer.

For this, I implemented a .NET Core console application that utilizes the Microsoft Graph API to fetch the needed data.

Microsoft provides NuGet packages that make this a breeze. Assuming the application has been registered in Azure Active Directory and a client secret has been created, access can be obtained by constructing an IConfidentialClientApplication object using ConfidentialClientApplicationBuilder, like so:

IConfidentialClientApplication confidentialClientApplication = ConfidentialClientApplicationBuilder
   .Create(clientId)
   .WithTenantId(tenantId)
   .WithClientSecret(secret)
   .Build();

Where clientId is the GUID of the application, tenantId is the GUID of the Azure Active Directory tenant, and secret is the client secret. The IConfidentialClientApplication and ConfidentialClientApplicationBuilder types are exposed by the Microsoft.Identity.Client NuGet package.

To access the Graph API, a GraphServiceClient must be constructed. This object provides properties and methods that can be chained to construct queries towards the API.
This type is provided by the Microsoft.Graph NuGet package.

GraphServiceClient needs an instance of an IAuthenticationProvider for it to be able to get an access token.
Microsoft provides ClientCredentialProvider, which takes our IConfidentialClientApplication as a parameter. ClientCredentialProvider is provided by the Microsoft.Graph.Auth NuGet package.

IAuthenticationProvider authProvider = new ClientCredentialProvider(confidentialClientApplication);


Note: The Microsoft.Graph.Auth package is currently in preview. Make sure to check “Include prerelease” to be able to find this package if you use the NuGet Package Manager in VS2019.

Since ClientCredentialProvider implements the IAuthenticationProvider interface, we can now instantiate the GraphServiceClient:

GraphServiceClient graphClient = new GraphServiceClient(authProvider);

With this, we can do queries towards the Graph API. The following example gets all users of the Active Directory and returns these as an IGraphServiceUsersCollectionPage for further processing.

IGraphServiceUsersCollectionPage users = await graphClient.Users
   .Request()
   .Select(e => new {
      e.DisplayName,
      e.GivenName,
      e.PostalCode,
      e.Mail,
      e.Id
   })
   .GetAsync();
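Note that the Graph API pages its results; if the tenant holds more users than fit in one page, the rest can be fetched through NextPageRequest. A small sketch of that further processing (List and User come from System.Collections.Generic and Microsoft.Graph):

var allUsers = new List<User>(users.CurrentPage);
while (users.NextPageRequest != null)
{
   // Fetch the next page and accumulate its users.
   users = await users.NextPageRequest.GetAsync();
   allUsers.AddRange(users.CurrentPage);
}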

That’s it! Remember to provide the needed API permissions for the application if you intend to execute the query above.


Azure Blob Storage Benchmark

July 25, 2019 - Søren Alsbjerg Hørup

Another quick benchmark I did regarding Azure was the transfer of 65 KB messages to Azure Blob Storage.

My test application was a .NET Core Console application that:

  • Created 1000 tasks.
  • Each task would instantiate a message with a unique GUID and populate the message with a payload of 65 KB.
  • Each task would then push the message to the Blob Storage.
  • This process would be repeated 1000 times.

In total, 1000 x 1000 Blobs were created, transmitted and stored during the run of the application.
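A minimal sketch of such a test client, assuming the Azure.Storage.Blobs package and an illustrative connection string and container name (the actual test code may differ):

using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

class Program
{
   static async Task Main()
   {
      // Assumption: the connection string is provided via an environment variable.
      var container = new BlobContainerClient(
         Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING"), "benchmark");
      await container.CreateIfNotExistsAsync();

      var payload = new byte[65 * 1024]; // 65 KB payload per message
      new Random().NextBytes(payload);

      for (int round = 0; round < 1000; round++)
      {
         // 1000 concurrent upload tasks per round, each blob named with a unique GUID.
         var tasks = Enumerable.Range(0, 1000).Select(async _ =>
         {
            using (var stream = new MemoryStream(payload))
            {
               await container.UploadBlobAsync(Guid.NewGuid().ToString(), stream);
            }
         });
         await Task.WhenAll(tasks);
      }
   }
}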

The results were that my app was consuming 28% of my CPU and using 600 Mbit/s of bandwidth - which is the limit of my upload.

Clearly, Azure is not the limiting factor here, neither is my CPU.

Mosquitto Performance on low-end VM

July 24, 2019 - Søren Alsbjerg Hørup

I am currently designing a new topology for inter-controller communication using Mosquitto as the broker. For fun, I wanted to see how far I could push Mosquitto, so I started a low-end Ubuntu VM in Azure and wrote a simple .NET test application using MQTTnet to put a bit of load onto it.

I chose the B1s VM (1 vCPU and 1 GiB of memory), installed Ubuntu, installed Mosquitto, and opened the default 1883 port for MQTT. This would serve as my broker.

For the benchmarking application I wrote a .NET Core Console app that would:

  • Instantiate N tasks.
  • Each task instantiates its own MQTTnet client and connects to the broker.
  • Each task subscribes to a “ping/{clientid}” topic and each second sends a ping message containing Environment.TickCount + a 65 KB payload to the ping topic with its own client id.
  • The ping would be returned to the task due to the subscription, and the ping time could be measured.

With N = 200, 200 msg/s would be transmitted to the broker and back again, resulting in a theoretical bandwidth requirement of 13 MB/s (200 messages/s x 65 KB) plus overhead in each direction.
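A rough sketch of a single ping task, assuming the MQTTnet v3 client API (namespaces and handler registration differ between MQTTnet versions, so treat this as an outline rather than the exact benchmark code):

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using MQTTnet;
using MQTTnet.Client;
using MQTTnet.Client.Options;

static class PingTask
{
   public static async Task RunAsync(string broker, string clientId)
   {
      var client = new MqttFactory().CreateMqttClient();

      // Measure round-trip time when our own ping comes back.
      client.UseApplicationMessageReceivedHandler(e =>
      {
         var sentTick = BitConverter.ToInt32(e.ApplicationMessage.Payload, 0);
         Console.WriteLine($"{clientId}: ping {Environment.TickCount - sentTick} ms");
      });

      await client.ConnectAsync(
         new MqttClientOptionsBuilder().WithTcpServer(broker, 1883).WithClientId(clientId).Build(),
         CancellationToken.None);
      await client.SubscribeAsync($"ping/{clientId}");

      var padding = new byte[65 * 1024]; // 65 KB payload
      while (true)
      {
         // The first 4 bytes carry Environment.TickCount, the rest is padding.
         var payload = BitConverter.GetBytes(Environment.TickCount).Concat(padding).ToArray();
         await client.PublishAsync(new MqttApplicationMessageBuilder()
            .WithTopic($"ping/{clientId}")
            .WithPayload(payload)
            .Build());
         await Task.Delay(TimeSpan.FromSeconds(1));
      }
   }
}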

For the benchmark, I decided to run the client on my own dev machine, since I can currently upload 50 MB/s using the fiber connection I have available - plenty of bandwidth for my test purposes.

Starting the client app with N=200, I saw the ping times fluctuate from 60 ms to 300 ms, indicating a bottleneck somewhere (this needs to be analyzed further before concluding anything).

Using Azure Metrics, I could quickly measure the CPU and bandwidth usage of the VM. The CPU usage of the VM peaked at 28.74% and the upload from the VM to my client peaked at 15.61 MB/s, or 125 Mbit/s.

I did not expect this kind of bandwidth / performance from this class of VM in Azure. Of course, I only tested the burst performance and not the long-running performance of the VM, which might differ since this VM class is a ‘burstable VM’.

Conclusion after this test: if you want an MQTT broker, start with a small burstable VM, see how it performs over time, and upgrade if needed.

Azure 502.5 error with ASP.NET Core 2.1

August 23, 2018 - Søren Alsbjerg Hørup

I recently updated the .NET Core version of one of my ASP.NET Core projects from 2.0 to 2.1. To my surprise, the application failed to start after publishing to Azure, with a 502.5 error thrown in my face.

Investigation showed that the target version of .NET Core I was using was not available on the App Service instance I was targeting.

To fix this, I changed my Publish configuration from Framework-dependent to Self-contained.
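From the command line, the equivalent is to publish with a runtime identifier, which with the 2.1 SDK produces a self-contained deployment by default (win-x64 is an assumption; pick the runtime identifier that matches the App Service plan):

> dotnet publish -c Release -r win-x64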


This change will deploy the complete framework alongside my application, with the downsides that:

  • Deployment takes a bit longer since the framework needs to be deployed as well.
  • I need to manage the framework alongside my app, e.g. update it if security bugs or similar are found in the framework.

In any case, this allows me to deploy 2.1 applications to Azure until 2.1 is properly deployed as part of the infrastructure.

Storing JWT access token in a Cookie

July 10, 2018 - Søren Alsbjerg Hørup

I am using JWT access tokens for my latest ASP.NET Core project. Authentication is added in ConfigureServices:

services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme).AddJwtBearer(options =>
{
 options.TokenValidationParameters = new TokenValidationParameters()
 {
  ValidateIssuer = true,
  ValidateAudience = true,
  ValidateLifetime = true,
  ValidateIssuerSigningKey = true,
  ValidIssuer = Configuration["Jwt:Issuer"],
  ValidAudience = Configuration["Jwt:Issuer"],
  IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Configuration["Jwt:Key"]))
 };
});

This works well for my SPA application, where I store the access token in localStorage (which is bad). Moving the JWT access token to a cookie is a better approach; however, I still want the ability to use JWT Bearer authentication for my APIs. Configuring dual authentication where:

  1. the JWT token can be passed as part of the Authorization header
  2. and the JWT token can be passed as a cookie

has proven cumbersome to implement.

A simple approach is to 1. add an access token cookie when forming the token and to 2. fake the Authorization header on the server if an access token is received as a cookie.

In the TokenController, the Cookie is either set or deleted depending on the success of the authorization:

[HttpPost]
public IActionResult Post([FromBody]UserBody user)
{
 IActionResult response = Unauthorized();
 if (this.Authenticate(user))
 {
  var token = this.BuildToken(user);
  response = Ok(new { token = token });
  Response.Cookies.Append("access_token", token);
 }
 else
 {
  Response.Cookies.Delete("access_token");
 }

 return response;
}

When a client sends their credentials, the credentials are checked and, if successful, a token is returned as part of the response. In addition, the token is added to an access_token cookie (which should be httpOnly for security reasons).
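For example, the Append call above could be hardened with CookieOptions from Microsoft.AspNetCore.Http (a sketch, not the exact code from the controller):

Response.Cookies.Append("access_token", token, new CookieOptions
{
 HttpOnly = true,               // not readable from client-side JavaScript
 Secure = true,                 // only sent over HTTPS
 SameSite = SameSiteMode.Strict // not sent on cross-site requests
});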

To make use of the cookie, we simply forge the Authorization header based upon the value of the cookie. This is done by writing a simple middleware before the app.UseAuthentication() in Startup.Configure.

app.Use(async (context, next)=>
{
 var token = context.Request.Cookies["access_token"];
 if (token != null)
  context.Request.Headers["Authorization"] = "Bearer " + token.ToString();
 await next();
});

app.UseAuthentication();

If the cookie exists, it is added to the Authorization header, thus invoking the JWT Bearer authorization mechanism in the pipeline. If the authorization fails in the pipeline, the Cookie is cleared.

Simple as that!

MinResponseDataRate

December 13, 2017 - Søren Alsbjerg Hørup

I recently encountered a problem with one of my many Azure services. The service in question provides an API to download files from an Azure file share; this worked perfectly on some endpoints but not on others.

It took me a while to nail the issue, which was due to a violation of the MinResponseDataRate limit in Kestrel. On endpoints with a stable Internet connection everything was OK, but on others, where the Internet connection was less stable, the download frequently failed.

Even if the endpoint had OK bandwidth, the download would still fail since the connection might be gone for a few seconds. According to the docs:

MinResponseDataRate: Defaults to 240 bytes/second with a 5 second grace period.

This can be disabled by setting the limit to null:

.UseKestrel(options => options.Limits.MinResponseDataRate = null)
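In an ASP.NET Core 2.x Program.cs the setting sits on the web host builder. Alternatively, the limit can be relaxed instead of removed entirely; a sketch (the minimal pipeline and the 100 bytes/s with a 10 second grace period are illustrative values, not from the original service):

using System;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Server.Kestrel.Core;

public class Program
{
   public static void Main(string[] args) =>
      WebHost.CreateDefaultBuilder(args)
         // Minimal pipeline so the sketch is self-contained; a real app would use UseStartup<Startup>().
         .Configure(app => app.Run(context => context.Response.WriteAsync("OK")))
         .UseKestrel(options =>
            // Tolerate very slow clients: 100 bytes/s after a 10 second grace period.
            options.Limits.MinResponseDataRate =
               new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10)))
         .Build()
         .Run();
}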

.NET Core

June 30, 2017 - Søren Alsbjerg Hørup

.NET Core is yet another .NET framework implementation, implementing .NET Standard 1 (production) and .NET Standard 2 (beta) while also extending the standard with .NET Core specific APIs such as Console and Thread (which are not part of the .NET Standard).

The cool thing about .NET Core is that it is 100% open source, 100% cross-platform, and very modular. This stands in contrast to the .NET Framework and Mono, since these are very monolithic implementations and huge in size. The source is available under MIT on GitHub: https://github.com/dotnet/core

Since .NET Core is very cross-platform, one can write .NET applications for x86/ARM Windows and x86/ARM Linux - and since it is very modular, the framework does not occupy much space compared to the desktop implementations.

APIs are generally not available in the framework itself but need to be downloaded from NuGet; e.g. EntityFramework, ASP.NET, and even some Reflection support are “add-ons”.

I measured the raw framework installation on my Win 10 x64 box to be about 70 MB, while the .NET 4.6 framework is roughly 2000 MB in size, that is, 28 times larger! Deployment-wise, the .NET Core SDK supports deploying the application + framework together, meaning that the target does not have to have the specific framework installed.

The SDK will automatically pull all the dependencies into the publish package, which includes all NuGet DLLs plus native assemblies where applicable (e.g. if using the SQLite .NET Core wrapper). One can also publish for a specific target, such as Linux-x86 or Linux-arm, which will produce a platform-specific package with an ELF executable that can be run.

Referencing .NET assemblies which do not target .NET Standard or .NET Core is not possible in .NET Core 1.1 since the APIs are incompatible. .NET Core 2.0 will include a compatibility layer making it possible to reference assemblies targeting other frameworks - although this sounds awesome, I have not experimented with this feature yet.

.NET Core is without a doubt the future of .NET!