Deploying a SignalR Core Application to AWS (2023)

Once in a blue moon someone asks me how to deploy a SignalR Core application on AWS, and the questions are usually about which AWS services they should use to get a working SignalR Core application on AWS. Some of the questions go along these lines:

  • Can I use this service to deploy my SignalR application?
  • Should I deploy my app to a Windows virtual machine? Can I use a Linux virtual machine?
  • Can I use an API gateway in front of my SignalR application?
  • How should a load balancer be configured to work correctly with WebSockets?
  • I need to scale the SignalR application. What AWS service should I use for the backplane?
  • Can I use the Azure SignalR Service if my application is hosted on AWS?

In general, I think there is a knowledge gap when it comes to deploying a SignalR Core application on AWS, so in this post I'm going to talk a bit about which AWS services can be used and how to configure them properly to deploy a SignalR Core application on AWS.

In this first section, I want to talk about the AWS services that route and balance network traffic and how to use them with SignalR Core.

The services we will review are:

  • AWS Application Load Balancer
  • AWS WebSocket API Gateway

1. AWS Application Load Balancer

There are two things worth considering when setting up an AWS ALB: WebSockets and Sticky Sessions.

WebSockets

The ALB allows us to route traffic to an application using two building blocks:

  • A target group.
  • A listener.

The target group defines where the traffic should be sent. We can configure the port and protocol for forwarded traffic, as well as a health check so that the load balancer doesn't send traffic to a down service.
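Since the target group health check needs an endpoint to probe, the application has to expose one. Here is a minimal sketch, assuming ASP.NET Core 6 minimal hosting and a hypothetical ChatHub, that exposes a /Health endpoint next to the SignalR hub:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSignalR();
builder.Services.AddHealthChecks();

var app = builder.Build();

// Endpoint probed by the ALB target group health check.
app.MapHealthChecks("/Health");

// SignalR hub endpoint (hypothetical hub class and route).
app.MapHub<ChatHub>("/chat");

app.Run();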

The listener defines how the load balancer gets its traffic from the outside. There you define the port and protocol to reach the load balancer and the default behavior for traffic arriving at that listener.

Amazon ALB listeners only offer the HTTP or HTTPS protocol, but the good news is that WebSocket initially communicates with the server using HTTP if you use ws://, or HTTPS if you use wss://.

Your server then responds with 101 Switching Protocols, which tells the client to upgrade to a WebSocket connection, and from then on, communication happens over WebSocket.
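For illustration, the upgrade handshake looks roughly like this (header values taken from the WebSocket RFC example; the /chat path is hypothetical):

GET /chat HTTP/1.1
Host: example.com
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=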

SignalR uses the WebSocket transport when available and falls back to legacy transports when necessary.

So you do not need to do any additional configuration in the ALB to start using WebSockets, but it's good to know how it works.

Sticky Sessions

SignalR requires that all HTTP requests for a given connection be handled by the same instance. When a SignalR application runs behind a load balancer with multiple instances of the same service, sticky sessions must be used.

The only circumstances in which Sticky Sessions are not required are:

  • When the application is hosted as a single instance.
  • When using the Azure SignalR Service (more on this a bit later).
  • When all clients are configured to use only WebSockets (see the client sketch below).
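As a minimal sketch of that last case, here is a .NET client (the hub URL is hypothetical) forced to use only the WebSockets transport, which also lets it skip the negotiate request:

using Microsoft.AspNetCore.Http.Connections;
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("https://my-alb-dns-name/chat", options =>
    {
        // Only WebSockets: no Server-Sent Events or long polling fallback.
        options.Transports = HttpTransportType.WebSockets;
        // With a single transport there is no need for the negotiate request.
        options.SkipNegotiation = true;
    })
    .Build();

await connection.StartAsync();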

The following code snippet shows an example of a CDK application creating an ALB target group with stickiness enabled.

new ApplicationTargetGroup(this, "tg-app-ecs-signalr-core-demo", new ApplicationTargetGroupProps
{
    TargetGroupName = "tg-app-ecs-signalr-core-demo",
    Vpc = vpc,
    TargetType = TargetType.IP,
    ProtocolVersion = ApplicationProtocolVersion.HTTP1,
    StickinessCookieDuration = Duration.Days(1),
    HealthCheck = new HealthCheck
    {
        Protocol = Amazon.CDK.AWS.ElasticLoadBalancingV2.Protocol.HTTP,
        HealthyThresholdCount = 3,
        Path = "/Health",
        Port = "80",
        Interval = Duration.Millis(10000),
        Timeout = Duration.Millis(8000),
        UnhealthyThresholdCount = 10,
        HealthyHttpCodes = "200"
    },
    Port = 80,
    Targets = new IApplicationLoadBalancerTarget[] { target }
});

2. AWS WebSocket API Gateway

In AWS API Gateway, you can create a WebSocket API that you can use as a stateful front-end to an AWS service or HTTP endpoint.
The WebSocket API calls its backend based on the content of messages it receives from client applications.

As far as I know, a SignalR Core application cannot work with the AWS WebSocket API Gateway.


The problem is how the API Gateway handles connections and forwards messages. API Gateway uses three predefined routes to communicate with the application: $connect, $disconnect, and $default.

  • API Gateway calls the $connect route when a persistent connection between the client and a WebSocket API is being initiated.
  • API Gateway calls the $disconnect route when the client or the server disconnects from the API.
  • API Gateway calls the $default route if the route selection expression cannot be evaluated against the message or if no matching route is found.

This is a very specific way of handling connections and routing messages, and it doesn't work well with SignalR. In fact, it doesn't seem possible to integrate a SignalR Core application with an AWS WebSocket Api Gateway.
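To make the mismatch concrete: a WebSocket API typically routes each message based on a JSON field (the route selection expression is often something like $request.body.action), while a SignalR client opens every connection with its own hub-protocol handshake and wraps every invocation in hub-protocol frames that API Gateway knows nothing about. Roughly (illustrative payloads):

// Message shape an API Gateway WebSocket API route selection expression expects:
{ "action": "sendmessage", "data": "hello" }

// What a SignalR client actually sends: first the JSON hub protocol handshake,
// then invocation frames, each terminated by a 0x1e record separator:
{ "protocol": "json", "version": 1 }
{ "type": 1, "target": "SendMessage", "arguments": ["user1", "hello"] }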

If anyone knows of a way, please contact me.

A SignalR application needs to keep track of ALL connected clients, which creates problems in an environment like AWS, where applications can scale automatically.

In the following diagram, we have 2 instances of a SignalR Core application behind a load balancer. When Client A interacts with the application, it is redirected to Instance A. When Client B interacts with the application, it is redirected to Instance B.
In this scenario, Instance A does not know about the clients connected to Instance B and vice versa. Therefore, if a client connected to Instance A tries to send a message, only clients connected to Instance A will receive the message.

Deploying a SignalR Core Application to AWS (1)
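To see what "send a message to all clients" means in code, here is a minimal hub sketch (hypothetical ChatHub) that broadcasts with Clients.All; without a backplane, "all" only covers the clients connected to the instance that handled the call:

using Microsoft.AspNetCore.SignalR;

public class ChatHub : Hub
{
    // Clients.All only reaches clients connected to this server instance
    // unless a backplane is configured.
    public Task SendMessage(string user, string message)
        => Clients.All.SendAsync("ReceiveMessage", user, message);
}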

Here a backplane is needed. When a client connects, the connection information is passed to the backplane. When a server wants to send a message to all clients, it sends it to the backplane. The backplane knows all connected clients and on which servers they are located.

Deploying a SignalR Core Application to AWS (2)

This section reviews some AWS services that can be used as a backplane for SignalR. These services are the following:

  • AWS ElastiCache for Redis.
  • AWS MemoryDB for Redis.
  • AWS RDS for SQL Server.
  • Azure SignalR Service.

1. AWS ElastiCache for Redis

Amazon ElastiCache is a fully managed in-memory caching service. You can use ElastiCache for caching or as a primary data store for use cases that don't require persistence, such as session storage, game leaderboards, streaming, and analytics. ElastiCache is compatible with Redis and Memcached.

Using Redis as a backplane is the recommended way to scale AWS-hosted SignalR Core applications, and ElastiCache is probably the best solution.

How does it work? The SignalR Redis backplane uses the Redis Pub/Sub function to forward messages to other servers. When a client connects, the connection information is passed to the backplane. When a server wants to send a message to all clients, it sends it to the backplane. The backplane knows all connected clients and on which servers they are located. It broadcasts the message to all clients through their respective servers.

How to configure the service

No additional configuration is required for ElastiCache to function as a SignalR backplane.

How to configure the application

  1. Install the Microsoft.AspNetCore.SignalR.StackExchangeRedis NuGet package.
  2. In the ConfigureServices method of Startup.cs, configure SignalR with .AddStackExchangeRedis():
services.AddSignalR().AddStackExchangeRedis("elasticache_redis_connection_string");

If you are using a single ElastiCache server for multiple SignalR applications, use a different channel prefix for each application. Like this:

services.AddSignalR().AddStackExchangeRedis("elasticache_redis_connection_string", options => {opts.Configuration.ChannelPrefix ="Cat Room";});

Defining a channel prefix isolates a SignalR application from others that use different channel prefixes.

If you don't assign different prefixes, a message sent from one application to all of its own clients will be sent to all of the clients of all of the applications that use the Redis server as a backplane.


2. AWS MemoryDB for Redis

Amazon MemoryDB for Redis is another Redis-compatible, in-memory database service built in part on open source Redis.

What is the difference between MemoryDB and ElastiCache? Which one should I use?

The difference between Amazon ElastiCache and MemoryDB is that the former is intended to be an in-memory cache that works alongside a primary database, while MemoryDB is itself a full database service designed to run on its own.

We could say that MemoryDB for Redis is AWS's answer to Redis Enterprise and looks to fill the gap for customers wanting a durable counterpart to ElastiCache. Basically, think of Amazon MemoryDB as a higher tier of ElastiCache for Redis.

Both MemoryDB and ElastiCache support the Redis Pub/Sub feature, so you can use either service as a SignalR backplane.

I prefer ElastiCache over MemoryDB for two reasons:

  • ElastiCache is cheaper.
  • The extra features MemoryDB has over ElastiCache, like durability and persistence, aren't really useful for a backplane.

How to configure the service

No additional configuration is required for MemoryDB to function as a SignalR backplane.

How to configure the application

Please read the previous section where I talked about configuring your application with ElastiCache as a backplane, since the usage is exactly the same for both services.

The only thing worth noting is that when creating a MemoryDB cluster you must configure authentication, whereas authentication with ElastiCache is optional. This means that when configuring your SignalR application to use MemoryDB as a backplane, you need to specify the username and password, like this:

services.AddSignalR().AddStackExchangeRedis("memorydb_redis_connection_string", options => {opts.Configuration.ChannelPrefix ="Cat Room";opts.Configuration.User ="user1";options.settings.password ="¡Senha12345!";opts.Configuration.Ssl =This right;});

3. Amazon RDS for SQL Server

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server instances in the cloud.

Using SQL Server as a backplane is not the right solution for applications that require very high throughput or a very high level of scalability. Consider using one of the Redis services (ElastiCache or MemoryDB) for those cases.

There is no official Microsoft implementation for using SQL Server as a backplane; if you want to use it, you have to use the IntelliTect.AspNetCore.SignalR.SqlServer NuGet package.
More information about this package here:

How to configure the service

No additional configuration is required in RDS for SQL Server to function as a SignalR backplane.

How to configure the application

  1. Install the IntelliTect.AspNetCore.SignalR.SqlServer NuGet package.
  2. In the ConfigureServices method of Startup.cs, configure SignalR with .AddSqlServer():

Simple configuration:

services.AddSignalR().AddSqlServer(Configuration.GetConnectionString("rds_sqlserver_connection_string"));

Advanced configuration:

services.AddSignalR().AddSqlServer(o =>
{
    o.ConnectionString = Configuration.GetConnectionString("rds_sqlserver_connection_string");

    // See above: attempts to enable Service Broker on the database at startup
    // if it's not already enabled. False by default, as it may hang if the database has other sessions.
    o.AutoEnableServiceBroker = true;

    // Every hub has its own message table(s).
    // This determines the part of the table name that is derived from the hub name.
    // IF THIS IS NOT UNIQUE AMONG ALL HUBS, YOUR HUBS WILL COLLIDE AND MESSAGES WILL GET MIXED UP.
    o.TableSlugGenerator = hubType => hubType.Name;

    // The number of tables per hub to use. Adding a few extra could increase throughput
    // by reducing table contention, but all servers must agree on the number of tables used.
    // If you find that you need to increase this, it's probably a sign you need to switch to Redis.
    o.TableCount = 1;

    // The SQL Server schema to use for the backing tables for this backplane.
    o.SchemaName = "SignalRCore";
});
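The AutoEnableServiceBroker option in the snippet above refers to SQL Server Service Broker. If you would rather enable it on the RDS database yourself than let the package attempt it at startup, a minimal T-SQL sketch (the database name is a placeholder):

ALTER DATABASE [SignalRCoreDemo] SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;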

4. Azure SignalR Service

Azure SignalR Service is a managed SignalR backplane, eliminating the need to manage your own Redis or SQL Server instance.


This service has some advantages over other backplane alternatives:

  • Sticky sessions are not required as clients are immediately redirected to the Azure SignalR service when they connect.
  • A SignalR application can scale based on the number of messages sent, while the Azure SignalR Service scales to handle any number of connections. For example, there could be thousands of clients, but if only a few messages are being sent per second, the SignalR application doesn't need to scale out to multiple servers just to handle the connections.
  • A SignalR app does not use significantly more connection resources than a non-SignalR web app.

Another interesting feature that this service offers is the "Live Trace Tool". It is a simple web application that exposes the SignalR traffic that passes through the service.
Traces include connection connect/disconnect events and message receive/send events.

Deploying a SignalR Core Application to AWS (3)

You're probably wondering why I'm talking about an Azure service here. Well, you can use the Azure SignalR Service as the backplane for a SignalR application hosted on AWS, as long as the application and the service can reach each other.

The ideal solution when choosing a backplane service is to have the backplane as close to the application as possible, and having the backplane with another cloud provider seems far from ideal. So if you decide to use it, consider network latency, throughput, and the amount of data transferred out of AWS.

How to configure the application

  1. Install the Microsoft.Azure.SignalR NuGet package.
  2. In the ConfigureServices method of Startup.cs, configure SignalR with .AddAzureSignalR():

Simple configuration:

services.AddSignalR().AddAzureSignalR("azure_signalr_service_connection_string");

The AddAzureSignalR() method can also be used without passing the connection string parameter. In that case, it tries to use the default configuration key: Azure:SignalR:ConnectionString.
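A minimal sketch of what that looks like in appsettings.json (the connection string value is a placeholder):

{
  "Azure": {
    "SignalR": {
      "ConnectionString": "Endpoint=https://<your-service>.service.signalr.net;AccessKey=<key>;Version=1.0;"
    }
  }
}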

Advanced configuration:

services.AddSignalR().AddAzureSignalR(options =>
{
    // This option controls the initial number of connections per hub between the application
    // server and the Azure SignalR Service. Keeping the default is usually enough; at runtime
    // the SDK might start new server connections for performance tuning or load balancing.
    // If you have a large number of clients, you can set a larger number for better throughput.
    options.ConnectionCount = 10;

    // This option controls the valid lifetime of the access token the SDK generates for each
    // client. The access token is returned to the client in the negotiate request response.
    options.AccessTokenLifetime = TimeSpan.FromDays(1);

    // This option controls which claims you want to associate with the client connection.
    // It is used when the SDK generates access tokens for the client during the negotiate
    // request. By default, all claims from the HttpContext.User of the negotiate request
    // are preserved. They can be accessed in Hub.Context.User.
    options.ClaimsProvider = context => context.User.Claims;

    // This option specifies the behavior after the application server receives a SIGINT (CTRL + C).
    options.GracefulShutdown.Mode = GracefulShutdownMode.WaitForClientsClose;

    // This option specifies the maximum time to wait for clients to close/migrate.
    options.GracefulShutdown.Timeout = TimeSpan.FromSeconds(10);
});

In the last section, I want to talk about some AWS services that you can use to deploy your SignalR Core application. These services are the following:

  • AWS ECS
  • AWS EC2
  • AWS EKS

1. AWS ECS

Just build and deploy the container like any other .NET 6 application. There are no additional steps or configurations required here.

Just remember to enable sticky sessions on the ALB target group and set up a proper backplane.

2. AWS EC2

The part where you deploy the SignalR app to the server is business as usual, just like deploying any .NET 6 app. Configuring the web server, on the other hand, tends to be more troublesome, so let's talk about that a bit.

Using Windows EC2 with IIS as a web server

  • Enable WebSockets.
    • You can do this with a single PowerShell command: Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebSockets.
  • Sticky sessions must be configured on the load balancer, not on the web server.

Using a Linux EC2 with Apache as web server

  • Create a VirtualHost configuration so the web server proxies traffic to the application. Like this:
<Host virtual *:80>ProxyPass /ws://127.0.0.1:5000/ProxyPassReverse / ws://127.0.0.1:5000/</HostVirtual>

127.0.0.1:5000 is the local address where the SignalR application is running.
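As a quick sketch (MyApp.dll is a placeholder for your published assembly), you can bind Kestrel to that local address when starting the application:

dotnet MyApp.dll --urls "http://127.0.0.1:5000"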

If you configure the VirtualHost to use http instead of ws, like this:

<Host virtual *:80>ProxyPass / http://127.0.0.1:5000/ProxyPassReverse / http://127.0.0.1:5000/</HostVirtual>

then the application uses SSE (Server-Sent Events) instead of WebSockets.

  • The sticky session must be defined on the load balancer, not on the web server.

Using a Linux EC2 with NGINX as a web server

  • Create a new map configuration in the "http" section. Like this:
map $http_connection $connection_upgrade {
    "~*Upgrade" $http_connection;
    default keep-alive;
}
  • In the "Servers" section, create a new "Site" configuration. Like this:
location / {
    # App server URL
    proxy_pass http://localhost:5000;

    # Configuration for WebSockets
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_cache off;
    # WebSockets were implemented after http/1.0
    proxy_http_version 1.1;

    # Configuration for Server-Sent Events
    proxy_buffering off;

    # Configuration for LongPolling or if your KeepAliveInterval is longer than 60 seconds
    proxy_read_timeout 100s;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
  • The sticky session must be defined on the load balancer, not on the web server.

3. AWS EKS

Just build the container like any other .NET 6 application and deploy it to the EKS cluster. There are no additional steps or configurations required here.

The tricky part here is setting up the ingress controller correctly.

Using the AWS Load Balancer Controller

AWS Load Balancer Controller is a controller that helps manage Elastic Load Balancers for an EKS cluster.

  • It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.
  • It satisfies Kubernetes Service resources by provisioning Network Load Balancers.

For our SignalR application, we need to create an Ingress resource.

Here is a brief explanation of how AWS Load Balancer Controller creates and configures an ALB for us.

  • The controller watches for Ingress events from the API server. When it finds Ingress resources that satisfy its requirements, it begins creating AWS resources.
  • An ALB is created in AWS for the new Ingress resource. This ALB can be Internet facing or internal. You can also specify the subnets where it will be created using annotations.
  • Target groups are created in AWS for each unique Kubernetes service described in the Ingress resource.
  • Listeners are created for every port detailed in your Ingress resource annotations. If no port is specified, sensible defaults (80 or 443) are used. Certificates can also be attached using annotations.
  • Rules are created for each route specified in your Ingress resource. This ensures that traffic destined for a certain route is routed to the correct Kubernetes service.

I won't go into more detail about how the AWS Load Balancer Controller works, as it is outside the scope of this post. For a SignalR Core application, all you need to know is that you need to add the alb.ingress.kubernetes.io/target-group-attributes annotation when the Ingress resource is created.

  • alb.ingress.kubernetes.io/target-group-attributes specifies the target group attributes to apply to the target groups.

Earlier in the networking section, I talked about how you need to enable sticky sessions for a SignalR Core app to work properly behind an ALB. Now let's use the alb.ingress.kubernetes.io/target-group-attributes annotation to tell the AWS Load Balancer Controller to enable sticky sessions when creating the target group.

Like this:

alb.ingress.kubernetes.io/target-group-attributes:stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=86400

To better understand how to configure the Ingress resource, here is a sample Kubernetes manifest that contains an Ingress, a service, and a deployment resource.

# Source: chat/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chat
  labels:
    helm.sh/chart: chat-0.1.0
    app.kubernetes.io/name: chat
    app.kubernetes.io/instance: chat
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=86400
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: chat
                port:
                  number: 80
---
# Source: chat/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: chat
  labels:
    helm.sh/chart: chat-0.1.0
    app.kubernetes.io/name: chat
    app.kubernetes.io/instance: chat
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: chat
    app.kubernetes.io/instance: chat
---
# Source: chat/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chat
  labels:
    helm.sh/chart: chat-0.1.0
    app.kubernetes.io/name: chat
    app.kubernetes.io/instance: chat
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: chat
      app.kubernetes.io/instance: chat
  template:
    metadata:
      labels:
        app.kubernetes.io/name: chat
        app.kubernetes.io/instance: chat
    spec:
      serviceAccountName: chat
      securityContext: {}
      containers:
        - name: chat
          securityContext: {}
          image: "242519123014.dkr.ecr.eu-west-1.amazonaws.com/signalr-chatroom:latest"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /Health
              port: http
          readinessProbe:
            httpGet:
              path: /Health
              port: http
          resources: {}

I'm aware that there are other ways to expose a SignalR application deployed on EKS, such as using an NGINX ingress controller, but I won't cover that in this section because I don't have hands-on experience with it.

On my GitHub account you can find a repository that contains a small example of how to deploy a SignalR Core application on AWS.

The source code can be found here:

The repository contains a /cdk folder where you can find a CDK application that creates the following infrastructure on AWS:


  • A VPC.
  • An Application Load Balancer.
  • A Fargate service.
  • An ElastiCache instance.
  • A NAT gateway.

Deploying a SignalR Core Application to AWS (4)
