gRPC (A high performance, open source universal RPC framework)
gRPC is a modern open source high performance Remote Procedure Call (RPC)
framework that can run in any environment. It can efficiently connect
services in and across data centers with pluggable support for load
balancing, tracing, health checking and authentication. It is also
applicable in the last mile of distributed computing, connecting devices,
mobile applications and browsers to backend services.
gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. By default, gRPC uses protocol buffers as the Interface Definition Language (IDL) for describing both the service interface and the structure of the payload messages. It is possible to use other alternatives if desired.
Contract-first API development, using Protocol Buffers by default, allowing for language agnostic implementations.
Tooling available for many languages to generate strongly-typed servers and clients.
Supports client, server, and bi-directional streaming calls.
Reduced network usage with Protobuf binary serialization.
These benefits make gRPC ideal for:
Lightweight microservices where efficiency is critical.
Polyglot systems where multiple languages are required for development.
Point-to-point real-time services that need to handle streaming requests or responses.
gRPC uses a contract-first approach to API development. Services and messages are defined in .proto files:
syntax = "proto3";

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
gRPC services can be hosted on ASP.NET Core. Services have full integration with ASP.NET Core features such as logging, dependency injection (DI), authentication, and authorization.
Each gRPC service is added to the routing pipeline through the MapGrpcService method.
using GrpcGreeter.Services;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddGrpc();
var app = builder.Build();
app.MapGrpcService<GreeterService>();
app.MapGet("/", () => "Communication with gRPC endpoints must be made through a gRPC client. To learn how to create a client, visit: https://go.microsoft.com/fwlink/?linkid=2086909");
app.Run();
Service Code
public class GreeterService : Greeter.GreeterBase
{
    private readonly ILogger<GreeterService> _logger;

    public GreeterService(ILogger<GreeterService> logger)
    {
        _logger = logger;
    }

    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply
        {
            Message = "Hello " + request.Name
        });
    }
}
Client Code
using Grpc.Net.Client;

var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Greeter.GreeterClient(channel);
var response = await client.SayHelloAsync(new HelloRequest { Name = "World" });
The process of building and deploying applications is boring, repetitive, and tedious, but it needs to be done frequently and 100% right, 100% of the time. That makes it a perfect candidate for automation.
In this article, I will show you how to automate the deployment of your .NET projects to a machine using GitHub Actions and Docker. I will briefly cover why and how to use Docker, as well as explain CI/CD in simple terms.
Why Docker?
We will be deploying an ASP.NET Core application to a virtual machine with Docker installed. If you’re not familiar with Docker, it’s a containerization tool: we use it to isolate apps from the host machine and simplify deployment.
You could also deploy by copying the project output folder to the VM, or even by pulling the git repo and building on the server, but then you would have to maintain the .NET and ASP.NET Core runtimes on the server, along with the rest of the configuration, which is a huge pain. Dockerizing your app makes deployment and maintenance much easier, even with simple architectures such as a single web app.
The VM should be able to accept requests on port 80. You should have an SSH key set up, preferably a new one specifically for GitHub Actions. I will not be covering how to set up a VM in this article.
Dockerizing an ASP.NET Core App
To Dockerize our ASP.NET Core App, we will create a file named “Dockerfile” at the root of our project.
It should look like this:
# Build
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /app
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o out

# Run
FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /app
COPY --from=build /app/out .
ENV ASPNETCORE_URLS=http://*:80
CMD dotnet App.dll
The first step (Build) of the Dockerfile uses the official .NET 7.0 SDK image as the base to build the app and saves the output in the “out” folder. The second step (Run) uses the official ASP.NET Core 7.0 runtime image as the base, copies the build output from the previous step, and runs the compiled DLL. Obviously, you should replace “App” with the name of your project.
At the time of writing, Kestrel inside the ASP.NET Core runtime image is configured to listen on port 80 by default. However, this is going to change in .NET 8, so we set it explicitly in our Dockerfile. This Dockerfile will allow Docker to build our container.
CI/CD
Let’s say that we want to deploy our app whenever we merge code into the main branch. Whenever we do that, we want to start a “job” that will build our container, upload it to our VM somehow, and start it there.
In other words, this job will continuously check for new code that’s integrated into the main branch and deploy it.
There could be more than one of these jobs, or steps (such as build, test, and deploy), all lined up one after another.
Those jobs would form a Continuous Integration / Continuous Deployment pipeline, or CI/CD for short.
Writing a CI/CD Workflow in GitHub Actions
Let’s write a CI/CD pipeline (GitHub calls them Workflows) in GitHub Actions.
YAML
Workflows in GitHub Actions are written in YAML, a simple and elegant data serialization language that relies on indentation for structure and has minimal syntax.
Creating a Workflow File
To create a GitHub Actions workflow, we create a file under the .github/workflows directory at the root of the repo:
.github/workflows/deploy.yml
In the file, we can start by giving the workflow a name:
name: Backend Deployment
Steps
It’s time to write out the pipeline. Let’s quickly reiterate our desired steps:
When code is merged into the main branch
Build the container
Upload it to the VM
Start the server
1. Trigger when code is merged into main
First, we set up the trigger for the main branch:
on:
  push:
    branches: [ "main" ]
Then, we define the job. We will run it on Ubuntu.
jobs:
  build:
    runs-on: ubuntu-latest
2. Build the container
Next, we need to define the job steps. These steps are what you would usually do manually. To build the container, you would first need to pull the git repository. You could write all the commands for this yourself, but there’s a better way.
GitHub Marketplace contains a lot of pre-written actions that you can reuse. One of them is checkout. With it, you can pull the repository in a single line.
    steps:
      - uses: actions/checkout@v3
If you’re wondering how a fresh VM can pull your repository, GitHub Runners (the VMs running the workflows) are automatically set up with an auth token that has access to your repo. It’s stored as a secret and deleted after the job finishes.
Now that we have the main branch pulled, we want to build the container. Docker is pre-installed on GitHub Runners, so all we have to do is run a command.
We define custom steps by giving them a name and a run field. We also need to set the working-directory if the Dockerfile is not in the root of the repository.
Make sure the path is correct. It is the path where the Dockerfile is located, relative to the root of your repository.
Replace docker-hub-username with your Docker Hub username, since we will be pushing the container image to the Docker Hub container registry. The container name can be anything you like.
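Putting the pieces above together, a build step might look like this sketch (the ./src directory and the app-name image name are placeholders; adjust them to your repository):
      - name: Build container
        working-directory: ./src
        run: docker build -t docker-hub-username/app-name .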
3. Upload to VM
To get the container onto the VM, we will first upload it to a registry and then pull it from the VM. To upload the image we built, we first need to log in to Docker Hub; for that, we will use docker/login-action@v2.
Logging in requires an access token: in Docker Hub, go to Account Settings > Security and click New Access Token.
In the description, put whatever you like. Give it all access, and click Generate.
Copy the token and close the prompt.
This is your DOCKERHUB_TOKEN.
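With the username and token stored as repository secrets, the login and push steps might look like this sketch (the docker-hub-username/app-name image name is a placeholder):
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Push container
        run: docker push docker-hub-username/app-name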
Before we can test the pipeline, we need to add the secrets to our repository.
Navigate to Settings > Secrets and variables > Actions and click New repository secret.
Add the following:
DOCKERHUB_TOKEN : the token you created earlier
DOCKERHUB_USERNAME : your Docker Hub username
SSH_HOST : your VM’s address
SSH_USERNAME : your username on the VM
SSH_KEY : your private SSH key
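Using the SSH secrets above, the final step pulls the image on the VM and restarts the container. One way to do this is a community SSH action such as appleboy/ssh-action; the sketch below assumes that action, and the image name, container name, and action version are placeholders:
      - name: Deploy to VM
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            docker pull docker-hub-username/app-name
            docker stop app || true
            docker rm app || true
            docker run -d --name app -p 80:80 docker-hub-username/app-name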
Running the Workflow
This should be everything you need, time to test!
As we have set the workflow to trigger on pushes to the main branch, all you need to do to test it is commit this file.
I would suggest opening a new branch for testing workflows, since you will be pushing a lot of commits. After you’re done, simply change the trigger back to “main” and squash-merge it into main.
You can find all Workflows in the “Actions” tab.
Conclusion
I have shown you how to set up a simple CI/CD pipeline using GitHub Actions and Docker.