
Part 9 - gRPC

In our use case we want to use gRPC on CommandsService startup to reach out to PlatformService and sync up their states.

  • CommandsService runs on an in-memory DB. So while it receives new Platforms created while it is running, it has no knowledge of the starting state (Platforms that existed before it started).

Notes on gRPC

  • google Remote Procedure Call
  • uses HTTP/2 to transport binary messages (faster compared to JSON over HTTP/1)
    • which is why it normally requires TLS - HTTPS! (plaintext HTTP/2 is possible, but needs explicit configuration, see below)
  • relies on "Protocol Buffers" (aka Protobuf) - these define the contract between the endpoints
    • the .proto file holds the type information of the data
  • gRPC is used mostly for server-to-server communication

HTTPS workaround

  • since setting up HTTPS inside our cluster is quite a bit of work, we explicitly only use HTTP inside the cluster
  • at the end of our platforms-depl.yaml we add another port to the ClusterIP Service
- name: platformgrpc
  protocol: TCP
  port: 666
  targetPort: 666
  • and apply it with kubectl apply -f K8S/platforms-depl.yaml
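  • for orientation, the full ClusterIP Service section might then look roughly like this - a sketch only: the selector and the port-80 entry are assumptions based on the usual setup, only the platformgrpc entry is the addition from above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice   # assumption: must match the Deployment's pod labels
  ports:
    # existing HTTP port for the REST API
    - name: platformservice
      protocol: TCP
      port: 80
      targetPort: 80
    # the new gRPC port added above
    - name: platformgrpc
      protocol: TCP
      port: 666
      targetPort: 666
```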

  • we explicitly tell gRPC to only use HTTP (otherwise it would default to HTTPS) in appsettings.Production.json

  "Kestrel": {
    "Endpoints": {
      "Grpc": {
        "Protocols": "Http2",
        "Url": "http://platforms-clusterip-srv:666"
      },
      "webApi": {
        "Protocols": "Http1",
        "Url": "http://platforms-clusterip-srv:80"
      }
    }
  }

Add packages

  • for our PlatformService we add:
dotnet add package Grpc.AspNetCore
  • for our CommandsService we add:
dotnet add package Grpc.Tools
dotnet add package Grpc.Net.Client
dotnet add package Google.Protobuf

Create the proto-file

  • in PlatformService/Protos/platforms.proto we define the contract and what can get passed over the gRPC connection:
syntax = "proto3";

// just the top-level namespace of this project, gets used for the autogenerated code:
option csharp_namespace = "PlatformService";

// we define the service/endpoint
service GrpcPlatform {
    rpc GetAllPlatforms(GetAllRequests) returns (PlatformResponse);
}

// this endpoint's input has no parameters, but we still have to name it:
message GetAllRequests {}

// we define the shape of the data that gets passed back (we want to pass an array of those)
message GrpcPlatformModel {
    int32 platformId = 1; // the 1 is not a value but the field number that identifies platformId on the wire
    string name = 2;
    string publisher = 3;
}

// an array of the above 'objects'
message PlatformResponse {
    repeated GrpcPlatformModel platform = 1;
}
  • in PlatformService/PlatformService.csproj we add an ItemGroup. This tells the project the path to the proto file and which role we generate code for (Server/Client):
  <ItemGroup>
    <Protobuf Include="Protos/platforms.proto" GrpcServices="Server" />
  </ItemGroup>
  • we dotnet build the PlatformService and if everything worked, we should be able to look into the autogenerated code: PlatformService/obj/Debug/net7.0/Protos/Platforms.cs and PlatformService/obj/Debug/net7.0/Protos/PlatformsGrpc.cs

Code in PlatformService

  • we register gRPC for dependency injection in our PlatformService/Program.cs and map the service:
builder.Services.AddGrpc();
// ...
app.MapControllers(); // this maps all our Controllers by default
app.MapGrpcService<GrpcPlatformService>(); // the gRPC service we have to map manually
// we (this is optional) serve the protobuf file to the client, so they could infer everything from it:
app.MapGet(
    "/protos/platforms.proto",
    async context => {
        await context.Response.WriteAsync(File.ReadAllText("Protos/platforms.proto"));
    }
);
  • we add a mapping to the gRPC model in PlatformService/Profiles/PlatformsProfile.cs
// mapping for gRPC:
CreateMap<Platform, GrpcPlatformModel>()
    // even though platformId is camelCase in the .proto file, the generated property gets 'csharped' to PlatformId
    .ForMember(dest => dest.PlatformId, opt => opt.MapFrom(src => src.Id))
    // the other ForMembers would get inferred (same names), but we spell them out to make it clearer
    .ForMember(dest => dest.Name, opt => opt.MapFrom(src => src.Name))
    .ForMember(dest => dest.Publisher, opt => opt.MapFrom(src => src.Publisher));
  • PlatformService/SyncDataService/Grpc/GrpcPlatformService.cs
public class GrpcPlatformService : GrpcPlatform.GrpcPlatformBase {
    private readonly IPlatformRepo _repository;
    private readonly IMapper _mapper;

    public GrpcPlatformService(IPlatformRepo repository, IMapper mapper) {
        _repository = repository;
        _mapper = mapper;
    }

    public override Task<PlatformResponse> GetAllPlatforms(GetAllRequests request, ServerCallContext context) {
        var response = new PlatformResponse();
        var platforms = _repository.GetPlatforms();
        foreach (var p in platforms) {
            // map from our Platform -> grpc Platform and add those to the grpc response
            response.Platform.Add(_mapper.Map<GrpcPlatformModel>(p));
        }
        return Task.FromResult(response);
    }
}

Code in CommandsService

  • in appsettings.Development.json we take the address from our PlatformService/Properties/launchSettings.json -> https.url
  "GrpcPlatform": "https://localhost:5001"
  • appsettings.Production.json
  "GrpcPlatform": "http://platforms-clusterip-srv:666"
  • we copy the contract over: cp PlatformService/Protos/platforms.proto CommandsService/Protos/platforms.proto
  • in CommandsService/CommandsService.csproj we also add the ItemGroup (this time with GrpcServices type Client)
  <ItemGroup>
    <Protobuf Include="Protos/platforms.proto" GrpcServices="Client" />
  </ItemGroup>
  • we add mappings to CommandsService/Profiles/CommandsProfile.cs
// gRPC mappings:
CreateMap<GrpcPlatformModel, Platform>()
    .ForMember(dest => dest.ExternalId, opt => opt.MapFrom(src => src.PlatformId))
    .ForMember(dest => dest.Name, opt => opt.MapFrom(src => src.Name))
    // we explicitly tell AutoMapper that nothing should map to our Platform.Commands
    .ForMember(dest => dest.Commands, opt => opt.Ignore());
  • we create CommandsService/SyncDataServices/Grpc/IPlatformDataClient.cs
public interface IPlatformDataClient {
    IEnumerable<Platform> ReturnAllPlatforms();
}
  • we add the Scoped dependency injection:
builder.Services.AddScoped<IPlatformDataClient, PlatformDataClient>();
  • and implement the above in CommandsService/SyncDataServices/Grpc/PlatformDataClient.cs
public class PlatformDataClient : IPlatformDataClient
{
    private readonly IConfiguration _config;
    private readonly IMapper _mapper;

    public PlatformDataClient(IConfiguration config, IMapper mapper)
    {
        _config = config;
        _mapper = mapper;
    }

    public IEnumerable<Platform> ReturnAllPlatforms()
    {
        Console.WriteLine($"--> Calling gRPC Service {_config["GrpcPlatform"]}");
        var channel = GrpcChannel.ForAddress(_config["GrpcPlatform"]!);
        var client = new GrpcPlatform.GrpcPlatformClient(channel);
        var request = new GetAllRequests(); // even though this is empty, we still have to build it and send it over

        try
        {
            var reply = client.GetAllPlatforms(request);
            return _mapper.Map<IEnumerable<Platform>>(reply.Platform);
        }
        catch (Exception e)
        {
            Console.WriteLine($"--> Could not call gRPC Server! {e.Message}");
            return Enumerable.Empty<Platform>();
        }
    }
}

Testing locally

  • because of gRPC we need to use HTTPS locally now. So we use the "https" profile from Properties/launchSettings.json
  • we pass the launch profile down to dotnet (short flag -lp): dotnet run --project ./CommandsService --launch-profile https
  • we spin up first the PlatformService, then the CommandsService. We should get a log of:
--> Seeding new platforms
  • indicating that CommandsService connected via gRPC, got its data and used that to seed itself, thus synchronizing the state of the two services.
  • we hit http://localhost:6000/api/c/platforms with a GET request, which returns all those Platforms.
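  • the "--> Seeding new platforms" log comes from a seeding step these notes don't show. As a rough sketch of how it could look (the names PrepDb, PrepPopulation, ICommandRepo, ExternalPlatformExists and CreatePlatform are assumptions here, not confirmed project code):

```csharp
// CommandsService/Data/PrepDb.cs - sketch only, names are assumptions
public static class PrepDb
{
    public static void PrepPopulation(IApplicationBuilder app)
    {
        // resolve scoped services (the gRPC client and repo are registered as Scoped)
        using var scope = app.ApplicationServices.CreateScope();

        // fetch all Platforms from PlatformService via the gRPC client from above
        var grpcClient = scope.ServiceProvider.GetRequiredService<IPlatformDataClient>();
        var platforms = grpcClient.ReturnAllPlatforms();

        var repo = scope.ServiceProvider.GetRequiredService<ICommandRepo>();
        Console.WriteLine("--> Seeding new platforms");
        foreach (var platform in platforms)
        {
            // only add Platforms we don't already track (matched on ExternalId)
            if (!repo.ExternalPlatformExists(platform.ExternalId))
            {
                repo.CreatePlatform(platform);
            }
        }
        repo.SaveChanges();
    }
}
```

  • something like PrepDb.PrepPopulation(app); would then be called in CommandsService/Program.cs before app.Run().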

Rebuilding and rolling out to Kubernetes

docker build -t vincepr/platformservice ./PlatformService
docker push vincepr/platformservice
kubectl rollout restart deployment platforms-depl

docker build -t vincepr/commandservice ./CommandsService
docker push vincepr/commandservice
kubectl rollout restart deployment commands-depl

Finished State

So both services are now fully in sync (assuming PlatformService is up when CommandsService starts, so the startup gRPC call can sync the data).

  • Afterwards the Message Bus keeps the services synced up
  • We don't gracefully handle the case of CommandsService starting first. But a simple retry policy would solve this.
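  • such a retry could be as simple as looping over the gRPC call with a delay until PlatformService answers. A minimal sketch (not in the project; maxAttempts and the 5-second delay are arbitrary picks):

```csharp
// naive startup retry, wrapping the IPlatformDataClient from above - sketch only
public static class GrpcSyncRetry
{
    public static IEnumerable<Platform> ReturnAllPlatformsWithRetry(
        IPlatformDataClient client, int maxAttempts = 5)
    {
        for (var attempt = 1; attempt <= maxAttempts; attempt++)
        {
            // ReturnAllPlatforms (above) already catches exceptions
            // and returns an empty sequence on failure
            var platforms = client.ReturnAllPlatforms().ToList();
            if (platforms.Count > 0)
            {
                return platforms;
            }
            Console.WriteLine($"--> gRPC sync attempt {attempt}/{maxAttempts} returned nothing, retrying...");
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
        return Enumerable.Empty<Platform>();
    }
}
```

  • caveat: this treats "zero platforms" as failure, which is ambiguous (PlatformService could legitimately hold no Platforms). A cleaner variant would let the client surface failures explicitly, e.g. via a TryReturnAllPlatforms method or by rethrowing the RpcException.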

Missing from this Project

  • HTTPS/TLS inside the Kubernetes cluster
  • Service discovery, so the services connect automatically without having to hardcode the addresses in config files
  • Error recovery in CommandsService for seeding data when PlatformService is initially unresponsive. At the moment we would just do nothing and only pick up the Platforms we get later from the Message Bus. Better would be to reattempt the sync.

Cleanup

  • the following command removes all our Kubernetes deployments, services etc.:
make clean
  • then we just delete the Docker containers, for example via Docker Desktop