28 votes

All of the Service Fabric examples depict single-solution setups. This seems to run counter to the philosophy of microservices, where you want complete dependency isolation between your services. While you can follow this pattern manually, the more common practice is to enforce it by making each service its own repository and solution/project.

How do you manage and deploy Service Fabric services, and enforce service contracts (ServiceInterfaces), using multiple solutions (in multiple Git repositories)?

E.g.

Service Fabric Solution
      App1 - Customers
         - Service1 [Carts] From Other Solution
         - Service2 [LocationInfo] From Other Solution
         - Service3 [REST WebAPI for Admin] From Other Solution
      App2 - Products
         - Service4 [Status] From Other Solution
         - Service5 [Firmware] From Other Solution
         - Service6 [Event Stream] From Other Solution

External Solution
   - Service1
External Solution
   - Service2
External Solution
   - Service3

External Solution
   - Service4
External Solution
   - Service5
External Solution
   - Service6

1) As a developer, I want to check out and build all the current versions of apps/services. I want to fire up my Service Fabric project that manages all the manifests, and deploy it to my local dev cluster. I want to enforce the same service interfaces between solutions. I don't understand how you'd do this, because the application is external to the services.

2) As a DevOps team, I want to automate pulling down the apps, building them and deploying to Azure.

How do we "enforce" isolation via separate solutions, but make it easy to pull them together and deploy into the cluster, while also making it easy to build pipelines that deploy each cluster configured uniquely for DEV, QA, and PROD environments?

What is the workflow/process/project structure to enable this? Is it even possible?

A year on, how has this gone for you? We're looking at doing something similar - but I can't figure out/find examples on how to do it. Anything you can give advice on? – RPM1984
Would also like to know how you did with this? – Alex Gordon

2 Answers

17 votes

Yep, it's possible - I've done something along these lines before. These are the thoughts that spring to mind immediately...

In each Service Fabric solution, have a "public" project containing just the interfaces that you want to expose from the services in that application. The output from this project could be packaged as a NuGet package and pushed onto a private repository. You could call it the "interfaces" project, I guess, but you wouldn't have to expose all the interfaces if you wanted to consider some of them internal to your application; those could be defined in a separate, unexposed project.
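For example, a minimal .nuspec for such an interfaces project might look like the sketch below (the package id, version, and dependency version are all illustrative, not prescriptive):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <!-- Illustrative id/version; one contract package per application -->
    <id>MyCompany.Customers.ServiceInterfaces</id>
    <version>1.2.0</version>
    <authors>MyCompany</authors>
    <description>Public service interfaces for the Customers application.</description>
    <dependencies>
      <!-- Needed so consumers can create ServiceProxy instances for these interfaces -->
      <dependency id="Microsoft.ServiceFabric.Services.Remoting" version="3.0.0" />
    </dependencies>
  </metadata>
</package>
```

Packing and pushing this to a private feed is then a standard `nuget pack` / `nuget push` step in the owning solution's build.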

Other solutions that want to reference the services exposed by another application just have to pull down the relevant NuGet package to get a reference to the service interfaces.

Now this isn't without problems:

  • The consuming application still needs to know the addresses of the services in order to construct proxies for them. You could either expose them as constants defined somewhere in the NuGet package, or, if you're going down the full DI route and don't mind coupling yourself to a DI container everywhere (or fancy trying to abstract it away), you could expose a module from the NuGet package that registers the service interfaces as lambdas that do the service proxy creation on behalf of the dependent applications.
  • You are much more vulnerable to breaking contracts. You need to be really careful about updating method signatures as you are now responsible for the granularity and co-ordination of application/service deployments.
  • You can go too granular - as you mention, the Service Fabric tooling guides you toward having multiple services in one solution. Even with the above approach I would still try to logically group my services to some extent, i.e. don't go for a one-to-one mapping between an application and service - that will definitely be more pain than it's worth.
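On the breaking-contracts point, one option is to pin the contract package to an exact version in each consuming project and only bump it deliberately as part of a coordinated deployment. A sketch (the package name and version are illustrative):

```xml
<ItemGroup>
  <!-- [1.2.0] is NuGet's exact-version range syntax: restore exactly 1.2.0,
       so a newer contract package never flows in implicitly -->
  <PackageReference Include="MyCompany.Customers.ServiceInterfaces" Version="[1.2.0]" />
</ItemGroup>
```

That makes a contract upgrade an explicit, reviewable change in the consumer's repository rather than a side effect of a feed update.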

Hope that helps.

EDIT:

An example of registering a service interface in a DI module (Autofac-style)...

This would be the DI module you expose from the public nuget package:

using System;
using Autofac;
using Microsoft.ServiceFabric.Services.Remoting.Client;

public class MyAppModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.Register(component => ServiceProxy.Create<IMyService>(new Uri("fabric:/App/MyService"))).As<IMyService>();
        // Other services...
    }
}

And in the Program.cs of your consuming application, you'd include something like this:

using System;
using System.Diagnostics;
using System.Threading;
using Autofac;
using Microsoft.ServiceFabric.Services.Runtime;

public static void Main()
{
    try
    {
        var container = ConfigureServiceContainer();

        ServiceRuntime.RegisterServiceAsync(
            "MyConsumingServiceType",
            context => container.Resolve<MyConsumingService>(new TypedParameter(typeof(StatefulServiceContext), context))).GetAwaiter().GetResult();
        ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id, typeof(MyConsumingService).Name);

        Thread.Sleep(Timeout.Infinite);
    }
    catch (Exception e)
    {
        ServiceEventSource.Current.ServiceHostInitializationFailed(e.ToString());
        throw;
    }
}

private static IContainer ConfigureServiceContainer()
{
    var containerBuilder = new ContainerBuilder();

    // Other registrations...

    containerBuilder.RegisterModule<MyAppModule>();

    return containerBuilder.Build();
}

Of course, this approach will only work if you aren't partitioning your services...

1 vote

You can also use more loosely coupled protocols, e.g. HTTP-based ones using either XML/WSDL or JSON/Swagger, with auto-generated or manually created HttpClient proxies.
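A hand-written HttpClient proxy over a shared interface might look like this sketch (the service, route, and DTO are illustrative; it assumes System.Net.Http.Json, which ships with .NET 5+):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// The shared contract package only needs the interface and DTOs -
// no Service Fabric remoting references required.
public interface ICartService
{
    Task<Cart> GetCartAsync(Guid customerId);
}

public record Cart(Guid CustomerId, int ItemCount);

// Hand-written proxy in the consuming application. In practice the
// HttpClient's base address would come from configuration or the
// cluster's reverse proxy, not be hard-coded.
public class CartServiceClient : ICartService
{
    private readonly HttpClient _http;

    public CartServiceClient(HttpClient http) => _http = http;

    public async Task<Cart> GetCartAsync(Guid customerId) =>
        await _http.GetFromJsonAsync<Cart>($"api/carts/{customerId}")
            ?? throw new InvalidOperationException("Cart service returned no body.");
}
```

The trade-off versus remoting is that the contract is now the HTTP route and JSON shape rather than a compiled interface, so you lose compile-time checking but gain independence from Service Fabric's remoting stack.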

The cost of managing a NuGet library, nuspec, etc. is high.