Integrations and testing in a vertical slice architecture
A recent post/question in the r/dotnet subreddit got me thinking a bit about how we implement and unit test integrations in our platform.
The SATS backend was initially built on the very common but (IMO) misguided controller-service-repository pattern, an approach which we are steadily moving away from as we maintain and upgrade our services.
We are instead basing our services on the lighter-weight vertical slices architectural approach. As a result, abstractions aren’t introduced until they are needed, and very often across different boundaries than would be typical in a controller-to-service-to-repository architecture. When an integration is introduced, it is usually used directly from the handlers that need it; additional abstraction is not introduced until it is needed.
Introducing a new integration
SATS is currently moving to a new CRM system. As part of that move, we will allow our lovely members to control their communication preferences (i.e., whether they will allow us to personalize marketing messages, recommendations, etc.) through self-service. The technical requirements are rather simple:
- The CRM system has a REST API to control the communication preferences for a known individual, based on the internal ID. The REST API (naturally) exposes data in the CRM system’s internal format.
- Our API needs to provide the consumers with an endpoint to read the current settings, and another endpoint to update said preferences for an authenticated member, with terminology which makes sense in the SATS domain.
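In code, the SATS-facing response implied by these requirements might look like this (a sketch; the property names are taken from the handler code shown later in the post, everything else is an assumption):

```csharp
// SATS-domain response exposed to API consumers. The CRM-specific
// terminology never leaks out of the integration layer.
public class CommunicationConsentsResponse
{
    public bool NewsAndOffers { get; set; }
    public bool PersonalizedMarketing { get; set; }
    public bool PersonalizedRecommendations { get; set; }
}
```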
Note: There will be references to a number of 3rd party libraries in the code examples. I will not go into details on how they are used, but a list of them can be found at the end of this article.
Using an Azure Function App¹ and MediatR (and some magic glue in between), the handler itself is very simple:
public class CommunicationsConsentsRequestHandler
{
    private readonly ICrmClient _crmClient;

    public CommunicationsConsentsRequestHandler(
        ICrmClient crmClient)
    {
        _crmClient = crmClient;
    }

    public async Task<CommunicationConsentsResponse> Handle(
        GetCommunicationsConsentsRequest request,
        CancellationToken cancellationToken)
    {
        var consents = await _crmClient.GetCommunicationConsents(request.CustomerId);

        return new CommunicationConsentsResponse
        {
            NewsAndOffers = consents?.CommunicationConsents.NewsAndOffers ?? false,
            PersonalizedMarketing = consents?.CommunicationConsents.ThirdPartyOffers ?? false,
            PersonalizedRecommendations = consents?.CommunicationConsents.TrainingRecommendations ?? false,
        };
    }
}
The CRM client is injected as an interface, allowing for mocking/stubbing in unit tests.
The important part here is how the dependencies expose their models (in other words, how the integrations are modelled within the application).
CRM client
The CRM client itself is kept as thin as possible. Its only responsibility is to communicate with the CRM service, and to expose that service’s models to the rest of the application.
Note: This approach requires that models (classes) specific to the integration in question are exposed to the dependent assemblies. To some, this is anathema, but in my experience it allows for much more maintainable code.
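As a sketch of what those exposed models might look like (class and property names are inferred from how the handler reads them; the real CRM contract will differ):

```csharp
// Hypothetical shape of the CRM integration models, inferred from usage.
// They live in the integration assembly and are deserialized directly
// from the CRM's JSON payload.
namespace Integration.Crm.Models
{
    public class CommunicationConsentsResponse
    {
        public CommunicationConsents CommunicationConsents { get; set; } = new();
    }

    public class CommunicationConsents
    {
        public bool NewsAndOffers { get; set; }
        public bool ThirdPartyOffers { get; set; }
        public bool TrainingRecommendations { get; set; }
    }
}
```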
public class CrmClient : ICrmClient
{
    private readonly HttpClient _httpClient;

    public CrmClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<CommunicationConsentsResponse?> GetCommunicationConsents(string crmId)
    {
        var uri = new Uri($"GetMemberConsents/{crmId}", UriKind.Relative);
        var response = await _httpClient.GetAsync(uri);

        if (response.StatusCode == HttpStatusCode.NotFound)
        {
            return null;
        }

        response.EnsureSuccessStatusCode();

        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<CommunicationConsentsResponse>(json);
    }
}
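Since the client takes an HttpClient in its constructor, one way to wire it up is as a typed client via IHttpClientFactory. The registration below is an assumption (the actual startup code is not shown in this post), including the `Crm:BaseUrl` configuration key:

```csharp
// Hypothetical startup registration; `services` is the IServiceCollection and
// `configuration` the IConfiguration available at startup. The base address
// makes the relative URI in GetCommunicationConsents resolve correctly.
services.AddHttpClient<ICrmClient, CrmClient>(client =>
{
    client.BaseAddress = new Uri(configuration["Crm:BaseUrl"]);
});
```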
By keeping the integration as thin as possible, we not only make the handler testable; we are also able to test the business logic that actually matters. In our case, the logic is not particularly complex (it’s simply a matter of mapping a property name in our domain to a slightly different property name in the CRM’s domain), but far too often I have seen similar scenarios where the actual complexity is hidden in the client implementation itself. In my opinion, this is the most important element when designing an integration and abstraction approach: being able to test the actual integration logic without having to mock HTTP calls, SOAP services, databases, or other heavy and complex infrastructure. Tests that exercise such infrastructure should be considered integration or end-to-end tests, and they usually end up being much heavier than unit tests (and, as a consequence, are not run as frequently).
An example unit test (which verifies the mapping):
[Fact]
public async Task GetConsents_MemberReturned_ConsentsMappedCorrectly()
{
    var request = _fixture.Create<GetCommunicationsConsentsRequest>();
    var crmConsents = _fixture.Create<Integration.Crm.Models.CommunicationConsentsResponse>();

    _crmClient.GetCommunicationConsents(request.CustomerId)
        .Returns(crmConsents);

    var response = await _sut.Handle(
        request,
        default);

    response.NewsAndOffers.Should().Be(crmConsents.CommunicationConsents.NewsAndOffers);
    response.PersonalizedMarketing.Should().Be(crmConsents.CommunicationConsents.ThirdPartyOffers);
    response.PersonalizedRecommendations.Should().Be(crmConsents.CommunicationConsents.TrainingRecommendations);
}
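The not-found branch, where the client returns null and every consent falls back to false, can be covered with a similar test. This sketch assumes the same fixture and substitute setup as the test above:

```csharp
[Fact]
public async Task GetConsents_MemberNotFound_AllConsentsDefaultToFalse()
{
    var request = _fixture.Create<GetCommunicationsConsentsRequest>();

    // The client returns null when the CRM responds with 404.
    _crmClient.GetCommunicationConsents(request.CustomerId)
        .Returns((Integration.Crm.Models.CommunicationConsentsResponse?)null);

    var response = await _sut.Handle(
        request,
        default);

    response.NewsAndOffers.Should().BeFalse();
    response.PersonalizedMarketing.Should().BeFalse();
    response.PersonalizedRecommendations.Should().BeFalse();
}
```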
Now, it is definitely possible that other parts of the service will at some point need to read and utilize the consents from the CRM system. Then, and only then, it might (might!)² be time to abstract that part into a separate service.
In UML terms, that’s when we will move from this rather flat structure:

[Diagram: the handler depending directly on the CRM client]

To this:

[Diagram: the handler depending on a consents service, which in turn depends on the CRM client]
In coding terms, this new abstraction will usually affect the composition of the handler(s), going from
public CommunicationsConsentsRequestHandler(
    ICrmClient crmClient)
{
    _crmClient = crmClient;
}
to
public CommunicationsConsentsRequestHandler(
    ICrmConsents crmConsents)
{
    _crmConsents = crmConsents;
}
and, of course, the part of the code directly utilizing the CRM client, from
private async Task<CommunicationConsentsResponse> GetConsentsForMember(string crmId)
{
    var consents = await _crmClient.GetCommunicationConsents(crmId);

    return new CommunicationConsentsResponse
    {
        NewsAndOffers = consents?.CommunicationConsents.NewsAndOffers ?? false,
        PersonalizedMarketing = consents?.CommunicationConsents.ThirdPartyOffers ?? false,
        PersonalizedRecommendations = consents?.CommunicationConsents.TrainingRecommendations ?? false,
    };
}
to
private async Task<CommunicationConsentsResponse> GetConsentsForMember(string crmId)
{
    return await _crmConsents.GetConsentsForMember(crmId);
}
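The extracted service itself stays just as thin; it simply absorbs the mapping that used to live in the handler. A sketch, with the interface and class names following the constructor shown above:

```csharp
public class CrmConsents : ICrmConsents
{
    private readonly ICrmClient _crmClient;

    public CrmConsents(ICrmClient crmClient)
    {
        _crmClient = crmClient;
    }

    // The mapping from CRM terminology to SATS terminology now lives here,
    // shared by every handler that needs the consents.
    public async Task<CommunicationConsentsResponse> GetConsentsForMember(string crmId)
    {
        var consents = await _crmClient.GetCommunicationConsents(crmId);

        return new CommunicationConsentsResponse
        {
            NewsAndOffers = consents?.CommunicationConsents.NewsAndOffers ?? false,
            PersonalizedMarketing = consents?.CommunicationConsents.ThirdPartyOffers ?? false,
            PersonalizedRecommendations = consents?.CommunicationConsents.TrainingRecommendations ?? false,
        };
    }
}
```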
On the testing side of things, only very slight changes are usually needed, since the abstractions are kept just as thin. In fact, only the code which constructs the SUT (the system-under-test) is affected, going from
public CommunicationsConsentsRequestHandlerTests()
{
    _crmClient = Substitute.For<ICrmClient>();
    _sut = new CommunicationsConsentsRequestHandler(_crmClient);
}
to
public CommunicationsConsentsRequestHandlerTests()
{
    _crmClient = Substitute.For<ICrmClient>();
    _sut = new CommunicationsConsentsRequestHandler(
        new CrmConsents(_crmClient));
}
This is a prime example of how favoring composition over inheritance allows for easier maintenance down the line.
Hopefully, that gives an idea of how integrations can be written to keep the logic testable, and how abstractions emerge from functional requirements.
3rd party libraries used
- MediatR by Jimmy Bogard, who is also an awesome resource on Vertical Slice Architecture
- NSubstitute for dependency mocking
- AutoFixture for generating test data
- FluentAssertions to assert those beautiful xUnit tests
1. For scalability reasons, we use Azure Function Apps as the hosting mechanism for most of our services. The exact same approach can be used with App Services (or .NET APIs hosted elsewhere, for that matter). Just make sure those controllers are thin, thin, thin! ↩
2. It is very difficult, if not impossible, to come up with a hard and fast rule for when to extract a piece of code for re-use. The DRY crowd will scoff at duplicated code anywhere, while in my experience attempting to introduce reuse at all costs leads to code bloat and low maintainability. I had a co-worker once who said that “more than two usages” of a piece of code should lead to a discussion on whether it should be extracted, and I think I like that approach. ↩