Commit db7734f7 authored by Julien Topçu's avatar Julien Topçu

Finishing the documentation of the tests of the domain

parent 7f811e1a
TalkAdvisor is a [hexagonal architecture]( demo application developed with Kotlin and SpringBoot.
This application recommends IT talks recorded on YouTube, given some criteria.
![Hexagonal Architecture](images/hexagonal-architecture.png)
## Build
To build TalkAdvisor, run the following command:
The end to end tests can also be launched against a deployed instance plugged wi
## Testing Strategy
If you want to learn more about the testing strategy applied in TalkAdvisor, [here]( is the dedicated documentation.
## Contributors
* Julien Topçu - @JulienTopcu
* Jordan Nourry - @JkNourry
* Juliette de Rancourt - @ju_derancourt
## Special Credit
TalkAdvisor Project Icon made by Freepik from
# Testing Strategy
> Work In Progress!!! Infrastructure-level tests are not documented yet.
TalkAdvisor is following the [microservice testing philosophy](
We will try here to explain how to get a cleaner testing strategy in a microservice implemented according to the Hexagonal Architecture.
But testing the contract of a web API with it can be really cu-cumbersome and t
Back to basics: since the aim of a functional test is "testing the business logic", putting it inside the domain of our application looks like a good idea. As a result, instead of calling the endpoints, those tests are plugged on top of the API of the domain (not the Web API one).
![Functional Tests in the Hexagonal Architecture](images/hexagon-stubbed.png)
In TalkAdvisor, [Cucumber]( is used to define the features and scenarios of our business logic. As you can see, the feature files are located in [the test packages of the domain]( beside the [step definitions](
Using the [Gherkin language](, we express the scenarios of a feature [creating-a-profile.feature](
Why not only a mock inside the tests? The domain stub is more than just a testi
Actually in TalkAdvisor, only the SPI part related to the provisioning of the talks has been implemented through a YouTube client. The repositories which are used to store our profiles and recommendations are implemented using HashMaps, e.g. [InMemoryProfiles](
This way we can focus on the main purpose of the application - recommending talks - and delay some technical concerns like "what will be the best database system for my software?".
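Such a HashMap-backed adapter can be sketched as follows. This is a hypothetical minimal version, assuming simplified `Profile`, `Profiles` and `InMemoryProfiles` shapes rather than the project's exact signatures:

```kotlin
// Hypothetical sketch, not the project's exact API: a simplified domain object.
data class Profile(val id: String, val topics: List<String>)

// The SPI port the domain depends on.
interface Profiles {
    fun save(profile: Profile)
    fun findById(id: String): Profile?
}

// In-memory adapter: enough to exercise the whole domain while the
// "which database?" decision is deferred.
class InMemoryProfiles : Profiles {
    private val storage = HashMap<String, Profile>()

    override fun save(profile: Profile) {
        storage[] = profile
    }

    override fun findById(id: String): Profile? = storage[id]
}
```

Because the domain only knows the `Profiles` port, swapping this adapter for a real database later requires no change to the domain or its tests.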
![Functional Tests in the Hexagonal Architecture](images/hexagon-stubbed.png)
### Low-Level Assertions Caveat
Assertion frameworks like [AssertJ]( are widespread now. They offer a fluent way of writing our acceptance criteria.
TalkAdvisor has to make sure that the talks which belong to a recommendation are related to the topics of the user's preferences. In the MVP of our application, we consider that a [talk]( is related to a topic if its title contains the given topic. In the different tests where we want to check this post-condition, we will end up with something like this:
```kotlin
// Talk-level test assertion
assertThat(talk.title).contains("topic")

// Recommendation-level test assertion
assertThat( { it.title }).anyMatch { it.contains("topic") }
```
First, we lose the intent of the test: someone who doesn't know the project will need some thought to understand why - the hell - we are checking that a talk contains the requested topic.
But that's not the only caveat. Suppose we now get this new requirement: "We consider a talk is related to a topic if its title AND its description contain the topic". We will have to update all the tests which are responsible for verifying (maybe at different levels) this requirement. We can add the new assertion to the first test but leave the second one as it is, and all the tests will still pass.
In that case we may have a functional hole on the recommendation side, but everyone reviews all the tests of the application each time a new business rule is added, right?..
To fix the problem, with the help of some Domain-Driven Design and Clean Code concepts, we will **encapsulate** the acceptance criteria (the second assertion is, by the way, violating the encapsulation of the recommendation).
These "encapsulations" will be reused, which ensures that every test requiring the same acceptance criteria checks it in the same way.
But where? In the production code? No: we will use custom assertions.
For example in the resources, when testing the mapping of a domain Profile to a Profile resource, we don't add a unit test inside resources.PreferencesTest to verify the mapping of a Preferences resource, since the Profile, which contains it, will test it by composition.
### Custom Assertions
Mapping a domain object to an adapter one can be done in several places. Storing the mapping validation inside a custom assertion ensures that no mapping test will miss a new acceptance criterion. Custom assertions are used in the domain unit tests, the functional tests, and the infrastructure tests.
AssertJ is extensible: you can write [custom assertions]( dedicated to your own domain in a fluent way. To do so, you need to create an assertions class like [TalkAsserts](
```kotlin
class TalkAssert(actual: Talk) : AbstractAssert<TalkAssert, Talk>(actual, {

    infix fun `is related to topic`(topicName: String) {
        matches({ it.title.contains(topicName) }, "is related to topic $topicName")
    }

    infix fun `is in the format`(talkFormat: TalkFormat) {
        matches({ it.format == talkFormat }, "correspond explicitly to the format $talkFormat")
    }
}
```
If you also look at [RecommendationAssert](
```kotlin
class RecommendationAssert(actual: Recommendation) : AbstractAssert<RecommendationAssert, Recommendation>(actual, {

    infix fun `has talks related to`(topicName: String) {
        matches({
            it.criteria.topics.any { topic -> == topicName }
        }, "recommendations criteria has the topic $topicName")
        actual.talks.those `are related to topic` topicName
    }
}
```
You can also see that there is a real encapsulation of the acceptance criteria: a recommendation related to a specific topic means that at least one of its talks is related to that topic, and that the stored user criteria are related to it as well.
Otherwise there would be an inconsistency. So next time we have to write a test where we want to check this acceptance criteria, we won't have to recode all of it, and we will keep a **single level of abstraction on the assertions** in our tests as well.
And TJWHEN!!! *(Thanks JetBrains We Have Extensions Now)* We can write it in a sexier way than the assertThat. Once your assertion classes are created, you can extend your classes in your tests to attach the assertions to them.
So you'll be able to write things like:
```kotlin
@Then("^the recommended talks correspond to his preferences$")
fun `the recommended talks correspond to his preferences`() {
    val recommendation = testContext.recommendation
    val profile = testContext.createdProfile
    val preferences = profile.preferences

    recommendation.that `corresponds to the criteria` preferences
    recommendation.that `has talks related to` preferences.topics
    recommendation.that `has only talks in the formats` preferences.talksFormats
}
```
To do such things, take a look at [DomainAssertions](
```kotlin
val Recommendation.that: RecommendationAssert
    get() = RecommendationAssert(this)

val Talk.that: TalkAssert
    get() = TalkAssert(this)

val Iterable<Talk>.those: TalksAssert
    get() = TalksAssert(this)

val Profile.that: ProfileAssert
    get() = ProfileAssert(this)
```
We also have custom assertions inside the infrastructure [ResourcesAssertions](
They are mainly used to make sure the "acceptance criteria" of the mapping of a domain object to an adapter one (and also the opposite) are shared by all the mappers.
> AssertJ also provides some [assertions generators]( to automatically get domain-field-based assertions like ``assertThat(talk).hasTitle(title)``.
> This feature is unfortunately not used in the current project.
## Unit Tests
### Domain Object Factories
In Domain-Driven Design, the domain **is not composed of POJOs!** It means domain objects should not expose their state but their behavior, through encapsulation. So mocking a domain object is prohibited.
Why? Let's imagine you mock a Talk object so that it claims a duration of 1 hour while its [format]( is an IGNITE.
That doesn't make sense, right? And what about [the test which is built on top of it](
A domain object, thanks to the validation logic in its constructor, always ensures that it is coherent, so there is no need to check it after creation. This saves us from a lot of bugs!
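Constructor validation can be sketched like this. The class shape, the formats and the duration rule are illustrative assumptions, not the project's actual code:

```kotlin
import java.time.Duration

// Hypothetical formats with an assumed maximum duration per format.
enum class TalkFormat(val maxDuration: Duration) {
    IGNITE(Duration.ofMinutes(5)),
    CONFERENCE(Duration.ofMinutes(60))
}

class Talk(val title: String, val format: TalkFormat, val duration: Duration) {
    init {
        // The invariants live in the constructor: an incoherent Talk cannot exist.
        require(title.isNotBlank()) { "A talk must have a title" }
        // A 1-hour IGNITE, as a mock could pretend, is rejected right here.
        require(duration <= format.maxDuration) {
            "A $format talk cannot last ${duration.toMinutes()} minutes"
        }
    }
}
```

A mock can happily return a 1-hour IGNITE; a real domain object built through this constructor never can.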
But the counterpart is that it makes the tests harder to write: each time a test needs a recommendation, it has to be built correctly, and you have to think about all the business rules...
The best way to fix that is to use domain object factories. In those factories, we put once and for all the creation logic of a domain object, which is reused in every test that needs an instance.
In TalkAdvisor, we have for example [TalkFactory](
```kotlin
fun createTalk(criteria: Criteria): Talk {
    return prepareBuilder()
            .apply { duration = durationFrom(criteria) }
            .build()
}

fun createTalk(): Talk {
    return prepareBuilder().apply { duration = ofMinutes(Random.nextLong(2, 120)) }.build()
}

private fun durationFrom(criteria: Criteria) = criteria.talksFormats.random().randomDuration()
```
As you can see, we have a factory of Talks which takes some criteria as a parameter. The reason is that when you want to create a Recommendation, you have to make sure the stored criteria and the talks are aligned.
So in the [tests]( we reuse them to create a valid Recommendation.
```kotlin
fun `should create a recommendation`() {
    val (criteria, talks) = bootstrap()

    val recommendation = Recommendation(criteria = criteria, talks = talks)
}

private fun bootstrap(): Pair<Criteria, Set<Talk>> {
    val criteria = createCriteria()
    val talks = createTalks(criteria)
    return Pair(criteria, talks)
}
```
**IMPORTANT NOTICE:** You should never use this kind of factory when you care about the values inside, because if someone changes the creation logic, your test will fail. Factories should only be used as black boxes, to quickly bootstrap data for the tests.
When you expect specific values in domain objects, **you should create them explicitly in your test**, as done in [RecommendationControllerTest](
Since the controller test is actually verifying that the values inside the returned JSON are the expected ones (more precisely the preferences of the stored profile), we explicitly create the profile.
> You can also share pre-initialized builders if needed.
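A shared pre-initialized builder can look like this. `TalkBuilder`, its fields and its defaults are illustrative assumptions, not the project's actual classes:

```kotlin
// Hypothetical simplified domain object.
data class Talk(val title: String, val durationInMinutes: Long)

// Builder pre-initialized with valid defaults, so each test overrides
// only the fields it actually cares about.
class TalkBuilder {
    var title: String = "A perfectly valid default title"
    var durationInMinutes: Long = 45

    fun build() = Talk(title, durationInMinutes)
}

// Every test starts from the same pre-initialized builder.
fun prepareBuilder() = TalkBuilder()
```

A test that only cares about the title can then write ``prepareBuilder().apply { title = "DDD" }.build()`` without re-stating the rest.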
### Test Composition
Opening the debate...
Let's now take a look at the [Profile]( resource inside the REST adapter.
```kotlin
data class Profile(private val id: String, val preferences: Preferences) : Identifiable<String> {
    override fun getId() = id
}

fun DomainProfile.toResource(): Profile = Profile(, this.preferences.toResource())
```
Profile is a top-level resource which contains a sub-resource named preferences (the same composition as in the domain). In order to convert a Profile of the domain into a resource, an extension method ``toResource`` has been defined.
You can also see that the resource transformation is cascaded to the inner preferences. When [testing the Profile resource transformation](, we will also test, by transitivity, the transformation of the preferences into a resource.
So there is no need to repeat this test at the [Preferences level](
That's totally fine, since the Preferences resource is never used outside of the transformation of a Profile or a Recommendation (the only objects using it). When adopting a black-box test approach, we don't really care about the implementation, only about the exposed behavior.
Fortunately, with the encapsulation principle, we expose behavior only through a limited number of classes: domain aggregates for entities and value objects, and domain services implementing the API.
It means you only have to write a test where the behavior is exposed. This way you'll limit the number of (useless) tests you have to maintain.
On the other hand, edge cases should be tested directly on the object which has the effective responsibility of dealing with those cases. For example [PreferencesTest]( tests the mapping of an unknown TalkFormat (enum):
```kotlin
fun `should throw IllegalArgumentException when trying to map an unknown TalkFormat`() {
    val topics = listOf(Topic("topic"))
    val talksFormats = listOf("UNKNOWN")
    val preferences = Preferences(topics, talksFormats)

    assertThatThrownBy { preferences.toDomainObject() }
            .hasMessage("No enum constant ${}.UNKNOWN")
}
```
You'll not find this test in the upper levels of the hierarchy of the Preferences resource, like [Profile]( and [Recommendation](, for the following reasons:
* If you move all the edge cases of the children hierarchy to the higher levels (and everywhere in the hierarchy), you'll end up with a lot of hard-to-maintain tests.
* In the previous example, you would have to put the same test inside ProfileTest and RecommendationTest, and it wouldn't change the test coverage of your application at all.
That's basically the concept of test composition.
## Integration Tests, Contract Testing & End-To-End Tests
>The documentation is coming soon !
## Documentation
