Patterns for Testing Laravel Projects

In the last decade, the single biggest 180 I've done is on the importance of testing, specifically unit, integration, and feature testing. There was certainly a time when I found it more valuable to manually verify that my code worked, but as I gained more enterprise experience and the projects I worked on grew larger, it became evident that unit and integration testing is immensely valuable not only from a developer standpoint (it's a huge time saver) but also from a "best practices" standpoint.

Something to keep in mind throughout this post: unit/integration/feature testing is not necessarily a replacement for user experience or user acceptance testing. This is very circumstantial, so just understand that you will likely still need to make changes after releasing features to users - no matter how much testing you've done.


I recently had the opportunity to argue with someone on Reddit about testing in Laravel. It all really started when I wrote, "replace your integration tests with unit tests".

At first glance, these are certainly fighting words. However, I would expect any seasoned developer, instead of attacking the statement, to delve into the why and how of what makes it true.

Testing is rarely used correctly

By and large, Laravel developers don't actually understand what unit/integration testing is used for, and most importantly, don't quite understand the "right" way to scaffold and implement a test suite.

Oftentimes, test cases "duplicate" test coverage. Just as we should avoid duplicating code within the codebase, we should take care to have as few duplicate tests as possible. Hitting the same code multiple times during testing isn't inherently harmful, but the duplication does have costs: the suite may slow to a crawl, coverage of other important areas may be missed, and your test code will inevitably continue to grow alongside the codebase.

Many developers dive into testing their code head first, quickly creating new test cases for things they've just written or are about to write. This is far too hasty in my opinion, and will lead to a large and inefficient test suite.

Unit Testing vs Integration Testing vs Feature Testing

In the context of Laravel, I generally refer to integration and feature tests somewhat synonymously. This is because, in terms of dependency interaction, both integration and feature tests essentially do the same thing: they test multiple components integrating together.

So what we really need to define is the differences between integration and unit testing.

If you can use a unit test, use it.

This is the golden rule for testing in my opinion: do not use an integration test unless you absolutely have to. If a component can be tested independently, then there is really no reason to create an integration test for it.

To draw a clearer line, I like to think of "integration" tests in Laravel as those test cases that will consume a database. This could be SQLite, MySQL, or any other type of database, but if that external service is needed to test something, then it is automatically an integration test.

Another simple rule: do not touch external services in unit tests. This includes databases, external APIs, caches, etc. You should theoretically be able to run your unit test suite without a database, without Redis or another cache, and without internet access on your development machine.

Unit tests, by definition, make assertions about a single logical component. The biggest challenge in constructing these tests is actually remembering that we're only testing a single logical component: it is very easy to start pulling in other dependencies to speed up your test case development.

Splitting Logic between Integration and Unit tests

It is perfectly acceptable to have two tests for the same component: one integration test and one unit test. Depending on the complexity of both of these cases, you may be able to merge them into a single integration test. If you're unable to do that, however, have no fear.

The best example of this in the context of Laravel is model testing. There may be a case where you need to test that a custom attribute on a model is built from the data in the database, and also a case where you just need to ensure a model returns a certain static array. For the custom attribute, you could write an integration test that consumes the app's database; for the static array, you could write a unit test that asserts the array returns the expected data.

This example is simple enough that it could be merged into the integration test case. However, you should be able to see that as the model grows, it may be prudent to split its testing between a unit test and an integration test.
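
To make that split concrete, here's a minimal sketch. The Order model, its columns, and its factory are hypothetical; the point is that the custom attribute depends on stored data while the static array does not.

```php
// app/Models/Order.php (hypothetical)
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Order extends Model
{
    use HasFactory;

    // Custom attribute computed from database-backed columns.
    public function getTotalWithTaxAttribute(): float
    {
        return round($this->subtotal * (1 + $this->tax_rate), 2);
    }

    // Static data that never touches the database.
    public static function statuses(): array
    {
        return ['pending', 'paid', 'shipped'];
    }
}

// tests/Feature/OrderTotalTest.php - integration: consumes the database
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class OrderTotalTest extends TestCase
{
    use RefreshDatabase;

    public function test_total_with_tax_is_built_from_stored_data(): void
    {
        $order = Order::factory()->create(['subtotal' => 100, 'tax_rate' => 0.2]);

        $this->assertSame(120.0, $order->fresh()->total_with_tax);
    }
}

// tests/Unit/OrderStatusesTest.php - unit: no database required
class OrderStatusesTest extends \PHPUnit\Framework\TestCase
{
    public function test_statuses_returns_the_expected_array(): void
    {
        $this->assertSame(['pending', 'paid', 'shipped'], Order::statuses());
    }
}
```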

Stop testing the framework

This is a big pet peeve of mine: Laravel is a well-tested framework, and there is absolutely no need for us to assert that Eloquent, for example, works as expected. It is perfectly reasonable to assume that the framework will function as intended if you are using a stable release.

What this means in simpler terms is that Laravel itself already contains a suite of unit, integration, and feature tests. Those test cases cover the functionality of the framework, and thus any further testing of framework components within our application is simply duplicating those test cases (and we know that's a bad thing). You can save yourself a lot of time (both in writing test cases and in the run-time of the test suite) by skipping these duplicated tests and focusing only on your application code.

One of the biggest offenders here is the Eloquent ORM. Countless times, I've seen tests that simply assert that $model->save() saves a record to the database. Again: we already know this will happen correctly. Instead, those same tests should be focused on something like complex model queries.

There is certainly a case to be made for testing parts of Eloquent. For example, if you have a complex query on one of your models, you should write an integration test that asserts that the results returned from that query are as expected. While Eloquent is responsible for running the query, it is not responsible for writing it. As you may be able to guess, the general rule of thumb is then: write tests for custom queries or operations.
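
As a sketch of what that looks like, assume a hypothetical Post model with a local scope. The scope's logic is ours, not Eloquent's, so it earns an integration test against a real test database:

```php
// app/Models/Post.php (hypothetical)
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    use HasFactory;

    // Our query logic; Eloquent only executes it.
    public function scopePublishedRecently($query)
    {
        return $query->whereNotNull('published_at')
                     ->where('published_at', '>=', now()->subDays(7));
    }
}

// tests/Feature/PostQueryTest.php
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class PostQueryTest extends TestCase
{
    use RefreshDatabase;

    public function test_published_recently_excludes_old_and_draft_posts(): void
    {
        $fresh = Post::factory()->create(['published_at' => now()->subDays(2)]);
        Post::factory()->create(['published_at' => now()->subDays(30)]); // too old
        Post::factory()->create(['published_at' => null]);               // draft

        $results = Post::publishedRecently()->get();

        // Assert on our query's behavior, not on Eloquent's ability to fetch rows.
        $this->assertCount(1, $results);
        $this->assertTrue($results->contains($fresh));
    }
}
```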

Another decent example that isn't Eloquent-related is the LengthAwarePaginator. There's no need to test that the items in the paginator are accurate (as long as you're using the $model->paginate() method), or that the paginator metadata has a certain structure. Instead, you should test that: 1) the data you expect to be paginated is in the response, 2) the number of pages is what you'd expect for the given request, and 3) the structure of the data matches what you'd expect based on the request. We are still "testing" the paginator, but we're saving ourselves the hassle of re-testing its foundation, which is already covered within the framework.
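
As a sketch, assuming a hypothetical /api/posts route that simply returns Post::paginate() (15 per page by default, with last_page at the top level of the JSON), those three assertions might look like this:

```php
// tests/Feature/PostPaginationTest.php
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class PostPaginationTest extends TestCase
{
    use RefreshDatabase;

    public function test_posts_paginate_as_expected(): void
    {
        Post::factory()->count(40)->create();

        $this->getJson('/api/posts?page=2')
            ->assertOk()
            // 1) the data we expect to be paginated is in the response
            ->assertJsonCount(15, 'data')
            // 2) the page count matches the dataset (40 posts / 15 per page = 3 pages)
            ->assertJsonPath('last_page', 3)
            // 3) the structure matches what the client should receive
            ->assertJsonStructure(['data' => ['*' => ['id', 'title']]]);
    }
}
```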

Bottom-Up Approach

When I write tests, I like to start at the "bottom" and then go "up". You could almost think of this as some form of "test abstraction", which may make more sense momentarily.

What are your core application dependencies? For example, do you have an external service that calls some API? These types of things are at the "bottom". Controllers and similar components would be things at the "top". So, the "top" is essentially all of the components that almost exclusively depend on other components, and the "bottom" is all of the components that are being depended on.

Start by unit-testing things like your external API services. Ensure that every method is covered, and that every branch (if statements, for example) is covered as well. What you'll end up with is very high confidence that your external service will function as intended. With that high confidence, we can effectively stop testing that service: tests of components that depend on the service no longer need to exercise the real implementation, and can instead mock the service and simply assert that its expected methods are called.

By asserting that the service's method is called, we can reasonably say that the call will be successful since the process within that method is unit-tested independently.
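
Here is a sketch of that first step. The WeatherApiService and its endpoint are hypothetical, and Guzzle's MockHandler keeps the test off the network entirely; the important design choice is injecting the HTTP client so it can be swapped out in tests:

```php
use GuzzleHttp\Client;
use GuzzleHttp\Handler\MockHandler;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Psr7\Response;
use PHPUnit\Framework\TestCase;

// app/Services/WeatherApiService.php (hypothetical)
class WeatherApiService
{
    public function __construct(private Client $http) {}

    public function currentTemperature(string $city): float
    {
        // In production the client would be configured with a base_uri.
        $response = $this->http->get('/v1/current', ['query' => ['city' => $city]]);
        $payload = json_decode((string) $response->getBody(), true);

        return (float) $payload['temp_c'];
    }
}

// tests/Unit/WeatherApiServiceTest.php - no database, no network
class WeatherApiServiceTest extends TestCase
{
    public function test_it_parses_the_temperature_from_the_api_payload(): void
    {
        // Queue a canned response; the request never leaves the machine.
        $handler = new MockHandler([
            new Response(200, [], json_encode(['temp_c' => 21.5])),
        ]);
        $client = new Client(['handler' => HandlerStack::create($handler)]);

        $service = new WeatherApiService($client);

        $this->assertSame(21.5, $service->currentTemperature('Berlin'));
    }
}
```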

Mocking

You can never have enough mocks. Well, you can, but try not to think about that for a while. As mentioned above, when testing an external service dependency within another component, we can just mock that external service and assert that certain methods are called on it.

For example, if we have two services - one an external API service, the other a formatter for the results from the first - we can write just two unit tests, with no integration testing required. First, write a robust unit test for your external API service; this ensures it functions as expected. Next, write a unit test for the formatter service, and instead of instantiating the actual external API service, mock it and provide that mock as the constructor argument to the formatter service.
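
Continuing the hypothetical WeatherApiService from the earlier sketch, the formatter's unit test could use Mockery (included in Laravel's default dev dependencies) to stand in for the already-tested API service:

```php
use Mockery;
use PHPUnit\Framework\TestCase;

// app/Services/WeatherFormatter.php (hypothetical)
class WeatherFormatter
{
    public function __construct(private WeatherApiService $api) {}

    public function display(string $city): string
    {
        return sprintf('%s: %.1f°C', $city, $this->api->currentTemperature($city));
    }
}

// tests/Unit/WeatherFormatterTest.php
class WeatherFormatterTest extends TestCase
{
    protected function tearDown(): void
    {
        Mockery::close(); // verifies the mock's expectations
    }

    public function test_it_formats_the_temperature_for_display(): void
    {
        // The API service is mocked, not instantiated: it's already unit-tested.
        $api = Mockery::mock(WeatherApiService::class);
        $api->shouldReceive('currentTemperature')
            ->once()
            ->with('Berlin')
            ->andReturn(21.5);

        $formatter = new WeatherFormatter($api);

        $this->assertSame('Berlin: 21.5°C', $formatter->display('Berlin'));
    }
}
```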

Mocking is really what brings us into the "abstracted testing" concept. I like to think of it as "abstracted" since the "mocked" implementations of already-tested components are sort of like "parents" to the components being tested.

Creating Useful Integration Tests

As mentioned before, for the most part my definition of an integration test in Laravel is a test that touches the database. Technically speaking, an integration test could be any test that integrates and tests two or more components. Therefore there is a fine line within your application tests between what constitutes an integration test and what constitutes a unit test.

If you're using the "bottom-up approach" efficiently, there isn't much of a need to integrate multiple well-unit-tested components without mocking. That is to say, tests of Component B that consume Component A don't need to actually instantiate Component A, but instead can mock Component A and provide the mock to Component B.

However if you stick with this pattern exclusively, you will likely end up with few to no integration tests at all... And that is not ideal.

Depending on the business logic within your application, the mix of test types you end up with will vary. In some cases this could mean no integration or unit tests at all, but in most cases (especially those where any external service interaction takes place, including database interaction) you should have some integration and feature tests.

After building out the "foundation" of your unit-test suite using the bottom-up approach, you should then begin considering which processes within your application are highly dependent on different types of data or behavior from their specific dependencies. What this means is basically asking yourself: now that I know my individual components work, what pieces of my application need to be end-to-end tested with as-close-to-production-as-possible data?

Generally, that answer is one or more of the following pieces:

  • HTTP Request Controllers - testing, from the perspective of a client, that incoming requests and data are validated and processed as expected
  • Database Interaction - taking care not to re-test the ORM, this involves testing queries within your application that return varying results based on the data in the database. Hint: this is usually complex Eloquent queries, or "repository" tests if you're using the repository pattern.
  • Data Transformation Pipelines - testing that a series of components that depend on each other to transform varying pieces of data work as expected, with multiple tests for different data structures, etc.

HTTP Request Controllers

The biggest benefit here is being able to perform assertions on data that you'd expect to see from the client side of your application. If your Laravel app is an API, then you can perform assertions on JSON data structures, response codes, etc. This is valuable from a backend development standpoint for the purpose of validating that the advertised API schema/documentation matches what the client will actually be receiving.

Additionally, you gain the obvious benefit of integration/feature testing most of your application framework in one fell swoop. When testing HTTP controllers you're making requests to an endpoint, which spins up the entirety of your application's context. This implicitly ensures that providers, configuration, and so on are all set up to, at the very least, allow your application to serve a request.
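
As a sketch, assuming a hypothetical /api/users resource, a feature test in this spirit asserts on status codes, validation behavior, and the JSON schema the client will actually receive:

```php
// tests/Feature/UserEndpointTest.php
use App\Models\User;
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class UserEndpointTest extends TestCase
{
    use RefreshDatabase;

    public function test_index_matches_the_documented_schema(): void
    {
        User::factory()->count(3)->create();

        $this->getJson('/api/users')
            ->assertOk()
            ->assertJsonStructure(['data' => ['*' => ['id', 'name', 'email']]]);
    }

    public function test_store_rejects_invalid_input(): void
    {
        $this->postJson('/api/users', ['email' => 'not-an-email'])
            ->assertStatus(422)
            ->assertJsonValidationErrors(['name', 'email']);
    }
}
```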

Database Interaction

As mentioned before, tests for processes that interact with the database should be written when those processes will return varying data based on the data stored in the database. Additionally, complex queries (which can be covered by the previous explanation) should be tested against a realistic database instance.

This can be a testing-specific MySQL database, SQLite, or any other database of your choice. But in general a database integration test should rely on a database service being available.

Remember: don't re-test Eloquent.
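
Following the "repository" hint from earlier, here's a sketch reusing the hypothetical Order model: a repository method whose results vary entirely with the stored data, which is exactly what makes it worth an integration test:

```php
// app/Repositories/OrderRepository.php (hypothetical)
use Illuminate\Support\Collection;

class OrderRepository
{
    public function overdue(): Collection
    {
        return Order::whereNull('paid_at')
            ->where('due_at', '<', now())
            ->get();
    }
}

// tests/Feature/OrderRepositoryTest.php
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class OrderRepositoryTest extends TestCase
{
    use RefreshDatabase;

    public function test_overdue_results_vary_with_the_stored_data(): void
    {
        Order::factory()->create(['due_at' => now()->subDay(), 'paid_at' => null]);  // overdue
        Order::factory()->create(['due_at' => now()->addDay(), 'paid_at' => null]);  // not due yet
        Order::factory()->create(['due_at' => now()->subDay(), 'paid_at' => now()]); // already paid

        $this->assertCount(1, (new OrderRepository())->overdue());
    }
}
```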

Data Transformation Pipelines

This is by far the vaguest category in the list. "Data transformation pipeline" refers to any abstract process that 1) depends on the data being provided and 2) depends on two or more components interacting. An easy example is SQL database interaction: it relies on the data in the database and on multiple components (a database connector and the process that generates the query, possibly more) to work.

This could also refer to processes that rely on cached data, stored files, or anything similar.
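
As a sketch, reusing the hypothetical Order model and OrderRepository from above with a made-up SalesReportBuilder on top: nothing is mocked here on purpose, because the interaction between real components and real stored data is exactly what's under test.

```php
// app/Reports/SalesReportBuilder.php (hypothetical)
class SalesReportBuilder
{
    public function __construct(private OrderRepository $orders) {}

    public function build(): array
    {
        // Pipeline: fetch (repository) -> transform (model accessor) -> aggregate.
        $overdue = $this->orders->overdue();

        return [
            'overdue_count'          => $overdue->count(),
            'overdue_total_with_tax' => $overdue->sum(fn ($order) => $order->total_with_tax),
        ];
    }
}

// tests/Feature/SalesReportTest.php
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class SalesReportTest extends TestCase
{
    use RefreshDatabase;

    public function test_the_report_reflects_the_stored_data(): void
    {
        Order::factory()->create([
            'subtotal' => 100, 'tax_rate' => 0.2,
            'due_at' => now()->subDay(), 'paid_at' => null, // overdue: included
        ]);
        Order::factory()->create([
            'subtotal' => 50, 'tax_rate' => 0.2,
            'due_at' => now()->addDay(), 'paid_at' => null, // not overdue: excluded
        ]);

        $report = (new SalesReportBuilder(new OrderRepository()))->build();

        $this->assertSame(1, $report['overdue_count']);
        $this->assertSame(120.0, $report['overdue_total_with_tax']);
    }
}
```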


Coverage, and Conclusion

I want to wrap this up with a note on code coverage: why it matters, and why it doesn't. Code coverage reporting is a powerful but dangerous tool. It helps developers understand how effective their test suite is, but it can also breed complacency within the development team with regard to the test suite. Avoid using coverage as a ticket to calling a test "effective"; instead, use it to inspect which parts of your code are being hit by tests, and how those tests themselves are written.

Getting a green check mark on a line in a code coverage report is not difficult, so it's important that, in addition to simply running that code as part of a test case, we perform assertions that effectively ensure the code works as the developer expects, under as many cases as possible. It can be useful to continuously inspect the test cases covering your most sensitive business logic for possible regressions and effective coverage whenever new production use cases pop up.

In conclusion: testing is abstract, it's circumstantial, and your implementation of a test suite will depend on your app's needs, your team's needs, and your business's needs. I believe that a mark of a good developer is the ability to transfer abstract concepts from implementation to implementation and use those concepts effectively, and that is something I've tried to do with testing no matter what type of project I'm working on.

As with all other types of software development, good test case development requires practice and trial/error. I encourage you to take an introspective look at how your application's tests are currently structured and ask yourself, "how can this be better?". If we all ask ourselves that question daily, we'll end up writing better software across the board.

Happy testing!

Nicole Wilging
