Integration Testing: The Complete Guide for Teams
Unit tests pass. The build is green. You deploy, and within an hour a customer reports that the checkout flow is broken. The payment service responds correctly in isolation, the order service handles its logic fine on its own, but when the two talk to each other in production, something falls apart. This is the exact class of failure that integration testing exists to catch. It verifies that the pieces of your system work together, not just individually, and it is one of the highest-value investments an engineering team can make in release confidence.
What integration testing actually means
Integration testing validates the interaction between two or more components, modules, or services. It sits between unit tests (which exercise a single function or class in isolation) and end-to-end tests (which simulate a full user journey through the entire application). The scope of an integration test is the boundary: the API call between services, the query that hits the database, the message that lands on the queue.
A unit test for a payment function might verify that it calculates tax correctly given a hardcoded input. An integration test for the same function verifies that it sends the right payload to the Stripe API, receives a response, and writes the correct record to the database. The logic is the same. What changes is the presence of real dependencies rather than mocks.
This distinction matters because a surprising number of production bugs live at boundaries. The serialization format changed but only one side was updated. The database migration ran but the ORM cache still holds the old schema. The queue consumer expects a field that the producer stopped sending two releases ago. These are not logic errors. They are integration errors, and no amount of unit testing will surface them.
Why CI pipelines miss integration failures
Most CI pipelines are optimized for speed, which means they lean heavily on unit tests and lightweight checks. That is the right default for the pre-merge gate. But the result is that integration failures only surface when code reaches a deployed environment, by which point the feedback loop is slower and the blast radius is larger.
The root cause is environmental. Unit tests run in memory with mocked dependencies. Integration tests need a database, a message broker, an external API, or at minimum a test double that behaves like the real service. Spinning up those dependencies in CI adds time and complexity, so teams defer integration coverage to "later" and later never arrives.
Container-based test infrastructure has closed much of this gap. Tools like Docker Compose and Testcontainers let you spin up Postgres, Redis, or Kafka as part of the test run and tear them down when the suite finishes. The overhead is real but manageable. A well-structured integration suite for a mid-size service can run in under five minutes with containerized dependencies.
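One common shape for this is a compose file that declares the test dependencies, started before the suite and torn down after. The sketch below is illustrative; image versions, ports, and credentials are placeholder assumptions you would pin for your own stack.

```yaml
# Sketch: containerized dependencies for an integration suite.
# Versions, ports, and credentials are placeholders -- pin your own.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app_test
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```

With a file like this, `docker compose up -d` before the test run and `docker compose down -v` afterward gives every CI job a clean, disposable set of real dependencies.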
Types of integration tests
Not all integration tests serve the same purpose. Understanding the types helps you decide where to invest and what to expect from each layer.
- Component integration tests verify that modules within a single service work together correctly. For example, testing that the controller layer calls the service layer, which calls the repository layer, and the full chain produces the expected result against a real database. These tests catch wiring errors, misconfigured dependency injection, and broken query logic.
- Service integration tests verify that two separate services communicate correctly over their shared interface. If your order service calls your inventory service over HTTP, a service integration test sends a real request and validates the response contract. Contract testing tools like Pact formalize this by letting each side independently verify its half of the agreement.
- Data integration tests verify that read and write operations against shared data stores produce consistent results. These are critical when multiple services write to the same database or when a pipeline transforms data between systems. A data integration test might insert a record, run a migration, and confirm the transformed output matches expectations.
Most teams benefit from investing heavily in component integration tests first, since they catch the highest volume of issues with the lowest infrastructure cost. Service integration tests become essential as your architecture moves toward microservices or third-party API dependencies.
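A lightweight way to start with service-level contract checks, short of adopting a full tool like Pact, is for the consumer to validate response payloads against the shape it depends on. The sketch below assumes a hypothetical inventory response with `sku` and `available` fields; the names and shape are illustrative, not a real API.

```typescript
// Sketch: consumer-side contract check. The InventoryResponse shape and
// field names are illustrative assumptions, not a real provider's API.
interface InventoryResponse {
  sku: string;
  available: number;
}

export function parseInventoryResponse(body: unknown): InventoryResponse {
  const record = body as Record<string, unknown> | null;
  // Fail loudly if the provider stops sending a field this consumer needs --
  // exactly the class of drift that silent deserialization would hide.
  if (typeof record?.sku !== "string" || typeof record?.available !== "number") {
    throw new Error("inventory response violates the expected contract");
  }
  return { sku: record.sku, available: record.available };
}
```

Running this parser inside a service integration test turns a quiet schema drift into an immediate, attributable failure.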
Writing effective integration tests
Good integration tests share a few characteristics that distinguish them from unit tests dressed up with a database connection.
Test the boundary, not the logic. If you are re-testing business logic that a unit test already covers, the integration test is not adding value. Focus on the handoff: does the request reach the right endpoint? Does the response deserialize correctly? Does the database transaction commit and become visible to subsequent reads?
Use real dependencies when practical. The point of integration testing is to exercise real interactions, so replacing every dependency with a mock defeats the purpose. Use a real database (containerized in CI), real queue infrastructure, and real serialization. Mock only what you cannot control, such as third-party payment APIs or rate-limited external services, and use recorded responses or sandbox environments for those.
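For the dependencies you genuinely cannot control, one approach is a recorded-response double: a small class that implements the same interface as the real client but returns a payload captured from the provider's sandbox. The `PaymentClient` interface and response shape below are illustrative assumptions, not a real provider SDK.

```typescript
// Sketch: keep the database and queues real; replace only the third-party
// payment API. PaymentClient and the response shape are hypothetical.
interface PaymentClient {
  charge(amountCents: number): Promise<{ id: string; status: string }>;
}

// A recorded-response double: returns a canned payload captured from the
// provider's sandbox, so tests never hit a rate-limited external service.
export class RecordedPaymentClient implements PaymentClient {
  async charge(_amountCents: number): Promise<{ id: string; status: string }> {
    return { id: "ch_recorded_001", status: "succeeded" };
  }
}
```

Injecting this double where the real client would go keeps the rest of the test path, including the database writes it triggers, fully real.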
Isolate test data. Each test should create its own data, run its assertions, and clean up afterward. Shared test data across integration tests is the single most common source of flaky failures. Use transactions that roll back, unique identifiers per test run, or truncation between tests to keep the environment clean.
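Unique identifiers per test are cheap to implement. The helper below is a minimal sketch; `uniqueSku` is a hypothetical name, not part of any framework.

```typescript
// Sketch: collision-resistant identifiers so rows created by one test can
// never clash with another test or a parallel CI run. `uniqueSku` is a
// hypothetical helper, not a framework API.
import { randomUUID } from "node:crypto";

export function uniqueSku(prefix: string): string {
  // A UUID fragment keeps identifiers short while making collisions implausible.
  return `${prefix}-${randomUUID().slice(0, 8)}`;
}

// Usage inside a test: every run inserts rows no other test depends on.
// const product = await db.products.create({ sku: uniqueSku("TEST"), stock: 10 });
```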
Here is a minimal example of a component integration test using a real database:
describe("OrderService", () => {
  let db: Database;
  let orderService: OrderService;

  beforeAll(async () => {
    // Real containerized database, migrated to the current schema.
    db = await createTestDatabase();
    await db.migrate();
    // Wire the service against the real database, not a mocked repository.
    orderService = new OrderService(db);
  });

  afterAll(async () => {
    await db.close();
  });

  afterEach(async () => {
    // Isolate test data: no state leaks between tests.
    await db.truncateAll();
  });

  it("creates an order and updates inventory", async () => {
    const product = await db.products.create({
      sku: "TEST-001",
      stock: 10,
    });

    const order = await orderService.place({
      productId: product.id,
      quantity: 2,
    });

    expect(order.status).toBe("confirmed");

    // Verify the write committed and is visible to a subsequent read.
    const updated = await db.products.findById(product.id);
    expect(updated.stock).toBe(8);
  });
});

This test exercises the full path from service call to database write and back. If the query is wrong, the transaction fails to commit, or the inventory update has an off-by-one error, this test catches it. A unit test with a mocked repository would not.
Where integration tests fit in the pipeline
When to run integration tests depends on how fast they are. If your integration suite finishes in under five minutes, run it in the pre-merge gate alongside your unit tests. If it takes longer, run it in the post-deploy stage against staging so it does not block developer workflow.
A practical pipeline structure for a team with meaningful integration coverage looks like this:
- Pre-merge: unit tests plus fast component integration tests (under five minutes total). These block the merge on failure.
- Post-deploy to staging: service integration tests and slower component tests. These run against the deployed environment and report results without blocking.
- Pre-production gate: critical integration scenarios combined with regression coverage to verify that nothing has broken between staging and the promotion window.
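The pre-merge gate from this layered structure can be sketched as a CI job with a containerized database. GitHub Actions is shown here as one option; the job name, image version, and npm script names are placeholder assumptions.

```yaml
# Sketch: pre-merge gate with a containerized database.
# Platform, versions, and script names are placeholders.
name: pre-merge
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: test
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run test:unit
      - run: npm run test:integration  # fast component integration tests only
```

The slower service integration tests would live in a separate workflow triggered by the staging deployment event, so they report without blocking the merge.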
This layered approach is covered in more detail in the guide on adding QA to your CI/CD pipeline, which explains how to wire each gate to deployment events so tests run automatically.
Common mistakes to avoid
Teams that are new to integration testing tend to make a few predictable mistakes that undermine the value of the suite.
Testing too much through the integration layer. If your integration tests are verifying every edge case of a calculation, they will be slow and fragile. Push edge-case coverage down to unit tests and use integration tests to verify that the wiring works for the common paths.
Sharing state between tests. When one test inserts a user record and another test depends on that record existing, any change in execution order breaks the suite. This is the leading cause of flaky integration tests and it is entirely preventable by isolating test data.
Ignoring flaky tests instead of fixing them. A flaky integration test is a test telling you that the boundary it covers is unreliable. Sometimes the flakiness is in the test setup, sometimes it is in the system under test. Either way, skipping it or adding a retry teaches the team to distrust the suite. Fix the root cause or remove the test entirely.
Mocking the thing you are supposed to test. If your "integration test" mocks the database, the HTTP client, and the message broker, it is a unit test with extra steps. The entire point is to exercise real interactions. Keep the mocks to a minimum and use them only for dependencies that are genuinely out of your control.
Getting started with integration testing
If your team currently has little or no integration coverage, the path forward is incremental. Start with the single most failure-prone boundary in your system. For most teams, that is the database layer, specifically the queries and transactions that power the primary user workflow.
Write five to ten integration tests covering that boundary. Run them in CI using a containerized database. Measure the execution time and verify that it fits within your pre-merge budget. If it does, add it to the required checks. If it does not, move it to the post-deploy gate and continue expanding coverage from there.
Over time, the suite becomes a safety net that catches boundary failures before they reach production. Tracking the outcomes of these tests feeds into the QA metrics that matter most for engineering leaders, specifically escaped defect rate and mean time to detection for integration-level bugs.
Integration testing is not glamorous. It does not produce dramatic screenshots or impressive dashboards. What it does is catch the class of bug that unit tests structurally cannot reach, which happens to be the same class of bug that generates production incidents and on-call pages. If your CI pipeline is green but your staging environment keeps surprising you, the gap is almost certainly at the integration layer. Building that coverage does not require a new framework or a dedicated QA team, but if your engineering bandwidth is stretched thin, a managed QA service can help build and maintain the suite while your developers focus on shipping features.
Ready to level up your QA?
Book a free 30-minute call and see how Pinpoint plugs into your pipeline with zero overhead.