Are You Accidentally Killing Your Database?

Imagine you're building a simple admin portal feature—nothing too complex...just a page where internal staff can view notifications that have been sent out. The goal is to let them review what was sent, when it went out, and who received it. There's a Notification model and a UserNotification model that links each notification to one or more recipients. The structure is clean, the relationships are well defined, and the feature works fine during development.

Then the data begins to grow.

Dozens of notifications become hundreds, then thousands, each targeting dozens or maybe hundreds of users. Suddenly, that simple feature starts feeling sluggish. The page takes longer to load, queries begin to pile up, and you're left wondering what went wrong with a feature that didn't seem complicated to start with.

This is precisely the kind of problem I've run into more than once. The system works perfectly under light usage, but falls apart when exposed to real-world data. The issue isn't in the business logic. It almost always comes down to how the database is being accessed. And it’s usually fixable.

In this article, I'll walk through one such scenario. I'll highlight where the problems tend to show up, how I've learned to spot them early, and the changes I now make to avoid silently overloading the database. These patterns apply to any feature that works with large volumes of related data.

The Admin Notification Feature

In this scenario, the feature is part of an internal admin portal. The purpose is to provide staff visibility into the notifications that have been sent and to whom they were delivered. The interface lists notifications with their content, the time they were sent, and a breakdown of recipients, along with whether those recipients have read the message or not.

The underlying structure is pretty standard. There is a Notification model that stores the actual message and metadata, such as the subject line, delivery mode, and timestamps. Each notification may be sent to a single user, a handful of specific users, or every user associated with a customer account. To track this, there is a related UserNotification model, which links a notification to individual recipients and stores information like read status and delivery state.

Something like this:

// Notification.php
public function userNotifications()
{
    return $this->hasMany(UserNotification::class);
}

// UserNotification.php
public function user()
{
    return $this->belongsTo(User::class);
}

It's a reasonable structure: it maps cleanly to the business requirements and makes it easy to build out views and reports based on this relationship. However, as the data grows, it can become problematic if care isn't taken with how it's queried and rendered.

The remainder of this article explores the various performance issues that can arise in a setup like this, explains how to identify them using the appropriate tools, and outlines my approach to resolving them by modifying how the data is accessed.

Problem 1: Too Many Queries in One Request

One of the first signs of trouble with this feature is that the notification list page becomes slower over time. Everything loads fine when there are a few records, but once you display dozens of notifications, each with its recipients, the request time starts to increase. What's happening under the hood is often a flood of small queries triggered by lazy loading.

The usual culprit is a loop that renders a list of notifications and, for each one, accesses a related model, such as userNotifications. That access may not appear dangerous at first, as it works and returns the expected data. But without eager loading, Laravel runs a new query for every single notification.

If you have 50 notifications and each one triggers its own query to load recipients, that's 51 queries in one request... And if any of those related models lazy load their own relationships, things get even worse.

This is the classic case where eager loading makes all the difference. By using with(), you can load all the required data in just a few queries, regardless of the number of notifications being rendered.

$notifications = Notification::with('userNotifications.user')->paginate(25);

Now you're asking the database once for the notifications, once for the related recipients, and once for their user records: three queries in total. This eliminates unnecessary round trips and removes surprises when usage increases.

Whenever I'm working with relationships in list views or reports, I default to eager loading. Even if the performance isn't an issue during development, I assume the data will grow, and I'd rather not rely on query caching or hope for the best.
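When a collection has already been fetched by the time I realize a relationship is needed, loadMissing() is a handy fallback: it eager loads the relationship in one additional query per missing relation, and only for the models that don't already have it. A small sketch, assuming the same relationships as above:

// The notifications were fetched elsewhere without the relationship loaded
$notifications->loadMissing('userNotifications.user');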

Problem 2: N+1 Queries Sneaking In

The first time I ran into a serious N+1 issue, I didn't even realize what was happening. The feature seemed fast enough, and the queries didn't appear to be broken. But when I opened Laravel Telescope and looked at a single request, I saw dozens of nearly identical queries firing off one after another. It was the same SELECT statement repeated over and over, just with a different user_notification_id or user_id in the WHERE clause.

This is the classic N+1 query problem.

In a setup like this (where each notification can be linked to many users through a UserNotification model), it's easy to accidentally load the related data one row at a time. Even if you've added with() for the primary relationship, it can still occur deeper in the tree, such as when rendering user details or filtering based on read status.

Here's an example of what that might look like without proper eager loading:

@foreach ($notifications as $notification)
    @foreach ($notification->userNotifications as $userNotification)
        {{ $userNotification->user->name }}
    @endforeach
@endforeach

Even though this code appears clean, it quietly triggers a new query every time $userNotification->user is accessed, unless that relationship has also been eager loaded.

The fix is straightforward. Load what you need up front:

$notifications = Notification::with('userNotifications.user')->paginate(25);

Sometimes I also need to apply constraints to a nested relationship, like filtering out read notifications:

$notifications = Notification::with(['userNotifications' => function ($query) {
    $query->whereNull('read_at');
}])->paginate(25);

This pattern provides the performance benefit of eager loading without loading unnecessary data.
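If you need both the read-status constraint and the nested user data, the two can be combined; a small sketch using the same models as above:

$notifications = Notification::with(['userNotifications' => function ($query) {
    // Only unread recipients, with their users loaded in the same pass
    $query->whereNull('read_at')->with('user');
}])->paginate(25);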

Whenever I'm working with nested relationships, I check Telescope while developing. If I see a repeating pattern in the query log—same table, same structure, changing only by ID—I know I've got an N+1 issue to clean up.

Problem 3: Loading Too Much Data at Once

The previous issues were all about the number of queries being executed. This one is different; it's about how much data you're pulling in at once and what you're doing with it.

In admin-facing features, it's common to need to process or display a lot of records. You may be generating a report of all unread notifications, or exporting notification logs for a compliance request. The temptation is to write something like this:

$notifications = Notification::all();

foreach ($notifications as $notification) {
    // process or modify each one
}

The problem is that all() loads every record into memory. If you're dealing with hundreds or thousands of rows, especially those with nested relationships, this can easily exceed your memory limits or cause timeouts. Additionally, if you’re modifying records as you loop through them, you might inadvertently affect your result set.

🥴 You know that someone has seen this issue before when you spot an ini_set('memory_limit', '2G') in your codebase 🙈

I've encountered this issue a few times when writing data cleanup jobs or background tasks. The safer approach is to use chunkById(), which loads records in fixed-size batches using indexed IDs. This approach avoids loading everything at once and prevents looping forever if records are being updated during the process.

Notification::chunkById(100, function ($notifications) {
    foreach ($notifications as $notification) {
        $notification->update(['archived' => true]);
    }
});

This pattern is efficient, predictable, and much easier to reason about under load. It's also safer for job retries, especially when you're updating or deleting records along the way.

But using chunkById() isn't a magic fix. Even with a good structure, the logic inside it can still be inefficient 😅.

Here's an example of that.

Let's say you're reviewing notifications and want to check:

  • Which users received the notification
  • Whether their customer account has a specific feature enabled
  • Whether that user has made any posts in the last 3 days

You're trying to stay efficient, so you're chunking the notifications. But you nest another chunk for UserNotification records, and inside that, you start running manual queries for each user.

$importantNotificationIds = [];

Notification::chunkById(50, function ($notifications) use (&$importantNotificationIds) {
    foreach ($notifications as $notification) {
        UserNotification::where('notification_id', $notification->id)
            ->chunkById(10, function ($userNotifications) use (&$importantNotificationIds, $notification) {
                foreach ($userNotifications as $userNotification) {
                    // Lazy loads the related user: another query per row, since it isn't eager loaded
                    $user = $userNotification->user;

                    // Query 1: Check if the user's customer has a specific feature
                    $hasFeature = DB::table('customer_features')
                        ->where('customer_id', $user->customer_id)
                        ->where('feature', 'notifications_v2')
                        ->exists();

                    // Query 2: Check if the user has made any posts in the past 3 days
                    $hasRecentPosts = DB::table('posts')
                        ->where('user_id', $user->id)
                        ->where('created_at', '>=', now()->subDays(3))
                        ->exists();

                    if ($hasFeature && $hasRecentPosts) {
                        $importantNotificationIds[] = $notification->id;
                    }
                }
            });
    }
});

At a glance, this feels safe: you're chunking both notifications and user records. But inside the loop, you're running two manual queries per user (on top of lazy loading each user). So, how bad is it?

Let's do the math.

If you're processing a page of 50 notifications, and each notification has 10 users linked (a conservative example):

  • 50 notifications x 10 users = 500 users
  • 2 manual queries per user = 1,000 queries total

That's in addition to the queries used to fetch the notifications and user notification records themselves.

Although chunkById() keeps memory usage under control, this logic still hits the database more than a thousand times. You've solved one layer of the problem (bulk loading) but reintroduced another (N+1-style querying inside a loop).

In this case, I would eager load the user relationships up front and resolve the feature and recent-post checks with a couple of bulk queries per chunk, keeping the results in memory for quick lookups inside the loop, as in the sketch below.
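Here's one way that restructuring might look. It's a sketch under the same assumptions as the example above (the customer_features and posts tables and the notifications_v2 flag are hypothetical): eager load users alongside the user notifications, then answer the feature and recent-post questions with two bulk queries per chunk instead of two per user.

$importantNotificationIds = [];

Notification::with('userNotifications.user')->chunkById(50, function ($notifications) use (&$importantNotificationIds) {
    // Gather every user in this chunk once
    $users = $notifications->pluck('userNotifications')->flatten()->pluck('user')->filter();

    // Bulk query 1: which customers have the feature enabled
    $customerIdsWithFeature = DB::table('customer_features')
        ->whereIn('customer_id', $users->pluck('customer_id')->unique())
        ->where('feature', 'notifications_v2')
        ->pluck('customer_id')
        ->all();

    // Bulk query 2: which users have posted in the last 3 days
    $userIdsWithRecentPosts = DB::table('posts')
        ->whereIn('user_id', $users->pluck('id')->unique())
        ->where('created_at', '>=', now()->subDays(3))
        ->distinct()
        ->pluck('user_id')
        ->all();

    foreach ($notifications as $notification) {
        foreach ($notification->userNotifications as $userNotification) {
            $user = $userNotification->user;

            if (in_array($user->customer_id, $customerIdsWithFeature)
                && in_array($user->id, $userIdsWithRecentPosts)) {
                $importantNotificationIds[] = $notification->id;
                break; // one qualifying recipient is enough
            }
        }
    }
});

Each chunk of 50 notifications now costs a handful of queries instead of hundreds, and memory stays bounded by the chunk size.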

Problem 4: Unpaginated Views and Over-fetching Data

Not every performance problem shows up in the database logs. Sometimes, the issue lies in the amount of data being sent back to the client, particularly in admin tools or API responses.

One of the easiest ways to accidentally overload a feature is by skipping pagination. I've seen admin pages that load every notification in the system, just to render a table or feed into a JavaScript component. When the dataset is small, it feels instantaneous. But once the number of records starts climbing into the hundreds or thousands, response times get longer, the UI starts lagging, and everything feels heavier.

The fix is straightforward: paginate everything that could grow over time.

$notifications = Notification::paginate(25); // Supports ?page=2

This not only keeps the query efficient but also limits the amount of data Laravel has to hydrate and the amount your frontend has to render. If you're dealing with real-time or streaming interfaces, Laravel's cursorPaginate() can also be helpful, especially when you want to avoid offset-based pagination performance issues.
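A minimal example of cursor pagination; it needs the query ordered by at least one unique column, such as the primary key:

$notifications = Notification::orderBy('id')->cursorPaginate(25); // Uses an encoded ?cursor= parameter instead of ?page=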

Pagination is just one part of it, though. The other common problem is returning too many fields. I've seen API endpoints return complete notification records, including columns that weren’t needed in the UI, such as internal metadata, whole message bodies, or even timestamps that weren't being displayed.

When I know what data the client actually needs, I use select() to reduce the size of the response:

Notification::select(['id', 'title', 'created_at'])->paginate(25);

This is especially useful for large tables or high-traffic endpoints, as it keeps the payload smaller, serialization faster, and response time more predictable.

My approach here is similar to chunking: if a feature is listing or returning many records, I use pagination and only select what I need. It's one of those small decisions that make a big difference in how the app behaves under load.
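The same idea extends to eager loaded relationships: Laravel lets you limit the columns there too, as long as the foreign key is included so the related models can be matched up. A sketch based on the models above:

$notifications = Notification::select(['id', 'title', 'created_at'])
    ->with('userNotifications:id,notification_id,user_id,read_at')
    ->paginate(25);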

Preventative Measures

One of the best things Laravel gives you is the ability to catch bad data access patterns during development, before they cause real problems in production. But you have to enable that feedback loop on purpose.

The first thing I turn on in any new project is Laravel's preventLazyLoading() feature. It helps catch N+1 queries by throwing an exception when your code tries to lazy load a relationship that wasn’t explicitly loaded in advance.

You can enable it in your AppServiceProvider like this:

use Illuminate\Database\Eloquent\Model;

public function boot()
{
    Model::preventLazyLoading(!app()->isProduction());
}

By wrapping it in a check for !app()->isProduction(), you make sure that lazy loading is only blocked in development. This way, it won't throw exceptions in production environments where performance is critical, but stability is paramount.

If you still want to know when lazy loading happens in production, Laravel also allows you to log the violations instead of failing:

Model::handleLazyLoadingViolationUsing(function ($model, $relation) {
    logger()->warning("Lazy loading detected: {$relation} on model " . get_class($model));
});

This allows you to track and resolve the issue later, without disrupting users in real-time.

This setup is one of the simplest ways to improve code quality on teams. Once it's in place, you immediately start writing more intentional queries, and it's easier to spot when someone forgets to eager load something.

It doesn't solve everything, but it makes lazy inefficiencies a lot more visible, and that alone can save a lot of time and frustration down the road.

How I Actually Found the Problems

It's one thing to know what problems could happen; it's another to actually notice them while working on a real feature. Over time, I've settled on a few tools and habits that help me catch data access issues before they get out of control.

Laravel Telescope

Telescope is usually the first place I go when something feels off in development. If a page seems slower than it should be, I'll open up Telescope and check the query log for that request. When I see dozens of nearly identical SELECT statements—usually just differing by an ID—I know I’ve got an N+1 issue on my hands.

Telescope also makes it easy to spot slow queries and compare how different eager loading setups affect the number of total queries per request.

Sentry (or any error tracking system)

While Sentry is best known for catching exceptions, I also use it to flag performance issues. It's good for tracking long-running jobs, slow HTTP requests, or memory-related crashes. If a background process silently fails because it's attempting to process too much data, Sentry often provides the first hint.

Combined with custom context or breadcrumbs, it becomes a valuable tool for identifying which features are experiencing the most issues in production.

Cloud metrics and dashboards

Depending on the hosting setup, I rely on Northflank, Azure, or AWS dashboards to monitor resource usage. CPU spikes, sudden jumps in memory usage, or an increase in database connections usually point to a specific feature that's consuming more than its fair share. When I see a pattern like that, I go back and review the queries involved.

It's not about having the perfect setup. It's about checking the right indicators. If I notice a sharp increase in load or memory usage, it’s almost always connected to something like unpaginated queries, poor chunking, or an inefficient report.

Database query logs

In staging environments, I sometimes turn on database query logging, especially when working on batch jobs or new admin views. It's a low-level way to see what’s really happening between Laravel and the database, and it often helps confirm whether eager loading or indexing changes are working as expected.
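When I want that visibility from inside the app itself, a query listener in AppServiceProvider is enough. A small sketch that logs anything slower than an arbitrary threshold (the 500 ms cutoff is just an example):

use Illuminate\Support\Facades\DB;

public function boot()
{
    DB::listen(function ($query) {
        if ($query->time > 500) { // $query->time is in milliseconds
            logger()->warning('Slow query detected', [
                'sql' => $query->sql,
                'time_ms' => $query->time,
            ]);
        }
    });
}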

A Simple Checklist

After encountering these issues too many times, I've developed a mental checklist that I follow whenever I'm working on a feature that involves large or growing datasets.

Here's what I ask myself before pushing anything to production:

  • Am I eager loading all necessary relationships?
    If I'm looping over related models, I make sure they’re loaded with with() before the loop begins.
  • Have I checked for N+1 queries?
    I scan the query log in Telescope or inspect the number of queries fired per request. If I see repetition, I know I missed something.
  • Am I using chunkById() for large processing jobs?
    Whenever I'm processing more than a few dozen records (especially in background jobs), I avoid all() and use chunking with an indexed ID.
  • Is pagination enforced on all large result sets?
    For any admin view, API, or export feature, I don't load everything at once. I use paginate() or cursorPaginate() and support page navigation via query parameters.
  • Am I limiting fields with select()?
    I avoid returning entire rows unless the UI needs all of it. If I just need an ID, title, and timestamp, that’s all I fetch.
  • Is preventLazyLoading() enabled in development?
    This is my safety net, helping to catch the exact kind of silent problems that are easy to miss during local testing.

It's not about perfection. It's about not leaving performance up to chance. These checks don't take long, and they've saved me a lot of time debugging slow pages and failing jobs after a feature has already shipped.

Build Features That Scale

It's easy to assume that if a feature works in development, it'll be fine in production. But I've learned that assumptions like that don't hold up when the data starts growing. Features that work perfectly during testing can quietly become bottlenecks once usage increases, especially if they rely on inefficient data access patterns.

This admin-facing notifications feature is a good example, but the lessons apply to almost anything that deals with high-volume data. Whether you're building reports, background jobs, exports, or dashboards, the same principles consistently emerge: reduce the number of queries, avoid loading too much data at once, and be deliberate about what you send back to the client.

The good news is that these are all solvable problems. Laravel provides the tools to handle them, and with the proper checks in place, such as eager loading, chunking, pagination, and field selection, you can build features that scale without surprises.

Performance isn't just about infrastructure. It's also about how you write code. And the earlier you consider this, the easier everything else becomes.