PHP

Process Large Datasets Efficiently Using `chunkById`

Avoid memory exhaustion when iterating over thousands of records in Laravel Eloquent by fetching and processing models in smaller chunks, paged by their primary key.

use App\Models\Product;
use Illuminate\Support\Facades\Log;

// Process 1,000 products at a time; chunkById pages by the primary key,
// so rows modified in the callback are never skipped or reprocessed
Product::chunkById(1000, function ($products) {
    foreach ($products as $product) {
        // Perform some heavy operation on each product
        $product->update(['status' => 'processed']);
        Log::info("Processed product: {$product->id}");
    }

    // Returning false from the callback stops further chunking
    // return false;

    // Optional: pause briefly to avoid overwhelming the database
    // sleep(1);
});

Log::info('Finished processing all products in chunks.');
How it works: When dealing with a very large number of records, retrieving all of them at once can consume significant memory and cause performance issues. The `chunkById()` method addresses this by retrieving a subset (chunk) of models, passing it to your callback, and then fetching the next chunk. Instead of offset-based paging, it remembers the last primary key it saw and issues the next query as `WHERE id > :lastId ORDER BY id LIMIT :chunkSize`, so each query stays fast even deep into the table, and records updated inside the callback are not skipped or duplicated (a known pitfall of the plain `chunk()` method). This makes it well suited for data migrations, bulk updates, or report generation without exhausting your application's memory limit.
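To make the mechanism concrete, here is a plain-PHP sketch of the keyset pagination that `chunkById()` performs internally. The in-memory `$products` "table" and the `fetchChunk()` helper are hypothetical stand-ins for the database and for the `WHERE id > :lastId ORDER BY id LIMIT :size` query Eloquent would issue; they are illustrations, not Laravel internals.

```php
<?php

// Hypothetical in-memory "table" standing in for the products table
$products = [
    ['id' => 1,  'status' => 'pending'],
    ['id' => 3,  'status' => 'pending'],
    ['id' => 7,  'status' => 'pending'],
    ['id' => 9,  'status' => 'pending'],
    ['id' => 12, 'status' => 'pending'],
];

// Stand-in for: SELECT * FROM products WHERE id > :lastId ORDER BY id LIMIT :size
function fetchChunk(array $table, int $lastId, int $size): array
{
    $rows = array_filter($table, fn ($row) => $row['id'] > $lastId);
    usort($rows, fn ($a, $b) => $a['id'] <=> $b['id']);

    return array_slice($rows, 0, $size);
}

$lastId    = 0;
$chunkSize = 2;
$processed = [];

do {
    $chunk = fetchChunk($products, $lastId, $chunkSize);

    foreach ($chunk as $row) {
        // In real code this is where the heavy per-record work happens
        $processed[] = $row['id'];
    }

    if ($chunk !== []) {
        // Advance the cursor to the last primary key seen in this chunk
        $lastId = end($chunk)['id'];
    }
} while (count($chunk) === $chunkSize); // a short chunk means we reached the end

// $processed now holds every id exactly once: [1, 3, 7, 9, 12]
```

Because the cursor is a primary-key value rather than an offset, updating rows inside the loop cannot shift the "page" boundaries, which is why `chunkById()` is safe for bulk updates where `chunk()` is not.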
