This article covers a performance optimization for parallel API calls when you develop a frontend application alongside an API service. In my case, that is:
- a Next.js application which renders server-side,
- a Laravel API which serves data for the Next.js application.
During initial loading, the frontend requests several things (e.g. theme, content blocks, cart items) from separate API endpoints, which are also used independently later when the user navigates the app. The initial set of requests is made concurrently.
Characteristics of the problem to solve
For each of the parallel requests, the API server repeats the same operations: it bootstraps the scripts at the beginning and transfers the response over the network. What if we could do this once for all parallel requests?
Typical web application request timeline breakdown:
- Bootstrap: ~15%
- Request processing: ~45%
- Response & network: ~40%
Note: These percentages are approximate and can vary significantly based on:
- application complexity,
- server configuration,
- network conditions,
- caching mechanisms,
- database queries.
Solution - GraphQL-like, but still the same API
The solution is inspired by what I know from GraphQL, but I won't be using it. Let's stick to the API we already have and extend it with one endpoint that gathers responses from multiple internal services at once.
GET https://api.myapp/aggregated?giveme[]=theme&giveme[]=content&giveme[]=cart
This saves time by cutting off a portion of the cumulative Bootstrap and Response & Network timeframes.
API side changes
With this approach, we need to make the following changes to the API service:
- Extract logic from existing controllers into service classes that can be re-used by both the single endpoints and the “aggregated” endpoint (a sketch follows this list).
- Create a controller for the “aggregated” API endpoint.
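As a minimal sketch of the first point, a hypothetical ContentService extracted from an existing controller could look like the following. The model name, namespace, and query are assumptions; they should mirror whatever your current controller does:

<?php

declare(strict_types=1);

namespace App\Services;

use App\Models\ContentBlock; // hypothetical model, replace with your own

class ContentService
{
    /**
     * Returns the same data the single "content" endpoint used to build
     * inside its controller, so both that endpoint and the aggregated
     * endpoint can reuse it.
     */
    public function get(): array
    {
        return ContentBlock::query()
            ->where('published', true) // assumed filter, adjust to your schema
            ->get()
            ->toArray();
    }
}

In the controller below it is called through a real-time facade (the Facades\ namespace prefix), which resolves the class from the container and lets us call the instance method statically as ContentService::get().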
Errors
It's important to handle errors for each service call separately in the aggregated API endpoint, since we don't want to interrupt the rest of the logic if one of the services fails.
THEME - OK ✓
CONTENT - ERROR ✕
CART - OK ✓
Simple try/catch statements should be enough:
<?php
declare(strict_types=1);
namespace App\Http\Controllers;
use App\Http\Requests\AggregatedRequest;
use Illuminate\Http\Resources\Json\JsonResource;
use Facades\App\Services\ContentService;
class AggregatedController
{
public function get(AggregatedRequest $request): JsonResource
{
$requestedResources = $request->validated('giveme', []);
$data = [];
if (in_array('content', $requestedResources)) {
try {
$data['content'] = ContentService::get();
} catch (\Exception $e) {
$data['content'] = null;
}
}
// Other service calls...
return new JsonResource($data);
}
}
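The controller above also relies on an AggregatedRequest form request, which is not shown in this article. A minimal sketch could look like this, with the whitelist of resource names being an assumption:

<?php

declare(strict_types=1);

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class AggregatedRequest extends FormRequest
{
    public function authorize(): bool
    {
        return true;
    }

    public function rules(): array
    {
        return [
            // Only accept resource names the endpoint knows how to aggregate.
            'giveme' => ['required', 'array'],
            'giveme.*' => ['string', 'in:theme,content,cart'],
        ];
    }
}

The endpoint itself can then be registered in routes/api.php:

use App\Http\Controllers\AggregatedController;
use Illuminate\Support\Facades\Route;

Route::get('/aggregated', [AggregatedController::class, 'get']);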
Frontend side changes
Start modifying the frontend application by deciding where it makes sense to aggregate calls and where it does not. Then adjust the logic wherever you need to get multiple responses in one shot.
Parallel requests:
const [theme, content, cart] = await Promise.all([
fetchTheme(),
fetchContent(),
fetchCart()
]);
One shot:
const { theme, content, cart } = await fetchAggregated([
'theme',
'content',
'cart'
]);
// Does a GET request to https://api.myapp/aggregated?giveme[]=theme&giveme[]=content&giveme[]=cart
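fetchAggregated itself is not shown above; a minimal sketch of such a helper, assuming the https://api.myapp base URL and the giveme[] query format from earlier, could be:

async function fetchAggregated(resources) {
  // Build the ?giveme[]=theme&giveme[]=content&giveme[]=cart query string
  const params = new URLSearchParams();
  resources.forEach((resource) => params.append('giveme[]', resource));

  const response = await fetch(`https://api.myapp/aggregated?${params}`);
  if (!response.ok) {
    throw new Error(`Aggregated request failed with status ${response.status}`);
  }

  // Laravel's JsonResource wraps the payload in a "data" key by default.
  const { data } = await response.json();
  return data; // e.g. { theme, content, cart }, with null for resources that failed server-side
}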
Results and possibilities
In some cases it’s possible to save up to 50% of the API-communication time during the initial loading of the application. 🚀 For example, if your database operations are lightning-fast and you use caching mechanisms, bootstrapping the scripts and networking can consume more than the average shares presented in the breakdown at the top of the article.
💡 Want more? This can be even better when combined with PHP OPcache, so precompiled scripts stored in memory are reused for each request.
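As a rough sketch, a typical production OPcache configuration in php.ini looks something like this (the values are assumptions to tune for your server):

; Enable OPcache so compiled scripts are kept in shared memory between requests
opcache.enable=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=10000
; In production, skip timestamp checks and reset the cache on deployment instead
opcache.validate_timestamps=0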