
I was working on such a solution when I needed to archive multiple files and serve the resulting ZIP directly, on the fly. The streaming approach avoids loading the entire archive into memory on either the server or the client, which is critical for large files.
Requirements
So... the client (a Next.js app) requests some set of files and expects that:
- Laravel archives the files without loading all of them into memory at once.
- The resulting ZIP isn't stored on disk.
- The download starts immediately, while archiving is still in progress.
- The client receives the expected file size, so download progress and the estimated time remaining can be calculated.
- The whole process is handled without the web browser calling the external API directly.
Backend side
The ZIP streaming uses the ZipStream-PHP package via its Laravel wrapper laravel-zipstream.
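If you are starting from scratch, the wrapper can be pulled in with Composer. The package name below assumes the stechstudio distribution of laravel-zipstream; check the project's README if you use a different fork.

composer require stechstudio/laravel-zipstream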
How it works
- Streaming Response (No Temp File) – the `Zip::create()` facade returns a `Builder` object that implements Laravel's `Responsable` interface. When returned from a controller, Laravel automatically calls `toResponse()`, which streams the ZIP directly to the client without creating a temporary file on disk (a fuller controller sketch follows this list).
- On-the-Fly Compression – each file is read and compressed incrementally as the response streams:

use Illuminate\Support\Facades\Storage;
use STS\ZipStream\Facades\Zip;

$storage = Storage::disk('my-disk');
$zip = Zip::create('my-filename.zip');

foreach ($files as $file) {
    // $archivePath is the entry name the file should get inside the archive
    $zip->add($storage->path($file['id']), $archivePath);
}

return $zip;

- Memory Efficiency – files are read in chunks from the storage disk and compressed on the fly. This avoids loading entire files into memory, which is critical for large downloads.
- Tracking progress – the content length is calculated ahead of time, so the client knows the final file size and download progress can be shown, e.g. in the web browser's UI.
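Putting these pieces together, here is a minimal controller sketch. The route path matches the one the Next.js proxy calls later in this post; the controller name, the disk name, the id-to-path lookup and the entry naming are assumptions to adapt to your project.

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use STS\ZipStream\Facades\Zip;

// Assumed route: Route::post('/your-backend-path/download', [DownloadController::class, 'download']);
class DownloadController extends Controller
{
    public function download(Request $request)
    {
        // The proxy sends a JSON body: { "file_ids": [...] }
        $fileIds = $request->input('file_ids', []);
        $storage = Storage::disk('my-disk');

        $zip = Zip::create('my-filename.zip');

        foreach ($fileIds as $id) {
            // Second argument sets the entry name inside the archive
            $zip->add($storage->path($id), basename($id));
        }

        // The Builder is Responsable, so returning it is enough for Laravel
        // to start streaming the ZIP; no temporary file is written.
        return $zip;
    }
}

Validation of the incoming `file_ids` (and any authorization) is omitted here for brevity.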
Frontend side
In my (project-specific) case the frontend doesn't talk to the backend directly; all communication is handled through server components and server actions. For ZIP streaming I chose a Next.js API route handler as a proxy, since it gave me the best control over the streamed response.
Web browser
const requestDownload = async () => {
  setIsDownloadRequested(true);
  try {
    // For multiple files, use the streaming API route with a direct download
    if (fileIds.length > 1) {
      // Create a form and submit it to trigger a native browser download.
      // This starts the download immediately when the server responds.
      const form = document.createElement("form");
      form.method = "POST";
      form.action = "/api/download";
      form.style.display = "none";

      const fileIdsInput = document.createElement("input");
      fileIdsInput.name = "file_ids";
      fileIdsInput.value = JSON.stringify(fileIds);
      form.appendChild(fileIdsInput);

      // Add the filename
      const filenameInput = document.createElement("input");
      filenameInput.name = "filename";
      filenameInput.value = "your-filename.zip";
      form.appendChild(filenameInput);

      document.body.appendChild(form);
      form.submit();
      document.body.removeChild(form);

      console.log("Browser will start downloading as soon as the server responds");
      return;
    }

    // Single non-archived file case, which is not covered in this blog post
  } catch (error) {
    console.error("Download error:", error);
    alert("Error downloading file. Please try again.");
    setIsDownloadRequested(false);
  }
};
API proxy handler
import type { NextRequest } from "next/server";

export async function POST(request: NextRequest) {
  try {
    // Handle form data
    const formData = await request.formData();
    const file_ids: string[] = JSON.parse(formData.get("file_ids") as string);
    const filename = (formData.get("filename") as string) || "download.zip";

    console.log(`[Download API] Starting download for ${file_ids.length} files`);

    // Prepare headers for the backend request
    const backendHeaders: Record<string, string> = {
      "Content-Type": "application/json",
      Accept: "application/zip",
    };

    // Make the request to the backend API.
    // Use fetch instead of axios to allow streaming.
    const baseURL = process.env.BACKEND_API_BASE_URL;
    const response = await fetch(`${baseURL}/your-backend-path/download`, {
      method: "POST",
      headers: backendHeaders,
      body: JSON.stringify({
        file_ids,
      }),
    });

    if (!response.ok) {
      return new Response(
        JSON.stringify({ error: `Backend error: ${response.status}` }),
        {
          status: response.status,
          headers: { "Content-Type": "application/json" },
        }
      );
    }

    if (!response.body) {
      return new Response(JSON.stringify({ error: "No response body" }), {
        status: 500,
        headers: { "Content-Type": "application/json" },
      });
    }

    // Stream the response directly to the client
    const contentLength = response.headers.get("content-length");
    const responseHeaders: Record<string, string> = {
      "Content-Type": "application/zip",
      "Content-Disposition": `attachment; filename="${filename}"`,
      "Cache-Control": "no-cache",
    };

    // Forward Content-Length if available for progress indication
    if (contentLength) {
      responseHeaders["Content-Length"] = contentLength;
    }

    return new Response(response.body, {
      headers: responseHeaders,
    });
  } catch (error) {
    console.error("[Download API] Error:", error);
    return new Response(JSON.stringify({ error: (error as Error).message }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
}
Important Headers
- `Content-Type` – `application/zip`
- `Content-Disposition` – `attachment; filename="name.zip"`, which triggers the browser download
- `Content-Length` – optional, but helps show download progress
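As an aside, the forwarded `Content-Length` is also what lets you compute progress yourself if you ever fetch the ZIP from JavaScript instead of relying on the native browser download used above. Here is a rough sketch, assuming the same `/api/download` route and form fields as in the proxy handler; note that it buffers the whole archive in memory before saving it, which is exactly why this post sticks with the native download:

async function downloadWithProgress(fileIds: string[], filename: string) {
  // Same fields the API route handler expects
  const body = new FormData();
  body.append("file_ids", JSON.stringify(fileIds));
  body.append("filename", filename);

  const res = await fetch("/api/download", { method: "POST", body });
  if (!res.ok || !res.body) throw new Error(`Download failed: ${res.status}`);

  // The Content-Length forwarded by the proxy makes the percentage calculable
  const total = Number(res.headers.get("Content-Length") ?? 0);
  const reader = res.body.getReader();
  const chunks: Uint8Array[] = [];
  let received = 0;

  while (true) {
    const { done, value } = await reader.read();
    if (done || !value) break;
    chunks.push(value);
    received += value.length;
    if (total > 0) {
      console.log(`Progress: ${((received / total) * 100).toFixed(1)}%`);
    }
  }

  // Assemble the archive in memory and hand it to the browser
  const blob = new Blob(chunks, { type: "application/zip" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url);
}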
Summary
Instead of buffering the entire file in memory, you pipe the response stream directly from the backend, through the Next.js proxy route, to the client. Nothing is ever held in full on the server or in the browser, which is what makes this approach practical for large downloads.
