
The case
That's the case I had a chance to deal with recently in a project. Let's imagine it. You have two Next.js applications and you want to share some components between them: consistent UI parts or whole functionalities. At the moment, these applications live in separate git repositories. You want to bring them into one repository while still keeping them as two independent applications that share some pieces.
Why Turborepo works well here
Turborepo is an excellent choice for this use case. It's specifically designed for JavaScript/TypeScript monorepos and works seamlessly with Next.js (both are Vercel products). Key benefits of using Turborepo are:
- Independent apps: Each Next.js app stays in its own directory with its own package.json
- Shared packages: You can create a packages/ folder for shared components, utilities, configs
- Dependency deduplication: Single node_modules at root via workspaces (npm/yarn/pnpm)
- Smart caching: Only rebuilds what changed, speeds up CI/CD significantly
- Parallel execution: Runs tasks across apps/packages in parallel
Typical structure:
monorepo/
├── apps/
│   ├── first-app/          # First Next.js app
│   └── second-app/         # Second Next.js app
├── packages/
│   ├── ui/                 # Shared React components
│   ├── config/             # Shared ESLint, TypeScript configs
│   └── utils/              # Shared utilities
├── package.json            # Root workspace config
├── turbo.json              # Turborepo config
└── pnpm-workspace.yaml     # (if using pnpm)
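With pnpm, the workspace glue for this layout is a short pnpm-workspace.yaml at the root; a minimal sketch could look like this:

```yaml
# pnpm-workspace.yaml
packages:
  - "apps/*"
  - "packages/*"
```

Each app then depends on a shared package with a workspace protocol entry in its own package.json, e.g. `"@repo/ui": "workspace:*"` (the `@repo/ui` name is illustrative), which pnpm resolves to the local packages/ui folder instead of the registry.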
Git considerations
Since you have separate git repos, you'll need to:
- Create a new monorepo.
- Move both apps into it (you'll lose individual git history unless you use git subtree or similar).
- Or keep one repo as the base and merge the other.
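If you do want to keep history, git subtree can import one repository into a subdirectory of another while preserving its commits. A minimal sketch, with local fixture repositories standing in for real remotes (all paths and branch names are placeholders):

```shell
set -e
work=$(mktemp -d)

# Fixture: stands in for the existing second-app repository
git init -q -b main "$work/second-app"
printf 'second app\n' > "$work/second-app/README.md"
git -C "$work/second-app" add README.md
git -C "$work/second-app" -c user.email=ci@example.com -c user.name=ci \
  commit -q -m "second-app: initial commit"

# Fixture: the new monorepo
git init -q -b main "$work/monorepo"
git -C "$work/monorepo" -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "monorepo: initial commit"

# Import second-app under apps/second-app, keeping its full history
cd "$work/monorepo"
git -c user.email=ci@example.com -c user.name=ci \
  subtree add --prefix=apps/second-app "$work/second-app" main

# second-app's commits now appear in the monorepo log
git log --oneline
```

In a real migration you would pass the remote URL instead of a local path.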
Ok, so where to start?
We should definitely create a new directory structure and align dependency versions so that both applications are compatible with the shared components. So whatever you have in common… Next.js, React, react-query, axios, etc. Thanks to this approach, we can be sure our shared parts will behave correctly in both applications. It's also a good opportunity to upgrade the dependencies to recent versions.
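If you use pnpm 9.5 or newer, its catalogs feature is one way to enforce this alignment: pin each shared dependency once in pnpm-workspace.yaml and reference it from every package. A sketch (versions below are illustrative):

```yaml
# pnpm-workspace.yaml (fragment)
catalog:
  next: ^15.0.0
  react: ^19.0.0
  react-dom: ^19.0.0
  axios: ^1.7.0
```

Each app's package.json then declares e.g. `"react": "catalog:"`, so bumping the version in one place updates both applications at once.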
Sounds boring? Yeah, it definitely is. In my case, I decided to ask Claude Sonnet 4.5 for help. The AI prepared a plan and created a full base for the new monorepo. It also aligned the dependency versions.
Workspaces - usage
Basic cheat sheet
Once the base structure and applications are in place, the commands below are enough for a minimal setup.
# Install dependencies
pnpm install
# Run all apps in development mode
pnpm dev
# Run a specific app
pnpm dev --filter=first-app
pnpm dev --filter=second-app
# Build all apps
pnpm build
# Build a specific app
pnpm build --filter=first-app
pnpm build --filter=second-app
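These commands assume the root package.json scripts delegate to Turborepo and that turbo.json declares the tasks. A minimal sketch (Turborepo 2.x syntax with "tasks"; older versions call this key "pipeline"):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "!.next/cache/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    }
  }
}
```

`"dependsOn": ["^build"]` makes each app build after the shared packages it depends on, and the `outputs` globs are what Turborepo caches between runs.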
Dockerfile example
If you run your apps in a containerized environment, you need to align your Dockerfiles with the new setup. That's how I did it.
FROM node:22.21.1-alpine AS base
ENV NEXT_TELEMETRY_DISABLED=1
RUN corepack prepare pnpm@10.0.0 --activate && corepack enable pnpm
RUN apk add --no-cache libc6-compat
# Enable `pnpm add --global` on Alpine Linux by setting
# home location environment variable to a location already in $PATH
# https://github.com/pnpm/pnpm/issues/784#issuecomment-1518582235
ENV PNPM_HOME=/usr/local/bin
# Prune stage - create minimal monorepo for this app
FROM base AS pruner
WORKDIR /app
RUN pnpm add -g turbo
COPY . .
RUN turbo prune --scope=first-app --docker
# Install dependencies
FROM base AS installer
WORKDIR /app
COPY --from=pruner /app/out/json/ .
RUN pnpm install --frozen-lockfile
# Build stage
FROM base AS builder
WORKDIR /app
ARG BACKEND_API_BASE_URL
ENV BACKEND_API_BASE_URL=$BACKEND_API_BASE_URL
COPY --from=installer /app/ .
COPY --from=pruner /app/out/full/ .
ENV NODE_ENV=production
RUN pnpm turbo build --filter=first-app
# Production runner
FROM base AS runner
WORKDIR /app
# Runtime environment variables
# These must be re-declared in the runner stage because multi-stage Docker builds
# do not carry over ARG/ENV values from previous stages. Server-side env vars
# need to be available at container runtime, not just at build time.
# Note: NEXT_PUBLIC_* variables don't need this treatment because Next.js inlines
# them into the JavaScript bundle at build time.
ARG BACKEND_API_BASE_URL
ENV BACKEND_API_BASE_URL=$BACKEND_API_BASE_URL
ENV NODE_ENV=production
ENV APP_PORT=3000
EXPOSE 3000
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/apps/first-app/.next/standalone ./
COPY --from=builder /app/apps/first-app/.next/static ./apps/first-app/.next/static
COPY --from=builder /app/apps/first-app/public ./apps/first-app/public
USER nextjs
CMD ["node", "apps/first-app/server.js"]
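One prerequisite worth calling out: the runner stage copies .next/standalone, which only exists when Next.js is told to emit a standalone build. Each app's next.config.js needs:

```javascript
// next.config.js (in each app)
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Emit a self-contained server bundle into .next/standalone,
  // which the Dockerfile's runner stage copies.
  output: 'standalone',
};

module.exports = nextConfig;
```

Without this, the `COPY --from=builder /app/apps/first-app/.next/standalone` step fails because the directory is never produced.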
Our new Dockerfile is going to play well with CI/CD.
The cherry on the cake - dependency injection
And now for the cherry on the cake: an aspect of the whole approach that isn't obvious. What if I want to inject application-specific dependencies into a shared component?
Translations? Layout parts? Functionalities? The next step of a checkout flow? Whatever it is, here's an elegant way to inject it without creating a mess in parent components. Let's create a wrapper that injects whatever we want into a shared component.
<DeliveryStepAppSpecificWrapper>
  {(props) => <SharedDeliveryStep {...props} />}
</DeliveryStepAppSpecificWrapper>
Our wrapper could look like this:
export default function DeliveryStepAppSpecificWrapper({
  children,
}: DeliveryStepAppSpecificWrapperProps) {
  const colors: DeliveryStepAppSpecificColors = {
    // Set up colors from palette
  };

  const translations: DeliveryStepAppSpecificTranslations = {
    // Set up translations
  };

  const onGoToNextStep = () => {
    // Set up next step logic
  };

  return <>{children({ onGoToNextStep, colors, translations })}</>;
}
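The types the wrapper relies on aren't shown above; a plausible sketch (all shapes here are assumptions, adjust to your app) makes the render-prop contract explicit:

```typescript
// Hypothetical shapes for the injected dependencies.
type DeliveryStepAppSpecificColors = Record<string, string>;
type DeliveryStepAppSpecificTranslations = Record<string, string>;

// Everything the shared component receives from the app-specific wrapper.
interface DeliveryStepInjectedProps {
  colors: DeliveryStepAppSpecificColors;
  translations: DeliveryStepAppSpecificTranslations;
  onGoToNextStep: () => void;
}

// The wrapper takes a render prop and calls it with the injected values.
interface DeliveryStepAppSpecificWrapperProps {
  children: (props: DeliveryStepInjectedProps) => unknown;
}

// Example: values satisfying the contract, independent of any UI framework.
const exampleProps: DeliveryStepInjectedProps = {
  colors: { primary: "#0070f3" },
  translations: { title: "Delivery" },
  onGoToNextStep: () => {},
};
```

Because the shared component only ever sees `DeliveryStepInjectedProps`, each app is free to wire in its own palette, translations, and navigation without the shared package knowing anything about either app.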