
feat(ws): containerize frontend component #394


Conversation


@Noa-limoy Noa-limoy commented May 29, 2025

closes #392

I've created a Dockerfile for the frontend component, which allows the frontend application to be containerized. Until now, the frontend could only be run locally via the command line.
Now we can run the frontend as a k8s workload resource, separate from the backend container.

Build the frontend image:

docker build -f Dockerfile -t frontend-img .

Run the container and expose it on port 8080:

docker run -it --rm --name workspace-frontend -p 8080:8080 frontend-img

@github-project-automation github-project-automation bot moved this to Needs Triage in Kubeflow Notebooks May 29, 2025
@Noa-limoy Noa-limoy changed the base branch from main to notebooks-v2 May 29, 2025 12:43
@google-oss-prow google-oss-prow bot added size/M and removed size/XXL labels May 29, 2025
Contributor

@andyatmiami andyatmiami left a comment

In general - I think we want to gear the Dockerfile more toward a production build of the frontend.

  • ⚠️ We should verify this with the community - so please only take this as a discussion point for now!

By "production build" - I mean making sure the image is built deterministically and focused on being as small as possible.

  • npm ci
  • only install required dependencies
  • NODE_ENV=production
  • using a multi-stage build so dev/optional dependencies don't leak into the image

Here is a rough outline of how I could see this Dockerfile structured given the above (but just use this as a discussion point for the time being):

# Build stage
FROM node:20-slim AS builder

# Set working directory
WORKDIR /usr/src/app

# Copy package files
COPY package*.json ./

# Install ALL dependencies (including devDependencies)
RUN npm ci

# Copy source code
COPY . .

# Build the application
RUN npm run build

# Production stage
FROM node:20-slim

# Set working directory
WORKDIR /usr/src/app

# Copy package files from builder stage
COPY --from=builder /usr/src/app/package*.json ./

# Install only production dependencies
RUN npm ci --only=production

# Copy built assets from builder stage
COPY --from=builder /usr/src/app/dist ./dist
COPY --from=builder /usr/src/app/public ./public

# Create non-root user
RUN addgroup --system appgroup && \
    adduser --system appuser --ingroup appgroup && \
    chown -R appuser:appgroup /usr/src/app

# Switch to non-root user
USER appuser

# Expose the development port (matching webpack dev server)
EXPOSE 8080

# Set environment variables
ENV NODE_ENV=production
ENV PORT=8080

# Start the production server
CMD ["npm", "run", "start:prod"]


@andyatmiami: changing LGTM is restricted to collaborators

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Comment on lines 16 to 18
# Use a non-root user for security
RUN addgroup --system appgroup && adduser --system appuser --ingroup appgroup
USER appuser
Contributor

@andyatmiami andyatmiami May 30, 2025

Not implying anything is wrong with this RUN statement to define a non-root user - but we should definitely align how we do this across controller + backend + frontend for the sake of consistency.

Right now, backend + controller simply use the hard-coded value USER 65532:65532

(fwiw - I like this solution better - but def want consensus from the community at large!)
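For reference, a minimal sketch of what the backend/controller convention would look like here, assuming the base image is fine with a numeric UID/GID that has no /etc/passwd entry:

# Hard-coded non-root user, matching the backend + controller images
# (65532 is the conventional "nonroot" UID/GID used by distroless-style images)
USER 65532:65532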

Author

Yeah, I used this method because it is a bit more explicit and readable,
but I totally agree with you that consistency across the project definitely matters.
I'll change it.

@Noa-limoy Noa-limoy force-pushed the feat/containerize_fronted_component/392 branch 2 times, most recently from 89a691b to f77bc77 Compare June 10, 2025 12:17
@Noa-limoy Noa-limoy changed the title Containerize fronted component #392 feat(FE): Containerize fronted component #392 Jun 10, 2025
@Noa-limoy Noa-limoy changed the title feat(FE): Containerize fronted component #392 feat(ws): Containerize fronted component #392 Jun 10, 2025
@Noa-limoy Noa-limoy force-pushed the feat/containerize_fronted_component/392 branch from f77bc77 to a4f2b22 Compare June 11, 2025 09:55
@google-oss-prow google-oss-prow bot added size/L and removed size/M labels Jun 11, 2025
Contributor

@andyatmiami andyatmiami left a comment

Really nice work getting this all fleshed out...

I have some initial suggestions for the nginx.conf file - not sure this will be enough to make it production-ready - but I feel it's a step in the right direction...

worker_processes  auto;

error_log  /dev/stderr warn;
pid        /tmp/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] - $http_x_api_version - "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /dev/stdout  main;

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;

    # --- Gzip Compression ---
    gzip on;
    gzip_types text/plain text/html text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml;
    gzip_comp_level 5;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_vary on;
    gzip_disable "msie6";

    # Health check endpoint
    server {
        listen       8080;

        # Health check endpoint
        location /health {
            access_log off;
            return 200 'healthy\n';
        }

        location / {
            root   /usr/share/nginx/html;
            index  index.html;
            try_files $uri $uri/ /index.html;
        }

        # Static assets (cache enabled)
        location ~* \.(css|js|gif|jpeg|jpg|png|ico|woff|woff2|ttf|otf|svg|eot)$ {
            root   /usr/share/nginx/html;
            expires 30d;
            add_header Cache-Control "public, no-transform";
            try_files $uri =404;
        }

        # Backend API - Using Kubernetes service discovery
        location /api/ {
            # Use environment variable for backend service
            set $backend_service "${BACKEND_SERVICE:-backend-service}:4000";
            proxy_pass http://$backend_service;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Timeouts
            proxy_connect_timeout 60s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
        }
    }
}

Key changes included above:

  • logs write to stdout / stderr (so they can be captured by k8s)
  • worker_processes: auto to avoid hard-coded 1
  • added various security headers (although feel free to suggest better configurations of said headers)
  • added simple /health endpoint for k8s liveness/readiness
  • added proxy timeout settings (again, suggest better values vs. the 60s there now)
  • most importantly, added env var support for the proxy_pass value, as the hardcoded Docker hostname isn't going to work for us... although it's unclear if this "env var" solution is good enough (see the sketch below)
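One concrete way to wire that env var in is an envsubst-based entrypoint; a minimal sketch (the paths and template name are assumptions for discussion):

# Render the nginx config from a template at startup, substituting only BACKEND_SERVICE,
# then run nginx in the foreground so the container stays attached to it
envsubst '${BACKEND_SERVICE}' < /etc/nginx/nginx.conf.template > /tmp/nginx/nginx.conf
exec nginx -c /tmp/nginx/nginx.conf -g "daemon off;"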

@andyatmiami
Contributor

/ok-to-test

@Noa-limoy Noa-limoy force-pushed the feat/containerize_fronted_component/392 branch 2 times, most recently from 5828fcf to b146c0e Compare June 25, 2025 07:43
ENV PORT=8080

# Set default backend service
ENV BACKEND_SERVICE=localhost:4000
Contributor

I think we should change this default value to backend-service:4000 (albeit there isn't a clear cut "best default" imho)

Problem with localhost:4000

  • Naively, localhost will resolve within the running container and not be able to communicate with the backend without additional flags/configs specified
  • This also doesn't really make sense in a k8s environment

My Suggestion
ENV BACKEND_SERVICE=backend-service:4000

Rationale

  • this still won't "just work" when running locally, but it at least alludes to the user that this "external dependency" exists (i.e. it's a more descriptive error vs. just localhost resolution errors)
  • Both Docker and Kubernetes could (if configured correctly) resolve backend-service:4000 - so under the right conditions the -e flag need not be provided to Docker and/or specified in the env config in Kubernetes (see the sketch below)
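To illustrate the override (image name taken from the PR description; host.docker.internal only resolves on Docker Desktop-style setups):

# Point the containerized frontend at a backend running on the host machine
docker run -it --rm -p 8080:8080 -e BACKEND_SERVICE=host.docker.internal:4000 frontend-img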

Author

I completely agree with your suggestion...
The name backend-service indeed implies it's an external service – unlike localhost – and it's more appropriate for Kubernetes environments, where internal DNS can resolve backend-service.

@@ -0,0 +1,70 @@
# ---------- Builder stage ----------
Contributor

In an effort to promote more deterministic builds - while offering a little more flexibility - I wonder if we should introduce some global build args here:

ARG NODE_VERSION=20.11.0
ARG NGINX_VERSION=1.25.3

So then our various "stage declarations" would look like:

  • FROM node:${NODE_VERSION}-slim AS builder
  • FROM nginx:${NGINX_VERSION}-alpine

Of course this would also mean net-new commits would be required to pick up even incremental (non-breaking) version updates... while the "transparency" is nice - the "housekeeping" aspect is less nice 🤔
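If the ARGs were adopted, a one-off version bump could still be tested without a commit (the values below are hypothetical):

# Override the pinned versions at build time
docker build --build-arg NODE_VERSION=20.12.2 --build-arg NGINX_VERSION=1.25.4 -t frontend-img .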

Contributor

For now - we will err on the side of consistency and not act on this change - as the controller and backend modules do not leverage these types of ARG inputs for granular version control.

Comment on lines 24 to 55
USER root

# Install envsubst (gettext package)
RUN apk add --no-cache gettext

# Copy built assets from builder stage
COPY --from=builder /usr/src/app/dist /usr/share/nginx/html

# Copy nginx template
COPY nginx.conf.template /etc/nginx/nginx.conf.template

# Create directories and set permissions for non-root user
RUN mkdir -p /var/cache/nginx/client_temp \
    /var/cache/nginx/proxy_temp \
    /var/cache/nginx/fastcgi_temp \
    /var/cache/nginx/uwsgi_temp \
    /var/cache/nginx/scgi_temp \
    /var/run/nginx \
    /tmp/nginx && \
    # Change ownership of nginx directories to nginx user (UID 101)
    chown -R 101:101 /var/cache/nginx \
        /var/run/nginx \
        /usr/share/nginx/html \
        /tmp/nginx \
        /etc/nginx

# Create startup script that works with non-root user
RUN echo '#!/bin/sh' > /docker-entrypoint.sh && \
    echo 'envsubst "\${BACKEND_SERVICE}" < /etc/nginx/nginx.conf.template > /tmp/nginx/nginx.conf' >> /docker-entrypoint.sh && \
    echo 'exec nginx -c /tmp/nginx/nginx.conf -g "daemon off;"' >> /docker-entrypoint.sh && \
    chmod +x /docker-entrypoint.sh && \
    chown 101:101 /docker-entrypoint.sh
Contributor

I feel like we can be more precise on the commands that actually need to run as root - and then execute other commands after the USER 101:101 declaration... to avoid the various chown commands

I think the following encapsulates what is actually required to run as root

# Install envsubst (gettext package)
RUN apk add --no-cache gettext

# Copy built assets from builder stage
COPY --from=builder /usr/src/app/dist /usr/share/nginx/html

# Copy nginx template
COPY nginx.conf.template /etc/nginx/nginx.conf.template

# Create base directories that need root
RUN mkdir -p /var/cache/nginx \
             /var/run/nginx \
             /tmp/nginx

All other commands can be moved down into the "user space" so the permissions are just set appropriately from the outset...

Author

@Noa-limoy Noa-limoy Jul 2, 2025

Once we drop root privileges, as per your suggestion, and switch to USER 101:101, we can no longer create or modify directories under these system paths, since they are owned by root and not writable by the nginx user:

/var/cache/nginx/client_temp
/var/cache/nginx/proxy_temp
/var/cache/nginx/fastcgi_temp
/var/cache/nginx/uwsgi_temp
/var/cache/nginx/scgi_temp

So even though we want to avoid chown, in this case it's necessary; otherwise we will run into permission issues when creating these directories.


# Gzip Compression
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml;
Contributor

I hesitate to offer this suggestion now as #434 is not yet merged - but we should probably include gzip support for the custom content-type header we are leveraging:

  • application/vnd.kubeflow-notebooks.manifest+yaml

Granted if that value were to change - it should be reflected here.

Author

Just wondering — is the payload for this custom content-type typically large enough to justify enabling gzip and the extra maintenance if this value ever changes?

Contributor

We ended up changing the content-type to application/yaml - and at scale it could conceivably be large enough to warrant compression, so I think we should add it.
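Concretely, that means appending the new type to the existing gzip_types list, roughly:

# include application/yaml so the (potentially large) manifest responses are compressed too
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml application/yaml;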

@andyatmiami
Contributor

andyatmiami commented Jul 2, 2025

As my outstanding PR review comments don't fundamentally change the integrity of the code provided on the PR (they are more just "nice to haves"), I went ahead and tested the changes in a variety of scenarios and am happy to confirm everything works as intended 💯

Running frontend as a container with backend running on host machine

Admittedly this isn't a very realistic/interesting scenario - but I figured I'd try it out nonetheless to verify the flexibility of the implementation.

  1. backend running locally on machine:

    ➜ backend/ git:((HEAD detached at roee/RHOAIENG-25098)) $ gmake run
    go fmt ./...
    go vet ./...
    /Users/astonebe/Development/Code/GitHub/kubeflow-notebooks/workspaces/backend/bin/swag fmt -g main.go -d cmd,api,internal/auth,internal/config,internal/helper,internal/models/health_check,internal/models/namespaces,internal/models/workspacekinds,internal/models/workspaces,internal/repositories,internal/repositories/health_check,internal/repositories/namespaces,internal/repositories/workspacekinds,internal/repositories/workspaces,internal/server,openapi
    /Users/astonebe/Development/Code/GitHub/kubeflow-notebooks/workspaces/backend/bin/swag init --parseDependency -q -g main.go -d cmd,api,internal/auth,internal/config,internal/helper,internal/models/health_check,internal/models/namespaces,internal/models/workspacekinds,internal/models/workspaces,internal/repositories,internal/repositories/health_check,internal/repositories/namespaces,internal/repositories/workspacekinds,internal/repositories/workspaces,internal/server,openapi --output openapi
    go run ./cmd/main.go --port=4000
    time=2025-07-02T08:58:19.799-04:00 level=INFO msg="starting manager"
    time=2025-07-02T08:58:19.799-04:00 level=INFO msg="starting server" addr=:4000   
    
  2. git checkout noa/feat/containerize_fronted_component/392

  3. docker build -f Dockerfile -t nv2-frontend-dev .

  4. docker run -it --rm -p 8080:8080 -e BACKEND_SERVICE=host.docker.internal:4000 nv2-frontend-dev

  5. <visit http://localhost:8080 in browser>

  6. Confirmed page loaded successfully and rendered information from my local backend service

    10.88.0.8 - - [02/Jul/2025:14:09:15 +0000] - - - "GET /api/v1/workspaces/ HTTP/1.1" 301 53 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
    10.88.0.8 - - [02/Jul/2025:14:09:15 +0000] - - - "GET /api/v1/workspaces HTTP/1.1" 200 703 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
    10.88.0.8 - - [02/Jul/2025:14:09:15 +0000] - - - "GET /api/v1/workspacekinds HTTP/1.1" 200 1029 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
    10.88.0.8 - - [02/Jul/2025:14:09:15 +0000] - - - "GET /workspaces/backend/api/v1/workspacekinds/jupyterlab/assets/icon HTTP/1.1" 200 825 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
    10.88.0.8 - - [02/Jul/2025:14:09:25 +0000] - - - "GET /api/v1/workspaces/ HTTP/1.1" 301 53 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
    10.88.0.8 - - [02/Jul/2025:14:09:25 +0000] - - - "GET /api/v1/workspaces HTTP/1.1" 200 703 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
    10.88.0.8 - - [02/Jul/2025:14:09:35 +0000] - - - "GET /api/v1/workspaces/ HTTP/1.1" 301 53 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
    

Running frontend and backend as workloads within KinD

  1. Loaded respective frontend and backend images into KinD

    ➜ backend/ git:((HEAD detached at origin/notebooks-v2)) $ gmake docker-build IMG=quay.io/rh-ee-astonebe/kubeflow-notebooks-v2:backend-containerize
    ➜ backend/ git:((HEAD detached at origin/notebooks-v2)) $ kind load docker-image quay.io/rh-ee-astonebe/kubeflow-notebooks-v2:backend-containerize
    ➜ frontend/ git:((HEAD detached at noa/feat/containerize_fronted_component/392)) $ docker tag nv2-frontend-dev:latest quay.io/rh-ee-astonebe/kubeflow-notebooks-v2:frontend-containerized
    ➜ frontend/ git:((HEAD detached at noa/feat/containerize_fronted_component/392)) $ kind load docker-image quay.io/rh-ee-astonebe/kubeflow-notebooks-v2:frontend-containerized
    
  2. Applied (very) basic manifests suitable to run containers (attached at end of comment)

    $ kubectl apply -f test-backend-rbac.yaml 
    $ kubectl apply -f test-backend-deployment.yaml 
    $ kubectl apply -f test-frontend-manifest.yaml 
    
  3. Confirmed all workloads running successfully

    $ kubectl get all -A
    NAMESPACE                     NAME                                                           READY   STATUS    RESTARTS         AGE
    cert-manager                  pod/cert-manager-7979fbf6b6-57gfp                              1/1     Running   59 (5h33m ago)   6d21h
    cert-manager                  pod/cert-manager-cainjector-68b64d44c7-brlh5                   1/1     Running   0                6d21h
    cert-manager                  pod/cert-manager-webhook-ff897cd5d-vdhbd                       1/1     Running   0                6d21h
    default                       pod/backend-6c69d94bf7-kw6xs                                   1/1     Running   0                38s
    default                       pod/frontend-deployment-c8b8b5f85-p9l48                        0/1     Running   0                5s
    default                       pod/ws-jupyterlab-scipy-workspace-v65tt-0                      1/1     Running   0                23h
    default                       pod/ws-jupyterlab-workspace-h2vpz-0                            1/1     Running   0                22h
    kube-system                   pod/coredns-668d6bf9bc-2cjx4                                   1/1     Running   0                6d21h
    kube-system                   pod/coredns-668d6bf9bc-vbq9h                                   1/1     Running   0                6d21h
    kube-system                   pod/etcd-kind-control-plane                                    1/1     Running   0                6d21h
    kube-system                   pod/kindnet-kllkq                                              1/1     Running   0                6d21h
    kube-system                   pod/kube-apiserver-kind-control-plane                          1/1     Running   0                6d21h
    kube-system                   pod/kube-controller-manager-kind-control-plane                 1/1     Running   1 (5d12h ago)    6d21h
    kube-system                   pod/kube-proxy-rxx4b                                           1/1     Running   0                6d21h
    kube-system                   pod/kube-scheduler-kind-control-plane                          1/1     Running   2 (5d ago)       6d21h
    local-path-storage            pod/local-path-provisioner-7dc846544d-76lls                    1/1     Running   0                6d21h
    workspace-controller-system   pod/workspace-controller-controller-manager-7bc497d494-54nhp   1/1     Running   1 (5d12h ago)    6d21h
    
    NAMESPACE                     NAME                                                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
    cert-manager                  service/cert-manager                                              ClusterIP   10.96.124.214   <none>        9402/TCP                 6d21h
    cert-manager                  service/cert-manager-cainjector                                   ClusterIP   10.96.8.167     <none>        9402/TCP                 6d21h
    cert-manager                  service/cert-manager-webhook                                      ClusterIP   10.96.67.80     <none>        443/TCP,9402/TCP         6d21h
    default                       service/backend-service                                           NodePort    10.96.122.225   <none>        4000:31434/TCP           38s
    default                       service/frontend-service                                          NodePort    10.96.220.85    <none>        8080:31797/TCP           5s
    default                       service/kubernetes                                                ClusterIP   10.96.0.1       <none>        443/TCP                  6d21h
    default                       service/ws-jupyterlab-scipy-workspace-8dxj6                       ClusterIP   10.96.211.81    <none>        8888/TCP                 23h
    default                       service/ws-jupyterlab-workspace-ftl7z                             ClusterIP   10.96.85.41     <none>        8888/TCP                 22h
    kube-system                   service/kube-dns                                                  ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   6d21h
    workspace-controller-system   service/workspace-controller-controller-manager-metrics-service   ClusterIP   10.96.238.53    <none>        8080/TCP                 6d21h
    workspace-controller-system   service/workspace-controller-webhook-service                      ClusterIP   10.96.153.109   <none>        443/TCP                  6d21h
    
    NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
    kube-system   daemonset.apps/kindnet      1         1         1       1            1           kubernetes.io/os=linux   6d21h
    kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   6d21h
    
    NAMESPACE                     NAME                                                      READY   UP-TO-DATE   AVAILABLE   AGE
    cert-manager                  deployment.apps/cert-manager                              1/1     1            1           6d21h
    cert-manager                  deployment.apps/cert-manager-cainjector                   1/1     1            1           6d21h
    cert-manager                  deployment.apps/cert-manager-webhook                      1/1     1            1           6d21h
    default                       deployment.apps/backend                                   1/1     1            1           38s
    default                       deployment.apps/frontend-deployment                       0/1     1            0           5s
    kube-system                   deployment.apps/coredns                                   2/2     2            2           6d21h
    local-path-storage            deployment.apps/local-path-provisioner                    1/1     1            1           6d21h
    workspace-controller-system   deployment.apps/workspace-controller-controller-manager   1/1     1            1           6d21h
    
    NAMESPACE                     NAME                                                                 DESIRED   CURRENT   READY   AGE
    cert-manager                  replicaset.apps/cert-manager-7979fbf6b6                              1         1         1       6d21h
    cert-manager                  replicaset.apps/cert-manager-cainjector-68b64d44c7                   1         1         1       6d21h
    cert-manager                  replicaset.apps/cert-manager-webhook-ff897cd5d                       1         1         1       6d21h
    default                       replicaset.apps/backend-6c69d94bf7                                   1         1         1       38s
    default                       replicaset.apps/frontend-deployment-c8b8b5f85                        1         1         0       5s
    kube-system                   replicaset.apps/coredns-668d6bf9bc                                   2         2         2       6d21h
    local-path-storage            replicaset.apps/local-path-provisioner-7dc846544d                    1         1         1       6d21h
    workspace-controller-system   replicaset.apps/workspace-controller-controller-manager-7bc497d494   1         1         1       6d21h
    
    NAMESPACE   NAME                                                   READY   AGE
    default     statefulset.apps/ws-jupyterlab-scipy-workspace-v65tt   1/1     23h
    default     statefulset.apps/ws-jupyterlab-workspace-h2vpz         1/1     22h
    
  4. $ kubectl port-forward service/frontend-service 8080:8080

  5. <visit http://localhost:8080 in browser>


  6. $ kubectl logs frontend-deployment-c8b8b5f85-p9l48

    127.0.0.1 - - [02/Jul/2025:16:23:39 +0000] - - - "GET /api/v1/workspaces/ HTTP/1.1" 301 53 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
    127.0.0.1 - - [02/Jul/2025:16:23:39 +0000] - - - "GET /api/v1/workspaces HTTP/1.1" 200 703 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
    127.0.0.1 - - [02/Jul/2025:16:23:39 +0000] - - - "GET /api/v1/workspacekinds HTTP/1.1" 200 959 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
    127.0.0.1 - - [02/Jul/2025:16:23:49 +0000] - - - "GET /api/v1/workspaces HTTP/1.1" 200 703 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
    ...
    

test-backend-rbac.yaml.txt

test-backend-deployment.yaml.txt

test-frontend-manifest.yaml.txt

@Noa-limoy Noa-limoy force-pushed the feat/containerize_fronted_component/392 branch from b146c0e to 2a961da Compare July 3, 2025 09:16
@google-oss-prow google-oss-prow bot added area/frontend (area - related to frontend components) and area/v2 (area - version - kubeflow notebooks v2) labels Jul 3, 2025
@Noa-limoy Noa-limoy force-pushed the feat/containerize_fronted_component/392 branch from 2a961da to ecb7535 Compare July 3, 2025 09:19
Comment on lines 43 to 46
# Upstream backend configuration
upstream backend {
    server ${BACKEND_SERVICE};
}
Contributor

In discussions with Mathew - it was mentioned that we do NOT want to expose the backend via NGINX

We will only expose the frontend, and then configure the frontend container so that it communicates directly with the backend.

Comment on lines 71 to 85
# Backend API
location /api/ {
    proxy_pass http://backend/api/;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Timeouts
    proxy_connect_timeout 60s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
}
Contributor

ENV PORT=8080

# Set default backend service
ENV BACKEND_SERVICE=backend-service:4000
Contributor

We want to set here whatever environment variables the frontend supports for configuring the network information needed to connect to the backend...
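For illustration only, a sketch of what that could look like (the variable name below is a placeholder, not something the frontend is confirmed to read):

# Hypothetical placeholder variable - replace with whatever the frontend build actually supports
ENV BACKEND_API_URL=http://backend-service:4000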

Comment on lines 50 to 55
# Create startup script that works with non-root user
RUN echo '#!/bin/sh' > /docker-entrypoint.sh && \
    echo 'envsubst "\${BACKEND_SERVICE}" < /etc/nginx/nginx.conf.template > /tmp/nginx/nginx.conf' >> /docker-entrypoint.sh && \
    echo 'exec nginx -c /tmp/nginx/nginx.conf -g "daemon off;"' >> /docker-entrypoint.sh && \
    chmod +x /docker-entrypoint.sh && \
    chown 101:101 /docker-entrypoint.sh
Contributor

This can probably be simplified in light of:

@Noa-limoy Noa-limoy force-pushed the feat/containerize_fronted_component/392 branch from ecb7535 to 01dc84a Compare July 14, 2025 09:10
@andyatmiami
Contributor

andyatmiami commented Jul 14, 2025

/lgtm

As discussed with the community, the ability to have the frontend container interact with the backend will be enabled via work on the manifests...

so this PR is itself laser-focused on simply containerizing the frontend code, which I have verified to be doing just that

$ docker run -it --rm -p 8080:8080  nbv2-frontend
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
10.88.0.10 - - [14/Jul/2025:14:12:18 +0000] - - - "GET / HTTP/1.1" 200 825 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
10.88.0.10 - - [14/Jul/2025:14:12:18 +0000] - - - "GET /main.bundle.js HTTP/1.1" 200 288183 "http://localhost:8080/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
10.88.0.10 - - [14/Jul/2025:14:12:19 +0000] - - - "GET /main.css HTTP/1.1" 200 111342 "http://localhost:8080/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
10.88.0.10 - - [14/Jul/2025:14:12:19 +0000] - - - "GET /api/v1/namespaces HTTP/1.1" 200 825 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
10.88.0.10 - - [14/Jul/2025:14:12:19 +0000] - - - "GET /api/v1/workspaces/ HTTP/1.1" 200 825 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"
10.88.0.10 - - [14/Jul/2025:14:12:38 +0000] - - - "GET /api/v1/workspaces/ HTTP/1.1" 200 825 "http://localhost:8080/workspaces" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36" "-"

@google-oss-prow google-oss-prow bot added the lgtm label Jul 14, 2025
@thesuperzapper thesuperzapper changed the title feat(ws): Containerize fronted component #392 feat(ws): containerize frontend component Jul 24, 2025
@thesuperzapper
Member

@Noa-limoy thanks for your work on this, great to have another contributor.

Since we aren't pushing the image yet, I am happy to merge without as much testing. However, I expect we might need to add configs for HTTP_PATH_PREFIX (or similar), or just be very careful to ensure always using relative paths in the frontend itself, and the IFRAME stuff might not work properly once we wrap the frontend in the central dashboard.

/approve


[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: thesuperzapper

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@google-oss-prow google-oss-prow bot merged commit e747ad5 into kubeflow:notebooks-v2 Jul 24, 2025
9 checks passed
@github-project-automation github-project-automation bot moved this from Needs Triage to Done in Kubeflow Notebooks Jul 24, 2025
Labels
approved, area/frontend (area - related to frontend components), area/v2 (area - version - kubeflow notebooks v2), lgtm, ok-to-test, size/L
Projects
Status: Done
Development

Successfully merging this pull request may close these issues.

[TASK] Containerize frontend component
3 participants