Conversation

@vofish vofish commented Aug 22, 2025

  • Added AKS support for Gemma-2b
  • Selected NVIDIA T4 GPU for deployment, as L4 GPUs are not available in Azure.
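
For reviewers, a rough sketch of how this could be tried out, assuming the overlay path shown in the kustomize output below and the secret name/key the deployment references (the token value is a placeholder):

    # Create the Hugging Face token secret the deployment expects
    kubectl create secret generic hf-secret \
      --from-literal=hf_api_token=<your-hf-token>

    # Apply the AKS overlay
    kubectl apply -k core/deployment/vllm/gemma-2b/aks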

@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress (indicates that a PR should not merge because it is a work in progress) and cncf-cla: yes (indicates the PR's author has signed the CNCF CLA) labels Aug 22, 2025
@k8s-ci-robot k8s-ci-robot requested review from ahg-g and jjk-g August 22, 2025 12:03
@k8s-ci-robot k8s-ci-robot (Contributor) commented

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: vofish
Once this PR has been reviewed and has the lgtm label, please assign jjk-g for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/M (denotes a PR that changes 30-99 lines, ignoring generated files) label Aug 22, 2025
@huaxig huaxig commented Aug 26, 2025

FYI, the output of kustomize build core/deployment/vllm/gemma-2b/aks:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: gemma-2b-vllm-inference-server
  name: gemma-2b-vllm-service
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: gemma-2b-vllm-inference-server
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: gemma-2b-vllm-inference-server
  name: gemma-2b-vllm-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gemma-2b-vllm-inference-server
  template:
    metadata:
      labels:
        ai.gke.io/inference-server: vllm
        ai.gke.io/model: gemma-2b
        app: gemma-2b-vllm-inference-server
        examples.ai.gke.io/source: blueprints
    spec:
      containers:
      - args:
        - --model=$(MODEL_ID)
        - --tensor-parallel-size=1
        command:
        - python3
        - -m
        - vllm.entrypoints.openai.api_server
        env:
        - name: MODEL_ID
          value: google/gemma-2b
        - name: HUGGING_FACE_HUB_TOKEN
          valueFrom:
            secretKeyRef:
              key: hf_api_token
              name: hf-secret
        image: vllm/vllm-openai:latest
        name: inference-server
        ports:
        - containerPort: 8000
          name: metrics
        readinessProbe:
          failureThreshold: 60
          httpGet:
            path: /health
            port: 8000
          periodSeconds: 10
        resources:
          limits:
            nvidia.com/gpu: 1
          requests:
            nvidia.com/gpu: 1
        volumeMounts:
        - mountPath: /dev/shm
          name: dshm
      nodeSelector:
        kubernetes.azure.com/accelerator: nvidia
      volumes:
      - emptyDir:
          medium: Memory
        name: dshm
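
For reference, a minimal sketch of what the overlay's kustomization.yaml could look like to produce the output above (hypothetical file layout; only t4.yaml is named elsewhere in this thread, the rest is assumed):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base  # assumed base holding the Service and Deployment
patches:
- path: t4.yaml  # the GPU nodeSelector and resource patch discussed below
  target:
    kind: Deployment
    name: gemma-2b-vllm-deployment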

Comment on lines +6 to +16
template:
  spec:
    nodeSelector:
      kubernetes.azure.com/accelerator: nvidia
    containers:
    - name: inference-server
      resources:
        requests:
          nvidia.com/gpu: 1
        limits:
          nvidia.com/gpu: 1

I'm a bit unclear on how this GPU patch restricts the deployment to only use T4-type GPU instances. Looking at the t4.yaml file, the nodeSelector is set to kubernetes.azure.com/accelerator: nvidia. This seems to select any node with an NVIDIA GPU, rather than specifically targeting T4 instances.

Could you clarify how the T4 type is enforced?

Also, have you tested this on AKS? If so, did you use a manual node pool (perhaps one provisioned with only T4 instances) or an automatic node pool? If it was an automatic node pool, how is this resource request bound specifically to T4 GPUs?
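
For what it's worth, one way to enforce T4 on AKS would be to select on the VM SKU rather than the generic accelerator label, using the well-known instance-type node label (a sketch; Standard_NC4as_T4_v3 is one T4-backed SKU from Azure's NCasT4_v3 series, and the node pool name and placeholders are illustrative):

    nodeSelector:
      node.kubernetes.io/instance-type: Standard_NC4as_T4_v3

paired with a node pool provisioned from that series:

    az aks nodepool add \
      --resource-group <rg> \
      --cluster-name <cluster> \
      --name t4pool \
      --node-vm-size Standard_NC4as_T4_v3 \
      --node-count 1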
