Sync with upstream: update workflow files #52
base: main
Conversation
Add GitHub Actions workflow to sync with upstream repository.

* Create a new workflow file `.github/workflows/sync_with_upstream.yml`.
* Trigger the workflow on a daily schedule and on push events to the main branch.
* Add steps to fetch changes from the upstream repository.
* Add steps to merge upstream changes with the fork.
* Create a new branch if merge conflicts arise.
* Send notifications if manual intervention is required for conflict resolution (a workflow sketch follows below).

---

For more details, open the [Copilot Workspace session](https://copilot-workspace.githubnext.com/Zhuul/vllm?shareId=XXXX-XXXX-XXXX-XXXX).
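The workflow file itself isn't rendered in this view. As a rough sketch of what `.github/workflows/sync_with_upstream.yml` could contain, assuming the upstream is `vllm-project/vllm` and that notifications are raised as GitHub issues (both are assumptions, not confirmed by this PR):

```yaml
name: Sync with upstream

on:
  schedule:
    - cron: '0 3 * * *'   # daily
  push:
    branches: [main]

jobs:
  sync:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      issues: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0    # full history is needed for a real merge

      - name: Fetch upstream
        run: |
          git remote add upstream https://github.com/vllm-project/vllm.git
          git fetch upstream main

      - name: Merge upstream into main
        id: merge
        run: |
          git config user.name  "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          if git merge upstream/main; then
            git push origin main
          else
            # Merge conflict: abort and park upstream's state on a new branch.
            git merge --abort
            git checkout -b "sync-conflict-$(date +%Y%m%d)" upstream/main
            git push origin HEAD
            echo "conflict=true" >> "$GITHUB_OUTPUT"
          fi

      - name: Notify on conflict
        if: steps.merge.outputs.conflict == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh issue create \
            --title "Upstream sync needs manual conflict resolution" \
            --body "Automatic merge of upstream/main failed; see the sync-conflict-* branch."
```

Parking conflicts on a dated branch keeps `main` clean while still preserving the upstream state for manual resolution.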
Add sync worker to detect changes and merge with fork
* **.github/workflows/sync_with_upstream.yml**
  - Add error handling for merge conflicts
  - Add logging for debugging and monitoring
* **.buildkite/scripts/run-multi-node-test.sh**
  - Add retry mechanism for failed Docker container starts (see the sketch after this list)
  - Add logging for debugging and monitoring
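The retry mechanism itself isn't visible in this view; a minimal sketch of the kind of wrapper `run-multi-node-test.sh` could gain (the function name, attempt count, and delay are illustrative, not taken from the script):

```bash
#!/usr/bin/env bash
# Retry helper for flaky container starts; parameters are illustrative.
start_container_with_retry() {
  local image="$1" name="$2"
  local max_attempts=3 delay=5

  for attempt in $(seq 1 "$max_attempts"); do
    echo "[$(date -u +%FT%TZ)] start attempt ${attempt}/${max_attempts} for ${name}" >&2
    if docker run -d --name "$name" "$image"; then
      return 0
    fi
    # Remove any half-created container before retrying.
    docker rm -f "$name" >/dev/null 2>&1 || true
    sleep "$delay"
  done

  echo "[$(date -u +%FT%TZ)] failed to start ${name} after ${max_attempts} attempts" >&2
  return 1
}
```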
This reverts commit 8458f5e.
…ent container with Podman
- Created TROUBLESHOOTING-WSL-GPU.md for comprehensive GPU troubleshooting steps in WSL2 with Podman.
- Added check-venv.sh to verify the Python virtual environment setup within the container.
- Introduced check-wsl-gpu.sh for diagnosing WSL2 + GPU configuration issues (sketched below).
- Implemented manage-container.sh for managing the vLLM development container lifecycle.
- Developed run-vllm-dev-fedora.ps1 and run-vllm-dev-fedora.sh for launching the vLLM development container with GPU support.
- Added setup-wsl-gpu.sh for installing the NVIDIA Container Toolkit in WSL2.
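For context, a hedged sketch of the kind of checks a script like `check-wsl-gpu.sh` performs; the exact checks in the real script may differ:

```bash
#!/usr/bin/env bash
# Quick WSL2 + GPU sanity checks; the real check-wsl-gpu.sh may test more.
set -u

# 1. WSL2 exposes the GPU through /dev/dxg rather than /dev/nvidia*.
[ -e /dev/dxg ] && echo "OK: /dev/dxg present" || echo "FAIL: /dev/dxg missing (not WSL2 GPU?)"

# 2. The WSL NVIDIA driver ships userspace libraries under /usr/lib/wsl/lib.
ls /usr/lib/wsl/lib/libcuda.so* >/dev/null 2>&1 \
  && echo "OK: WSL CUDA libs found" || echo "FAIL: /usr/lib/wsl/lib/libcuda.so missing"

# 3. nvidia-smi should see the card from inside WSL2.
command -v nvidia-smi >/dev/null && nvidia-smi -L || echo "FAIL: nvidia-smi not available"

# 4. Podman needs the NVIDIA CDI spec registered to pass the GPU through.
grep -rls nvidia.com/gpu /etc/cdi /var/run/cdi 2>/dev/null \
  && echo "OK: NVIDIA CDI spec registered" || echo "WARN: no CDI spec; run setup-wsl-gpu.sh"
```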
…nly; sync repo to upstream/main elsewhere
…al root/test/tool changes
- Introduced `run-vllm-dev-podman-fixed.ps1` for improved container management and GPU diagnostics.
- Created `run-vllm-dev-wsl2.ps1` for WSL2-optimized vLLM container execution with proper CUDA support.
- Developed `setup-podman-wsl2-gpu.ps1` to automate Podman and GPU setup in a WSL2 environment.
- Implemented `validate-rtx5090.py` to validate RTX 5090 support and PyTorch compatibility (see the probe sketched after this list).
- Added `final_environment_test.py` for comprehensive testing of the vLLM setup.
- Created `test-vllm-container.ps1` to verify container functionality and GPU access.
- Developed `test_installed_vllm.py` to test installed vLLM package functionality.
- Implemented `test_vllm_gpu.py` to verify vLLM and GPU functionality.
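The core of a validation like `validate-rtx5090.py` can be reduced to a short probe. A sketch under the assumption that the script compares the device's compute capability against PyTorch's built architecture list (the real script likely does more):

```bash
# Minimal RTX 5090 / PyTorch compatibility probe; validate-rtx5090.py likely checks more.
python - <<'EOF'
import torch

assert torch.cuda.is_available(), "CUDA not visible to PyTorch"
name = torch.cuda.get_device_name(0)
major, minor = torch.cuda.get_device_capability(0)
# RTX 5090 (Blackwell) reports compute capability 12.0, i.e. sm_120.
print(f"{name}: sm_{major}{minor}")
archs = torch.cuda.get_arch_list()
print("PyTorch built for:", archs)
if f"sm_{major}{minor}" not in archs:
    raise SystemExit("This PyTorch build lacks kernels for this GPU architecture")
EOF
```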
…ccache+compat LD paths; add LOCAL_MIRROR option; SHM/tmpfs; prune extras to minimal; add .dockerignore; Podman-first run scripts updated
…nd usability; add ccache support, local mirror option, and streamline dev setup process
…ith mirror and recreate options
…U diagnostics and performance
…cript for better build logging and error handling
… related parameters
…n in the extras directory
…ripts for improved volume management
- Added new Podman-based scripts for running and managing vLLM containers (the core invocation is sketched after this list).
- Deprecated the old run-vllm-dev.ps1 and run-vllm-dev.sh scripts, redirecting to the new Podman scripts.
- Implemented a comprehensive test script for vLLM container functionality.
- Created a patches directory with an apply_patches.sh script for managing patches.
- Added README files for better documentation across the extras, patches, podman, secrets, storage, and testing directories.
- Introduced GPU status checking and diagnostics in the new Podman scripts.
- Established a secrets directory for local-only secret management.
- Developed storage helpers for managing external volumes for models and caches.
- Created a testing harness with a matrix for models/environments and scripts for running tests and comparing results.
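A hedged sketch of the Podman invocation these scripts presumably wrap; the volume names, mount paths, and image tag are illustrative assumptions, not taken from the scripts:

```bash
#!/usr/bin/env bash
# Illustrative core of the Podman run scripts; names and paths are assumptions.
MODELS_VOLUME="${MODELS_VOLUME:-vllm-models}"   # external volume for model weights
CACHE_VOLUME="${CACHE_VOLUME:-vllm-cache}"      # external volume for pip/ccache

podman run -d --name vllm-dev \
  --device nvidia.com/gpu=all \
  --security-opt=label=disable \
  --shm-size 8g \
  -v "${MODELS_VOLUME}:/models" \
  -v "${CACHE_VOLUME}:/root/.cache" \
  -v "$(pwd):/workspace/vllm" \
  vllm-dev:latest sleep infinity
```

The `--device nvidia.com/gpu=all` flag uses Podman's CDI mechanism for GPU passthrough, which is why the setup scripts above register an NVIDIA CDI spec first.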
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; instead, only a small and essential subset of CI tests runs automatically to quickly catch errors. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Force-pushed from 56fd993 to f51bf9a.
This PR was automatically created because workflow files were updated while syncing with upstream.
Please review and merge.