[GPU] Gracefully handle zero batch size#34515

Open
mostafafaheem wants to merge 1 commit into openvinotoolkit:master from mostafafaheem:zero_batch_handling
Conversation

@mostafafaheem
Details:

  • Avoid division by zero error when creating memory descriptor for GPU plugin

Tickets:

AI Assistance:

  • AI assistance used: yes
  • To generate tests

@mostafafaheem mostafafaheem requested review from a team as code owners March 5, 2026 12:26
@github-actions github-actions bot added the category: GPU OpenVINO GPU plugin label Mar 5, 2026
@sys-openvino-ci sys-openvino-ci added the ExternalPR External contributor label Mar 5, 2026
@mostafafaheem
Author

mostafafaheem commented Mar 5, 2026

Hello @p-durandin, @p-wysocki, @praasz!

Is this the right approach, or should it throw an exception instead? When I tested the snippet below on CPU, it caused a segmentation fault that went away once I set the batch size to 1. So if the exception approach is preferred, should it be at the model validation level or the plugin level? The CPU plugin also seems to have a problem with a batch size of 0.

#include <chrono>
#include <thread>
#include <string>
#include <iostream>

#include <openvino/openvino.hpp>

int main(int argc, char *argv[]){
    if (argc < 3) {
        std::cerr << "Usage: " << argv[0] << " <model> <device>\n";
        return 1;
    }

    ov::Core ieCore;
    auto model = ieCore.read_model(argv[1]);

    // Reshape model to a zero batch size
    model->reshape({0, 3, 300, 300});

    ov::CompiledModel compiledModel = ieCore.compile_model(model, argv[2]);

    // Create an inference request
    ov::InferRequest infer_request = compiledModel.create_infer_request();

    // Get input port for model with one input
    auto input_port = compiledModel.input();

    // Create an input tensor with zero batch size
    ov::Tensor input_tensor(input_port.get_element_type(), {0, 3, 300, 300});

    // Set input tensor for model with one input
    infer_request.set_input_tensor(input_tensor);
    infer_request.start_async();
    infer_request.wait();

    std::this_thread::sleep_for(std::chrono::seconds(1));
    compiledModel = {};
    std::cout << "Hello\n" << std::flush;
}

@mostafafaheem mostafafaheem changed the title Gracefully handle zero batch size in GPU plugin [GPU] Gracefully handle zero batch size Mar 6, 2026
Labels

category: GPU OpenVINO GPU plugin ExternalPR External contributor

Development

Successfully merging this pull request may close these issues.

[Good First Issue]: Trap divide error in GPU plugin for batch size 0
