
Conversation

@luisjira

At the moment, xdp-forward-queueing includes logic to track the number of outstanding bytes in hardware, but this logic is unused.

This pull request attaches the frame return program to the corresponding tracepoint so that completed frames can be subtracted from the counter, and it also adds the counter increment when data is sent.
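
For illustration only, the general shape of that accounting might look like the sketch below. This is not the code from this PR: the counter layout, the helper name and the includes are assumptions; only the program name and tracepoint name are taken from the patch.

/* Hypothetical sketch of the accounting described above (assumes the
 * usual BPF includes, e.g. vmlinux.h and bpf/bpf_helpers.h). A byte
 * counter is bumped when frames are sent and reduced when the
 * xdp_frame_return tracepoint fires. */
static __u64 outstanding_bytes;

static __always_inline void account_sent(__u64 len)
{
        __sync_fetch_and_add(&outstanding_bytes, len);
}

SEC("raw_tp/xdp_frame_return")
int xdp_check_return(struct bpf_raw_tracepoint_args *ctx)
{
        __u64 len = 0; /* frame length extraction from ctx omitted in this sketch */

        __sync_fetch_and_sub(&outstanding_bytes, len);
        return 0;
}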

@luisjira force-pushed the xdp-forward-queueing-fix branch from 4be4a36 to 4847583 on March 28, 2025 at 14:36

@tohojo left a comment

Thank you for the patch; a bunch of nits, see below.

* pin the link to pin_root_path/xdp_check_return.
*/

return_prog = bpf_object__find_program_by_name(skel->obj,"xdp_check_return");

The program is available in the skeleton as skel->progs.xdp_check_return - no need to use bpf_object__find_program_by_name()
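
For reference, a sketch of the skeleton-based access (variable names taken from the snippet above):

/* The skeleton exposes each program as a struct member, so the
 * name-based lookup through the bpf_object is not needed: */
return_prog = skel->progs.xdp_check_return;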

goto end_detach;
}

link = bpf_program__attach_raw_tracepoint(return_prog, "xdp_frame_return");

It should be possible to just call bpf_program__attach(), since the target is specified in the section definition in the BPF source file.
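
A sketch of this, assuming the BPF source already carries the target in its section name (the exact section prefix shown here is an assumption):

/* With the target declared in the BPF source, e.g.
 *
 *     SEC("raw_tp/xdp_frame_return")
 *     int xdp_check_return(struct bpf_raw_tracepoint_args *ctx) { ... }
 *
 * the generic attach call resolves it automatically: */
link = bpf_program__attach(skel->progs.xdp_check_return);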

}

link = bpf_program__attach_raw_tracepoint(return_prog, "xdp_frame_return");
if (libbpf_get_error((const void *) link)) {

It should be possible to just pass the pointers to libbpf_get_error() without the cast...
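
A sketch without the cast: libbpf_get_error() takes a const void *, so the bpf_link pointer converts implicitly. Storing the result also avoids calling it twice and makes the sign handling for strerror() explicit (libbpf error codes are negative errno values):

long err;

err = libbpf_get_error(link); /* no cast needed */
if (err) {
        pr_warn("Couldn't attach to xdp_frame_return: %s\n",
                strerror(-err));
        goto end_detach;
}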

link = bpf_program__attach_raw_tracepoint(return_prog, "xdp_frame_return");
if (libbpf_get_error((const void *) link)) {
pr_warn("Couldn't attach to xdp_frame_return: %s\n",
strerror(libbpf_get_error((const void *) link)));

...same here

}
skel->links.xdp_check_return = link;

snprintf(pin_link_path, sizeof(pin_link_path), "%s/%s", pin_root_path,

This should use the try_snprintf() function (from lib/util), and check the return code. We should probably also disambiguate the pin path right now; with this path, bad things will happen if multiple instances of xdp-forward are loaded at the same time.
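
One possible shape, assuming try_snprintf() returns a non-zero error code on failure or truncation, that err is the function's usual error variable, and using the first interface name purely as an illustrative way to keep concurrent instances apart (the actual disambiguation scheme is up for discussion):

err = try_snprintf(pin_link_path, sizeof(pin_link_path), "%s/%s_%s",
                   pin_root_path, "xdp_check_return", opt->ifaces->ifname);
if (err) {
        pr_warn("Couldn't format pin path for xdp_check_return\n");
        goto end_detach;
}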

for (iface = opt->ifaces; iface; iface = iface->next)
xdp_program__detach(xdp_prog, iface->ifindex, opt->xdp_mode, 0);

bpf_link__unpin(link);

bpf_link__unpin() will crash if link is NULL
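
i.e., a guard like:

if (link)
        bpf_link__unpin(link);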

pr_info("Unloaded from interface %s\n", iface->ifname);
}

snprintf(pin_link_path, sizeof(pin_link_path), "%s/%s", pin_root_path,

See above wrt using try_snprintf()

snprintf(pin_link_path, sizeof(pin_link_path), "%s/%s", pin_root_path,
"xdp_check_return");

link = bpf_link__open(pin_link_path);

Instead of going through this open/unpin/destroy dance, just unlink the pin file?
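
A sketch of the simpler teardown, assuming pin_link_path has already been formatted and that a missing pin file should not be treated as an error (needs <unistd.h> and <errno.h>):

if (unlink(pin_link_path) && errno != ENOENT) {
        pr_warn("Couldn't remove pinned link %s: %s\n",
                pin_link_path, strerror(errno));
        goto end_detach;
}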

strerror(errno));
goto end_detach;
}
pr_info("Pinned xdp_check_return to %s\n", pin_link_path);

This should probably be pr_debug()
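
i.e.:

pr_debug("Pinned xdp_check_return to %s\n", pin_link_path);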
