eProsima/Fast-DDS #5859
Labels: more-information-needed (Further information is required)
Description
The /tf and /tf_static topics start generating the message-dropping warning: "the timestamp on the message is earlier than all the data in the transform cache". Once it starts happening, the frequency of the warning increases linearly over time. This significantly increases the delay and latency between publishers and subscribers. The reproduction rate is approximately 30%.
System Information
- ros2: Humble, patch release 5
- ros-humble-fastrtps: v2.6.7 plus [20706] Make reader get_first_untaken_info() coherent with read()/take() (backport #4696) eProsima/Fast-DDS#4708
- Number of Nodes: 120
- Number of Topics: 500
- /tf and /tf_static topics: 15 publishers and 30 subscribers for each topic
XML configuration
<?xml version="1.0" encoding="UTF-8"?>
<dds xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
<profiles>
<transport_descriptors>
<!-- Create a descriptor for the new transport -->
<transport_descriptor>
<transport_id>shm_transport</transport_id>
<type>SHM</type>
<segment_size>10485760</segment_size>
<port_queue_capacity>256</port_queue_capacity>
</transport_descriptor>
</transport_descriptors>
<participant profile_name="SHMParticipant" is_default_profile="true">
<rtps>
<!-- Link the Transport Layer to the Participant -->
<userTransports>
<transport_id>shm_transport</transport_id>
</userTransports>
<!-- <useBuiltinTransports>false</useBuiltinTransports> -->
</rtps>
</participant>
<publisher profile_name="service">
<qos>
<reliability>
<max_blocking_time>
<sec>10</sec>
</max_blocking_time>
</reliability>
</qos>
</publisher>
</profiles>
</dds>
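For context, a profile like the one above is typically loaded by pointing Fast DDS at the XML file via environment variables before launching the nodes. The file path below is illustrative, not taken from the original report:

```shell
# Illustrative setup: tell Fast DDS (and rmw_fastrtps under ROS 2) where the XML profile lives.
export FASTRTPS_DEFAULT_PROFILES_FILE=/path/to/shm_profile.xml
# Needed so rmw_fastrtps honours publisher/subscriber QoS settings from the XML.
export RMW_FASTRTPS_USE_QOS_FROM_XML=1
# Then launch the nodes as usual, e.g.:
# ros2 run demo_nodes_cpp talker
```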
Additional Information
- When this warning happens, the if statement at https://github.com/eProsima/Fast-DDS/blob/b1e4707ad3cfe4cad7e5100318402206e0bd5e78/src/cpp/rtps/transport/shared_mem/MultiProducerConsumerRingBuffer.hpp#L228-L231 evaluates to true.
- If we make port_queue_capacity 4 times bigger, it takes roughly 4 times longer until the problem starts (almost the same holds for segment_size). This does not avoid the original problem; it only buys time, depending on the configuration.
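The linear relationship between port_queue_capacity and time-to-failure is what you would expect if the port queue fills at a roughly constant net rate. A back-of-envelope sketch (the surplus rate is an assumed illustrative number, not a measurement from this system):

```shell
# If producers enqueue 'surplus' more messages per second than consumers drain,
# a queue of the given capacity fills after capacity/surplus seconds.
capacity=256    # port_queue_capacity from the XML above
surplus=4       # assumed net messages/s left unconsumed (illustrative)
echo "seconds until the port queue is full: $((capacity / surplus))"
```

Quadrupling the capacity to 1024 quadruples the fill time under this model, matching the observed behaviour.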