multiple data readers and writers for same topics


Hi all,

I have the following scenario and am seeking some help/advice.

I have 2 separate applications (AP1 and AP2) that are publishing to the same topic (let's call it topic A) and one single application which creates 2 data readers for the same topic A.

Each data reader's HISTORY QoS is set to KEEP_LAST with a depth of 1. My understanding of DDS is that after discovery the data writers and readers will be matched. Each application has a unique ID which is part of the topic data.

When AP1 publishes on topic A (the sample containing its unique ID), a sample is created in the queue of each data reader. Each data reader then reads or takes the sample and checks the sender's unique ID to determine whether it should discard or process the sample.

Now if the 2 applications publish to that topic at the same time, we could have the following scenario:

- AP1 publishes topic A with its unique ID; the intended recipient is data reader 1.

- AP2 publishes topic A with its unique ID; the intended recipient is data reader 2.

- Data reader 1 is notified via on_data_available that a sample is ready, but because of latency the sample from AP2 is written into data reader 1's queue, overwriting the previous sample. Data reader 1 is therefore going to miss the sample for which it was the intended recipient.

- This means a data reader could constantly miss samples due to the HISTORY QoS and depth.

I could fix the issue by changing the HISTORY QoS to KEEP_ALL and increasing the queue size. But I was wondering if there was a more "elegant" way to make sure that data reader 1 gets all the samples from AP1, etc.

I was thinking of content filtered topics, but my understanding is that the filter expression can only be set when the data reader is created, not after the data reader already exists. I cannot create the filter at creation time because at that point I don't have all the data necessary to build it; I would only get that data once I receive the first sample.

Thanks for any suggestions.

irwin

Yes, you can indeed solve this problem. You can create a content filter using a filter expression that has parameters. Initially you can set up your parameters so that samples from both applications are accepted. When you get your additional information, you can update your parameters at run time so that you only receive the samples from the application you desire.
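To make the mechanism concrete, here is a small runnable simulation of the idea (not the DDS API itself — in real DDS you would call create_contentfilteredtopic() with an expression such as "senderId = %0" plus an initial parameter list, and later call set_expression_parameters() on the ContentFilteredTopic). The field name senderId is an assumption about the user's type:

```python
# Toy model of a parameterized content filter: the expression shape is
# fixed at creation time, but the parameters can be replaced at run time
# without recreating the reader.

class SenderFilter:
    def __init__(self, accepted_ids=None):
        # None acts as a wildcard: accept every sender until the
        # application has learned the real IDs.
        self.accepted_ids = accepted_ids

    def set_expression_parameters(self, accepted_ids):
        """Narrow (or widen) the filter at run time."""
        self.accepted_ids = accepted_ids

    def passes(self, sample):
        return self.accepted_ids is None or sample["senderId"] in self.accepted_ids


f = SenderFilter()                     # initially accepts everything
assert f.passes({"senderId": 1}) and f.passes({"senderId": 2})

f.set_expression_parameters({1})       # after learning the ID from the first sample
assert f.passes({"senderId": 1})
assert not f.passes({"senderId": 2})   # AP2's samples no longer reach reader 1
```

The key point is that only the parameter values change after creation; the reader and its filtered topic stay in place.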

                        Irwin

 

Gerardo Pardo

Another way to do this is to designate the ID field as the Topic Key, as in:

struct MyType {
    @key uint64 ID;
    /* other fields in the Type */
};  
 

When you do this, the HISTORY "KEEP_LAST" depth is applied to each value of the Key separately. So the DataReader in each application would retain the last sample sent by each application, and the sample sent by AP1 will not replace the one sent by AP2. The Reader can also use lookup_instance followed by calls to read_instance or take_instance to read/take only the samples from the application it is interested in.
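The per-key behaviour can be sketched with a small simulation of the reader's history cache (this models the semantics, not the DDS API). Un-keyed, a depth-1 KEEP_LAST queue holds one sample total, so AP2's write evicts AP1's; with ID as a @key, the depth applies per instance and both samples are retained:

```python
class ReaderQueue:
    """Toy model of a DataReader cache with KEEP_LAST history."""

    def __init__(self, depth, keyed):
        self.depth, self.keyed = depth, keyed
        self.history = {}              # instance -> list of samples

    def write(self, sample):
        # On an un-keyed topic every sample belongs to one instance;
        # on a keyed topic the key value selects the instance.
        instance = sample["ID"] if self.keyed else None
        samples = self.history.setdefault(instance, [])
        samples.append(sample)
        del samples[:-self.depth]      # KEEP_LAST: evict oldest beyond depth

    def samples(self):
        return [s for q in self.history.values() for s in q]


unkeyed = ReaderQueue(depth=1, keyed=False)
keyed = ReaderQueue(depth=1, keyed=True)
for q in (unkeyed, keyed):
    q.write({"ID": 1, "data": "from AP1"})
    q.write({"ID": 2, "data": "from AP2"})

assert [s["ID"] for s in unkeyed.samples()] == [2]         # AP1's sample was lost
assert sorted(s["ID"] for s in keyed.samples()) == [1, 2]  # both retained
```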

Note that in this case each DataReader would get both samples, whereas with the content filter the DataReader would only get the samples that pass the filter.

But even if you used a content filter I still think designating the ID as a key may be a better way to model the system. By doing this you are telling DDS that samples with different values of the ID should not replace each other; rather, they correspond to independent streams (or instances, in DDS speak) that should be managed separately.

Also, depending on your specific use-case there may be some other approaches that are even more efficient. For example, if your use case is that one DataReader always reads samples from AP1 and the other DataReader from AP2, then I would consider using the PARTITION QoS instead of putting the ID inside the sample data-type:

  • AP1 would create a Publisher and set the PARTITION Qos to "AP1" and then create the DataWriter inside that Publisher.
  • AP2 would create a Publisher and set the PARTITION Qos to "AP2" and then create the DataWriter inside that Publisher. 
  • The Application that creates the DataReaders would create 2 Subscribers and one DataReader on each Subscriber. Initially the Subscribers could be in PARTITION "*" or "AP*" so each would match both DataWriters. Once you decide that a DataReader wants to receive samples from one particular DataWriter, you switch the corresponding Subscriber to that PARTITION; that DataReader will un-match from the other DataWriter and receive samples only from the DataWriter it cares about.
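The partition-matching rules behind the steps above can be sketched as follows (a simulation of the matching semantics, not the DDS API — in real DDS you would change SubscriberQos.partition via set_qos(), and matching is re-evaluated on the fly). DDS partition names may contain fnmatch-style wildcards, which is what makes the initial "AP*" state work:

```python
from fnmatch import fnmatchcase

def matches(subscriber_partitions, publisher_partition):
    """A Subscriber matches a Publisher when any of its partition names
    (possibly containing wildcards) matches the Publisher's partition."""
    return any(fnmatchcase(publisher_partition, p) for p in subscriber_partitions)


sub1 = ["AP*"]                         # initial wildcard: match both writers
assert matches(sub1, "AP1") and matches(sub1, "AP2")

sub1 = ["AP1"]                         # after deciding: match only AP1's writer
assert matches(sub1, "AP1")
assert not matches(sub1, "AP2")        # the other writer un-matches
```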

With this approach you could even wait until you discover the DataWriters and see their partition (which is sent via Discovery) before you create the DataReader, so that the DataReader is created already in the right partition that matches that of the DataWriter you want to get samples from.