Based on the discussion in the comments, and without claiming absolute truth, here is how I would organize a service distributed across several host machines (service servers) that send data on request from a connected client:
We create a pool that stores the list of servers and can be updated from different threads, for example a volatile List&lt;T&gt; , where T is either a class describing a server, or simply a string holding the address for connecting to the service on a particular machine. Instead of a List you can use a Dictionary or Hashtable ( MSDN ), since a Hashtable rejects duplicate keys, and we do not want the same machine, i.e. the same service, listed twice. In any case, access to the list of available servers can be implemented with non-blocking synchronization (that is what I would do), or, conversely, the list can be locked for all threads except the one currently working with it. The job of the pool is to store the addresses of the currently available servers.
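A minimal sketch of such a pool, assuming we pick the hashtable-style route. Here ConcurrentDictionary stands in for the volatile List&lt;T&gt;: it is thread-safe (non-blocking reads) and, like a Hashtable, rejects duplicate keys, so the same server address cannot be listed twice. The class and member names are illustrative, not from the original post.

```csharp
using System.Collections.Concurrent;
using System.Linq;

// Thread-safe pool of currently available service addresses.
public static class ServerPool
{
    // Key: service address; value: unused placeholder (we only need set semantics).
    private static readonly ConcurrentDictionary<string, bool> _servers =
        new ConcurrentDictionary<string, bool>();

    // TryAdd returns false for a duplicate, so a server cannot appear twice.
    public static bool Add(string address)    => _servers.TryAdd(address, true);

    public static bool Remove(string address) => _servers.TryRemove(address, out _);

    // Snapshot of the current addresses, safe to iterate while others mutate the pool.
    public static string[] Current            => _servers.Keys.ToArray();
}
```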
Next, in the WCF service itself we add a very simple (and fast) method, for example bool Ping() , which just returns true and lets us check that the service (server) responds.
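A sketch of what that contract might look like; the interface name and the GetData payload method are assumptions for illustration, only Ping() comes from the text above.

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IDataService
{
    // Health check: the body is trivial, only the round trip matters.
    [OperationContract]
    bool Ping();

    // Hypothetical payload method standing in for "the data sent on request".
    [OperationContract]
    byte[] GetData(int id);
}

public class DataService : IDataService
{
    public bool Ping() => true;

    public byte[] GetData(int id) => new byte[0]; // real payload logic goes here
}
```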
In a separate thread, or as a Task ( MSDN ), we are fully entitled to implement a "watcher" for the servers that keeps our pool up to date, that is, removes "fallen off" servers from it and adds new ones. This can be done, for example, with WCF service discovery ( again on MSDN ).
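The watcher could be sketched roughly like this, using WCF ad-hoc discovery over UDP. It assumes a thread-safe ServerPool class with Add/Remove/Current members and an IDataService contract; both are hypothetical names for this example.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel.Discovery;
using System.Threading.Tasks;

static Task StartWatcher(TimeSpan interval) => Task.Run(async () =>
{
    while (true)
    {
        // Ad-hoc discovery: ask the network which IDataService endpoints exist.
        var client = new DiscoveryClient(new UdpDiscoveryEndpoint());
        FindResponse found = client.Find(new FindCriteria(typeof(IDataService)));
        client.Close();

        var live = new HashSet<string>(
            found.Endpoints.Select(e => e.Address.Uri.ToString()));

        // Add newly discovered servers...
        foreach (var addr in live)
            ServerPool.Add(addr);

        // ...and drop the ones that have "fallen off".
        foreach (var addr in ServerPool.Current)
            if (!live.Contains(addr))
                ServerPool.Remove(addr);

        await Task.Delay(interval);
    }
});
```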
Again in a separate thread or task, we implement the functionality that requests data from the service (server), for example on a timer, calling that same Ping() method first. If the service does not respond to the ping (we do not receive a true response within some timeout), we do not waste resources requesting data from that service, and our watcher will remove it from the list of live servers from the previous step.
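A sketch of that polling loop, including the error handler from the next step: ping first, request data only on success, and on a dropped connection simply give up on that server (the watcher will remove it from the pool). ServerPool, IDataService and GetData are the same hypothetical names as before; the binding and 30-second interval are assumptions.

```csharp
using System;
using System.ServiceModel;
using System.Threading;

static void PollServers(object state)
{
    foreach (var addr in ServerPool.Current)
    {
        var factory = new ChannelFactory<IDataService>(
            new BasicHttpBinding(), new EndpointAddress(addr));
        var proxy = factory.CreateChannel();
        try
        {
            if (proxy.Ping())                    // cheap health check first
            {
                byte[] data = proxy.GetData(42); // only then the real request
                // ... use the data ...
            }
            factory.Close();
        }
        catch (CommunicationException)
        {
            factory.Abort(); // connection dropped mid-call: skip this server
        }
        catch (TimeoutException)
        {
            factory.Abort(); // no response in time: the watcher will remove it
        }
    }
}

// Poll every 30 seconds.
var timer = new Timer(PollServers, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
```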
Since the connection may drop in the middle of a data transfer, we add an error handler that simply tells us the connection was interrupted, and that is all: the "fallen off" server will be removed from the pool by the "observer", and if it comes back online, it will be added there again by the same observer.
In principle, there are plenty of ways to optimize and modify this scheme. For example, if the number of servers is finite, you can ship the list of service hosts to the client as a file; then there is no need for Discovery: for such a list it is enough to store a single flag per host (whether the host at that address is online or not) and, depending on it, decide whether to request data.
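The file-based variant could be sketched like this; the file name, its one-address-per-line format, and the dictionary name are all assumptions.

```csharp
using System.Collections.Concurrent;
using System.IO;

// Fixed host list loaded from a file, with an "online" flag per host.
var online = new ConcurrentDictionary<string, bool>();
foreach (var addr in File.ReadAllLines("hosts.txt"))
    online[addr] = false; // unknown until the first successful Ping()

// After each Ping() attempt, record the result:
//   online[addr] = pingSucceeded;
// and request data only from hosts where online[addr] is true.
```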
In principle, the work could be organized differently: for example, the client machine could automatically instantiate a new client instance for each service instance, so each service gets a personal client. It is better not to do that, though, because it consumes resources, and you would still have to track which client works with which service, and handle errors on top of that.
Or you can do without the "observer" and update the pool directly from the method (or methods) that request data and run asynchronously in different threads; this is exactly where a volatile collection comes in handy.
In general, the options can be even more varied, especially if you remember that the service side can also do something when a client connects and disconnects.
The approach described above simply seemed to me the easiest and fastest to implement "head on". And once again, I do not claim absolute truth.