How to handle ENOBUFS

In some circumstances I'm getting a socket error ENOBUFS, which indicates that some buffer (probably related to sending or receiving packets) is full.

Is it safe to do the following when sending, or do I risk an infinite loop here? If this is unsafe, what else is one supposed to do? Retry with an increasing sleep delay a finite number of times and then disconnect?

ssize_t bytes = send(socket, string, length, 0);
while (bytes < 0 && errno == ENOBUFS)   /* errno is only meaningful after send() failed */
{
    tx_thread_sleep(10);
    bytes = send(socket, string, length, 0);
}

Also, when receiving, does this error mean that packets have been dropped because they weren't processed fast enough? If so, is it safe to ignore this error condition when calling recv()?

  • Hi ChrisS,

    When it comes to send(), ENOBUFS is set when there are not enough packets available in the packet pool. It depends on how your application makes use of the packet pool, but in any case I suggest making at most N attempts to send a packet and then failing.
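
    For example, a minimal sketch (MAX_SEND_ATTEMPTS, the sleep interval and the failure handling are placeholders to be tuned for your application):

        /* needs <errno.h> plus the BSD socket and ThreadX headers */
        #define MAX_SEND_ATTEMPTS 5

        ssize_t send_bounded(int sock, const char *buf, size_t len)
        {
            for (int i = 0; i < MAX_SEND_ATTEMPTS; i++)
            {
                ssize_t bytes = send(sock, buf, len, 0);
                if (bytes >= 0)
                    return bytes;        /* sent (handle partial writes in the caller) */
                if (errno != ENOBUFS)
                    return -1;           /* unrelated error: fail immediately */
                tx_thread_sleep(10);     /* let other threads release packets back to the pool */
            }
            return -1;                   /* pool stayed exhausted after N attempts: give up */
        }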

    I don't think recv() will ever cause an ENOBUFS error; I would expect EAGAIN if there is no packet available.

    Regards,
    adboc
  • In reply to adboc:

    Does this mean that my application can fail when I receive more packets than I can process, so that there are no packets available in the pool, or are incoming and outgoing packets handled in different pools? Is there any best practice for handling such a high-load situation?


    Edit: As far as I can tell this is the same packet pool that is configured in the configurator. I'm monitoring the available packets by printing g_packet_pool0.nx_packet_pool_available. Is there any reason why one would want to use more than one packet pool, e.g. one for the IP instance and one for BSD sockets? Apart from finer granularity in memory usage when different packet sizes are used, I cannot think of one.
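
    (A cleaner way to get the same numbers is probably nx_packet_pool_info_get instead of reading the struct member; a minimal sketch of what I have in mind:)

        ULONG total, free_count, empty_requests, empty_suspensions, invalid_releases;

        if (nx_packet_pool_info_get(&g_packet_pool0, &total, &free_count,
                                    &empty_requests, &empty_suspensions,
                                    &invalid_releases) == NX_SUCCESS)
        {
            /* empty_requests counts allocations that found the pool empty,
               a cheap indicator of how often the pool is under pressure */
            printf("pool: %lu/%lu free, %lu empty requests\n",
                   free_count, total, empty_requests);
        }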

  • In reply to ChrisS:

    Hi ChrisS,

    Yes, if the device receives valid packets, it can run out of available packets. Of course NetX will stop receiving (enqueuing) new packets once it has none available in the pool. In this case I suggest using a larger packet pool and trying to process each packet as soon as possible (and then releasing it).
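
    If you are using the native NetX API (the BSD layer releases received packets for you), the pattern is to copy the data out and release the packet immediately. A sketch (my_socket and my_buffer are placeholders):

        NX_PACKET *packet;
        ULONG bytes_copied;

        if (nx_tcp_socket_receive(&my_socket, &packet, NX_WAIT_FOREVER) == NX_SUCCESS)
        {
            /* copy the payload into application memory... */
            nx_packet_data_retrieve(packet, my_buffer, &bytes_copied);
            /* ...and return the packet to the pool right away */
            nx_packet_release(packet);
        }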

    The reason for using multiple packet pools is memory efficiency. Each packet pool contains packets of the same size. Sometimes an application requires large packets, but sometimes it only needs to send a few bytes. Using a 1000-byte packet for only a few bytes of data wastes a noticeable amount of memory.
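
    For example (a sketch; packet counts and payload sizes are placeholders):

        static UCHAR small_area[16 * (64 + sizeof(NX_PACKET))];
        static UCHAR large_area[8 * (1536 + sizeof(NX_PACKET))];
        NX_PACKET_POOL small_pool, large_pool;

        /* 64-byte payloads for short control messages... */
        nx_packet_pool_create(&small_pool, "small pool", 64,
                              small_area, sizeof(small_area));
        /* ...and full-MTU payloads for bulk data */
        nx_packet_pool_create(&large_pool, "large pool", 1536,
                              large_area, sizeof(large_area));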

    Please note that NetX has a packet chaining feature, so small packets can be chained to represent a larger amount of data. But obviously this will be slightly less efficient than storing all the data in one packet.
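
    You do not have to manage the chaining yourself; for example, nx_packet_data_append allocates and chains additional packets from the pool as needed (sketch; data and data_length are placeholders):

        NX_PACKET *packet;

        nx_packet_allocate(&g_packet_pool0, &packet, NX_TCP_PACKET, NX_WAIT_FOREVER);
        /* spreads the data across as many chained packets as required */
        nx_packet_data_append(packet, data, data_length,
                              &g_packet_pool0, NX_WAIT_FOREVER);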

    Regards,
    adboc
  • In reply to adboc:

    Hi adboc,

    To come back to this: if I need to send a larger amount of data and there is not enough space in the packet pool, the send will fail. Given that I have this data buffered somewhere else, should one try to send it in smaller chunks (with possibly lower performance) instead? If I wanted to use a larger packet pool I'd need to move it to SDRAM; I assume this is not possible when using the configurator, or only by write-protecting the modified generated files?

    Are there situations in which the packet pool might fill up with leaked packets that are never freed? Is there any reasonable strategy for finding such allocations when the packet data is encrypted?


    Best regards,
    Chris

  • In reply to ChrisS:

    Hi Chris-
    Given that you have the data buffered somewhere else, it would seem much more memory-efficient to NOT create another large buffer (and to avoid SDRAM use) and just use smaller buffers for transmission, taking advantage of automatic packet chaining. If it turns out you take too much of a performance hit, you could then look at using larger buffers. Anyway, that's the approach I would look at...

    As far as leaky buffers go: NetX is great at managing buffers internally; it's up to you to make sure you are managing buffers inside your application. I find that consistent error checks (defensive coding) and some robust testing always help 'plug' any leaks that might sneak through...
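
    One trick for hunting leaks (a sketch, not a built-in NetX facility; track_add()/track_remove() would be your own little table of live packets): funnel all of your application's allocations and releases through wrappers that remember the call site, then dump whatever is still outstanding when the pool runs low. Encrypted payloads don't matter then, since you're tracking pointers, not contents.

        UINT pkt_alloc(NX_PACKET_POOL *pool, NX_PACKET **pkt, ULONG type,
                       ULONG wait, const char *file, int line)
        {
            UINT status = nx_packet_allocate(pool, pkt, type, wait);
            if (status == NX_SUCCESS)
                track_add(*pkt, file, line);    /* hypothetical: record the allocation site */
            return status;
        }

        #define PKT_ALLOC(pool, pkt, type, wait) \
            pkt_alloc((pool), (pkt), (type), (wait), __FILE__, __LINE__)

        UINT pkt_release(NX_PACKET *pkt)
        {
            track_remove(pkt);                  /* hypothetical: forget the packet again */
            return nx_packet_release(pkt);
        }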

    Warren
  • In reply to WarrenM:

    Hi Warren,

    Using smaller transmissions is what I need to do right now to achieve reliable operation, but this comes at a cost in transfer speed, which I would like to maximize. Our protocol requires an acknowledgement of the transferred data from the server, so there is a delay involved between each transfer (on top of the TCP ACKs).

    Also, there is the general question of how to handle the case where the packet pool is nearly exhausted and this still occurs. That's why I was wondering if there is a method to detect packets in the pool that my application failed to free because of an error. Inspecting the packet contents themselves won't tell me where a packet was allocated (at least not without massive debug output of all packets and their allocation sites, if that is even possible).