We're still in our investigation phase, and we're interested in maximizing our MQTT receive packet size. This effort basically follows on from http://renesasrulz.com/synergy/f/synergy---forum/15788/receiving-mqtt-messages.
As part of this investigation, it came to my attention that if the message exceeds the size of an Ethernet frame (taking the MQTT header and so on into consideration), NetX is unable to handle the resulting two packets correctly on receipt. What worries me most is that after about four of these incorrectly handled packets are received, MQTT fails altogether and the (mosquitto) broker assumes a timeout-based disconnect. So, my questions are two-fold:
There are more details about how this plays out that I don't understand; I'll try to cover them below.
First, I have a dedicated MQTT Thread with a NetX Duo MQTT stack using the following settings:
For testing purposes, I've been using a 1450-character message.
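For context, that bring-up is roughly equivalent to the following plain NetX Duo MQTT client API calls (a minimal sketch; the client name, ID, stack and memory sizes, thread priority, keepalive, topic, and broker address here are placeholders, not the settings above):

/* Minimal bring-up sketch using the plain NetX Duo MQTT client API.
 * All names, sizes, and values here are illustrative placeholders. */
#include "nxd_mqtt_client.h"

#define MQTT_STACK_SIZE   2048
#define MQTT_MEMORY_SIZE  4096

static NXD_MQTT_CLIENT mqtt_client;
static UCHAR           mqtt_stack[MQTT_STACK_SIZE];
static UCHAR           mqtt_memory[MQTT_MEMORY_SIZE];

UINT mqtt_bring_up(NX_IP *ip_ptr, NX_PACKET_POOL *pool_ptr, NXD_ADDRESS *broker_ip)
{
    /* Create the client; it runs its own internal thread for receive processing. */
    UINT status = nxd_mqtt_client_create(&mqtt_client, "mqtt_client", "client_id", 9,
                                         ip_ptr, pool_ptr,
                                         mqtt_stack, MQTT_STACK_SIZE, 3,
                                         mqtt_memory, MQTT_MEMORY_SIZE);

    /* Connect to the (mosquitto) broker: 300 s keepalive, clean session. */
    if (status == NXD_MQTT_SUCCESS)
    {
        status = nxd_mqtt_client_connect(&mqtt_client, broker_ip, 1883,
                                         300, NX_TRUE, NX_WAIT_FOREVER);
    }

    /* Subscribe (QoS 0) to the topic the large test messages are published on. */
    if (status == NXD_MQTT_SUCCESS)
    {
        status = nxd_mqtt_client_subscribe(&mqtt_client, "test/topic", 10, 0);
    }

    return status;
}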
When this large message comes in, I can see _nxd_mqtt_packet_receive_process() run twice, once for each packet, but even though it evaluates both packets as packet_type MQTT_CONTROL_PACKET_TYPE_PUBLISH, it never enters _nxd_mqtt_process_publish(). Frustratingly, I don't feel I can fully trust the debugger here, because the relationships it draws between the assembly and the source don't always make sense. Regardless, I created breakpoints with log-and-resume actions on basically every source line that has an address associated with it (see the image below for an idea of the area I'm talking about). From those logs it looks like execution reaches the switch case where the publish should be handled, yet _nxd_mqtt_process_publish() is never entered.

If the 1450-character message is the first message the system receives, the first packet's packet_ptr->nx_packet_length is 1460 (i.e. the maximum that fits in an Ethernet TCP frame) and the second packet's length is the remainder of the message (4). From then on, every received message shows a length of 4, regardless of whether it fits in one packet or not; interestingly, though, single-packet messages do still seem to trigger all the appropriate functions and callbacks.
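For clarity, by "the appropriate functions and callbacks" I mean the application-side receive notify callback and message-get loop, roughly like this (a minimal sketch; buffer sizes and names are placeholders):

/* The notify callback is registered with nxd_mqtt_client_receive_notify_set()
 * and runs in the MQTT client's internal thread once a PUBLISH has been
 * fully processed. */
static UCHAR topic_buffer[64];
static UCHAR message_buffer[2048];

static void mqtt_receive_notify(NXD_MQTT_CLIENT *client_ptr, UINT message_count)
{
    (void) client_ptr;
    (void) message_count;
    /* Signal the application thread (semaphore/event flag) to drain messages. */
}

void mqtt_drain_messages(NXD_MQTT_CLIENT *client_ptr)
{
    UINT topic_length;
    UINT message_length;

    /* For the 1450-character test message this never returns anything,
     * because _nxd_mqtt_process_publish() is never reached. */
    while (nxd_mqtt_client_message_get(client_ptr,
                                       topic_buffer, sizeof(topic_buffer), &topic_length,
                                       message_buffer, sizeof(message_buffer), &message_length)
           == NXD_MQTT_SUCCESS)
    {
        /* Handle topic_buffer / message_buffer here. */
    }
}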
The example logs below consist mostly of source-line identifiers. Where they print 3 '\003', that is the packet_type (3 is MQTT_CONTROL_PACKET_TYPE_PUBLISH), and where they print bare numbers, that is packet_ptr->nx_packet_length. Lastly, bear in mind that the offset into g_packet_pool0_pool_memory depends somewhat randomly on other network traffic, so I'm not sure that the location being the same is noteworthy in any way.
Message size: 1444
_nxd_mqtt_packet_receive_process:1764
_nxd_mqtt_packet_receive_process:1776
_nxd_mqtt_packet_receive_process:1777
_nxd_mqtt_packet_receive_process:1780
_nxd_mqtt_packet_receive_process:1783
_nxd_mqtt_packet_receive_process:1786
3 '\003'
0x1ffe73a4 <g_packet_pool0_pool_memory+10300>
_nxd_mqtt_packet_receive_process:1794
3 '\003'
0x1ffe73a4 <g_packet_pool0_pool_memory+10300>
1458
Made it to _nxd_mqtt_process_publish()
Made it to _nxd_mqtt_process_publish()
_nxd_mqtt_packet_receive_process:1843
1458
_nxd_mqtt_packet_receive_process:1852
Message size: 1450 (first ever received message)
_nxd_mqtt_packet_receive_process:1764
_nxd_mqtt_packet_receive_process:1776
_nxd_mqtt_packet_receive_process:1777
_nxd_mqtt_packet_receive_process:1780
_nxd_mqtt_packet_receive_process:1783
_nxd_mqtt_packet_receive_process:1786
3 '\003'
0x1ffe8798 <g_packet_pool0_pool_memory+12360>
_nxd_mqtt_packet_receive_process:1794
3 '\003'
0x1ffe8798 <g_packet_pool0_pool_memory+12360>
1460
_nxd_mqtt_packet_receive_process:1764
_nxd_mqtt_packet_receive_process:1776
_nxd_mqtt_packet_receive_process:1777
_nxd_mqtt_packet_receive_process:1780
_nxd_mqtt_packet_receive_process:1783
_nxd_mqtt_packet_receive_process:1786
3 '\003'
0x1ffebfec <g_packet_pool0_pool_memory+26780>
_nxd_mqtt_packet_receive_process:1794
3 '\003'
0x1ffebfec <g_packet_pool0_pool_memory+26780>
4
Message size: 1450 (later received message)
_nxd_mqtt_packet_receive_process:1764
_nxd_mqtt_packet_receive_process:1776
_nxd_mqtt_packet_receive_process:1777
_nxd_mqtt_packet_receive_process:1780
_nxd_mqtt_packet_receive_process:1780
_nxd_mqtt_packet_receive_process:1783
_nxd_mqtt_packet_receive_process:1786
3 '\003'
0x1ffeb404 <g_packet_pool0_pool_memory+26780>
_nxd_mqtt_packet_receive_process:1794
3 '\003'
_nxd_mqtt_packet_receive_process:1794
3 '\003'
0x1ffeb404 <g_packet_pool0_pool_memory+26780>
0x1ffeb404 <g_packet_pool0_pool_memory+26780>
4
4
_nxd_mqtt_packet_receive_process:1764
_nxd_mqtt_packet_receive_process:1776
_nxd_mqtt_packet_receive_process:1777
_nxd_mqtt_packet_receive_process:1780
_nxd_mqtt_packet_receive_process:1783
_nxd_mqtt_packet_receive_process:1783
_nxd_mqtt_packet_receive_process:1786
3 '\003'
0x1ffeb404 <g_packet_pool0_pool_memory+26780>
_nxd_mqtt_packet_receive_process:1794
3 '\003'
_nxd_mqtt_packet_receive_process:1794
3 '\003'
0x1ffeb404 <g_packet_pool0_pool_memory+26780>
0x1ffeb404 <g_packet_pool0_pool_memory+26780>
4
4
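For what it's worth, the packet lengths in the logs above line up with simple arithmetic, assuming the standard 1500-byte Ethernet MTU:

1500 (MTU) - 20 (IP header) - 20 (TCP header) = 1460 bytes of TCP payload per frame
MQTT PUBLISH overhead here (fixed header, remaining length, topic length field, topic name) evidently comes to 14 bytes
1444-character message: 1444 + 14 = 1458 <= 1460, so a single packet (the 1458 above)
1450-character message: 1450 + 14 = 1464 > 1460, so 1460 bytes in the first packet and the remaining 4 bytes in a second packet (the 1460 and 4 above)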
In reply to JanetC:
Here is the state of the pools after the too-large messages wreak their havoc:
While I was trying to capture this, the empty requests value kept going up.
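For reference, the counters in that view can also be read at runtime with nx_packet_pool_info_get(); a minimal sketch, assuming the generated pool object is named g_packet_pool0 (to match the g_packet_pool0_pool_memory symbol in the logs above):

#include "nx_api.h"

extern NX_PACKET_POOL g_packet_pool0;  /* assumed name of the generated pool object */

void log_pool_state(void)
{
    ULONG total_packets, free_packets, empty_requests, empty_suspensions, invalid_releases;

    /* Reads the same counters the debugger's pool view shows. */
    if (nx_packet_pool_info_get(&g_packet_pool0,
                                &total_packets, &free_packets,
                                &empty_requests, &empty_suspensions,
                                &invalid_releases) == NX_SUCCESS)
    {
        /* empty_requests is the counter that kept climbing while I was
         * trying to capture this. */
    }
}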
In reply to elene.trull: