In the last post on Transitioning to ThreadX®, we looked at several techniques that can be used to minimize the memory usage of a ThreadX®-based application. In today’s post, we will examine several techniques that developers can use to maximize the performance of their applications.
First, a developer needs to recognize that a good RTOS will only use approximately two to four percent of the available CPU time. The overhead associated with the RTOS is pretty small, but it depends on how the developer configures the system tick frequency. By default, the ThreadX kernel interrupts execution every 10 milliseconds to run the scheduler and reevaluate the system. If the frequency is increased so that the system tick occurs every 1 millisecond or even every 100 microseconds, the overhead from internal scheduling and context switches will increase. Developers therefore want to use the lowest system tick frequency their application can tolerate. This will ensure that as much CPU processing time as possible is spent executing threads rather than the RTOS scheduler.
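As a rough sketch of what this looks like in practice: in many ThreadX ports the tick rate is a compile-time macro (the exact name and the file it lives in, typically tx_user.h or tx_port.h, vary by port, and the hardware timer must be programmed to match). The helper function below is purely illustrative.

```c
/* Sketch only: in many ThreadX ports the tick rate is configured via this
 * macro in tx_user.h or tx_port.h; name and location vary by port. */
#define TX_TIMER_TICKS_PER_SECOND  (100u)   /* default: one tick every 10 ms */

/* Illustrative helper: milliseconds of CPU time between scheduler
 * interrupts for a given tick rate. Halving the tick rate (e.g. to 50)
 * halves the kernel's periodic tick overhead, at the cost of coarser
 * timing granularity for sleeps and timeouts. */
static unsigned tick_period_ms(void)
{
    return 1000u / TX_TIMER_TICKS_PER_SECOND;
}
```

The trade-off to keep in mind is that every ThreadX timeout and sleep is expressed in ticks, so a slower tick also coarsens the application's timing resolution.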
Second, in the last post we discussed that it is desirable to use only as many threads as necessary in order to optimize memory usage. Thread minimization can also help improve performance. The more threads in an application, the more cycles the scheduler must spend determining which thread is the appropriate one to execute. This technique should also include minimizing the number of priority levels that can be assigned to a thread. The fewer values available, i.e. 0 – 31 versus 0 – 1023, the more efficiently the kernel will execute.
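In ThreadX the number of priority levels is itself a build-time setting, so this tip usually comes down to one configuration line (again, the macro normally lives in tx_user.h or tx_port.h; treat the snippet below as a sketch for your port):

```c
/* Sketch: ThreadX requires TX_MAX_PRIORITIES to be a multiple of 32,
 * between 32 and 1024. Fewer levels mean smaller priority bitmaps for
 * the scheduler to scan, i.e. faster thread selection. */
#define TX_MAX_PRIORITIES  (32u)   /* priorities 0 - 31 instead of 0 - 1023 */
```

With 32 levels, threads are created with priorities in the range 0 (highest) through 31 (lowest), which is more than enough for most embedded applications.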
I’ve often thought that event flags are one of the most underutilized kernel objects available to developers. It should be no surprise that using event flags can improve application performance. Developers will often default to a semaphore, which carries a control block and the code necessary to manage that block. Event flags are not much more than a memory location where each bit represents an event. On an ARM architecture, event flags are not only memory efficient but they also don’t have all the overhead and code that comes along with a semaphore. For this reason, event flags should be a third consideration for developers looking to increase their application’s performance.
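To make the "just bits in a word" point concrete, here is a minimal plain-C sketch of the underlying mechanism. The real ThreadX services are tx_event_flags_create(), tx_event_flags_set(), and tx_event_flags_get(); the bit names below are made up for illustration, and the sketch omits the blocking behavior the real services provide.

```c
/* Each bit of one 32-bit word represents a distinct event. This word is
 * essentially all the state an event-flags group carries, versus the full
 * control block and bookkeeping code behind a semaphore. Bit names are
 * illustrative, not part of any API. */
#define EVT_ADC_DONE    (1u << 0)
#define EVT_UART_RX     (1u << 1)
#define EVT_BUTTON      (1u << 2)

static unsigned long event_flags;   /* the "group": just a memory word */

/* Analogous to tx_event_flags_set(..., TX_OR): OR the new events in. */
static void flags_set(unsigned long f)
{
    event_flags |= f;
}

/* Analogous to tx_event_flags_get(..., TX_OR_CLEAR): return whichever of
 * the requested events are pending, clearing the ones consumed. */
static unsigned long flags_get_clear(unsigned long requested)
{
    unsigned long actual = event_flags & requested;
    event_flags &= ~actual;
    return actual;
}
```

In real code, tx_event_flags_get() can also suspend the calling thread until a requested bit is set; the sketch above shows only the bit arithmetic that makes the object so lightweight.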
Finally, developers who are new to using an RTOS will often run into bottlenecks and memory issues when using message queues. The reason is that they often try to pass large amounts of data through the message queue, which requires copying the original source into the queue and then notifying the receiving thread, which copies the data again into local memory. Obviously, this uses more memory, and all those copy operations can decrease the application’s performance. When the data being passed through the queue reaches 16 bytes or more, the developer is much better off just passing a pointer to the data. This removes the heavy copy operations and improves performance.
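The pattern is easy to see in a plain-C sketch. In ThreadX you would create the queue with a one-word message size and move data with tx_queue_send()/tx_queue_receive(); the one-slot "queue", the payload type, and the function names below are all hypothetical stand-ins used only to illustrate why the pointer version avoids the double copy.

```c
#include <stddef.h>

/* Hypothetical 64-byte payload: large enough that copying it into the
 * queue and out again on every message would be wasteful. */
typedef struct {
    unsigned long timestamp;
    unsigned char samples[56];
} sensor_block_t;

/* A one-slot stand-in for a ThreadX queue created with a one-word
 * (pointer-sized) message size. Only the pointer ever moves through the
 * queue; the 64-byte payload is never copied. */
static void *queue_slot;

static void queue_send_ptr(void *msg)     /* cf. tx_queue_send()    */
{
    queue_slot = msg;
}

static void *queue_receive_ptr(void)      /* cf. tx_queue_receive() */
{
    void *m = queue_slot;
    queue_slot = NULL;
    return m;
}
```

The usual caveat applies: once the producer sends the pointer, it must not reuse or free that buffer until the consumer is done with it, so pointer-passing designs typically pair the queue with a buffer pool that manages ownership.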
Following these tips and the techniques from the last post will help you develop applications that have smaller memory footprints and run fast and efficiently. In the next post we’ll start exploring software quality: how the Renesas Synergy™ Platform ensures quality software and what developers can do to leverage that foundation in their own applications.
Until next time,
Live long and profit!
Hot Tip of the Week
With all the news coverage of recent hacking attacks, you may be thinking you should look into implementing secure communications for your IoT device. The basic building blocks for all secure communications are standard cryptographic functions for authentication, encryption, and decryption. The Synergy Platform implements all of the popular cryptographic functions, and you can see some of these functions in action with the Getting Started with Cryptography Application Note and Application Project available here: https://www.renesas.com/en-us/software/D6001093.html
The application project demonstrates several key cryptographic functions you will be interested in: