FileX file fragmentation

Hi.

We have a problem writing two files to an SD card: after several writes, reading the files becomes very slow. The cause is that the files become fragmented.
Since there are only two files, I tried allocating one of them at its maximum size at boot, but it still fragments when I update it. How can I solve this?

Best Regards

Paolo

  • Hi Paolo,

    It seems that no one has responded to this yet. Were you able to solve this issue?

    JB
    RenesasRulz Forum Moderator

    https://renesasrulz.com/
    https://academy.renesas.com/
    https://en-us.knowledgebase.renesas.com/

  • In reply to JB:

    Hi.
    No. I'm tied up with other problems now. If I can't find a solution I will have to drop down to a low level and bypass the file system :(

    Paolo
  • In reply to Paolo Miatto:

    Hi Paolo,

    Sorry, I hadn't seen this one. These are the suggestions from our FileX developer:

    A couple of things to try:

    1. With FileX, the customer can pre-allocate clusters to get things contiguous and hopefully faster performing, e.g. with fx_file_allocate (see the sketch after this list)
    2. They should try giving the FileX fx_media_open call more memory, for a larger internal cache in FileX
    3. They can increase the cache sizes for FileX
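
    A minimal sketch of suggestions 1 and 2, assuming an SD media driver named sd_driver, a file called LOG.BIN and the sizes shown below (all of those are placeholders for the real application); the FileX calls themselves (fx_media_open, fx_file_create, fx_file_open, fx_file_allocate) are the standard API:

        #include "fx_api.h"

        /* Assumed names and sizes -- adjust to the real application. */
        #define MEDIA_CACHE_SIZE   (32 * 1024)          /* larger cache, per suggestion 2 */
        #define LOG_FILE_SIZE      (1024UL * 1024UL)    /* fixed-size log file */

        extern VOID sd_driver(FX_MEDIA *media_ptr);     /* hypothetical SD media driver */

        static FX_MEDIA  sd_media;
        static FX_FILE   log_file;
        static UCHAR     media_cache[MEDIA_CACHE_SIZE];

        UINT prepare_log_file(void)
        {
            UINT status;

            /* Suggestion 2: pass fx_media_open a larger memory area so FileX
               can keep more FAT/directory sectors in its internal cache. */
            status = fx_media_open(&sd_media, "SD", sd_driver, FX_NULL,
                                   media_cache, sizeof(media_cache));
            if (status != FX_SUCCESS)
                return status;

            /* Create the file once; FX_ALREADY_CREATED is fine on later boots. */
            status = fx_file_create(&sd_media, "LOG.BIN");
            if ((status != FX_SUCCESS) && (status != FX_ALREADY_CREATED))
                return status;

            status = fx_file_open(&sd_media, &log_file, "LOG.BIN", FX_OPEN_FOR_WRITE);
            if (status != FX_SUCCESS)
                return status;

            /* Suggestion 1: pre-allocate the clusters up front so later writes
               land in the already-allocated (ideally contiguous) region. */
            status = fx_file_allocate(&log_file, LOG_FILE_SIZE);

            return status;
        }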

    Regards,
    Janet
    Express Logic, Inc
  • In reply to JanetC:

    Hi Janet.
    Thanks for the reply, but unfortunately these suggestions do not help (I had already tried them).
    Since I made the log file with a fixed size, I created it as the first file; I wrongly assumed that overwriting it would keep it in the same position, but this doesn't happen :(. So the data file, which grows over time, ends up fragmented.
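
    For reference, a sketch of the overwrite-in-place update described above (the media handle sd_media and the file name LOG.BIN are placeholders; the FileX calls are the standard API):

        #include "fx_api.h"

        extern FX_MEDIA sd_media;   /* media already opened elsewhere */

        /* Hypothetical helper: rewrite the fixed-size log from offset 0,
           without truncating or re-creating the file. */
        UINT rewrite_log(VOID *data, ULONG size)
        {
            FX_FILE  log_file;
            UINT     status;

            status = fx_file_open(&sd_media, &log_file, "LOG.BIN", FX_OPEN_FOR_WRITE);
            if (status != FX_SUCCESS)
                return status;

            /* Overwrite the existing bytes from the start of the file. */
            status = fx_file_seek(&log_file, 0);
            if (status == FX_SUCCESS)
                status = fx_file_write(&log_file, data, size);

            fx_file_close(&log_file);
            fx_media_flush(&sd_media);   /* push cached sectors out to the card */

            return status;
        }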

    I know that the only solution is to use physical sectors.

    Paolo