Asynchronous I/O
TargetFAT and TargetXFS provide an optional buffering mechanism for maintaining streaming throughput despite intermittent I/O delays, such as from flash block erasures. Asynchronous I/O (AIO) support transparently adds a task and RAM buffers between the file system API and the rest of the file system. The following are two typical use cases for AIO buffering:
- MP3 streaming is disturbed by writing a cell phone control message. MP3 files need buffering so that the music is not interrupted by these writes.
- A camera needs to support a multi-shot sequence that generates data faster than the file system can absorb it. Writes to the image file must be buffered so a specified multi-shot rate can be achieved.
The global configuration definition FILE_AIO_INC must be TRUE to enable AIO. To use AIO for a particular file, either use the ‘a’ mode character with fopen() or the O_BUFFERED flag with open().
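As a sketch, the two documented ways to request AIO buffering might look as follows. The path "/sd0/track1.mp3" and the combination of the 'a' character with a standard mode string are assumptions for illustration; O_BUFFERED and the 'a' mode character are TargetFAT/TargetXFS extensions, not standard C or POSIX.

```c
#include <stdio.h>
#include <fcntl.h>

void open_with_aio(void)
{
    /* Method 1: fopen() with the 'a' AIO mode character appended to
     * the usual mode string (exact combination is an assumption). */
    FILE *fp = fopen("/sd0/track1.mp3", "ra");

    /* Method 2: open() with the vendor O_BUFFERED flag. */
    int fid = open("/sd0/track1.mp3", O_RDONLY | O_BUFFERED);

    /* ... stream from fp or fid ... */

    fclose(fp);
    close(fid);
}
```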
By default, three buffers are shared between the application and the AIO task. This allows the application to read from one buffer while the AIO task fills a second buffer, with a third buffer providing additional margin between mismatched I/O rates or intermittent delays. The scenario is similar for writing.
The default buffer size is the cluster size of the underlying volume. The buffer count and size parameters are separate for each volume and can be changed by the following call:
int aio_set(const char *path, int num_bufs, int clusts_per_buf);
The three parameters are:
- A path that specifies the volume to be buffered. This can be the root directory name or the path to any file or directory on the volume.
- The number of buffers to use (must be at least 2).
- The buffer size, expressed as the number of clusters per buffer.
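A call configuring the parameters above might be sketched as follows. The mount point "/sd0" and the 0-on-success return convention are assumptions; only the prototype of aio_set() comes from this document.

```c
/* Sketch: give the volume containing "/sd0" four AIO buffers of
 * two clusters each. Return-value convention is an assumption. */
if (aio_set("/sd0", 4, 2) != 0) {
    /* configuration failed; fall back to unbuffered I/O */
}
```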
The key to ensuring that AIO buffering performs adequately is allocating enough buffers, but the required number is application and system dependent, determined by I/O rates and worst-case delays. To find the proper number of buffers via 'tuning', the following call is provided:
int aio_high(int fid);
aio_high() returns the maximum number of buffers queued since the file was opened. For streaming reads, this is the maximum number of buffers that were filled by the AIO task but not yet read by the application. For streaming writes, this is the maximum number of buffers filled by application writes that were not yet written to the file system.
Tuning is done by initially allocating an overly large number of buffers, running test applications, and using aio_high() to learn how many buffers were actually required. For example, if 50 buffers are allocated via aio_set() but all tests pass without buffer underrun and aio_high() returns 23, then the aio_set() allocation can be safely reduced to 23.
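The tuning procedure above can be sketched in code. The path, the stream_test() workload helper, and the error handling are hypothetical; aio_set() and aio_high() are the calls documented in this section.

```c
/* Sketch of buffer-count tuning:
 * 1. Over-allocate buffers for the volume.
 * 2. Run the worst-case streaming workload.
 * 3. Query the high-water mark and trim the allocation to it. */
aio_set("/sd0", 50, 1);              /* deliberately generous */

int fid = open("/sd0/test.dat", O_RDONLY | O_BUFFERED);
stream_test(fid);                    /* hypothetical worst-case workload */
int peak = aio_high(fid);            /* max buffers ever queued */
close(fid);

aio_set("/sd0", peak, 1);            /* reduce to what was actually needed */
```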
Some additional points:
AIO works for both reads and writes. The first application read or write unblocks the AIO task and starts the asynchronous transfers. The AIO task blocks after the read or write finishes, or if a non-read or non-write call is made on the file.
If an application read is attempted while an AIO write is in progress, the read fails with errno set to EINVAL. Writes are buffered as explained above. Any other call blocks until the write completes. Similarly, if an AIO read is in progress, application writes fail with errno set to EINVAL.
The priority of the created AIO task is one less than the priority of the application task making the read or write call. This allows the application task to preempt the AIO task whenever there is room in the buffer (for a write) or waiting data (for a read), preventing buffer underruns.
When the application performs an AIO write, the data is written to a buffer and success is indicated to the application immediately. But when the AIO task later writes to the file system, an error could occur. Some looseness in coupling is inevitable, but if an error occurs the error code is stored in the file control block, where it can be read by calling ferror(), and the AIO write task terminates, causing later application write calls to fail.
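Because write success is reported before the data reaches the media, an application should check ferror() after streaming. A minimal sketch, assuming a hypothetical path, frame buffer, and sizes:

```c
/* Sketch: detect a deferred AIO write error after streaming.
 * fwrite() returns success once data is buffered, so errors the
 * AIO task hits later are only visible through ferror(). */
FILE *fp = fopen("/sd0/capture.raw", "wa");   /* 'a' = AIO buffering */

for (int i = 0; i < NUM_FRAMES; i++)          /* NUM_FRAMES, frame[],   */
    fwrite(frame[i], FRAME_SIZE, 1, fp);      /* FRAME_SIZE hypothetical */

if (ferror(fp)) {
    /* A deferred write failed; the error code was stored in the file
     * control block and subsequent writes to this file also fail. */
    report_error();                           /* hypothetical handler */
}
fclose(fp);
```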
The default AIO task stack size is 2KB. That has proved adequate in tests, but could depend on your environment and driver implementation, and should be changed if needed.
Two sample applications, buf_rd and buf_wr, that demonstrate using AIO buffering for reading and writing, respectively, are available from Blunk Microsystems.