Filesystem¶
Simple filesystem read/write is achieved using the uv_fs_* functions and the uv_fs_t struct.
Note
The libuv filesystem operations are different from socket operations. Socket operations use the non-blocking operations provided by the operating system. Filesystem operations use blocking functions internally, but invoke these functions in a thread pool and notify watchers registered with the event loop when application interaction is required.
Note
The fs operations are actually a part of libeio on Unix systems. libeio is a separate library written by the author of libev.
All filesystem functions have two forms - synchronous and asynchronous.
The synchronous forms are used (and block) when no callback is specified. The return value of these functions is the equivalent Unix return value (usually 0 on success, -1 on error).
The asynchronous form is used when a callback is passed, and the return value is 0.
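For example, the same call can be made either way. Here is a minimal sketch (not part of the sample code) contrasting the two forms of uv_fs_unlink(), using the same pre-1.0 API as the rest of this chapter; the filename is just a placeholder:

#include <stdio.h>
#include <uv.h>

uv_fs_t unlink_req;

/* Asynchronous form: the callback fires later; result carries the
   Unix-style return value. */
void on_unlink(uv_fs_t *req) {
    if (req->result == -1)
        fprintf(stderr, "async unlink failed: %d\n", req->errorno);
    uv_fs_req_cleanup(req);
}

int main() {
    /* Synchronous form: no callback, so the call blocks and the result is
       available immediately in the request. */
    uv_fs_t sync_req;
    uv_fs_unlink(uv_default_loop(), &sync_req, "scratch.txt", NULL);
    fprintf(stderr, "sync unlink result: %ld\n", (long) sync_req.result);
    uv_fs_req_cleanup(&sync_req);

    /* Asynchronous form: returns 0 right away; on_unlink runs from the loop. */
    uv_fs_unlink(uv_default_loop(), &unlink_req, "scratch.txt", on_unlink);
    uv_run(uv_default_loop());
    return 0;
}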
Reading/Writing files¶
A file descriptor is obtained using
int uv_fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, int mode, uv_fs_cb cb)
flags and mode are standard Unix flags. libuv takes care of converting to the appropriate Windows flags.
File descriptors are closed using
int uv_fs_close(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb)
Filesystem operation callbacks have the signature:
void callback(uv_fs_t* req);
Let’s see a simple implementation of cat. We start with registering a callback for when the file is opened:
uvcat/main.c - opening a file
void on_open(uv_fs_t *req) {
    if (req->result != -1) {
        uv_fs_read(uv_default_loop(), &read_req, req->result,
                   buffer, sizeof(buffer), -1, on_read);
    }
    else {
        fprintf(stderr, "error opening file: %d\n", req->errorno);
    }
    uv_fs_req_cleanup(req);
}
The result field of a uv_fs_t is the file descriptor in case of the uv_fs_open callback. If the file is successfully opened, we start reading it.
Warning
The uv_fs_req_cleanup() function must be called to free internal memory allocations in libuv.
uvcat/main.c - read callback
void on_read(uv_fs_t *req) {
    uv_fs_req_cleanup(req);
    if (req->result < 0) {
        fprintf(stderr, "Read error: %s\n", uv_strerror(uv_last_error(uv_default_loop())));
    }
    else if (req->result == 0) {
        uv_fs_t close_req;
        // synchronous
        uv_fs_close(uv_default_loop(), &close_req, open_req.result, NULL);
    }
    else {
        uv_fs_write(uv_default_loop(), &write_req, 1, buffer, req->result, -1, on_write);
    }
}
In the case of a read call, you should pass an initialized buffer which will be filled with data before the read callback is triggered.
In the read callback, the result field is 0 at end-of-file, -1 on error, and the number of bytes read on success.
Here you see a common pattern when writing asynchronous programs: the uv_fs_close() call is performed synchronously. Tasks which are one-off, or are done as part of the startup or shutdown stages, are usually performed synchronously, since we only care about fast I/O while the program is going about its primary task and dealing with multiple I/O sources. For such solo tasks the performance difference is negligible, and the synchronous form leads to simpler code.
As a general pattern, the actual return value of the underlying system call is stored in uv_fs_t.result.
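For example, a one-off directory creation during startup might reasonably be done synchronously. A sketch (ours, not part of uvcat; the directory name is arbitrary):

#include <stdio.h>
#include <uv.h>

int main() {
    uv_fs_t mkdir_req;
    /* No callback, so this blocks; the mkdir(2)-style return value lands in
       mkdir_req.result. */
    uv_fs_mkdir(uv_default_loop(), &mkdir_req, "output", 0755, NULL);
    if (mkdir_req.result == -1)
        fprintf(stderr, "mkdir failed: %d\n", mkdir_req.errorno);
    uv_fs_req_cleanup(&mkdir_req);
    return 0;
}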
Filesystem writing is similarly simple using uv_fs_write(). Your callback will be triggered after the write is complete. In our case the callback simply drives the next read. Thus read and write proceed in lockstep via callbacks.
uvcat/main.c - write callback
void on_write(uv_fs_t *req) {
    uv_fs_req_cleanup(req);
    if (req->result < 0) {
        fprintf(stderr, "Write error: %s\n", uv_strerror(uv_last_error(uv_default_loop())));
    }
    else {
        uv_fs_read(uv_default_loop(), &read_req, open_req.result, buffer, sizeof(buffer), -1, on_read);
    }
}
Note
The error usually stored in errno can be accessed from uv_fs_t.errorno, but converted to a standard UV_* error code. There is currently no way to directly extract a string error message from the errorno field.
Warning
Due to the way filesystems and disk drives are configured for performance, a write that ‘succeeds’ may not be committed to disk yet. See uv_fs_fsync for stronger guarantees.
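As a sketch (the helper and callback names are ours, not part of uvcat), an fsync can be queued on the same descriptor once the final write has completed:

#include <stdio.h>
#include <uv.h>

uv_fs_t fsync_req;

void on_fsync(uv_fs_t *req) {
    if (req->result == -1)
        fprintf(stderr, "fsync failed: %d\n", req->errorno);
    uv_fs_req_cleanup(req);
}

/* Call this once the last write callback has fired, passing the descriptor
   that was written to (open_req.result in uvcat). */
void flush_to_disk(uv_file fd) {
    uv_fs_fsync(uv_default_loop(), &fsync_req, fd, on_fsync);
}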
We set the dominoes rolling in main():
uvcat/main.c
int main(int argc, char **argv) {
    uv_fs_open(uv_default_loop(), &open_req, argv[1], O_RDONLY, 0, on_open);
    uv_run(uv_default_loop());
    return 0;
}
Filesystem operations¶
All the standard filesystem operations like unlink, rmdir, stat are supported asynchronously and have intuitive argument order. They follow the same patterns as the read/write/open calls, returning the result in the uv_fs_t.result field. The full list:
Filesystem operations
typedef enum {
  /* ... */
  UV_FS_FSYNC,
  UV_FS_FDATASYNC,
  UV_FS_UNLINK,
  UV_FS_RMDIR,
  UV_FS_MKDIR,
  UV_FS_RENAME,
  UV_FS_READDIR,
  UV_FS_LINK,
  UV_FS_SYMLINK,
  UV_FS_READLINK,
  UV_FS_CHOWN,
  UV_FS_FCHOWN
} uv_fs_type;

/* uv_fs_t is a subclass of uv_req_t */
struct uv_fs_s {
  UV_REQ_FIELDS
  uv_fs_type fs_type;
  uv_loop_t* loop;
  uv_fs_cb cb;
  ssize_t result;
  void* ptr;
  char* path;
  int errorno;
  UV_FS_PRIVATE_FIELDS
};
UV_EXTERN void uv_fs_req_cleanup(uv_fs_t* req);
UV_EXTERN int uv_fs_close(uv_loop_t* loop, uv_fs_t* req, uv_file file,
    uv_fs_cb cb);
UV_EXTERN int uv_fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path,
    int flags, int mode, uv_fs_cb cb);
UV_EXTERN int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, uv_file file,
    void* buf, size_t length, int64_t offset, uv_fs_cb cb);
UV_EXTERN int uv_fs_unlink(uv_loop_t* loop, uv_fs_t* req, const char* path,
    uv_fs_cb cb);
UV_EXTERN int uv_fs_write(uv_loop_t* loop, uv_fs_t* req, uv_file file,
    void* buf, size_t length, int64_t offset, uv_fs_cb cb);
UV_EXTERN int uv_fs_mkdir(uv_loop_t* loop, uv_fs_t* req, const char* path,
    int mode, uv_fs_cb cb);
UV_EXTERN int uv_fs_rmdir(uv_loop_t* loop, uv_fs_t* req, const char* path,
    uv_fs_cb cb);
UV_EXTERN int uv_fs_readdir(uv_loop_t* loop, uv_fs_t* req,
    const char* path, int flags, uv_fs_cb cb);
UV_EXTERN int uv_fs_stat(uv_loop_t* loop, uv_fs_t* req, const char* path,
    uv_fs_cb cb);
UV_EXTERN int uv_fs_fstat(uv_loop_t* loop, uv_fs_t* req, uv_file file,
    uv_fs_cb cb);
UV_EXTERN int uv_fs_rename(uv_loop_t* loop, uv_fs_t* req, const char* path,
    const char* new_path, uv_fs_cb cb);
UV_EXTERN int uv_fs_fsync(uv_loop_t* loop, uv_fs_t* req, uv_file file,
    uv_fs_cb cb);
UV_EXTERN int uv_fs_fdatasync(uv_loop_t* loop, uv_fs_t* req, uv_file file,
    uv_fs_cb cb);
UV_EXTERN int uv_fs_ftruncate(uv_loop_t* loop, uv_fs_t* req, uv_file file,
    int64_t offset, uv_fs_cb cb);
UV_EXTERN int uv_fs_sendfile(uv_loop_t* loop, uv_fs_t* req, uv_file out_fd,
    uv_file in_fd, int64_t in_offset, size_t length, uv_fs_cb cb);
UV_EXTERN int uv_fs_chmod(uv_loop_t* loop, uv_fs_t* req, const char* path,
    int mode, uv_fs_cb cb);
Buffers and Streams¶
The basic I/O tool in libuv is the stream (uv_stream_t). TCP sockets, UDP sockets, and pipes for file I/O and IPC are all treated as stream subclasses.
Streams are initialized using custom functions for each subclass, then operated upon using
int uv_read_start(uv_stream_t*, uv_alloc_cb alloc_cb, uv_read_cb read_cb);
int uv_read_stop(uv_stream_t*);
int uv_write(uv_write_t* req, uv_stream_t* handle,
    uv_buf_t bufs[], int bufcnt, uv_write_cb cb);
The stream-based functions are simpler to use than the filesystem ones: once uv_read_start() is called, libuv keeps reading from the stream until uv_read_stop() is called.
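For example, a read can be paused from inside its own read callback. A sketch using the same callback signatures as the uvtee example below; the LIMIT threshold and total counter are ours:

#include <stdlib.h>
#include <uv.h>

#define LIMIT (64 * 1024)

static size_t total = 0;

uv_buf_t alloc_cb(uv_handle_t *handle, size_t suggested_size) {
    return uv_buf_init((char*) malloc(suggested_size), suggested_size);
}

void read_cb(uv_stream_t *stream, ssize_t nread, uv_buf_t buf) {
    if (nread > 0) {
        total += nread;
        /* Once we have seen enough data, stop the automatic reads; no
           further read_cb calls happen until uv_read_start() is called
           again on this stream. */
        if (total >= LIMIT)
            uv_read_stop(stream);
    }
    if (buf.base)
        free(buf.base);
}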
The discrete unit of data is the buffer – uv_buf_t. This is simply a collection of a pointer to bytes (uv_buf_t.base) and the length (uv_buf_t.len). The uv_buf_t is lightweight and passed around by value. What does require management is the actual bytes, which have to be allocated and freed by the application.
To demonstrate streams we will use uv_pipe_t. This allows streaming local files [2]. Here is a simple tee utility using libuv. Doing all operations asynchronously shows the power of evented I/O. The two writes won't block each other, but we have to be careful to copy the buffer data so that we don't free a buffer until it has been written.
The program is to be executed as:
./uvtee <output_file>
We start off by opening pipes on the files we require. libuv pipes to a file are opened as bidirectional by default.
uvtee/main.c - read on pipes
int main(int argc, char **argv) {
    loop = uv_default_loop();

    uv_pipe_init(loop, &stdin_pipe, 0);
    uv_pipe_open(&stdin_pipe, 0);

    uv_pipe_init(loop, &stdout_pipe, 0);
    uv_pipe_open(&stdout_pipe, 1);

    uv_fs_t file_req;
    int fd = uv_fs_open(loop, &file_req, argv[1], O_CREAT | O_RDWR, 0644, NULL);
    uv_pipe_init(loop, &file_pipe, 0);
    uv_pipe_open(&file_pipe, fd);

    uv_read_start((uv_stream_t*)&stdin_pipe, alloc_buffer, read_stdin);

    uv_run(loop);
    return 0;
}
The third argument of uv_pipe_init() should be set to 1 for IPC using named pipes. This is covered in Processes. The uv_pipe_open() call associates the file descriptor with the file.
We start monitoring stdin. The alloc_buffer callback is invoked as new buffers are required to hold incoming data. read_stdin will be called with these buffers.
uvtee/main.c - reading buffers
uv_buf_t alloc_buffer(uv_handle_t *handle, size_t suggested_size) {
    return uv_buf_init((char*) malloc(suggested_size), suggested_size);
}

void read_stdin(uv_stream_t *stream, ssize_t nread, uv_buf_t buf) {
    if (nread == -1) {
        if (uv_last_error(loop).code == UV_EOF) {
            uv_close((uv_handle_t*)&stdin_pipe, NULL);
            uv_close((uv_handle_t*)&stdout_pipe, NULL);
            uv_close((uv_handle_t*)&file_pipe, NULL);
        }
    }
    else {
        if (nread > 0) {
            write_data((uv_stream_t*)&stdout_pipe, nread, buf, on_stdout_write);
            write_data((uv_stream_t*)&file_pipe, nread, buf, on_file_write);
        }
    }

    if (buf.base)
        free(buf.base);
}
The standard malloc is sufficient here, but you can use any memory allocation scheme. For example, node.js uses its own slab allocator which associates buffers with V8 objects.
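As a sketch of an alternative scheme (ours, not node's slab allocator), the alloc callback could hand out slices of a single static buffer. This is only safe if the data is copied or fully consumed inside the read callback, as uvtee's write_data() does, and the buffer must then not be free()d:

#include <uv.h>

/* One statically allocated chunk handed out for every read. */
static char slab[65536];

uv_buf_t alloc_from_slab(uv_handle_t *handle, size_t suggested_size) {
    size_t len = suggested_size < sizeof(slab) ? suggested_size : sizeof(slab);
    /* The read callback must copy or consume the data before returning and
       must not free() slab. */
    return uv_buf_init(slab, len);
}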
The read callback's nread parameter is -1 on any error. That error might be EOF, in which case we close all the streams using the generic close function uv_close(), which deals with the handle based on its internal type. Otherwise nread is a non-negative number and we can attempt to write that many bytes to the output streams. Finally, remember that buffer allocation and deallocation is the application's responsibility, so we free the data.
uvtee/main.c - Write to pipe
typedef struct {
    uv_write_t req;
    uv_buf_t buf;
} write_req_t;

void free_write_req(uv_write_t *req) {
    write_req_t *wr = (write_req_t*) req;
    free(wr->buf.base);
    free(wr);
}

void on_stdout_write(uv_write_t *req, int status) {
    free_write_req(req);
}

void on_file_write(uv_write_t *req, int status) {
    free_write_req(req);
}

void write_data(uv_stream_t *dest, size_t size, uv_buf_t buf, uv_write_cb callback) {
    write_req_t *req = (write_req_t*) malloc(sizeof(write_req_t));
    req->buf = uv_buf_init((char*) malloc(size), size);
    memcpy(req->buf.base, buf.base, size);
    uv_write((uv_write_t*) req, (uv_stream_t*)dest, &req->buf, 1, callback);
}
write_data() makes a copy of the buffer obtained from read. Again, this buffer does not get passed through to the callback triggered on write completion. To get around this we wrap a write request and a buffer in write_req_t and unwrap it in the callbacks.
Warning
If your program is meant to be used with other programs it may knowingly or unknowingly be writing to a pipe. This makes it susceptible to aborting on receiving a SIGPIPE. It is a good idea to insert:
signal(SIGPIPE, SIG_IGN)
in the initialization stages of your application.
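A sketch of where such a call fits (plain signal(2), nothing libuv-specific):

#include <signal.h>
#include <uv.h>

int main() {
    signal(SIGPIPE, SIG_IGN);   /* don't abort if a reader closes the pipe early */
    /* ... initialize handles and start watchers here ... */
    uv_run(uv_default_loop());
    return 0;
}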
File change events¶
All modern operating systems provide APIs to put watches on individual files or directories and be informed when the files are modified. libuv wraps common file change notification libraries [1]. This is one of the more inconsistent parts of libuv. File change notification systems are themselves extremely varied across platforms so getting everything working everywhere is difficult. To demonstrate, I’m going to build a simple utility which runs a command whenever any of the watched files change:
./onchange <command> <file1> [file2] ...
The file change notification is started using uv_fs_event_init():
onchange/main.c
while (argc-- > 2) {
    fprintf(stderr, "Adding watch on %s\n", argv[argc]);
    uv_fs_event_init(loop, (uv_fs_event_t*) malloc(sizeof(uv_fs_event_t)), argv[argc], run_command, 0);
}
The third argument is the actual file or directory to monitor. The last argument, flags, can be:
UV_FS_EVENT_WATCH_ENTRY = 1,
UV_FS_EVENT_STAT = 2
but both are currently unimplemented on all platforms.
Warning
You will in fact raise an assertion error if you pass any flags. So stick to 0.
The callback will receive the following arguments:
- uv_fs_event_t *handle - The watcher. The filename field of the watcher is the file on which the watch was set.
- const char *filename - If a directory is being monitored, this is the file which was changed. Only non-null on Linux and Windows. May be null even on those platforms.
- int flags - one of UV_RENAME or UV_CHANGE.
- int status - Currently 0.
In our example we simply print the arguments and run the command using system().
onchange/main.c - file change notification callback
void run_command(uv_fs_event_t *handle, const char *filename, int events, int status) {
    fprintf(stderr, "Change detected in %s: ", handle->filename);
    if (events == UV_RENAME)
        fprintf(stderr, "renamed");
    if (events == UV_CHANGE)
        fprintf(stderr, "changed");

    fprintf(stderr, " %s\n", filename ? filename : "");

    system(command);
}
[1] | inotify on Linux, kqueue on BSDs, ReadDirectoryChangesW on Windows, event ports on Solaris, unsupported on Cygwin |
[2] | see Pipes |