Unix Domain Sockets vs Loopback TCP Sockets

Two processes communicating on a single machine have a few options: regular TCP sockets, UDP sockets, Unix domain sockets, or shared memory. A recent project I was working on used Node.js with two communicating processes on the same machine. I wanted to reduce the machine's CPU utilization, so I ran a few experiments comparing the efficiency of Unix domain sockets and TCP sockets over the loopback interface. This post covers my experiments and test results.

First off, a disclaimer: this test is not exhaustive. Both the client and the server are written in Node.js, so the results can only be as efficient as the Node.js runtime allows.

All code in this post is available at: github.com/nicmcd/uds_vs_tcp

Server Application

I created a simple Node.js server application that can be connected to via a TCP socket or a Unix domain socket. It simply echoes all received messages back to the sender. Here is the code:

var assert = require('assert');
assert(process.argv.length == 4, 'node server.js <tcp port> <domain socket path>');

var net = require('net');

var tcpPort = parseInt(process.argv[2]);
assert(!isNaN(tcpPort), 'bad TCP port');
console.log('TCP port: ' + tcpPort);

var udsPath = process.argv[3];
console.log('UDS path: ' + udsPath);

function createServer(name, portPath) {
    var server = net.createServer(function(socket) {
        console.log(name + ' server connected');
        socket.on('end', function() {
            console.log(name + ' server disconnected');
        });
        // tell the client it may start sending, then echo every
        // received byte back to it
        socket.write('start sending now!');
        socket.pipe(socket);
    });
    // net.Server.listen() accepts either a TCP port number or a Unix
    // domain socket path
    server.listen(portPath, function() {
        console.log(name + ' server listening on ' + portPath);
    });
    return server;
}

var tcpServer = createServer('TCP', tcpPort);
var udsServer = createServer('UDS', udsPath);
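
One gotcha: if the server exits uncleanly, the socket file at /tmp/uds is left behind and the next listen() fails with EADDRINUSE. A small guard placed before the createServer() calls avoids this; a minimal sketch using Node's fs module:

var fs = require('fs');

// remove a stale socket file from a previous run, otherwise
// server.listen(udsPath) fails with EADDRINUSE
if (fs.existsSync(udsPath)) {
    fs.unlinkSync(udsPath);
}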

Client Application

The client application complements the server application. It connects to the server via TCP or a Unix domain socket, then sends a fixed number of identically sized, randomly generated packets in a ping-pong fashion (each packet is sent after the previous echo returns), and measures the time it takes to finish. When complete, it prints the elapsed time and exits. Here is the code:

var assert = require('assert');
assert(process.argv.length == 5, 'node client.js <port or path> <packet size> <packet count>');

var net = require('net');
var crypto = require('crypto');

// interpret the first argument as a TCP port if it parses as a
// number, otherwise as a Unix domain socket path
var options;
if (!isNaN(parseInt(process.argv[2])))
    options = {port: parseInt(process.argv[2])};
else
    options = {path: process.argv[2]};
console.log('options: ' + JSON.stringify(options));

var packetSize = parseInt(process.argv[3]);
assert(!isNaN(packetSize), 'bad packet size');
console.log('packet size: ' + packetSize);

var packetCount = parseInt(process.argv[4]);
assert(!isNaN(packetCount), 'bad packet count');
console.log('packet count: ' + packetCount);

var client = net.connect(options, function() {
    console.log('client connected');
});

var printedFirst = false;
// one random payload, reused for every packet
var packet = crypto.randomBytes(packetSize).toString('base64').substring(0, packetSize);
var currPacketCount = 0;
var startTime;
var endTime;
var delta;
client.on('data', function(data) {
    if (!printedFirst) {
        // the first chunk is the server's greeting, not an echo
        console.log('client received: ' + data);
        printedFirst = true;
    } else {
        currPacketCount += 1;
        // stream chunks can be split or coalesced, so this can differ
        // from the size that was written
        if (data.length != packetSize)
            console.log('weird packet size: ' + data.length);
    }

    if (currPacketCount < packetCount) {
        if (currPacketCount == 0) {
            // start the clock just before the first packet is sent
            startTime = process.hrtime();
        }
        client.write(packet);
    } else {
        client.end();
        // convert hrtime's [seconds, nanoseconds] to milliseconds
        endTime = process.hrtime(startTime);
        delta = (endTime[0] * 1e9 + endTime[1]) / 1e6;
        console.log('millis: ' + delta);
    }
});
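
One caveat on the accounting above: stream sockets (both TCP and Unix domain) deliver bytes, not messages, so a single 'data' event need not correspond to exactly one echoed packet; that's what the 'weird packet size' message flags. A more careful client would count bytes instead of events. Here is an untested sketch of that approach, reusing packet, packetSize, and packetCount from above:

var packetsSent = 0;
var bytesEchoed = 0;
var startTime;

client.on('data', function(data) {
    if (packetsSent == 0) {
        // the first chunk is the server's greeting; start the clock
        // and send the first packet
        startTime = process.hrtime();
        client.write(packet);
        packetsSent = 1;
        return;
    }
    bytesEchoed += data.length;
    // send the next packet each time a full echo has come back,
    // regardless of how the echo was chunked
    while (bytesEchoed >= packetsSent * packetSize &&
           packetsSent < packetCount) {
        client.write(packet);
        packetsSent += 1;
    }
    if (bytesEchoed >= packetCount * packetSize) {
        client.end();
        var endTime = process.hrtime(startTime);
        console.log('millis: ' + ((endTime[0] * 1e9 + endTime[1]) / 1e6));
    }
});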

Running a Single Test

First start the server application with:

node server.js 5555 /tmp/uds

This starts the server listening on TCP port 5555 and on the Unix domain socket /tmp/uds. (If a previous run left a stale /tmp/uds file behind, delete it first or the listen will fail.)
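
On startup the server prints its configuration and then the two listening callbacks fire (their order can vary):

TCP port: 5555
UDS path: /tmp/uds
TCP server listening on 5555
UDS server listening on /tmp/uds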

Now we can run the client application to get some statistics. Let's first try the TCP socket. Run the client with:

node client.js 5555 1000 100000
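
The client prints its configuration, the server's greeting, and finally the elapsed time; decimals are trimmed from the final line here:

options: {"port":5555}
packet size: 1000
packet count: 100000
client connected
client received: start sending now!
millis: 8006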

This runs the client application using TCP port 5555, sending 100,000 packets of 1000 bytes each. As the last line shows, it took 8006 milliseconds on my machine. We can now try the Unix domain socket with:

node client.js /tmp/uds 1000 100000

This runs the client the same as before, except over the /tmp/uds Unix domain socket instead of the TCP socket. On my machine this took 3570 milliseconds. For 1000-byte packets, then, the Unix domain socket was about 2.2x faster than the loopback TCP socket (8006 / 3570 ≈ 2.2).

At this point you might be completely convinced that Unix domain sockets are better and you'll use them whenever you can. That's too easy. Let's run the client application a whole bunch of times and graph the results.

I recently posted about a Python package I created for running many tasks and aggregating the data. I thought this socket comparison would make a good example.

Running the Full Test

As mentioned, running the full test uses the Taskrun Python package (available at github.com/nicmcd/taskrun). The script I quickly hacked together to run the client application and parse the results is as follows:

import taskrun
import os

POWER = 15
RUNS = 10
PACKETS_PER_RUN = 100000

manager = taskrun.Task.Manager(
    numProcs = 1,
    showCommands = True,
    runTasks = True,
    showProgress = True)

DIR = "sims"
# note: 'rm -rI' prompts before deleting; use 'rm -rf' for a fully
# unattended run
mkdir = manager.task_new('dir', 'rm -rI ' + DIR + '; mkdir ' + DIR)

def makeName(stype, size, run):
    return stype + '_size' + str(size) + '_run' + str(run)

def makeCommand(port_or_path, size, name):
    return 'node client.js ' + port_or_path + ' ' + str(size) + ' ' + str(PACKETS_PER_RUN) + \
        ' | grep millis | awk \'{printf "%s, ", $2}\' > ' + os.path.join(DIR, name)

barrier1 = manager.task_new('barrier1', 'sleep 0')
for exp in range(0, POWER):
    size = pow(2, exp)
    for run in range(0, RUNS):
        # Unix domain socket test
        name = makeName('uds', size, run)
        task = manager.task_new(name, makeCommand('/tmp/uds', size, name))
        task.dependency_is(mkdir)
        barrier1.dependency_is(task)

        # TCP socket test
        name = makeName('tcp', size, run)
        task = manager.task_new(name, makeCommand('5555', size, name))
        task.dependency_is(mkdir)
        barrier1.dependency_is(task)

# create CSV header
filename = os.path.join(DIR, 'uds_vs_tcp.csv')
header = 'NAME, '
for run in range(0, RUNS):
    header += 'RUN ' + str(run) + ', '
hdr_task = manager.task_new('CSV header', 'echo \'' + header + '\' > ' + filename)
hdr_task.dependency_is(barrier1)

# UDS to CSV
cmd = ''
for exp in range(0, POWER):
    size = pow(2, exp)
    cmd += 'echo -n \'UDS Size ' + str(size) + ', \' >> ' + filename + '; '
    for run in range(0, RUNS):
        name = makeName('uds', size, run)
        cmd += 'cat ' + os.path.join(DIR, name) + ' >> ' + filename + '; '
    cmd += 'echo \'\' >> ' + filename + '; '
uds_task = manager.task_new('UDS to CSV', cmd)
uds_task.dependency_is(hdr_task)

# TCP to CSV
cmd = ''
for exp in range(0, POWER):
    size = pow(2, exp)
    cmd += 'echo -n \'TCP Size ' + str(size) + ', \' >> ' + filename + '; '
    for run in range(0, RUNS):
        name = makeName('tcp', size, run)
        cmd += 'cat ' + os.path.join(DIR, name) + ' >> ' + filename + '; '
    cmd += 'echo \'\' >> ' + filename + '; '
tcp_task = manager.task_new('TCP to CSV', cmd)
tcp_task.dependency_is(uds_task)

manager.run_request_is()

Admittedly, this isn't the prettiest code to look at, but it gets the job done. For both Unix domain sockets and TCP sockets, it runs the client application with every power-of-2 packet size from 1 to 16384 bytes. Each setup is run 10 times, and each test result is written to its own file. After all the tests have run, the script gathers the results into a single CSV file, which can then be imported into a spreadsheet application for analysis.
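
For reference, the generated CSV has a header row plus one row per socket type and packet size, with one column per run (timing values elided here):

NAME, RUN 0, RUN 1, ..., RUN 9,
UDS Size 1, <millis>, <millis>, ..., <millis>,
UDS Size 2, <millis>, <millis>, ..., <millis>,
...
TCP Size 16384, <millis>, <millis>, ..., <millis>,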

Results

I ran this on a machine with an Intel E5-2620 v2 processor and 16GB of RAM. I imported the CSV into Excel, averaged the 10 runs of each setup, and graphed the results. The first graph shows execution time versus packet size on logarithmic axes.

Execution Time vs. Packet Size

The results shown here are fairly predictable: the Unix domain sockets are always faster, with the benefit generally in the 2-3x range. After noticing some odd ups and downs in the graph, I decided to generate a second graph with the execution times normalized to the TCP execution time.

Relative Execution Time vs Packet Size

I'm not exactly sure why the efficiency advantage of Unix domain sockets varies with packet size the way it does, but it is always there. The overall win makes sense: Unix domain sockets bypass the operating system's network stack entirely, with no TCP/IP framing, acknowledgments, or flow-control bookkeeping. The kernel simply copies the data from the sending application's socket buffer into the receiving application's socket buffer.