GigE Vision SDK || libfsock w/ GVA

NEIO Systems, Ltd.
Jan 22, 2021


There is a shift in the industry toward cameras that provide data over a Gigabit Ethernet interface. With resolutions of 4K and beyond, 10Gbit and even 25Gbit Ethernet are becoming popular. One reason is that the network infrastructure has become very attractive from a price point of view; ease of use also pushes this trend forward, e.g. cabling can span hundreds of meters.

libfsock Media extensions

With our FastSockets API we introduced a concept that overcomes the disadvantages of traditional networking: kernel bypass, the essential ingredient for any fast communication beyond 1Gbit.

For media data coming in (e.g. images from a 50-megapixel camera), we need an approach that handles this data efficiently. The following picture describes, in an abstract way, the data flow for camera images ready to be visualized with an image viewer:

Zero Copy Image Processing into Application Buffers

Depending on the connectivity, this allows combining several sources into one adapter.

The middleware detects the GigE Vision protocol version (v1 vs. v2) automatically and delivers images to the application with zero copy. The operation is dropless: if frames go missing, e.g. due to switch congestion, this is reported. All of this is achieved with very low CPU utilization. It is far superior to filter drivers, which are a band-aid compared to our architecturally different design. Our approach not only allows faster network speeds, it also handles the ingress of multiple cameras, e.g. 4 cameras at 10GbE bundled into 2x25GbE.

For subscribing to camera frames we use the following code:

/* we allocate a GVA endpoint */
rc = fsock_open(&sin.sin_addr, FSOCK_GVA, &gva_dev);
/* depending on image fps and resolution, request an alternate RING BUF SIZE (default 128MB) */
rc = fsock_set_ep_attribute(gva_dev, FSOCK_RINGBUF_SIZE, &ring_size, sizeof(ring_size));
/* create a channel for the given 'port' */
rc = fsock_bind(gva_dev, FSOCK_BIND_DEFAULT, port, NULL, &gva_chan);
struct ip_mreq mreq; /* standard structures, e.g. mcast */
rc = fsock_setopt(gva_chan, FSOCK_JOIN_MCAST, &mreq, sizeof(mreq));
/* allocate internal buffers to describe a received block */
fsock_gva_alloc_type_t gva_alloc;
rc = fsock_setopt(gva_chan, FSOCK_GVA_ALLOC, &gva_alloc, sizeof(gva_alloc));
/* if needed, specify an RX timeout for the channel */
rc = fsock_setopt(gva_chan, FSOCK_RX_TIMEOUT, &block_timeout, sizeof(block_timeout));
/* wait for data blocks, optionally OR in the in-place option */
rc = fsock_recv(gva_chan, FSOCK_RECV_DEFAULT | FSOCK_RECV_GVABLOCK_INPLACE, &gva_blk, sizeof(gva_blk), &rxinfo);
/* re-queue this block */
rc = fsock_setopt(gva_chan, FSOCK_GVA_QUEUE_BUF, &gva_blk.gva_buf, sizeof(gva_buf_t));

With this event-driven approach we see low CPU utilization at maximum bandwidth, as follows:

CPU Load per *Core* vs Aggregated Bandwidth for 8 streams

libFSOCK Feature Set for GigE Vision Acceleration


For Real

Our test application for grabbing GVSP blocks in action: receiving 1000 blocks (images) with zero CPU load. This is syncing live traffic from a 10GbE device.



NEIO Systems, Ltd. || low latency, networking experts, 10GbE++, FPGA trading, Linux and Windows internals gurus