fasthttp #

The fasthttp module is a high-performance HTTP server library for V that provides low-level socket management and non-blocking I/O.

Features

  • High Performance: Uses platform-specific I/O multiplexing:
    • epoll on Linux for efficient connection handling
    • kqueue on macOS and BSD for high-performance event notification
  • Non-blocking I/O: Handles multiple concurrent connections efficiently
  • Simple API: Easy-to-use request handler pattern
  • Cross-platform: Supports Linux, macOS, FreeBSD, OpenBSD, NetBSD and DragonFly

Installation

The module is part of the standard V library. Import it in your V code:

import fasthttp

Quick Start

Here's a minimal HTTP server example:

import fasthttp

fn handle_request(req fasthttp.HttpRequest) !fasthttp.HttpResponse {
    path := req.buffer[req.path.start..req.path.start + req.path.len].bytestr()

    mut body := ''
    mut status_line := ''

    if path == '/' {
        body = 'Hello, World!\n'
        status_line = 'HTTP/1.1 200 OK'
    } else {
        body = '${path} not found\n'
        status_line = 'HTTP/1.1 404 Not Found'
    }

    headers := [
        status_line,
        'Content-Type: text/plain',
        'Content-Length: ${body.len}',
        'Connection: close',
    ]
    header_string := headers.join('\r\n')

    return fasthttp.HttpResponse{
        content: '${header_string}\r\n\r\n${body}'.bytes()
    }
}

fn main() {
    mut server := fasthttp.new_server(fasthttp.ServerConfig{
        port:    3000
        handler: handle_request
    }) or {
        eprintln('Failed to create server: ${err}')
        return
    }

    println('Server listening on http://localhost:3000')
    server.run() or { eprintln('error: ${err}') }
}

API Reference

HttpRequest Struct

Represents an incoming HTTP request.

Fields:

  • buffer: []u8 - The raw request buffer containing the complete HTTP request
  • method: Slice - The HTTP method (GET, POST, etc.)
  • path: Slice - The request path
  • version: Slice - The HTTP version (e.g., "HTTP/1.1")
  • client_conn_fd: int - Internal socket file descriptor

Slice Struct

Represents a slice of the request buffer.

Fields:

  • start: int - Starting index in the buffer
  • len: int - Length of the slice

Usage:

method := req.buffer[req.method.start..req.method.start + req.method.len].bytestr()
path := req.buffer[req.path.start..req.path.start + req.path.len].bytestr()

Request Handler Pattern

The handler function receives an HttpRequest and must return either:

  • HttpResponse - the response to send; the bytes in its content field are written to the client
  • An error if processing failed

The handler should extract the method and path from the request buffer and route accordingly.

Example:

fn my_handler(req fasthttp.HttpRequest) !fasthttp.HttpResponse {
    method := req.buffer[req.method.start..req.method.start + req.method.len].bytestr()
    path := req.buffer[req.path.start..req.path.start + req.path.len].bytestr()

    match method {
        'GET' {
            if path == '/' {
                return fasthttp.HttpResponse{
                    content: 'Home page'.bytes()
                }
            }
        }
        'POST' {
            if path == '/api/data' {
                return fasthttp.HttpResponse{
                    content: 'Data received'.bytes()
                }
            }
        }
        else {}
    }

    return fasthttp.HttpResponse{
        content: '404 Not Found'.bytes()
    }
}

Response Format

Response bytes are carried in the content field of the returned HttpResponse. The server sends them directly to the client, so the handler is responsible for any status line and headers it wants the client to see.

// Simple text response
return fasthttp.HttpResponse{
    content: 'Hello, World!'.bytes()
}

// HTML response
return fasthttp.HttpResponse{
    content: '<html><body>Hello</body></html>'.bytes()
}

// JSON response
return fasthttp.HttpResponse{
    content: '{"message": "success"}'.bytes()
}
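Since the handler owns the full response bytes, a small helper can assemble the status line, headers, and body in one place. build_response below is an illustrative helper, not part of the module:

```v
// build_response is a hypothetical convenience helper that assembles a
// raw HTTP/1.1 response from a status, a content type, and a body.
fn build_response(status string, content_type string, body string) []u8 {
	headers := [
		'HTTP/1.1 ${status}',
		'Content-Type: ${content_type}',
		'Content-Length: ${body.len}',
		'Connection: close',
	]
	head := headers.join('\r\n')
	// Headers and body are separated by a blank line (CRLF CRLF).
	return '${head}\r\n\r\n${body}'.bytes()
}
```

A handler can then return fasthttp.HttpResponse{ content: build_response('200 OK', 'application/json', '{"ok":true}') }.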

Example

See the complete example in examples/fasthttp/ for a more detailed server implementation with multiple routes and controllers.

./v examples/fasthttp
./examples/fasthttp/fasthttp

Platform Support

  • Linux: Uses epoll for high-performance I/O multiplexing
  • macOS, FreeBSD, OpenBSD, NetBSD, DragonFly BSD: Use kqueue for event notification
  • Windows: Currently not supported

Performance Considerations

  • The fasthttp module is designed for high throughput and low latency
  • Handler functions should be efficient; blocking operations will affect other connections
  • Use V's spawn within handlers if you need to perform long-running operations without blocking the I/O loop
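For example, a handler can hand a slow task to spawn and return immediately, so the event loop keeps servicing other connections. log_request below is a hypothetical helper, not part of the module:

```v
import fasthttp

// log_request is a hypothetical slow task (e.g. writing to a log sink).
fn log_request(path string) {
	println('visited: ${path}')
}

// The handler returns at once; the spawned thread finishes on its own.
fn handler(req fasthttp.HttpRequest) !fasthttp.HttpResponse {
	path := req.buffer[req.path.start..req.path.start + req.path.len].bytestr()
	spawn log_request(path)
	return fasthttp.HttpResponse{
		content: 'HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok'.bytes()
	}
}
```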

Request-scoped allocation with -prealloc

When an application is compiled with -prealloc, fasthttp starts a scoped prealloc arena for each request before decoding the HTTP request and before calling the request handler. All V allocations made by the request parser, the handler, and code called by the handler use that request arena while the handler is running.

The arena is freed as a unit after the response no longer needs request-owned data. On Linux the normal response path sends the response synchronously, then ends the request arena. On macOS and BSD the response buffer can be kept by the connection until kqueue finishes writing it; in that case fasthttp detaches the scope from the request thread and frees it after the write completes.

This means request-local V allocations are cheap bump-pointer allocations, and freeing them does not require walking individual objects. Startup state, server state, and allocations made directly by C libraries are not part of a request arena.

If a handler starts V spawn work while the request scope is active, the generated thread wrapper retains that scope until the spawned function returns; void spawned functions also run inside their own scoped arena, which is freed at thread exit.

Manual takeover responses transfer ownership to user code and currently abandon the request arena, so long-lived takeover handlers should manage their own allocation lifetime explicitly.

To inspect request arena usage while developing, build with:

v -prealloc -d trace_prealloc run .

Notes

  • HTTP headers are currently not parsed; the entire request is available in the buffer
  • Only the request method, path, and version are parsed automatically
  • Response status codes and headers must be manually constructed if needed
  • The module provides low-level access for maximum control and performance
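Because headers are not parsed for you, a handler can scan the raw buffer itself when it needs one. A minimal sketch; find_header is an illustrative helper, not part of the module:

```v
// find_header scans raw HTTP request bytes for a header by
// (case-insensitive) name and returns its trimmed value, or none.
fn find_header(buffer []u8, name string) ?string {
	text := buffer.bytestr()
	// The header block ends at the first blank line.
	head := text.all_before('\r\n\r\n')
	for line in head.split('\r\n') {
		if !line.contains(':') {
			continue // skip the request line
		}
		if line.all_before(':').trim_space().to_lower() == name.to_lower() {
			return line.all_after(':').trim_space()
		}
	}
	return none
}
```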

fn decode_http_request #

fn decode_http_request(buffer []u8) !HttpRequest

decode_http_request parses a raw HTTP request from the given byte buffer
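This makes it possible to parse a captured request buffer outside the server loop, for example in tests. A minimal sketch (the request bytes are arbitrary sample data):

```v
import fasthttp

fn main() {
	raw := 'GET /users HTTP/1.1\r\nHost: example.com\r\n\r\n'.bytes()
	req := fasthttp.decode_http_request(raw) or {
		eprintln('parse error: ${err}')
		return
	}
	// The parsed Slices index back into the original buffer.
	method := req.buffer[req.method.start..req.method.start + req.method.len].bytestr()
	path := req.buffer[req.path.start..req.path.start + req.path.len].bytestr()
	println('${method} ${path}')
}
```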

fn new_server #

fn new_server(config ServerConfig) !&Server

new_server creates and initializes a new Server instance.

fn parse_http1_request_line #

fn parse_http1_request_line(mut req HttpRequest) !int

parse_http1_request_line parses the request line of an HTTP/1.1 request (spec: https://datatracker.ietf.org/doc/rfc9112/). The request-line is the start-line for requests. According to RFC 9112, it is structured as: request-line = method SP request-target SP HTTP-version, where method is the HTTP method (e.g., GET, POST), SP is a single space character, request-target is the path or resource being requested, and HTTP-version is the version of HTTP being used (e.g., HTTP/1.1). The line is terminated by CRLF (a carriage return followed by a line feed). Returns the position after the CRLF on success.

fn ResponseTakeoverMode.from #

fn ResponseTakeoverMode.from[W](input W) !ResponseTakeoverMode

enum ResponseTakeoverMode #

enum ResponseTakeoverMode {
	none
	manual
	reusable
}

struct HttpRequest #

struct HttpRequest {
pub mut:
	buffer         []u8 // A V slice of the read buffer for convenience
	method         Slice
	path           Slice
	version        Slice
	header_fields  Slice
	body           Slice
	client_conn_fd int
	user_data      voidptr // User-defined context data
}

HttpRequest represents an HTTP request. TODO make fields immutable

struct HttpResponse #

struct HttpResponse {
pub mut:
	content       []u8
	file_path     string
	takeover_mode ResponseTakeoverMode
	should_close  bool // if true, close the connection after sending (Connection: close)
	// content_owned lets the backend free or move content after it has been sent.
	content_owned bool
	// request_arena is a prealloc scope handle that must be freed after sending.
	request_arena voidptr
}
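For instance, a handler can set should_close so the connection is torn down once the bytes in content have been written. A minimal sketch (the response payload is arbitrary):

```v
import fasthttp

// Hypothetical handler that asks the backend to close the connection
// after the response is sent, mirroring the Connection: close header.
fn close_after(req fasthttp.HttpRequest) !fasthttp.HttpResponse {
	return fasthttp.HttpResponse{
		content:      'HTTP/1.1 200 OK\r\nConnection: close\r\nContent-Length: 2\r\n\r\nok'.bytes()
		should_close: true
	}
}
```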

struct Server #

struct Server {
pub:
	family                  net.AddrFamily = .ip6
	port                    int            = 3000
	max_request_buffer_size int            = 8192
	timeout_in_seconds      int            = 30
	user_data               voidptr
mut:
	listen_fds      []int    = []int{len: max_thread_pool_size, cap: max_thread_pool_size, init: -1}
	epoll_fds       []int    = []int{len: max_thread_pool_size, cap: max_thread_pool_size, init: -1}
	threads         []thread = []thread{len: max_thread_pool_size, cap: max_thread_pool_size}
	request_handler fn (HttpRequest) !HttpResponse @[required]
	running         &stdatomic.AtomicVal[bool] = stdatomic.new_atomic(false)
	shutting_down   &stdatomic.AtomicVal[bool] = stdatomic.new_atomic(false)
	stopped         &stdatomic.AtomicVal[bool] = stdatomic.new_atomic(true)
	active_requests &stdatomic.AtomicVal[int]  = stdatomic.new_atomic(0)
}

fn (Server) handle #

fn (s &Server) handle() ServerHandle

handle returns a reusable handle for waiting on or shutting down the server.

fn (Server) run #

fn (mut server Server) run() !

run starts the server and begins listening for incoming connections.

struct ServerConfig #

struct ServerConfig {
pub:
	family                  net.AddrFamily = .ip6
	port                    int            = 3000
	max_request_buffer_size int            = 8192
	timeout_in_seconds      int            = 30
	handler                 fn (HttpRequest) !HttpResponse @[required]
	user_data               voidptr
}

ServerConfig bundles the parameters needed to start a fasthttp server.

struct ServerHandle #

struct ServerHandle {
	ptr voidptr
}

ServerHandle exposes lifecycle controls for a running fasthttp.Server.

fn (ServerHandle) wait_till_running #

fn (h ServerHandle) wait_till_running(params WaitTillRunningParams) !int

wait_till_running waits until the server transitions to its serving state.

fn (ServerHandle) shutdown #

fn (h ServerHandle) shutdown(params ShutdownParams) !

shutdown gracefully stops accepting new requests and waits for active requests to finish.
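Combining handle, wait_till_running, and shutdown, a server can be run in the background and stopped gracefully. A sketch; the port and timeout values are arbitrary:

```v
import fasthttp
import time

fn handler(req fasthttp.HttpRequest) !fasthttp.HttpResponse {
	return fasthttp.HttpResponse{
		content: 'HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok'.bytes()
	}
}

fn main() {
	mut server := fasthttp.new_server(fasthttp.ServerConfig{
		port:    3000
		handler: handler
	})!
	handle := server.handle()
	// Run the blocking event loop in its own thread.
	spawn fn [mut server] () {
		server.run() or { eprintln('server error: ${err}') }
	}()
	// Block until the server is actually accepting connections.
	handle.wait_till_running()!
	// ... serve traffic ...
	// Stop accepting new requests; wait up to 5s for in-flight ones.
	handle.shutdown(timeout: 5 * time.second)!
}
```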

struct ShutdownParams #

@[params]
struct ShutdownParams {
pub:
	timeout         time.Duration = time.infinite
	retry_period_ms int           = 10
}

ShutdownParams configures how long graceful shutdown should wait for in-flight requests.

struct Slice #

struct Slice {
pub:
	start int
	len   int
}

struct WaitTillRunningParams #

@[params]
struct WaitTillRunningParams {
pub:
	max_retries     int = 100
	retry_period_ms int = 10
}

WaitTillRunningParams allows parametrizing the calls to ServerHandle.wait_till_running().