
net.s3 #

net.s3 is an S3-compatible client written in pure V.

It speaks the AWS Signature Version 4 protocol on top of crypto.hmac and crypto.sha256, with no third-party dependencies. The same client targets any S3-compatible endpoint (hosted or self-hosted) by configuring the right endpoint, region and credentials.

Quick start

import net.s3

c := s3.new_client(s3.Credentials{
    endpoint:          'https://s3.example.com'
    access_key_id:     '...'
    secret_access_key: '...'
    bucket:            'my-bucket'
})

c.put('hello.txt', 'Hi from V!'.bytes())!
text := c.get_string('hello.txt')!
url := c.presign('hello.txt', expires_in: 3600)!

Credentials from the environment

Credentials.from_env() resolves each field from the first non-empty provider-specific environment variable, so the same code works against AWS, self-hosted and managed S3 services without reconfiguration:

import net.s3

c := s3.new_client(s3.Credentials.from_env())

Supported variables (first non-empty wins per field):

  • key id: S3_ACCESS_KEY_ID, AWS_ACCESS_KEY_ID, CELLAR_ADDON_KEY_ID, SCW_ACCESS_KEY, B2_APPLICATION_KEY_ID, R2_ACCESS_KEY_ID, SPACES_KEY
  • secret: S3_SECRET_ACCESS_KEY, AWS_SECRET_ACCESS_KEY, CELLAR_ADDON_KEY_SECRET, SCW_SECRET_KEY, B2_APPLICATION_KEY, R2_SECRET_ACCESS_KEY, SPACES_SECRET
  • session token: S3_SESSION_TOKEN, AWS_SESSION_TOKEN
  • region: S3_REGION, AWS_REGION, AWS_DEFAULT_REGION, SCW_DEFAULT_REGION
  • bucket: S3_BUCKET
  • endpoint: S3_ENDPOINT, AWS_ENDPOINT, AWS_ENDPOINT_URL, CELLAR_ADDON_HOST, B2_ENDPOINT, R2_ENDPOINT, SPACES_ENDPOINT
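As an illustration, wiring the client to a generic endpoint only needs the S3_* family exported before the program starts (every value below is a placeholder, including the `main.v` filename):

```shell
# Placeholder values — substitute your own endpoint, keys and bucket.
export S3_ENDPOINT=https://s3.example.com
export S3_ACCESS_KEY_ID=AKIAEXAMPLE
export S3_SECRET_ACCESS_KEY=secret-example
export S3_REGION=us-east-1
export S3_BUCKET=my-bucket
v run main.v
```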

Multipart upload

upload_file automatically picks single-shot or multipart based on file size. For finer control, start_multipart returns a stateful MultipartUploader that streams chunks generated on the fly:

import net.s3

c := s3.new_client(s3.Credentials.from_env())
c.upload_file('big.bin', '/path/to/big.bin', s3.PutOptions{
    content_type: 'application/octet-stream'
})!
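A minimal sketch of the finer-grained path (the `produce_chunks` source is hypothetical; every chunk except the last must be at least 5 MiB):

```v
import net.s3

c := s3.new_client(s3.Credentials.from_env())
mut up := c.start_multipart('big.bin', s3.PutOptions{
    content_type: 'application/octet-stream'
})!
for chunk in produce_chunks() { // hypothetical generator of []u8 chunks
    up.upload(chunk)!
}
up.complete()!
```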

s3:// URLs

Importing net.s3 registers an s3:// scheme handler with net.http, so the generic http.fetch(url: 's3://...') route works out of the box. There is also a direct s3.fetch helper:

import net.s3

resp := s3.fetch('s3://my-bucket/hello.txt')!
println(resp.body.bytestr())

File handle

Client.file(key) returns a small File reference for ergonomic call sites:

import net.s3

c := s3.new_client(s3.Credentials.from_env())
f := c.file('hello.txt')
text := f.text()!
url := f.presign(expires_in: 3600)!

Tests

The unit suite is offline and runs by default:

v test vlib/net/s3/

The integration suite is gated on S3_INTEGRATION=1 and exercises the client against a live endpoint:

S3_INTEGRATION=1 \
S3_HOST=https://s3.example.com \
S3_KEY_ID=... S3_KEY_SECRET=... \
S3_BUCKET=v-s3-tests \
v test vlib/net/s3/integration_test.v

Constants #

const min_part_size = i64(5 * 1024 * 1024) // 5 MiB
const max_part_size = i64(5 * 1024 * 1024 * 1024) // 5 GiB
const max_parts = 10000 // hard S3 limit

Multipart upload constants — defaults aligned with the S3 protocol limits.

const version = '0.1.0'

version is the module version, kept in sync with v.mod.

const service_name = 's3'

service_name is the SigV4 service identifier baked into the signing key.

const algo = 'AWS4-HMAC-SHA256'

algo is the SigV4 algorithm marker S3 expects.

const unsigned_payload = 'UNSIGNED-PAYLOAD'

unsigned_payload is the magic string used in x-amz-content-sha256 when the payload is not pre-hashed. Used by the single-shot put path to avoid buffering / re-scanning the entire body just to sign it.

const empty_sha256 = 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'

empty_sha256 is sha256("") precomputed. Used for HEAD/GET/DELETE where there is no body and we want a real hash for stricter S3 endpoints.

fn build_canonical_request #

fn build_canonical_request(method string, path string, query string, headers map[string]string,
	signed_headers string, payload_hash string) string

build_canonical_request assembles the canonical request string per SigV4 §3.2. path and query MUST already be URI-encoded.

fn build_object_path #

fn build_object_path(creds Credentials, bucket_override string, key string) !string

build_object_path produces the canonical URI path for a key.

  • Path style: /<bucket>/<encoded-key>[+endpoint extra path]
  • Virtual hosted: /<encoded-key>

Returns an error if neither bucket nor key is provided. The key is forwarded byte-exact (only percent-encoded): S3 treats keys as opaque identifiers, so folder/, /x and a//b all designate distinct objects and must reach the wire as written.

fn canonical_host #

fn canonical_host(creds Credentials, bucket_override string) string

canonical_host resolves the on-the-wire host. Path-style addressing is the default; virtual-hosted style uses <bucket>.<endpoint-host>. If no endpoint is configured we fall back to s3.<region>.amazonaws.com (the canonical default for clients pointed at AWS S3 itself).

fn canonical_query_string #

fn canonical_query_string(params map[string]string) string

canonical_query_string sorts query params by key (then by value if the same key appears twice — we don't here) and joins them as k=v&k=v with values already URI-encoded.

fn contains_crlf #

fn contains_crlf(value string) bool

contains_crlf returns true if value contains a CR or LF byte. Header values that pass user-provided strings (ACL, content-type, …) MUST be checked to prevent HTTP header injection (CRLF smuggling).

fn decode_xml_entities #

fn decode_xml_entities(s string) string

decode_xml_entities decodes the five predefined XML entities. We don't try to handle arbitrary &#nn; sequences because S3 only ever uses these five.

fn derive_signing_key #

fn derive_signing_key(secret string, date string, region string, service string) []u8

derive_signing_key implements the four-step HMAC chain that produces the SigV4 signing key. The result is cacheable per (secret, date, region, service) tuple — left to the caller to memoize if needed.
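The four steps restate the published SigV4 derivation; a sketch in terms of the hmac_sha256 helper documented below (variable names are illustrative):

```v
// SigV4 key derivation, per the AWS specification:
k_date := hmac_sha256('AWS4${secret}'.bytes(), date.bytes()) // date is YYYYMMDD
k_region := hmac_sha256(k_date, region.bytes())
k_service := hmac_sha256(k_region, service.bytes()) // 's3' here
k_signing := hmac_sha256(k_service, 'aws4_request'.bytes())
```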

fn extract_xml_tag #

fn extract_xml_tag(body string, tag string) string

extract_xml_tag returns the inner text of <tag>...</tag> (first match, case-sensitive) with the five predefined XML entities decoded, or '' if the tag is absent.

fn fetch #

fn fetch(url string, opts FetchOptions) !FetchResponse

fetch is a fetch('s3://bucket/key', { ... })-style helper.

Examples:

resp := s3.fetch('s3://my-bucket/path/to/file.txt')!
resp := s3.fetch('s3://my-bucket/key', method: .put, body: 'hello'.bytes())!
resp := s3.fetch('s3://key', method: .get, credentials: s3.Credentials{ bucket: 'b', ... })!

The URL must use the s3:// scheme. Anything else is rejected outright (avoids accidentally calling a real HTTP endpoint with S3 credentials).

fn format_amz_date #

fn format_amz_date(t time.Time) string

format_amz_date returns the basic ISO-8601 timestamp used by SigV4 (YYYYMMDDTHHMMSSZ). t is assumed to be in UTC.

fn guess_region #

fn guess_region(endpoint string) string

guess_region derives the SigV4 region from an endpoint URL. Public so it can be reused by the higher-level Client for log / inspect output.

fn hmac_sha256 #

fn hmac_sha256(key []u8, data []u8) []u8

hmac_sha256 wraps crypto.hmac.new for HMAC-SHA-256 with the V stdlib's type signature (a hash function that returns []u8).

fn info_string #

fn info_string() string

info_string returns a one-line build-time identification, useful in User-Agent strings or --version output.

fn new_client #

fn new_client(creds Credentials) Client

new_client builds a Client. Pass an empty Credentials to fall back to env vars.

fn new_error #

fn new_error(code string, message string) IError

new_error builds an S3Error from a code + message. Use this for client-side validation failures (missing creds, invalid path, etc.).

fn new_http_error #

fn new_http_error(status int, path string, body string) IError

new_http_error wraps an HTTP-level failure. Body is the raw response body — parse_xml_error is responsible for digging out the structured <Error> envelope when the server returns one.

fn normalize_method #

fn normalize_method(m string) ?string

normalize_method uppercases and validates the HTTP method.

fn parse_acl #

fn parse_acl(s string) Acl

parse_acl returns the matching Acl from a wire string. Returns .unset when the input is empty or unknown so the caller can ignore it.

fn parse_list_response #

fn parse_list_response(body string) !ListResult

parse_list_response is hand-rolled XML extraction. ListObjectsV2 uses a rigid structure with no attributes we care about, so a tag scanner beats pulling in the full XML decoder both in code size and predictability against malformed input.

fn parse_s3_url #

fn parse_s3_url(url string) !(string, string)

parse_s3_url splits s3://bucket/key/with/slashes into (bucket, key).

Special case: s3://key (no second path component) returns ('', 'key') — the caller is then expected to provide the bucket via credentials.
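For example, per the rules above (URLs are illustrative):

```v
bucket, key := parse_s3_url('s3://my-bucket/a/b.txt')! // ('my-bucket', 'a/b.txt')
bucket2, key2 := parse_s3_url('s3://lonely-key')!      // ('', 'lonely-key')
```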

fn presign_url #

fn presign_url(creds Credentials, req PresignRequest) !string

presign_url returns a fully-formed https://... URL signed via SigV4 query-string parameters. The URL is valid until now + expires_in.

fn redact_url #

fn redact_url(url string) string

redact_url strips query strings before logging. Used in error messages so presigned-URL-style query params (which can contain credentials) aren't leaked into logs.

fn sha256_hex #

fn sha256_hex(data []u8) string

sha256_hex returns the lowercase hex digest of data.

fn sign_request #

fn sign_request(creds Credentials, req SignRequest) !SignedRequest

sign_request builds the SigV4 Authorization header for an HTTP request.

The returned SignedRequest.headers is the full set the caller must send (excluding Content-Length, which the HTTP client adds itself). Adding, removing, or mutating a header after signing will break the signature.

fn strip_slashes #

fn strip_slashes(s string) string

strip_slashes removes leading and trailing '/' or '\' separators. S3 canonical paths must contain a single leading slash, no trailing one.

fn to_hex_lower #

fn to_hex_lower(data []u8) string

to_hex_lower formats raw bytes as their lowercase hex string. Used for SHA-256 digests inside SigV4 (the spec requires lowercase).

fn uri_encode #

fn uri_encode(input string, encode_slash bool) string

uri_encode performs RFC 3986 percent-encoding as required by Signature V4. Only the unreserved set A–Z / a–z / 0–9 / '-' / '_' / '.' / '~' is preserved. When encode_slash is false (used for object keys), '/' is left intact and backslashes are normalized to '/' so Windows-style paths produce the same canonical key. All other bytes are emitted as %XX with uppercase hex digits.
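Illustrating the two modes on the same input (the input string is made up):

```v
assert uri_encode('a key/with space', false) == 'a%20key/with%20space' // slash kept, for object keys
assert uri_encode('a key/with space', true) == 'a%20key%2Fwith%20space' // slash encoded, for query values
```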

fn uri_encode_path #

fn uri_encode_path(path string) string

uri_encode_path encodes an S3 object key segment-aware: '/' is preserved because S3 paths use it as the segment separator.

fn uri_encode_query #

fn uri_encode_query(value string) string

uri_encode_query encodes a value that will appear inside a query string. Slashes must be percent-encoded.

fn validate_bucket_name #

fn validate_bucket_name(name string) !

validate_bucket_name applies the S3 bucket naming rules honoured by most providers:

  • 3..63 chars

  • lowercase letters, digits, dots, hyphens only
  • must start/end with letter or digit
  • no consecutive dots, no .- / -.
  • cannot look like an IPv4 address

Provider-specific reservations (e.g. xn--, sthree-) are intentionally not checked here — they vary, so the server-side error is authoritative.

fn Acl.from #

fn Acl.from[W](input W) !Acl

fn Credentials.from_env #

fn Credentials.from_env() Credentials

Credentials.from_env reads credentials from environment variables, trying several provider conventions in order so the same code works against many hosts without reconfiguration. Each field is resolved independently; the first non-empty value wins.

Lookup order per field:

  • key id: S3_ACCESS_KEY_ID, AWS_ACCESS_KEY_ID, CELLAR_ADDON_KEY_ID, SCW_ACCESS_KEY, B2_APPLICATION_KEY_ID, R2_ACCESS_KEY_ID, SPACES_KEY
  • secret: S3_SECRET_ACCESS_KEY, AWS_SECRET_ACCESS_KEY, CELLAR_ADDON_KEY_SECRET, SCW_SECRET_KEY, B2_APPLICATION_KEY, R2_SECRET_ACCESS_KEY, SPACES_SECRET
  • session token: S3_SESSION_TOKEN, AWS_SESSION_TOKEN
  • region: S3_REGION, AWS_REGION, AWS_DEFAULT_REGION, SCW_DEFAULT_REGION
  • bucket: S3_BUCKET
  • endpoint: S3_ENDPOINT, AWS_ENDPOINT, AWS_ENDPOINT_URL, CELLAR_ADDON_HOST, B2_ENDPOINT, R2_ENDPOINT, SPACES_ENDPOINT

fn StorageClass.from #

fn StorageClass.from[W](input W) !StorageClass

enum Acl #

enum Acl {
	unset
	private
	public_read
	public_read_write
	aws_exec_read
	authenticated_read
	bucket_owner_read
	bucket_owner_full_control
	log_delivery_write
}

ACL is the canned Access Control List applied to a stored object or bucket. Values match the S3 wire protocol.

fn (Acl) to_header_value #

fn (a Acl) to_header_value() string

to_header_value renders the ACL as the canonical S3 string used in headers and presigned query parameters. .unset returns an empty string so callers can skip the header.

enum StorageClass #

enum StorageClass {
	unset
	standard
	deep_archive
	express_onezone
	glacier
	glacier_ir
	intelligent_tiering
	onezone_ia
	outposts
	reduced_redundancy
	snow
	standard_ia
}

StorageClass enumerates the standard S3 storage tiers. .unset means the server default (typically STANDARD) and produces no x-amz-storage-class header. Providers vary on which tiers they actually honour.

fn (StorageClass) to_header_value #

fn (sc StorageClass) to_header_value() string

to_header_value renders the storage class as the on-wire string.

struct BucketOptions #

@[params]
struct BucketOptions {
pub:
	bucket            string
	acl               Acl
	region_constraint string
}

struct Client #

@[heap]
struct Client {
pub:
	credentials Credentials
	// part_size is the multipart-upload chunk size in bytes (default 5 MiB,
	// the S3 minimum).
	part_size i64 = 5 * 1024 * 1024
	// queue_size is the intended parallel-upload concurrency for multipart
	// (currently sequential; reserved).
	queue_size int = 5
	// retry is the number of retry attempts for failed uploads.
	retry int = 3
	// read_timeout / write_timeout map onto V's net.http settings.
	// Defaults are generous because parts can be 5 MiB+ on slow links.
	read_timeout  i64 = 5 * 60 * time.second
	write_timeout i64 = 5 * 60 * time.second
}

Client is the entry point for S3 operations. It carries default credentials and tuneable HTTP behaviour; per-call overrides go through the option params of each method. Instantiate once, reuse for many objects.

fn (Client) abort_multipart #

fn (c &Client) abort_multipart(key string, upload_id string, opts PutOptions) !

abort_multipart cancels an in-flight upload. Best-effort — callers usually invoke it inside an error path and don't care about the result.

fn (Client) bucket_exists #

fn (c &Client) bucket_exists(opts BucketOptions) !bool

bucket_exists checks bucket existence/access. Returns true if accessible (200), false on 404 / 403 (no such bucket or no read permission), error on other statuses. Uses HEAD under the hood — no body is fetched.

fn (Client) complete_multipart #

fn (c &Client) complete_multipart(key string, upload_id string, parts []PartRef, opts PutOptions) !

complete_multipart finalizes a multipart upload. Parts must be in ascending part_number order; we sort defensively.

fn (Client) create_bucket #

fn (c &Client) create_bucket(opts BucketOptions) !

create_bucket creates a new bucket. Returns:

  • nil on success (HTTP 200)

  • S3Error("BucketAlreadyOwnedByYou") if you already own this bucket
  • S3Error("BucketAlreadyExists") if someone else owns it
  • S3Error("InvalidBucketName") for non-conformant names

The S3 wire response for these states is HTTP 409, parsed from the returned XML body.

fn (Client) delete #

fn (c &Client) delete(key string, opts StatOptions) !

delete removes a single object. S3 returns 204 both on success and when the key is already absent (deletes are idempotent), so we surface success in either case.

fn (Client) delete_bucket #

fn (c &Client) delete_bucket(opts BucketOptions) !

delete_bucket removes an empty bucket. S3 returns 409 BucketNotEmpty if it still has keys — caller is expected to clean up first or handle the error.

fn (Client) exists #

fn (c &Client) exists(key string, opts StatOptions) !bool

exists is stat + boolean: true on 200, false on 404, error otherwise.

fn (Client) file #

fn (c &Client) file(key string, opts FileOptions) File

file returns a File reference for the given key, bound to this client. key may be given as <bucket>/<key> when the client has no default bucket and opts.bucket is empty.

fn (Client) get #

fn (c &Client) get(key string, opts GetOptions) ![]u8

get downloads the entire object body into memory. For large files set a range and assemble the result yourself, or use a presigned URL with an HTTP streaming client.
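For a bounded partial read, the range option uses standard HTTP Range syntax (key and offsets are illustrative):

```v
head := c.get('big.bin', s3.GetOptions{ range: 'bytes=0-1023' })! // first KiB only
```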

fn (Client) get_string #

fn (c &Client) get_string(key string, opts GetOptions) !string

get_string is a convenience wrapper around get that returns the body as a V string.

fn (Client) initiate_multipart #

fn (c &Client) initiate_multipart(key string, opts PutOptions) !string

initiate_multipart starts an upload and returns the UploadId. ACL, content-type, and friends from opts are sent here — they apply to the final object.

fn (Client) list #

fn (c &Client) list(opts ListOptions) !ListResult

list returns up to ~1000 objects matching opts. Use the returned next_continuation_token to page through more.

Speaks the ListObjectsV2 protocol.
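A paging loop over every key under a prefix might look like this (the prefix is illustrative):

```v
mut token := ''
for {
    res := c.list(s3.ListOptions{ prefix: 'logs/', continuation_token: token })!
    for obj in res.objects {
        println('${obj.key} (${obj.size} bytes)')
    }
    if !res.is_truncated {
        break
    }
    token = res.next_continuation_token
}
```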

fn (Client) presign #

fn (c &Client) presign(key string, opts PresignOptions) !string

presign generates a presigned URL — see PresignOptions for tunables.

fn (Client) put #

fn (c &Client) put(key string, data []u8, opts PutOptions) !

put uploads data to key. Use upload_file / upload_bytes_multipart for streaming or for files larger than ~100 MiB; put keeps everything in memory.

fn (Client) size #

fn (c &Client) size(key string, opts StatOptions) !i64

size returns just the Content-Length of the object.

fn (Client) start_multipart #

fn (c &Client) start_multipart(key string, opts PutOptions) !MultipartUploader

start_multipart begins a multipart upload and returns a MultipartUploader. Memory cost: zero — each upload(chunk) call streams the chunk to S3 and returns when it is acknowledged.

fn (Client) stat #

fn (c &Client) stat(key string, opts StatOptions) !Stat

stat returns the object metadata (size / last_modified / etag / content_type). Returns an S3Error with code NoSuchKey when the object doesn't exist.

fn (Client) upload_bytes_multipart #

fn (c &Client) upload_bytes_multipart(key string, data []u8, opts PutOptions) !

upload_bytes_multipart uploads an in-memory byte buffer using multipart. Useful for large generated payloads (test fixtures up to 5 GiB). Uses up to Client.queue_size parallel part uploads.

fn (Client) upload_file #

fn (c &Client) upload_file(key string, local_path string, opts PutOptions) !

upload_file streams a local file to S3, choosing single-part or multipart based on size. Use this for files larger than ~50 MiB or anything you don't want to slurp into memory.

key is the destination object key. The local file is read in client.part_size chunks (default 5 MiB).

fn (Client) upload_file_multipart #

fn (c &Client) upload_file_multipart(key string, local_path string, size i64, opts PutOptions) !

upload_file_multipart streams a local file part-by-part with up to Client.queue_size concurrent uploads. Peak memory is roughly queue_size * part_size.

fn (Client) upload_part #

fn (c &Client) upload_part(key string, upload_id string, part_number int, data []u8, opts PutOptions) !string

upload_part uploads a single chunk and returns the server-side ETag. Up to Client.retry attempts with exponential backoff (200ms, 400ms, 800ms, …) on transient failures.

The payload is always SHA-256-signed (not UNSIGNED-PAYLOAD) so the service validates byte-for-byte integrity end-to-end. Multipart uploads don't carry a Content-MD5 header by default; without payload signing a flipped bit in transit would silently produce a corrupt object that passes the multipart ETag check.

struct CommonPrefix #

struct CommonPrefix {
pub:
	prefix string
}

CommonPrefix represents a "directory-like" prefix in a list result when a delimiter is provided.

struct Credentials #

struct Credentials {
pub:
	access_key_id        string
	secret_access_key    string
	session_token        string
	region               string
	bucket               string
	endpoint             string // 'https://s3.fr-par.scw.cloud' or 'host:port' — host part is what gets signed
	virtual_hosted_style bool   // when true, '<bucket>.<endpoint-host>' addressing
	insecure_http        bool   // permit `http://` endpoints (false by default — never silently downgrades)
}

Credentials carries the authentication material plus endpoint and addressing preferences. It is intentionally kept small; defaults (region, endpoint, etc.) are derived only when the request is signed, never stored implicitly, so the same Credentials value can be reused across regions/endpoints.

Field naming matches V conventions (snake_case). The from_env helper recognises several provider conventions — see from_env.

fn (Credentials) merge #

fn (c Credentials) merge(other Credentials) Credentials

merge produces a copy of c with non-empty fields from other overriding. Useful when callers pass per-call overrides while keeping a default Client.

fn (Credentials) resolved_region #

fn (c Credentials) resolved_region() string

resolved_region returns the region to use for signing. Order:

1. explicit c.region
2. parsed from c.endpoint when it follows the s3.<region>.amazonaws.com pattern
3. 'auto' for Cloudflare R2
4. 'us-east-1' (S3 historical default) when no endpoint is set

fn (Credentials) validate #

fn (c Credentials) validate() !

validate ensures the credentials carry the minimum needed to sign a request. Also rejects credentials / region / bucket / endpoint values that contain CR or LF — those would let an attacker who controls any config field smuggle headers into the Authorization line.

fn (Credentials) host_only #

fn (c Credentials) host_only() string

host_only returns the bare host[:port] from c.endpoint, stripping any scheme and trailing path. Returned value is what gets signed in the host header for SigV4.

fn (Credentials) extra_path #

fn (c Credentials) extra_path() string

extra_path returns the path component of c.endpoint, including any leading '/'. Useful for proxies that mount S3 under a sub-path.

fn (Credentials) scheme #

fn (c Credentials) scheme() string

scheme returns 'http' or 'https' based on the endpoint's explicit scheme when present, falling back to insecure_http. All of these work:

  • endpoint: 'https://s3.example.com' → https
  • endpoint: 's3.example.com' → https (default)
  • endpoint: 'http://localhost:9000' → http (auto-detected)
  • endpoint: 'localhost:9000', insecure_http: true → http

struct FetchOptions #

@[params]
struct FetchOptions {
pub:
	method      http.Method = .get
	body        []u8
	credentials Credentials
	// content_type, acl, etc. are forwarded as-is when method is PUT/POST.
	content_type        string
	content_disposition string
	content_encoding    string
	cache_control       string
	acl                 Acl
	storage_class       StorageClass
	request_payer       bool
	range               string
	hash_payload        bool
}

FetchOptions overlays an S3 endpoint over the fetch call. All fields are optional; bucket and key are taken from the URL.

struct FetchResponse #

struct FetchResponse {
pub:
	status_code    int
	body           []u8
	headers        map[string]string
	etag           string
	content_type   string
	content_length i64
}

FetchResponse is the simplified return type of fetch. It's intentionally flat (no streaming yet) — easier to consume than V's http.Response and surfaces the most useful fields.

struct File #

struct File {
pub:
	client &Client
	bucket string
	key    string
}

File is a reference to one S3 object. It holds no buffer; every method round-trips to S3 (or generates a presigned URL for presign).

Construct via Client.file(...) or directly with File{ client: &c, key: '...' }.

fn (File) read #

fn (f &File) read() ![]u8

read returns the full object body. For range reads, see read_range.

fn (File) text #

fn (f &File) text() !string

text is a UTF-8 convenience over read.

fn (File) read_range #

fn (f &File) read_range(begin i64, end i64) ![]u8

read_range fetches bytes=<begin>-<end_inclusive> (HTTP Range semantics). Use end < 0 to read to end-of-file.
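For instance, reading a fixed-size header and then the remainder (offsets are illustrative):

```v
header := f.read_range(0, 1023)! // first 1024 bytes
rest := f.read_range(1024, -1)!  // byte 1024 to end-of-file
```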

fn (File) write #

fn (f &File) write(data []u8, opts PutOptions) !

write uploads data as the entire object body.

fn (File) write_string #

fn (f &File) write_string(s string, opts PutOptions) !

write_string is a UTF-8 convenience over write.

fn (File) stat #

fn (f &File) stat() !Stat

stat returns object metadata (size / etag / last-modified / content-type).

fn (File) exists #

fn (f &File) exists() !bool

exists is a HEAD that converts 404 into false.

fn (File) size #

fn (f &File) size() !i64

size returns the Content-Length, in bytes.

fn (File) delete #

fn (f &File) delete() !

delete removes the object. Idempotent: no error when the object is absent.

fn (File) presign #

fn (f &File) presign(opts PresignOptions) !string

presign returns a presigned URL for this object. See PresignOptions.

struct FileOptions #

@[params]
struct FileOptions {
pub:
	bucket string
}

struct GetOptions #

@[params]
struct GetOptions {
pub:
	bucket        string
	range         string // e.g. 'bytes=0-1023' for partial downloads
	version_id    string
	request_payer bool
}

GetOptions configures a get / read call.

struct HttpResponse #

struct HttpResponse {
pub:
	status_code int
	body        string
	header      http.Header
}

HttpResponse is the small subset of an HTTP response we surface to callers of the lower-level helpers. body is the raw response body for non-stream requests; header keeps V's typed Header for parsing.

struct ListOptions #

@[params]
struct ListOptions {
pub:
	bucket             string
	prefix             string
	continuation_token string
	delimiter          string
	max_keys           int // 0 means default (server picks 1000)
	start_after        string
	encoding_type      string // 'url' or empty
	fetch_owner        bool
}

ListOptions configures a ListObjectsV2 call. bucket overrides the client's default; leave empty to use the client's bound bucket.

struct ListResult #

struct ListResult {
pub:
	name                    string
	prefix                  string
	delimiter               string
	start_after             string
	max_keys                int
	key_count               int
	is_truncated            bool
	continuation_token      string
	next_continuation_token string
	objects                 []ObjectInfo
	common_prefixes         []CommonPrefix
}

ListResult aggregates a ListObjectsV2 response. NextContinuationToken should be passed back as continuation_token to fetch the next page.

struct MultipartUploader #

struct MultipartUploader {
mut:
	client      &Client
	key         string
	upload_id   string
	opts        PutOptions
	parts       []PartRef
	part_number int
	completed   bool
	aborted     bool
}

MultipartUploader is a stateful handle to an in-flight multipart upload, produced by Client.start_multipart. Use it when you want to push parts generated on-the-fly (network sources, decompressors, anything you don't want to materialise on disk):

mut up := c.start_multipart('key', s3.PutOptions{ content_type: 'application/octet-stream' })!
for chunk in chunks {
    up.upload(chunk)!
}
up.complete()!

On error, complete() / upload() return without aborting. The caller must invoke abort() themselves so that defer { up.abort() or {} } remains an explicit, visible cleanup hook.
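Putting that cleanup convention together, a caller-side sketch (chunks is a hypothetical chunk source):

```v
mut up := c.start_multipart('key', s3.PutOptions{})!
defer {
    up.abort() or {} // best-effort cleanup; abort is idempotent
}
for chunk in chunks {
    up.upload(chunk)!
}
up.complete()!
```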

fn (MultipartUploader) upload #

fn (mut u MultipartUploader) upload(data []u8) !

upload pushes one chunk as the next part. Each chunk MUST be at least min_part_size (5 MiB) except the last one — that's an S3 invariant the server enforces; we do not buffer for you.

fn (MultipartUploader) complete #

fn (mut u MultipartUploader) complete() !

complete finalises the upload. After this call the object is visible.

fn (MultipartUploader) abort #

fn (mut u MultipartUploader) abort() !

abort cancels the in-flight upload. Idempotent.

struct ObjectInfo #

struct ObjectInfo {
pub:
	key           string
	last_modified string
	etag          string
	size          i64
	storage_class string
	owner         ?Owner
}

ObjectInfo is one entry from a ListObjectsV2 response.

struct Owner #

struct Owner {
pub:
	id           string
	display_name string
}

Owner identifies an S3 object owner — only populated when fetch_owner is true.

struct PartRef #

struct PartRef {
pub:
	part_number int
	etag        string
}

PartRef is one ETag/PartNumber pair recorded during multipart upload.

struct PresignOptions #

@[params]
struct PresignOptions {
pub:
	bucket              string // overrides the client's default bucket for this call
	method              http.Method = .get
	expires_in          int         = 86400 // seconds, 1..604800
	acl                 Acl
	storage_class       StorageClass
	content_type        string
	content_disposition string
	request_payer       bool
}

PresignOptions controls presigned URL generation.

struct PresignRequest #

struct PresignRequest {
pub:
	method      string @[required]
	path        string @[required] // canonical URI (already URI-encoded)
	expires_in  int = 86400 // 1..604800 seconds (SigV4 hard limit is 7 days)
	extra_query map[string]string // additional signed query params (e.g. response-content-type, x-amz-acl)
	sign_time   time.Time
}

PresignRequest describes a presigned URL to generate. The output is a self-contained URL — no extra headers are required at request time.

struct PutOptions #

@[params]
struct PutOptions {
pub:
	bucket              string
	content_type        string
	content_disposition string
	content_encoding    string
	cache_control       string
	acl                 Acl
	storage_class       StorageClass
	request_payer       bool
	// hash_payload, when true, computes SHA-256 of the body before signing
	// instead of using `UNSIGNED-PAYLOAD`. Slightly stronger integrity guarantee
	// at the cost of one full body scan.
	hash_payload bool
}

PutOptions configures a put / write call.

struct S3Error #

struct S3Error {
pub:
	code       string
	message    string
	status     int
	path       string
	resource   string // S3's <Resource> field, when present
	request_id string // S3 RequestId, useful for support tickets
}

S3Error is the structured error returned for every signing or service failure. code is stable (e.g. NoSuchKey, MissingCredentials), message is human-readable, status is the HTTP status (0 for client-side errors), and path is the offending object key when known.

V's IError interface only requires msg() and code(), so this type can be returned via !T and inspected with type-assertion / as.

fn (S3Error) msg #

fn (e &S3Error) msg() string

msg renders the error in a single line. Includes the S3 error code so users can switch on it without parsing the prose. Path is appended when known.

fn (S3Error) code #

fn (e &S3Error) code() int

code returns a stable numeric code so callers can use if err.code() == .... We map a few well-known S3 codes; everything else returns 0 so callers fall back to string comparison on e.code.

struct SignRequest #

struct SignRequest {
pub:
	method        string @[required] // GET, PUT, POST, DELETE, HEAD
	path          string @[required] // canonical URI (already URI-encoded, kept as-is)
	query         string            // canonical query string (sorted, encoded; no leading '?')
	payload_hash  string            // hex SHA-256 of body or `unsigned_payload`
	extra_headers map[string]string // lowercase keys, raw values — will be added to canonical/signed set
	sign_time     time.Time         // when omitted (Time{}) we use time.utc()
}

SignRequest describes the request to sign. The signer is intentionally agnostic of HTTP option semantics (ACL, storage class, …) — the caller pre-fills extra_headers with whatever it intends to send. That keeps the signer pure and easy to test against published SigV4 reference vectors.

struct SignedRequest #

struct SignedRequest {
pub:
	method        string            // canonical HTTP method that was signed (GET, PUT, …)
	url           string            // scheme://host<extra_path>/<bucket>/<key>?<query>
	host          string            // host[:port], suitable for the Host header
	amz_date      string            // YYYYMMDDTHHMMSSZ
	authorization string            // ready-to-send Authorization header value
	headers       map[string]string // ALL headers that MUST be sent for the signature to verify
}

SignedRequest is the output of header signing.

struct Stat #

struct Stat {
pub:
	size          i64    // Content-Length in bytes
	last_modified string // RFC 1123 date as returned by S3
	etag          string // unquoted ETag (server returns it wrapped in quotes; we strip them)
	content_type  string // MIME type
}

Stat is the result of HEAD-ing an object.

struct StatOptions #

@[params]
struct StatOptions {
pub:
	bucket        string
	request_payer bool
}

StatOptions configures stat / size / exists checks.