Knot Resolver library


  • libknot 2.0 (Knot DNS high-performance DNS library)

For users

The library as described provides basic services for name resolution, which should cover most use cases; examples are in the resolve API documentation.


If you’re migrating from getaddrinfo(), see the “synchronous” API; the library also offers an iterative API that you can plug into your own event loop, for example.

For developers

The resolution process starts with the functions in resolve.c; they are responsible for:

  • reacting to state machine state (i.e. calling consume layers if we have an answer ready)

  • interacting with the library user (i.e. asking caller for I/O, accepting queries)

  • fetching assets needed by layers (i.e. zone cut)

This is the driver. The driver is not meant to know “how” the query resolves, but rather “when” to execute “what”.


On the other side are layers. They are responsible for dissecting the packets and informing the driver about the results. For example, a produce layer generates a query, a consume layer validates an answer.


Layers are executed asynchronously by the driver. If you need some asset beforehand, you can signal the driver using the returned state or the current query’s flags. For example, setting the AWAIT_CUT flag forces the driver to fetch zone-cut information before the packet is consumed; setting the RESOLVED flag makes it pop the query after the current set of layers has finished; returning the FAIL state makes it fail the current query.

Layers can also change the course of resolution, for example by appending additional queries.

consume = function (state, req, answer)
        if answer:qtype() == kres.type.NS then
                local qry = req:push(answer:qname(), kres.type.SOA, kres.class.IN)
                qry.flags.AWAIT_CUT = true
        end
        return state
end

This doesn’t block the currently processed query, and the newly created sub-request will start as soon as the driver finishes processing the current one. In some cases you might need to issue a sub-request and process it before continuing with the current one, e.g. a validator may need a DNSKEY before it can validate signatures. In this case, layers can yield and resume afterwards.

consume = function (state, req, answer)
        if state == kres.YIELD then
                print('continuing yielded layer')
                return kres.DONE
        else
                if answer:qtype() == kres.type.NS then
                        local qry = req:push(answer:qname(), kres.type.SOA, kres.class.IN)
                        qry.flags.AWAIT_CUT = true
                        print('planned SOA query, yielding')
                        return kres.YIELD
                end
                return state
        end
end

The YIELD state is a bit special. When a layer returns it, it interrupts the current walk through the layers. When a layer receives it, it means that it yielded before and is now being resumed. This is useful in situations where you need a sub-request to determine whether the current answer is valid or not.

Writing layers


FIXME: this dev-docs section is outdated! Better see comments in files instead, for now.

The resolver library leverages the processing API from libknot to separate packet-processing code into layers.


This is only a crash course in the library internals; see the resolver library documentation for a complete overview of the services.

The library offers the following services:

  • Cache - MVCC cache interface for retrieving/storing resource records.

  • Resolution plan - Query resolution plan, a list of partial queries (with hierarchy) sent in order to satisfy original query. This contains information about the queries, nameserver choice, timing information, answer and its class.

  • Nameservers - Reputation database of nameservers; it serves as an aid for nameserver selection.

A processing layer is going to be called by the query resolution driver for each query, so you’re going to work with struct kr_request as your per-query context. This structure contains pointers to resolution context, resolution plan and also the final answer.

int consume(kr_layer_t *ctx, knot_pkt_t *pkt)
{
        struct kr_request *req = ctx->req;
        struct kr_query *qry = req->current_query;
}

This is only passive processing of the incoming answer. If you want to change the course of resolution, say satisfy a query from a local cache before the library issues a query to the nameserver, you can use states (see the Static hints for example).

int produce(kr_layer_t *ctx, knot_pkt_t *pkt)
{
        struct kr_request *req = ctx->req;
        struct kr_query *qry = req->current_query;

        /* Query can be satisfied locally. */
        if (can_satisfy(qry)) {
                /* This flag makes the resolver move the query
                 * to the "resolved" list. */
                qry->flags.RESOLVED = true;
                return KR_STATE_DONE;
        }

        /* Pass-through. */
        return ctx->state;
}

It is possible to not only act during the query resolution, but also to view the complete resolution plan afterwards. This is useful for analysis-type tasks, or “per answer” hooks.

int finish(kr_layer_t *ctx)
{
        struct kr_request *req = ctx->req;
        struct kr_rplan *rplan = req->rplan;

        /* Print the query sequence with start time. */
        char qname_str[KNOT_DNAME_MAXLEN];
        struct kr_query *qry = NULL;
        WALK_LIST(qry, rplan->resolved) {
                knot_dname_to_str(qname_str, qry->sname, sizeof(qname_str));
                printf("%s at %u\n", qname_str, qry->timestamp);
        }

        return ctx->state;
}

APIs in Lua

The APIs in the Lua world try to mirror the C APIs using LuaJIT FFI, with several differences and enhancements. There is no comprehensive guide to the API yet, but you can have a look at the bindings file.

Elementary types and constants

  • States are directly in kres table, e.g. kres.YIELD, kres.CONSUME, kres.PRODUCE, kres.DONE, kres.FAIL.

  • DNS classes are in kres.class table, e.g. kres.class.IN for Internet class.

  • DNS types are in kres.type table, e.g. kres.type.AAAA for AAAA type.

  • DNS rcodes are in the kres.rcode table, e.g. kres.rcode.NOERROR.

  • Extended DNS error codes are in kres.extended_error table, e.g. kres.extended_error.BLOCKED.

  • Packet sections (QUESTION, ANSWER, AUTHORITY, ADDITIONAL) are in the kres.section table.
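As a small illustration (a sketch only, in the style of the layer examples below), these constants are typically compared against packet accessors:

```lua
-- Sketch: use kres constants to inspect an answer in a consume layer.
consume = function (state, req, pkt)
        if pkt:rcode() == kres.rcode.NOERROR
                        and pkt:qtype() == kres.type.AAAA then
                print('got a NOERROR answer to an AAAA query')
        end
        return state
end
```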

Working with domain names

The internal API usually works with domain names in wire (label) format; you can convert between the text and wire representations freely.

local dname = kres.str2dname('example.com')
local strname = kres.dname2str(dname)

Working with resource records

Resource records are stored as tables.

local rr = { owner = kres.str2dname('owner'),
             ttl = 0,
             class = kres.class.IN,
             type = kres.type.CNAME,
             rdata = kres.str2dname('someplace') }

RRSets in a packet can be accessed using FFI; you can easily fetch single records.

local rrset = { ... }
local rr = rrset:get(0) -- Return first RR

Working with packets

The packet is the data structure that you’re going to see in layers very often. It consists of a header and four sections: QUESTION, ANSWER, AUTHORITY, ADDITIONAL. The first section is special: it contains the query name, type, and class; the remaining sections contain RRSets.

First you need to convert it to a type known to FFI and check basic properties. Let’s start with a snippet of a consume layer.

consume = function (state, req, pkt)
        print('rcode:', pkt:rcode())
        print('query:', kres.dname2str(pkt:qname()), pkt:qclass(), pkt:qtype())
        if pkt:rcode() ~= kres.rcode.NOERROR then
                print('error response')
        end
        return state
end

You can enumerate records in the sections.

local records = pkt:section(kres.section.ANSWER)
for i = 1, #records do
        local rr = records[i]
        if rr.type == kres.type.AAAA then
                -- process the AAAA record
        end
end

During produce or begin, you might want to write to the packet. Keep in mind that you have to write packet sections in sequence; e.g. you can’t write to ANSWER after writing AUTHORITY. It’s like stages where you can’t go back.

-- Clear answer and write QUESTION
pkt:recycle()
pkt:question('\7blocked', kres.class.IN, kres.type.SOA)
-- Start writing data
pkt:begin(kres.section.ANSWER)
-- Nothing in answer
pkt:begin(kres.section.AUTHORITY)
local soa = { owner = '\7blocked', ttl = 900, class = kres.class.IN, type = kres.type.SOA, rdata = '...' }
pkt:put(soa.owner, soa.ttl, soa.class, soa.type, soa.rdata)

Working with requests

The request holds information about the currently processed query, enabled options, cache, and other extra data. You primarily need to retrieve the currently processed query.

consume = function (state, req, pkt)
        -- Print information about current query
        local current = req:current()
        print(current.stype, current.sclass, current.flags)
        return state
end

In layers that either begin or finalize, you can walk the list of resolved queries.

local last = req:resolved()

As described in the layers section, you can not only retrieve information about the current query, but also push new queries or pop old ones.

-- Push new query
local qry = req:push(pkt:qname(), kres.type.SOA, kres.class.IN)
qry.flags.AWAIT_CUT = true

-- Pop the query, this will erase it from the resolution plan
req:pop(qry)

Significant Lua API changes

Incompatible changes since 3.0.0

In the main kres.* lua binding, there was only one change, in struct knot_rrset_t:

  • constructor now accepts TTL as additional parameter (defaulting to zero)

  • add_rdata() doesn’t accept TTL anymore (and will throw an error if passed)
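For example, creating an RRset under the new interface might look as follows (a sketch; the constructor argument order is assumed from the bindings file):

```lua
-- TTL is now an additional constructor parameter (defaulting to zero)...
local rr = kres.rrset(kres.str2dname('example.com.'), kres.type.A, kres.class.IN, 3600)
-- ...and add_rdata() takes only the data and its length, no TTL.
rr:add_rdata('\1\2\3\4', 4)
```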

In case you used knot_* functions and structures bound to lua:

  • knot_dname_is_sub(a, b): knot_dname_in_bailiwick(a, b) > 0

  • knot_rdata_rdlen(): knot_rdataset_at().len

  • knot_rdata_data(): knot_rdataset_at().data

  • knot_rdata_array_size(): offsetof(struct knot_data_t, data) + knot_rdataset_at().len

  • struct knot_rdataset: field names were renamed to .count and .rdata

  • some functions got inlined from headers, but you can use their kr_* clones: kr_rrsig_sig_inception(), kr_rrsig_sig_expiration(), kr_rrsig_type_covered(). Note that these functions now accept knot_rdata_t* instead of a pair knot_rdataset_t* and size_t - you can use knot_rdataset_at() for that.

  • knot_rrset_add_rdata() doesn’t take TTL parameter anymore

  • knot_rrset_init_empty() was inlined, but in lua you can use the constructor

  • knot_rrset_ttl() was inlined, but in lua you can use :ttl() method instead

  • knot_pkt_qname(), _qtype(), _qclass(), _rr(), _section() were inlined, but in lua you can use methods instead, e.g. myPacket:qname()

  • knot_pkt_free() takes knot_pkt_t* instead of knot_pkt_t**, but from lua you probably didn’t want to use that; constructor ensures garbage collection.

API reference


This section is generated with doxygen and breathe. Due to their limitations, some symbols may be incorrectly described or missing entirely. For exhaustive and accurate reference, refer to the header files instead.

Name resolution

The API provides a “consumer-producer”-like interface that enables the user to plug it into an existing event loop or I/O code.

Example usage of the iterative API:

// Create request and its memory pool
struct kr_request req = {
    .pool = {
        .ctx = mp_new(4096),
        .alloc = (mm_alloc_t) mp_alloc
    }
};

// Setup and provide input query
int state = kr_resolve_begin(&req, ctx);
state = kr_resolve_consume(&req, query);

// Generate answer
while (state == KR_STATE_PRODUCE) {

    // Additional query generated, do the I/O and pass back answer
    state = kr_resolve_produce(&req, &addr, &type, query);
    while (state == KR_STATE_CONSUME) {
        int ret = sendrecv(addr, proto, query, resp);

        // If I/O fails, make "resp" empty
        state = kr_resolve_consume(&req, addr, resp);
    }
}

// "state" is either DONE or FAIL
kr_resolve_finish(&req, state);



Initializer for an array of *_selected.


typedef uint8_t *(*alloc_wire_f)(struct kr_request *req, uint16_t *maxlen)

Allocate buffer for answer’s wire (*maxlen may get lowered).

Motivation: XDP wire allocation is an overlap of library and daemon:

  • it needs to be called from the library

  • it needs to rely on some daemon’s internals

  • the library (currently) isn’t allowed to directly use symbols from daemon (contrary to modules), e.g. some of our lib-using tests run without daemon

Note: after we obtain the wire, we’re obliged to send it out. (So far there’s no use case to allow cancelling at that point.)

typedef bool (*addr_info_f)(struct sockaddr*)
typedef void (*async_resolution_f)(knot_dname_t*, enum knot_rr_type)
typedef see_source_code kr_sockaddr_array_t


enum kr_rank

RRset rank - for cache and ranked_rr_*.

The rank consists of one independent flag, KR_RANK_AUTH; the rest are mutually exclusive values, of which only one can hold at any time. You can use one of the enum values as a safe initial value, optionally | KR_RANK_AUTH; otherwise it’s best to manipulate ranks via the kr_rank_* functions.

See also:


The representation is complicated by restrictions on integer comparison:

  • AUTH must be > than !AUTH

  • AUTH INSECURE must be > than AUTH (because it attempted validation)

  • !AUTH SECURE must be > than AUTH (because it’s valid)


enumerator KR_RANK_INITIAL

Did not attempt to validate.

It’s assumed compulsory to validate (or prove insecure).

enumerator KR_RANK_OMIT

Do not attempt to validate.

(And don’t consider it a validation failure.)

enumerator KR_RANK_TRY

Attempt to validate, but failures are non-fatal.

enumerator KR_RANK_INDET

Unable to determine whether it should be secure.

enumerator KR_RANK_BOGUS

Ought to be secure but isn’t.

enumerator KR_RANK_MISSING

No RRSIG found for that owner+type combination.


enumerator KR_RANK_INSECURE

Proven to be insecure, i.e.

we have a chain of trust from TAs that cryptographically denies the possibility of existence of a positive chain of trust from the TAs to the record. Or it may be covered by a closer negative TA.

enumerator KR_RANK_AUTH

Authoritative data flag; the chain of authority was “verified”.

Even if not set, only in-bailiwick stuff is acceptable, i.e. almost authoritative (example: mandatory glue and its NS RR).

enumerator KR_RANK_SECURE

Verified whole chain of trust from the closest TA.


bool kr_rank_check(uint8_t rank)

Check that a rank value is valid.

Meant for assertions.

bool kr_rank_test(uint8_t rank, uint8_t kr_flag)

Test the presence of any flag/state in a rank, i.e.

including KR_RANK_AUTH.

static inline void kr_rank_set(uint8_t *rank, uint8_t kr_flag)

Set the rank state.

The _AUTH flag is kept as it was.
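A hypothetical use of these helpers, e.g. when post-processing a selected record (the entry variable is an assumed ranked_rr_array_entry_t; only the kr_rank_* calls are taken from this reference):

```c
uint8_t rank = entry->rank;
/* Test for the independent AUTH flag. */
if (kr_rank_test(rank, KR_RANK_AUTH)) {
        /* Change the state; the _AUTH flag is preserved by kr_rank_set(). */
        kr_rank_set(&rank, KR_RANK_INSECURE);
}
entry->rank = rank;
```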

int kr_resolve_begin(struct kr_request *request, struct kr_context *ctx)

Begin name resolution.


Expects a request to have an initialized mempool.

  • request – request state with initialized mempool

  • ctx – resolution context


CONSUME (expecting query)

knot_rrset_t *kr_request_ensure_edns(struct kr_request *request)

Ensure that request->answer->opt_rr is present if query has EDNS.

This function should be used after clearing a response packet to ensure its opt_rr is properly set. Returns the opt_rr (for convenience) or NULL.

knot_pkt_t *kr_request_ensure_answer(struct kr_request *request)

Ensure that request->answer is usable, and return it (for convenience).

It may return NULL, in which case it marks ->state with _FAIL and no answer will be sent. Only use this when it’s guaranteed that there will be no delay before sending it. You don’t need to call this in places where “resolver knows” that there will be no delay, but even there you need to check if the ->answer is NULL (unless you check for _FAIL anyway).

int kr_resolve_consume(struct kr_request *request, struct kr_transport **transport, knot_pkt_t *packet)

Consume input packet (may be either first query or answer to query originated from kr_resolve_produce())


If the I/O fails, provide an empty or NULL packet, this will make iterator recognize nameserver failure.

  • request – request state (awaiting input)

  • src – [in] packet source address

  • packet – [in] input packet


any state

int kr_resolve_produce(struct kr_request *request, struct kr_transport **transport, knot_pkt_t *packet)

Produce either next additional query or finish.

If the CONSUME is returned then dst, type and packet will be filled with appropriate values and caller is responsible to send them and receive answer. If it returns any other state, then content of the variables is undefined.

  • request – request state (in PRODUCE state)

  • dst – [out] possible address of the next nameserver

  • type – [out] possible used socket type (SOCK_STREAM, SOCK_DGRAM)

  • packet – [out] packet to be filled with additional query


any state

int kr_resolve_checkout(struct kr_request *request, const struct sockaddr *src, struct kr_transport *transport, knot_pkt_t *packet)

Finalises the outbound query packet with the knowledge of the IP addresses.


The function must be called before actual sending of the request packet.

  • request – request state (in PRODUCE state)

  • src – address from which the query is going to be sent

  • dst – address of the name server

  • type – used socket type (SOCK_STREAM, SOCK_DGRAM)

  • packet – [in,out] query packet to be finalised


kr_ok() or error code

int kr_resolve_finish(struct kr_request *request, int state)

Finish resolution and commit results if the state is DONE.


The structures will be deinitialized, but the assigned memory pool is not going to be destroyed, as it’s owned by caller.

  • request – request state

  • state – either DONE or FAIL state (to be assigned to request->state)



struct kr_rplan *kr_resolve_plan(struct kr_request *request)

Return resolution plan.

  • request – request state


pointer to rplan

knot_mm_t *kr_resolve_pool(struct kr_request *request)

Return memory pool associated with request.

  • request – request state



int kr_request_set_extended_error(struct kr_request *request, int info_code, const char *extra_text)

Set the extended DNS error for request.

The error is set only if it has a higher or the same priority as the one already assigned. The provided extra_text may be NULL, or a string that is allocated either statically, or on the request’s mempool. To clear any error, call it with KNOT_EDNS_EDE_NONE and NULL as extra_text.

To facilitate debugging, we include a unique base32 identifier at the start of the extra_text field for every call of this function. To generate such an identifier, you can use the command: $ base32 /dev/random | head -c 4

  • request – request state

  • info_code – extended DNS error code

  • extra_text – optional string with additional information


info_code that is set after the call
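A sketch of typical usage (the identifier prefix and message are made-up examples; KNOT_EDNS_EDE_BLOCKED is a libknot constant):

```c
/* Report a policy block, with a unique base32 id prefixed to the text. */
kr_request_set_extended_error(req, KNOT_EDNS_EDE_BLOCKED,
                "GQ3C: blocked by local policy");

/* Clear any previously set extended error. */
kr_request_set_extended_error(req, KNOT_EDNS_EDE_NONE, NULL);
```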

static inline void kr_query_inform_timeout(struct kr_request *req, const struct kr_query *qry)
struct kr_context
#include <resolve.h>

Name resolution context.

Resolution context provides basic services like cache, configuration and options.


This structure is persistent between name resolutions and may be shared between threads.

Public Members

struct kr_qflags options

Default kr_request flags.

For startup defaults see init_resolver()

knot_rrset_t *downstream_opt_rr

Default EDNS towards both clients and upstream.

LATER: consider splitting the two, e.g. allow separately configured limits for UDP packet size (say, LAN is under control).

knot_rrset_t *upstream_opt_rr
trie_t *trust_anchors
trie_t *negative_anchors
struct kr_zonecut root_hints
struct kr_cache cache
unsigned cache_rtt_tout_retry_interval
module_array_t *modules
struct kr_cookie_ctx cookie_ctx
int32_t tls_padding

See net.tls_padding in ../daemon/README.rst; -1 is “true” (default policy), 0 is “false” (no padding)

knot_mm_t *pool
struct kr_request_qsource_flags

Public Members

bool tcp

true if the request is not on UDP; only meaningful if (dst_addr).

bool tls

true if the request is encrypted; only meaningful if (dst_addr).

bool http

true if the request is on HTTP; only meaningful if (dst_addr).

bool xdp

true if the request is on AF_XDP; only meaningful if (dst_addr).

struct kr_extended_error

Public Members

int32_t info_code

May contain -1 (KNOT_EDNS_EDE_NONE); filter before converting to uint16_t.

const char *extra_text

Can be NULL.

Allocated on the kr_request::pool or static.

struct kr_request
#include <resolve.h>

Name resolution request.

Keeps information about current query processing between calls to processing APIs, i.e. current resolved query, resolution plan, … Use this instead of the simple interface if you want to implement multiplexing or custom I/O.


All data for this request must be allocated from the given pool.

Public Members

struct kr_context *ctx
knot_pkt_t *answer

See kr_request_ensure_answer()

struct kr_query *current_query

Current evaluated query.

const struct sockaddr *addr

Address that originated the request.

May be that of a client behind a proxy, if PROXYv2 is used. Otherwise, it will be the same as comm_addr. NULL for internal origin.

const struct sockaddr *comm_addr

Address that communicated the request.

This may be the address of a proxy. It is the same as addr if no proxy is used. NULL for internal origin.

const struct sockaddr *dst_addr

Address that accepted the request.

NULL for internal origin. Beware: in case of UDP on wildcard address it will be wildcard; closely related: issue #173.

const knot_pkt_t *packet
struct kr_request_qsource_flags flags

Request flags from the point of view of the original client.

This client may be behind a proxy.

struct kr_request_qsource_flags comm_flags

Request flags from the point of view of the client actually communicating with the resolver.

When PROXYv2 protocol is used, this describes the request from the proxy. When there is no proxy, this will have exactly the same value as flags.

size_t size

query packet size

int32_t stream_id

HTTP/2 stream ID for DoH requests.

kr_http_header_array_t headers

HTTP/2 headers for DoH requests.

struct kr_request.[anonymous] qsource
unsigned rtt

Current upstream RTT.

const struct kr_transport *transport

Current upstream transport.

struct kr_request.[anonymous] upstream

Upstream information, valid only in consume() phase.

struct kr_qflags options
int state
ranked_rr_array_t answ_selected
ranked_rr_array_t auth_selected
ranked_rr_array_t add_selected
bool answ_validated

internal to validator; beware of caching, etc.

bool auth_validated

see answ_validated ^^ ; TODO

uint8_t rank

Overall rank for the request.

Values from kr_rank, currently just KR_RANK_SECURE and _INITIAL. Only read this in finish phase and after validator, please. Meaning of _SECURE: all RRs in answer+authority are _SECURE, including any negative results implied (NXDOMAIN, NODATA).

struct kr_rplan rplan
trace_log_f trace_log

Logging tracepoint.

trace_callback_f trace_finish

Request finish tracepoint.

int vars_ref

Reference to per-request variable table.

LUA_NOREF if not set.

knot_mm_t pool
unsigned int uid

for logging purposes only

addr_info_f is_tls_capable
addr_info_f is_tcp_connected
addr_info_f is_tcp_waiting
kr_sockaddr_array_t forwarding_targets

When forwarding, possible targets are put here.

struct kr_request.[anonymous] selection_context
unsigned int count_no_nsaddr
unsigned int count_fail_row
alloc_wire_f alloc_wire_cb

CB to allocate answer wire (can be NULL).

struct kr_extended_error extended_error

EDE info; don’t modify directly, use kr_request_set_extended_error()


typedef int32_t (*kr_stale_cb)(int32_t ttl, const knot_dname_t *owner, uint16_t type, const struct kr_query *qry)

Callback for serve-stale decisions.

Param ttl:

the expired TTL (i.e. it’s < 0)


the adjusted TTL (typically 1) or < 0.


void kr_qflags_set(struct kr_qflags *fl1, struct kr_qflags fl2)

Combine flags together.

This means set union for simple flags.

void kr_qflags_clear(struct kr_qflags *fl1, struct kr_qflags fl2)

Remove flags.

This means set-theoretic difference.
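For illustration (a sketch; the flag field is just one example from kr_qflags):

```c
struct kr_qflags extra = { .NO_0X20 = true };
kr_qflags_set(&req->options, extra);   /* set union: adds the flag */
kr_qflags_clear(&req->options, extra); /* set difference: removes it */
```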

int kr_rplan_init(struct kr_rplan *rplan, struct kr_request *request, knot_mm_t *pool)

Initialize resolution plan (empty).

  • rplan – plan instance

  • request – resolution request

  • pool – ephemeral memory pool for whole resolution

void kr_rplan_deinit(struct kr_rplan *rplan)

Deinitialize resolution plan, aborting any uncommitted transactions.

  • rplan – plan instance

bool kr_rplan_empty(struct kr_rplan *rplan)

Return true if the resolution plan is empty (i.e.

finished or initialized)

  • rplan – plan instance


true or false

struct kr_query *kr_rplan_push_empty(struct kr_rplan *rplan, struct kr_query *parent)

Push empty query to the top of the resolution plan.


This query serves as a cookie query only.

  • rplan – plan instance

  • parent – query parent (or NULL)


query instance or NULL

struct kr_query *kr_rplan_push(struct kr_rplan *rplan, struct kr_query *parent, const knot_dname_t *name, uint16_t cls, uint16_t type)

Push a query to the top of the resolution plan.


This means that this query takes precedence before all pending queries.

  • rplan – plan instance

  • parent – query parent (or NULL)

  • name – resolved name

  • cls – resolved class

  • type – resolved type


query instance or NULL
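A hypothetical C equivalent of the Lua req:push() example earlier (names such as name are assumed to be in scope):

```c
struct kr_rplan *rplan = kr_resolve_plan(req);
struct kr_query *next = kr_rplan_push(rplan, req->current_query,
                name, KNOT_CLASS_IN, KNOT_RRTYPE_SOA);
if (next) {
        /* Fetch the zone cut before the sub-query's packet is consumed. */
        next->flags.AWAIT_CUT = true;
}
```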

int kr_rplan_pop(struct kr_rplan *rplan, struct kr_query *qry)

Pop existing query from the resolution plan.


Popped queries are not discarded, but moved to the resolved list.

  • rplan – plan instance

  • qry – resolved query


0 or an error

bool kr_rplan_satisfies(struct kr_query *closure, const knot_dname_t *name, uint16_t cls, uint16_t type)

Return true if resolution chain satisfies given query.

struct kr_query *kr_rplan_resolved(struct kr_rplan *rplan)

Return last resolved query.

struct kr_query *kr_rplan_last(struct kr_rplan *rplan)

Return last query (either currently being solved or last resolved).

This is necessary to retrieve the last query in case of resolution failures (e.g. time limit reached).

struct kr_query *kr_rplan_find_resolved(struct kr_rplan *rplan, struct kr_query *parent, const knot_dname_t *name, uint16_t cls, uint16_t type)

Check if a given query was already resolved.

  • rplan – plan instance

  • parent – query parent (or NULL)

  • name – resolved name

  • cls – resolved class

  • type – resolved type


query instance or NULL

struct kr_qflags
#include <rplan.h>

Query flags.

Public Members


Don’t minimize QNAME.

bool NO_IPV6

Disable IPv6.

bool NO_IPV4

Disable IPv4.

bool TCP

Use TCP (or TLS) for this query.


Do not send any answer to the client.

Request state should be set to KR_STATE_FAIL when this flag is set.


Query is resolved.

Note that kr_query gets RESOLVED before following a CNAME chain; see .CNAME.


Query is waiting for A address.


Query is waiting for AAAA address.


Query is waiting for zone cut lookup.

bool NO_EDNS

Don’t use EDNS.


Query response is cached.


No cache for lookup; exception: finding NSs and subqueries.


Query response is cached but expiring.

See is_expiring().


Allow queries to local or private address ranges.


Want DNSSEC secured answer; exception: +cd, i.e.



Query response is DNSSEC bogus.


Query response is DNSSEC insecure.


Instruction to set CD bit in request.

bool STUB

Stub resolution, accept received answer as solved.


Always recover zone cut (even if cached).


Query response has wildcard expansion.


Permissive resolver mode.


Strict resolver mode.


Query again because bad cookie returned.

bool CNAME

Query response contains CNAME in answer section.


Reorder cached RRs.

bool TRACE

Also log answers on debug level.

bool NO_0X20

Disable query case randomization.


DS non-existence is proven.


Closest encloser proof has optout.


Non-authoritative in-bailiwick records are enough.

TODO: utilize this also outside cache.


Forward all queries to upstream; validate answers.

bool DNS64_MARK

Internal mark for dns64 module.


Internal to cache module.


No valid NS found during last PRODUCE stage.


Set by iterator in consume phase to indicate whether some basic aspects of the packet are OK, e.g.



Don’t do any DNS64 stuff (meant for view:addr).

struct kr_query
#include <rplan.h>

Single query representation.

Public Members

struct kr_query *parent
knot_dname_t *sname

The name to resolve - lower-cased, uncompressed.

uint16_t stype
uint16_t sclass
uint16_t id
uint16_t reorder

Seed to reorder (cached) RRs in answer or zero.

struct kr_qflags flags
struct kr_qflags forward_flags
uint32_t secret
uint32_t uid

Query iteration number, unique within the kr_rplan.

uint64_t creation_time_mono
uint64_t timestamp_mono

Time of query created or time of query to upstream resolver (milliseconds).

struct timeval timestamp

Real time for TTL+DNSSEC checks (.tv_sec only).

struct kr_zonecut zone_cut
struct kr_layer_pickle *deferred
int8_t cname_depth

Current xNAME depth, set by iterator.

0 = uninitialized, 1 = no CNAME, … See also KR_CNAME_CHAIN_LIMIT.

struct kr_query *cname_parent

Pointer to the query that originated this one because of following a CNAME (or NULL).

struct kr_request *request

Parent resolution request.

kr_stale_cb stale_cb

See the type.

struct kr_server_selection server_selection
struct kr_rplan
#include <rplan.h>

Query resolution plan structure.

The structure most importantly holds the original query, answer and the list of pending queries required to resolve the original query. It also keeps a notion of current zone cut.

Public Members

kr_qarray_t pending

List of pending queries.

Beware: order is significant ATM, as the last is the next one to solve, and they may be inter-dependent.

kr_qarray_t resolved

List of resolved queries.

struct kr_query *initial

The initial query (also in pending or resolved).

struct kr_request *request

Parent resolution request.

knot_mm_t *pool

Temporary memory pool.

uint32_t next_uid

Next value for kr_query::uid (incremental).





int cache_peek(kr_layer_t *ctx, knot_pkt_t *pkt)
int cache_stash(kr_layer_t *ctx, knot_pkt_t *pkt)
int kr_cache_open(struct kr_cache *cache, const struct kr_cdb_api *api, struct kr_cdb_opts *opts, knot_mm_t *mm)

Open/create cache with provided storage options.

  • cache – cache structure to be initialized

  • api – storage engine API

  • opts – storage-specific options (may be NULL for default)

  • mm – memory context.


0 or an error code

void kr_cache_close(struct kr_cache *cache)

Close persistent cache.


This doesn’t clear the data, just closes the connection to the database.

  • cache – structure

int kr_cache_commit(struct kr_cache *cache)

Run after a row of operations to release transaction/lock if needed.

static inline bool kr_cache_is_open(struct kr_cache *cache)

Return true if cache is open and enabled.

static inline void kr_cache_make_checkpoint(struct kr_cache *cache)

(Re)set the time pair to the current values.

int kr_cache_insert_rr(struct kr_cache *cache, const knot_rrset_t *rr, const knot_rrset_t *rrsig, uint8_t rank, uint32_t timestamp, bool ins_nsec_p)

Insert RRSet into cache, replacing any existing data.

  • cache – cache structure

  • rr – inserted RRSet

  • rrsig – RRSIG for inserted RRSet (optional)

  • rank – rank of the data

  • timestamp – current time (as-if; if the RR are older, their timestamp is appropriate)

  • ins_nsec_p – update NSEC* parameters if applicable


0 or an errcode

int kr_cache_clear(struct kr_cache *cache)

Clear all items from the cache.

  • cache – cache structure


if nonzero is returned, there’s a big problem - you probably want to abort(), perhaps except for kr_error(EAGAIN) which probably indicates transient errors.
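In line with the note above, a caller might handle the result like this (a sketch; kr_error() is the library’s errno wrapper):

```c
int ret = kr_cache_clear(cache);
if (ret != 0 && ret != kr_error(EAGAIN)) {
        abort(); /* likely a serious problem, per the note above */
}
```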

int kr_cache_peek_exact(struct kr_cache *cache, const knot_dname_t *name, uint16_t type, struct kr_cache_p *peek)
int32_t kr_cache_ttl(const struct kr_cache_p *peek, const struct kr_query *qry, const knot_dname_t *name, uint16_t type)
int kr_cache_materialize(knot_rdataset_t *dst, const struct kr_cache_p *ref, knot_mm_t *pool)
int kr_cache_remove(struct kr_cache *cache, const knot_dname_t *name, uint16_t type)

Remove an entry from cache.


only “exact hits” are considered ATM, and some other information may be removed alongside.

  • cache – cache structure

  • name – dname

  • type – rr type


number of deleted records, or negative error code

int kr_cache_match(struct kr_cache *cache, const knot_dname_t *name, bool exact_name, knot_db_val_t keyval[][2], int maxcount)

Get keys matching a dname lookup-format (lf) prefix.


the cache keys are matched by prefix, i.e. the result very much depends on their structure; see CACHE_KEY_DEF.

  • cache – cache structure

  • name – dname

  • exact_name – whether to only consider exact name matches

  • keyval – matched key-value pairs

  • maxcount – limit on the number of returned key-value pairs


result count or an errcode

int kr_cache_remove_subtree(struct kr_cache *cache, const knot_dname_t *name, bool exact_name, int maxcount)

Remove a subtree in cache.

Like kr_cache_match(), but the matched entries are removed instead of returned.


number of deleted entries or an errcode

int kr_cache_closest_apex(struct kr_cache *cache, const knot_dname_t *name, bool is_DS, knot_dname_t **apex)

Find the closest cached zone apex for a name (in cache).


timestamp is found by a syscall, and stale-serving is not considered

  • is_DS – start searching one name higher


the number of labels to remove from the name, or negative error code

int kr_unpack_cache_key(knot_db_val_t key, knot_dname_t *buf, uint16_t *type)

Unpack dname and type from db key.


only “exact hits” are considered ATM, moreover xNAME records are “hidden” as NS. (see comments in struct entry_h)

  • key – db key representation

  • buf – output buffer of domain name in dname format

  • type – output for type


length of dname or an errcode

int kr_cache_check_health(struct kr_cache *cache, int interval)

Periodic kr_cdb_api::check_health().

  • interval – in milliseconds. 0 for one-time check, -1 to stop the checks.


see check_health() for one-time check; otherwise normal kr_error() code.


const char *kr_cache_emergency_file_to_remove

Path to cache file to remove on critical out-of-space error.

(do NOT modify it)

struct kr_cache
#include <api.h>

Cache structure, keeps API, instance and metadata.

Public Members

kr_cdb_pt db

Storage instance.

const struct kr_cdb_api *api

Storage engine.

struct kr_cdb_stats stats
uint32_t ttl_min
uint32_t ttl_max

TTL limits; enforced primarily in iterator actually.

struct timeval checkpoint_walltime

Wall time on the last check-point.

uint64_t checkpoint_monotime

Monotonic milliseconds on the last check-point.

uv_timer_t *health_timer

Timer used for kr_cache_check_health()

struct kr_cache_p

Public Members

uint32_t time

The time of inception.

uint32_t ttl

TTL at inception moment.

Assuming it fits into int32_t ATM.

uint8_t rank

See enum kr_rank.

void *raw_data
void *raw_bound
struct kr_cache_p.[anonymous] [anonymous]

Header internal for cache implementation(s).

Only LMDB works for now.



LATER(optim.): this is overshot, but struct key usage should be cheap ATM.


Size of the RR count field.

VERBOSE_MSG(qry, ...)
cache_op(cache, op, ...)

Shorthand for operations on cache backend.


typedef uint32_t nsec_p_hash_t

Hash of NSEC3 parameters, used as a tag to separate different chains for same zone.

typedef knot_db_val_t entry_list_t[EL_LENGTH]

Decompressed entry_apex.

It’s an array of unparsed entry_h references. Note: arrays are passed “by reference” to functions (in C99).


enum [anonymous]


enum EL

Indices for decompressed entry_list_t.


enumerator EL_NS
enumerator EL_CNAME
enumerator EL_DNAME
enumerator EL_LENGTH
enum [anonymous]


enumerator AR_ANSWER

Positive answer record.

It might be wildcard-expanded.

enumerator AR_SOA

SOA record.

enumerator AR_NSEC

NSEC* covering or matching the SNAME (next closer name in NSEC3 case).

enumerator AR_WILD

NSEC* covering or matching the source of synthesis.

enumerator AR_CPE

NSEC3 matching the closest provable encloser.


struct entry_h *entry_h_consistent_E(knot_db_val_t data, uint16_t type)

Check basic consistency of entry_h for ‘E’ entries, not looking into ->data.

(for is_packet the length of data is checked)

struct entry_apex *entry_apex_consistent(knot_db_val_t val)
static inline struct entry_h *entry_h_consistent_NSEC(knot_db_val_t data)

Consistency check, ATM common for NSEC and NSEC3.

static inline struct entry_h *entry_h_consistent(knot_db_val_t data, uint16_t type)
static inline int nsec_p_rdlen(const uint8_t *rdata)
static inline nsec_p_hash_t nsec_p_mkHash(const uint8_t *nsec_p)
static inline size_t key_nwz_off(const struct key *k)
static inline size_t key_nsec3_hash_off(const struct key *k)
knot_db_val_t key_exact_type_maypkt(struct key *k, uint16_t type)

Finish constructing a string key for exact search.

It’s assumed that kr_dname_lf(k->buf, owner, *) has been run.

static inline knot_db_val_t key_exact_type(struct key *k, uint16_t type)

Like key_exact_type_maypkt but with extra checks if used for RRs only.

static inline uint16_t EL2RRTYPE(enum EL i)
int entry_h_seek(knot_db_val_t *val, uint16_t type)

There may be multiple entries within, so rewind val to the one we want.

ATM there are multiple types only for the NS ktype - it also accommodates xNAMEs.


val->len represents the bound of the whole list, not of a single entry.


in case of ENOENT, val is still rewound to the beginning of the next entry.


error code TODO: maybe get rid of this API?

int entry_h_splice(knot_db_val_t *val_new_entry, uint8_t rank, const knot_db_val_t key, const uint16_t ktype, const uint16_t type, const knot_dname_t *owner, const struct kr_query *qry, struct kr_cache *cache, uint32_t timestamp)

Prepare space to insert an entry.

Some checks are performed (rank, TTL), the current entry in cache is copied with a hole ready for the new entry (old one of the same type is cut out).

  • val_new_entry – The only changing parameter; ->len is read, ->data written.


error code

int entry_list_parse(const knot_db_val_t val, entry_list_t list)

Parse an entry_apex into individual items.


error code.

static inline size_t to_even(size_t n)
static inline int entry_list_serial_size(const entry_list_t list)
void entry_list_memcpy(struct entry_apex *ea, entry_list_t list)

Fill contents of an entry_apex.


NULL pointers are overwritten - caller may like to fill the space later.

void stash_pkt(const knot_pkt_t *pkt, const struct kr_query *qry, const struct kr_request *req, bool needs_pkt)

Stash the packet into cache (if suitable, etc.)

  • needs_pkt – we need the packet due to not stashing some RRs; see stash_rrset() for details. It assumes check_dname_for_lf().

int answer_from_pkt(kr_layer_t *ctx, knot_pkt_t *pkt, uint16_t type, const struct entry_h *eh, const void *eh_bound, uint32_t new_ttl)

Try answering from packet cache, given an entry_h.

This assumes the TTL is OK and entry_h_consistent, but it may still return error. On success it handles all the rest, incl. qry->flags.

static inline bool is_expiring(uint32_t orig_ttl, uint32_t new_ttl)

Record is expiring if it has less than 1% of its TTL (or less than 5 s) remaining.

int32_t get_new_ttl(const struct entry_h *entry, const struct kr_query *qry, const knot_dname_t *owner, uint16_t type, uint32_t now)

Returns a signed result so you can inspect how stale the RR is.


NSEC* uses zone name ATM; for NSEC3 the owner may not even be knowable.

  • owner – name for stale-serving decisions. You may pass NULL to disable stale.

  • type – for stale-serving.

static inline int rdataset_dematerialize_size(const knot_rdataset_t *rds)

Compute size of serialized rdataset.

NULL is accepted as empty set.

static inline int rdataset_dematerialized_size(const uint8_t *data, uint16_t *rdataset_count)

Analyze the length of a dematerialized rdataset.

Note that in the data it’s KR_CACHE_RR_COUNT_SIZE and then this returned size.

void rdataset_dematerialize(const knot_rdataset_t *rds, uint8_t *restrict data)

Serialize an rdataset.

rds may be NULL as shorthand for an empty set.

int entry2answer(struct answer *ans, int id, const struct entry_h *eh, const uint8_t *eh_bound, const knot_dname_t *owner, uint16_t type, uint32_t new_ttl)

Materialize RRset + RRSIGs into ans->rrsets[id].

LATER(optim.): it’s slightly wasteful that we allocate knot_rrset_t for the packet


error code. They are all bad conditions and “guarded” by kresd’s assertions.

int pkt_renew(knot_pkt_t *pkt, const knot_dname_t *name, uint16_t type)

Prepare answer packet to be filled by RRs (without RR data in wire).

int pkt_append(knot_pkt_t *pkt, const struct answer_rrset *rrset, uint8_t rank)

Append RRset + its RRSIGs into the current section (shallow copy), with given rank.


it works with empty set as well (skipped)


pkt->wire is not updated in any way


KNOT_CLASS_IN is assumed


Whole RRsets are put into the pseudo-packet; normal parsed packets would only contain single-RR sets.

knot_db_val_t key_NSEC1(struct key *k, const knot_dname_t *name, bool add_wildcard)

Construct a string key for NSEC (1) predecessor-search.


k->zlf_len is assumed to have been correctly set

  • add_wildcard – Act as if the name was extended by “*.”

int nsec1_encloser(struct key *k, struct answer *ans, const int sname_labels, int *clencl_labels, knot_db_val_t *cover_low_kwz, knot_db_val_t *cover_hi_kwz, const struct kr_query *qry, struct kr_cache *cache)

Closest encloser check for NSEC (1).

To understand the interface, see the call point.

  • k – space to store key + input: zname and zlf_len


0: success; >0: try other (NSEC3); <0: exit cache immediately.

int nsec1_src_synth(struct key *k, struct answer *ans, const knot_dname_t *clencl_name, knot_db_val_t cover_low_kwz, knot_db_val_t cover_hi_kwz, const struct kr_query *qry, struct kr_cache *cache)

Source of synthesis (SS) check for NSEC (1).

To understand the interface, see the call point.


0: continue; <0: exit cache immediately; AR_SOA: skip to adding SOA (SS was covered or matched for NODATA).

knot_db_val_t key_NSEC3(struct key *k, const knot_dname_t *nsec3_name, const nsec_p_hash_t nsec_p_hash)

Construct a string key for NSEC3 predecessor-search, from an NSEC3 name.


k->zlf_len is assumed to have been correctly set

int nsec3_encloser(struct key *k, struct answer *ans, const int sname_labels, int *clencl_labels, const struct kr_query *qry, struct kr_cache *cache)


See nsec1_encloser(…)

int nsec3_src_synth(struct key *k, struct answer *ans, const knot_dname_t *clencl_name, const struct kr_query *qry, struct kr_cache *cache)


See nsec1_src_synth(…)

static inline uint16_t get_uint16(const void *address)
static inline uint8_t *knot_db_val_bound(knot_db_val_t val)

Useful pattern, especially as void-pointer arithmetic isn’t standard-compliant.


static const int NSEC_P_MAXLEN = sizeof(uint32_t) + 5 + 255
static const int NSEC3_HASH_LEN = 20

Hash is always SHA1; I see no plans to standardize anything else.

static const int NSEC3_HASH_TXT_LEN = 32
struct entry_h

Public Members

uint32_t time

The time of inception.

uint32_t ttl

TTL at inception moment.

Assuming it fits into int32_t ATM.

uint8_t rank

See enum kr_rank.

bool is_packet

Negative-answer packet for insecure/bogus name.

bool has_optout

Only for packets; persisted DNSSEC_OPTOUT.

uint8_t _pad

We need even alignment for data now.

uint8_t data[]
struct nsec_p
#include <impl.h>

NSEC* parameters for the chain.

Public Members

const uint8_t *raw

Pointer to raw NSEC3 parameters; NULL for NSEC.

nsec_p_hash_t hash

Hash of raw, used for cache keys.

dnssec_nsec3_params_t libknot

Format for libknot; owns malloced memory!

struct key

Public Members

const knot_dname_t *zname

current zone name (points within qry->sname)

uint8_t zlf_len

length of current zone’s lookup format

uint16_t type

Corresponding key type; e.g.

NS for CNAME. Note: NSEC type is ambiguous (exact and range key).

uint8_t buf[KR_CACHE_KEY_MAXLEN]

The key data start at buf+1, and buf[0] contains some length.

For details see key_exact* and key_NSEC* functions.

struct entry_apex
#include <impl.h>

Header of ‘E’ entry with ktype == NS.

Inside is private to ./entry_list.c

We store xNAME at NS type to lower the number of searches in closest_NS(). CNAME is only considered for equal name, of course. We also store NSEC* parameters at NS type.

Public Members

bool has_ns
bool has_cname
bool has_dname
uint8_t pad_

1 byte + 2 bytes + x bytes would be weird; let’s do 2+2+x.

int8_t nsecs[ENTRY_APEX_NSECS_CNT]

We have two slots for NSEC* parameters.

This array describes how they’re filled; values: 0: none, 1: NSEC, 3: NSEC3.

Two slots are a compromise to smoothly handle normal rollovers (either changing NSEC3 parameters or between NSEC and NSEC3).

uint8_t data[]
struct answer
#include <impl.h>

Partially constructed answer when gathering RRsets from cache.

Public Members

int rcode


struct nsec_p nsec_p

Don’t mix different NSEC* parameters in one answer.

knot_mm_t *mm

Allocator for rrsets.

struct answer.answer_rrset rrsets[1 + 1 + 3]

see AR_ANSWER and friends; only required records are filled

struct answer_rrset

Public Members

ranked_rr_array_entry_t set

set+rank for the main data

knot_rdataset_t sig_rds

RRSIG data, if any.


Provides server selection API (see kr_server_selection) and functions common to both implementations.




enum kr_selection_error

These errors are to be reported as feedback to server selection.

See kr_server_selection::error for more details.


enumerator KR_SELECTION_OK

inside an answer without an OPT record


with an OPT record


Name or type mismatch.


Too long chain, or a cycle.


Leave this last, as it is used as array size.

enum kr_transport_protocol



Selected name with no IPv4 address; it has to be resolved first.


Selected name with no IPv6 address; it has to be resolved first.



void kr_server_selection_init(struct kr_query *qry)

Initialize the server selection API for qry.

The implementation is to be chosen based on qry->flags.

int kr_forward_add_target(struct kr_request *req, const struct sockaddr *sock)

Add forwarding target to request.

This is exposed to Lua in order to add forwarding targets to request. These are then shared by all the queries in said request.

struct kr_transport *select_transport(const struct choice choices[], int choices_len, const struct to_resolve unresolved[], int unresolved_len, int timeouts, struct knot_mm *mempool, bool tcp, size_t *choice_index)

Based on passed choices, choose the next transport.

Common function to both implementations (iteration and forwarding). The *_choose_transport functions from selection_*.h preprocess the input for this one.

  • choices – Options to choose from, see struct above

  • unresolved – Array of names that can still be resolved (i.e. no A/AAAA record is known for them yet)

  • timeouts – Number of timeouts that occurred in this query (used for exponential backoff)

  • mempool – Memory context of current request

  • tcp – Force TCP as transport protocol

  • choice_index – [out] Optionally, the index of the chosen transport in the choices array.


Chosen transport (on mempool) or NULL when no choice is viable

void update_rtt(struct kr_query *qry, struct address_state *addr_state, const struct kr_transport *transport, unsigned rtt)

Common part of RTT feedback mechanism.

Notes RTT to global cache.

void error(struct kr_query *qry, struct address_state *addr_state, const struct kr_transport *transport, enum kr_selection_error sel_error)

Common part of error feedback mechanism.

struct rtt_state get_rtt_state(const uint8_t *ip, size_t len, struct kr_cache *cache)

Get RTT state from cache.

Returns default_rtt_state on unknown addresses.

Note that this opens a cache transaction which is usually closed by calling put_rtt_state, i.e. the caller is responsible for closing it (e.g. by calling kr_cache_commit).

int put_rtt_state(const uint8_t *ip, size_t len, struct rtt_state state, struct kr_cache *cache)
void bytes_to_ip(uint8_t *bytes, size_t len, uint16_t port, union kr_sockaddr *dst)
uint8_t *ip_to_bytes(const union kr_sockaddr *src, size_t len)
void update_address_state(struct address_state *state, union kr_sockaddr *address, size_t address_len, struct kr_query *qry)
bool no6_is_bad(void)
struct kr_transport
#include <selection.h>

Output of the selection algorithm.

Public Members

knot_dname_t *ns_name

Set to “.” for forwarding targets.

union kr_sockaddr address
size_t address_len
enum kr_transport_protocol protocol
unsigned timeout

Timeout in ms to be set for UDP transmission.

bool timeout_capped

Timeout was capped to a maximum value based on the other candidates when choosing this transport.

The timeout therefore can be much lower than what we expect it to be. We basically probe the server for a sudden network change but we expect it to timeout in most cases. We have to keep this in mind when noting the timeout in cache.

bool deduplicated

True iff transport was set in worker.c:subreq_finalize; that means it may differ from the one originally chosen.

struct local_state

Public Members

int timeouts

Number of timeouts that occurred resolving this query.

bool truncated

Query was truncated, switch to TCP.

bool force_resolve

Force resolution of a new NS name (if possible). Done by selection.c:error in some cases.

bool force_udp

Used to work around auths with broken TCP.

void *private

Inner state of the implementation.

struct kr_server_selection
#include <selection.h>

Specifies an API for selecting transports and giving feedback on the choices.

The function pointers are to be used throughout resolver when some information about the transport is obtained. E.g. RTT in worker.c or RCODE in iterate.c,…

Public Members

bool initialized
void (*choose_transport)(struct kr_query *qry, struct kr_transport **transport)

Puts a pointer to the next transport for qry into transport.

Allocates new kr_transport in request’s mempool, chooses transport to be used for this query. Selection may fail, so transport can be set to NULL.

Param transport:

to be filled with pointer to the chosen transport or NULL on failure
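A driver-side call of this callback might look as follows. A sketch only (it assumes the kresd internals described above and is not compilable standalone):

```c
/* Sketch: how the resolver driver consults server selection. */
struct kr_transport *transport = NULL;
qry->server_selection.choose_transport(qry, &transport);
if (transport == NULL) {
        /* selection failed: no viable address or name to try */
} else {
        /* transport->address, ->protocol and ->timeout are now set;
         * the struct lives in the request's mempool */
}
```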

void (*update_rtt)(struct kr_query *qry, const struct kr_transport *transport, unsigned rtt)

Report back the RTT of network operation for transport in ms.

void (*error)(struct kr_query *qry, const struct kr_transport *transport, enum kr_selection_error error)

Report back error encountered with the chosen transport.

See enum kr_selection_error.

struct local_state *local_state
struct rtt_state
#include <selection.h>

To be held per IP address in the global LMDB cache.

Public Members

int32_t srtt

Smoothed RTT, i.e.

an estimate of round-trip time.

int32_t variance

An estimate of RTT’s standard deviation (not variance).

int32_t consecutive_timeouts

Note: some TCP and TLS failures are also considered as timeouts.

uint64_t dead_since

Timestamp of pronouncing this IP bad based on KR_NS_TIMEOUT_ROW_DEAD.

struct address_state
#include <selection.h>

To be held per IP address and locally “inside” query.

Public Members

unsigned int generation

Used to distinguish old and valid records in local_state; -1 means unusable IP.

struct rtt_state rtt_state
knot_dname_t *ns_name
bool tls_capable
int choice_array_index
int error_count
bool broken
struct choice
#include <selection.h>

Array of these is one of inputs for the actual selection algorithm (select_transport)

Public Members

union kr_sockaddr address
size_t address_len
struct address_state *address_state
uint16_t port

used to overwrite the port number; if zero, select_transport determines it.

struct to_resolve
#include <selection.h>

An array of these describes names to be resolved (i.e. names that lack some address).

Public Members

knot_dname_t *name
enum kr_transport_protocol type



int kr_zonecut_init(struct kr_zonecut *cut, const knot_dname_t *name, knot_mm_t *pool)

Populate root zone cut with SBELT.

  • cut – zone cut

  • name

  • pool


0 or error code

void kr_zonecut_deinit(struct kr_zonecut *cut)

Clear the structure and free the address set.

  • cut – zone cut

void kr_zonecut_move(struct kr_zonecut *to, const struct kr_zonecut *from)

Move a zonecut, transferring ownership of any pointed-to memory.

  • to – the target - it gets deinit-ed

  • from – the source - not modified, but shouldn’t be used afterward

void kr_zonecut_set(struct kr_zonecut *cut, const knot_dname_t *name)

Reset zone cut to given name and clear address list.


This clears the address list even if the name doesn’t change. TA and DNSKEY don’t change.

  • cut – zone cut to be set

  • name – new zone cut name

int kr_zonecut_copy(struct kr_zonecut *dst, const struct kr_zonecut *src)

Copy zone cut, including all data.

Does not copy keys and trust anchor.


addresses for names in src get replaced and others are left as they were.

  • dst – destination zone cut

  • src – source zone cut


0 or an error code; If it fails with kr_error(ENOMEM), it may be in a half-filled state, but it’s safe to deinit…

int kr_zonecut_copy_trust(struct kr_zonecut *dst, const struct kr_zonecut *src)

Copy zone trust anchor and keys.

  • dst – destination zone cut

  • src – source zone cut


0 or an error code

int kr_zonecut_add(struct kr_zonecut *cut, const knot_dname_t *ns, const void *data, int len)

Add address record to the zone cut.

The record will be merged with existing data; it may be of either A or AAAA type.

  • cut – zone cut to be populated

  • ns – nameserver name

  • data – typically knot_rdata_t::data

  • len – typically knot_rdata_t::len


0 or error code

int kr_zonecut_del(struct kr_zonecut *cut, const knot_dname_t *ns, const void *data, int len)

Delete nameserver/address pair from the zone cut.

  • cut

  • ns – name server name

  • data – typically knot_rdata_t::data

  • len – typically knot_rdata_t::len


0 or error code

int kr_zonecut_del_all(struct kr_zonecut *cut, const knot_dname_t *ns)

Delete all addresses associated with the given name.

  • cut

  • ns – name server name


0 or error code

pack_t *kr_zonecut_find(struct kr_zonecut *cut, const knot_dname_t *ns)

Find nameserver address list in the zone cut.


This can be used as a membership test: a non-NULL pack is returned iff the nameserver name exists.

  • cut

  • ns – name server name


pack of addresses or NULL
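The membership test described above is commonly paired with an emptiness check on the returned pack. A sketch (assumes kresd internals, not compilable standalone; the pack field name is taken from generic array_t usage and should be treated as an assumption):

```c
/* Sketch: membership test plus "do we know any address yet?". */
pack_t *addrs = kr_zonecut_find(cut, ns_name);
if (addrs == NULL) {
        /* nameserver name is not in this zone cut at all */
} else if (addrs->len == 0) {
        /* name is known but no addresses stored yet
         * -> its A/AAAA must be resolved first */
}
```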

int kr_zonecut_set_sbelt(struct kr_context *ctx, struct kr_zonecut *cut)

Populate zone cut with a root zone using SBELT (RFC 1034).

  • ctx – resolution context (to fetch root hints)

  • cut – zone cut to be populated


0 or error code

int kr_zonecut_find_cached(struct kr_context *ctx, struct kr_zonecut *cut, const knot_dname_t *name, const struct kr_query *qry, bool *restrict secured)

Populate zone cut address set from cache.

The size is limited to avoid possibility of doing too much CPU work.

  • ctx – resolution context (to fetch data from LRU caches)

  • cut – zone cut to be populated

  • name – QNAME to start finding zone cut for

  • qry – query for timestamp and stale-serving decisions

  • secured – set to true if a secured zone cut is wanted; it will be set to false if the cut is provably insecure


0 or error code (ENOENT if it doesn’t find anything)

bool kr_zonecut_is_empty(struct kr_zonecut *cut)

Check if any address is present in the zone cut.

  • cut – zone cut to check



struct kr_zonecut
#include <zonecut.h>

Current zone cut representation.

Public Members

knot_dname_t *name

Zone cut name.

knot_rrset_t *key

Zone cut DNSKEY.

knot_rrset_t *trust_anchor

Current trust anchor.

struct kr_zonecut *parent

Parent zone cut.

trie_t *nsset

Map of nameserver => address_set (pack_t).

knot_mm_t *pool

Memory pool.


Module API definition and functions for (un)loading modules.



Export module API version (place this at the end of your module).

  • module – module name (e.g. policy)



typedef int (*kr_module_init_cb)(struct kr_module*)


int kr_module_load(struct kr_module *module, const char *name, const char *path)

Load a C module instance into memory.

And call its init().

  • module – module structure. Will be overwritten except for ->data on success.

  • name – module name

  • path – module search path


0 or an error

void kr_module_unload(struct kr_module *module)

Unload module instance.


currently used even for lua modules

  • module – module structure

kr_module_init_cb kr_module_get_embedded(const char *name)

Get embedded module’s init function by name (or NULL).

struct kr_module
#include <module.h>

Module representation.

The five symbols (init, …) may be defined by the module as name_init(), etc.; all are optional, and missing symbols are represented as NULLs.

Public Members

char *name
int (*init)(struct kr_module *self)


Called after loading the module.


error code. Lua modules: not populated, called via lua directly.

int (*deinit)(struct kr_module *self)


Called before unloading the module.


error code.

int (*config)(struct kr_module *self, const char *input)

Configure with encoded JSON (NULL if missing).


error code. Lua modules: not used and not useful from C. When called from lua, input is JSON, like for kr_prop_cb.

const kr_layer_api_t *layer

Packet processing API specs.

May be NULL. See docs on that type. Owned by the module code.

const struct kr_prop *props

List of properties.

May be NULL. Terminated by { NULL, NULL, NULL }. Lua modules: not used and not useful.

void *lib

dlopen() handle; RTLD_DEFAULT for embedded modules; NULL for lua modules.

void *data

Custom data context.
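For a module named mymodule, the optional symbols described above would be spelled mymodule_init() and so on. A hedged skeleton (not compilable standalone; it assumes the kresd module headers, and the export macro name is taken from current sources — treat it as an assumption):

```c
/* Sketch of a minimal C module named "mymodule". */
#include "lib/module.h"

int mymodule_init(struct kr_module *self)
{
        self->data = NULL;   /* custom context, see kr_module::data */
        return kr_ok();
}

int mymodule_deinit(struct kr_module *self)
{
        return kr_ok();
}

KR_MODULE_EXPORT(mymodule)   /* place at the end of the module */
```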

struct kr_prop
#include <module.h>

Module property (named callable).

Public Members

kr_prop_cb *cb
const char *name
const char *info


typedef struct kr_layer kr_layer_t

Packet processing context.

typedef struct kr_layer_api kr_layer_api_t


enum kr_layer_state

Layer processing states.

Only one value at a time (but see TODO).

Each state represents the state machine transition, and determines readiness for the next action. See struct kr_layer_api for the actions.

TODO: the cookie module sometimes sets (_FAIL | _DONE) on purpose (!)



Consume data.


Produce data.

enumerator KR_STATE_DONE

Finished successfully or a special case: in CONSUME phase this can be used (by iterator) to do a transition to PRODUCE phase again, in which case the packet wasn’t accepted for some reason.

enumerator KR_STATE_FAIL


enumerator KR_STATE_YIELD

Paused, waiting for a sub-query.


static inline bool kr_state_consistent(enum kr_layer_state s)

Check that a kr_layer_state makes sense.

We’re not very strict ATM.

struct kr_layer
#include <layer.h>

Packet processing context.

Public Members

int state

The current state; bitmap of enum kr_layer_state.

struct kr_request *req

The corresponding request.

const struct kr_layer_api *api
knot_pkt_t *pkt

In glue for lua kr_layer_api it’s used to pass the parameter.

struct sockaddr *dst

In glue for checkout layer it’s used to pass the parameter.

bool is_stream

In glue for checkout layer it’s used to pass the parameter.

struct kr_layer_api
#include <layer.h>

Packet processing module API.

All functions return the new kr_layer_state.

Lua modules are allowed to return nil/nothing, meaning the state shall not change.
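A layer typically implements only the callbacks it needs and returns either a new state or the unchanged ctx->state. A minimal sketch (assumes kresd and libknot internals, not compilable standalone):

```c
/* Sketch: a consume layer that fails the state on SERVFAIL answers
 * and otherwise leaves it unchanged. */
static int my_consume(kr_layer_t *ctx, knot_pkt_t *pkt)
{
        if (knot_wire_get_rcode(pkt->wire) == KNOT_RCODE_SERVFAIL)
                return KR_STATE_FAIL;
        return ctx->state;   /* no change */
}

static const kr_layer_api_t my_layer = {
        .consume = &my_consume,
};
```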

Public Members

int (*begin)(kr_layer_t *ctx)

Start of processing the DNS request.

int (*reset)(kr_layer_t *ctx)
int (*finish)(kr_layer_t *ctx)

Paired to begin, called both on successes and failures.

int (*consume)(kr_layer_t *ctx, knot_pkt_t *pkt)

Process an answer from upstream or from cache.

Lua API: call is omitted iff (state & KR_STATE_FAIL).

int (*produce)(kr_layer_t *ctx, knot_pkt_t *pkt)

Produce either an answer to the request or a query for upstream (or fail).

Lua API: call is omitted iff (state & KR_STATE_FAIL).

int (*checkout)(kr_layer_t *ctx, knot_pkt_t *packet, struct sockaddr *dst, int type)

Finalises the outbound query packet with the knowledge of the IP addresses.

The checkout layer doesn’t persist the state, so canceled subrequests don’t affect the resolution or rest of the processing. Lua API: call is omitted iff (state & KR_STATE_FAIL).

int (*answer_finalize)(kr_layer_t *ctx)

Finalises the answer.

Last chance to affect what will get into the answer, including EDNS. Not called if the packet is being dropped.

void *data

The C module can store anything in here.

int cb_slots[]

Internal to .


struct kr_layer_pickle
#include <layer.h>

Pickled layer state (api, input, state).

Public Members

struct kr_layer_pickle *next
const struct kr_layer_api *api
knot_pkt_t *pkt
unsigned state




Maximum length (excluding null-terminator) of a presentation-form address returned by kr_straddr.


Assert() but always, regardless of -DNDEBUG.

See also kr_assert().


Check an assertion that’s recoverable.

Returns true if it fails and needs handling.

If the check fails, optionally fork()+abort() to generate coredump and continue running in parent process. Return value must be handled to ensure safe recovery from error. Use kr_require() for unrecoverable checks. The errno variable is not mangled, e.g. you can: if (kr_fails_assert(…)) return errno;


Kresd assertion without a return value.

These can be turned on or off, for mandatory unrecoverable checks, use kr_require(). For recoverable checks, use kr_fails_assert().

KR_DNAME_GET_STR(dname_str, dname)
KR_RRTYPE_GET_STR(rrtype_str, rrtype)
SWAP(x, y)

Swap two places.

Note: the parameters need to be without side effects.


typedef void (*trace_callback_f)(struct kr_request *request)

Callback for request events.

typedef void (*trace_log_f)(const struct kr_request *request, const char *msg)

Callback for request logging handler.

Param msg:

[in] Log message. Pointer is not valid after handler returns.

typedef struct kr_http_header_array_entry kr_http_header_array_entry_t
typedef see_source_code kr_http_header_array_t

Array of HTTP headers for DoH.

typedef struct timespec kr_timer_t

Timer, i.e. stop-watch.


void kr_fail(bool is_fatal, const char *expr, const char *func, const char *file, int line)

Use kr_require(), kr_assert() or kr_fails_assert() instead of calling this function directly.

static inline bool kr_assert_func(bool result, const char *expr, const char *func, const char *file, int line)

Use kr_require(), kr_assert() or kr_fails_assert() instead of calling this function directly.

static inline int strcmp_p(const void *p1, const void *p2)

A strcmp() variant directly usable for qsort() on an array of strings.

static inline void get_workdir(char *out, size_t len)

Get current working directory with fallback value.

char *kr_strcatdup(unsigned n, ...)

Concatenate N strings.

char *kr_absolutize_path(const char *dirname, const char *fname)

Construct absolute file path, without resolving symlinks.


malloc-ed string or NULL (+errno in that case)

void kr_rnd_buffered(void *data, unsigned int size)

You probably want kr_rand_* convenience functions instead.

This is a buffered version of gnutls_rnd(GNUTLS_RND_NONCE, ..)

inline uint64_t kr_rand_bytes(unsigned int size)

Return a few random bytes.

static inline bool kr_rand_coin(unsigned int nomin, unsigned int denomin)

Throw a pseudo-random coin, succeeding approximately with probability nomin/denomin.

  • low precision, only one byte of randomness (or none with extreme parameters)

  • tip: use !kr_rand_coin() to get the complementary probability

int kr_memreserve(void *baton, void **mem, size_t elm_size, size_t want, size_t *have)

Memory reservation routine for knot_mm_t.

int kr_pkt_recycle(knot_pkt_t *pkt)
int kr_pkt_clear_payload(knot_pkt_t *pkt)
int kr_pkt_put(knot_pkt_t *pkt, const knot_dname_t *name, uint32_t ttl, uint16_t rclass, uint16_t rtype, const uint8_t *rdata, uint16_t rdlen)

Construct and put record to packet.

void kr_pkt_make_auth_header(knot_pkt_t *pkt)

Set packet header suitable for authoritative answer.

(for policy module)

static inline knot_dname_t *kr_pkt_qname_raw(const knot_pkt_t *pkt)

Get pointer to the in-header QNAME.

That’s normally not lower-cased. However, when receiving packets from upstream we xor-apply the secret during packet-parsing, so it would get lower-cased after that point if the case was right.

const char *kr_inaddr(const struct sockaddr *addr)

Address bytes for given family.

int kr_inaddr_family(const struct sockaddr *addr)

Address family.

int kr_inaddr_len(const struct sockaddr *addr)

Address length for given family, i.e. sizeof(struct in*_addr).

int kr_sockaddr_len(const struct sockaddr *addr)

Sockaddr length for given family, i.e. sizeof(struct sockaddr_in*).

ssize_t kr_sockaddr_key(struct kr_sockaddr_key_storage *dst, const struct sockaddr *addr)

Creates a packed structure from the specified addr, safe for use as a key in containers like trie_t, and writes it into dst.

On success, returns the actual length of the key.

Returns kr_error(EAFNOSUPPORT) if the family of addr is unsupported.

struct sockaddr *kr_sockaddr_from_key(struct sockaddr_storage *dst, const char *key)

Creates a struct sockaddr from the specified key created using the kr_sockaddr_key() function.

bool kr_sockaddr_key_same_addr(const char *key_a, const char *key_b)

Checks whether the two keys represent the same address; does NOT compare the ports.

int kr_sockaddr_cmp(const struct sockaddr *left, const struct sockaddr *right)

Compare two given sockaddr.

Returns 0 if the addresses are equal, an error code otherwise.

uint16_t kr_inaddr_port(const struct sockaddr *addr)


void kr_inaddr_set_port(struct sockaddr *addr, uint16_t port)

Set port.

int kr_inaddr_str(const struct sockaddr *addr, char *buf, size_t *buflen)

Write string representation for given address as “<addr>#<port>”.

  • addr[in] the raw address

  • buf[out] the buffer for output string

  • buflen[inout] the available(in) and utilized(out) length, including \0

int kr_ntop_str(int family, const void *src, uint16_t port, char *buf, size_t *buflen)

Write string representation for given address as “<addr>#<port>”.

It’s the same as kr_inaddr_str(), but the address is passed in native format as for inet_ntop() (4 or 16 bytes), and the port is a separate parameter.

char *kr_straddr(const struct sockaddr *addr)
int kr_straddr_family(const char *addr)

Return address type for string.

int kr_family_len(int family)

Return address length in given family (struct in*_addr).

struct sockaddr *kr_straddr_socket(const char *addr, int port, knot_mm_t *pool)

Create a sockaddr* from string+port representation.

Also accepts IPv6 link-local addresses and AF_UNIX paths starting with “/” (ignoring port).

int kr_straddr_subnet(void *dst, const char *addr)

Parse address and return subnet length (bits).


‘dst’ must be at least sizeof(struct in6_addr) long.

int kr_straddr_join(const char *addr, uint16_t port, char *buf, size_t *buflen)

Formats IP address and port in “addr#port” format and performs validation.


The port is always formatted as a five-character string with leading zeros.


kr_error(EINVAL) - addr or buf is NULL, buflen is 0, or addr doesn’t contain a valid IP address; kr_error(ENOSPC) - buflen is too small

int kr_bitcmp(const char *a, const char *b, int bits)

Compare memory bitwise.

The semantics are “the same” as for memcmp(). The partial byte is considered with more-significant bits first, so this is e.g. suitable for comparing IP prefixes.

void kr_bitmask(unsigned char *a, size_t a_len, int bits)

Masks bits.

The specified number of bits in a from the left (network order) will remain their original value, while the rest will be set to zero. This is useful for storing network addresses in a trie.

Check whether addr points to an AF_INET6 address and whether the address is link-local.

int kr_rrkey(char *key, uint16_t class, const knot_dname_t *owner, uint16_t type, uint16_t additional)

Create unique null-terminated string key for RR.

  • key – Destination buffer for the key; its size MUST be KR_RRKEY_LEN or larger.

  • class – RR class.

  • owner – RR owner name.

  • type – RR type.

  • additional – flags (for instance can be used for storing covered type when RR type is RRSIG).


key length if successful or an error

int kr_ranked_rrarray_add(ranked_rr_array_t *array, const knot_rrset_t *rr, uint8_t rank, bool to_wire, uint32_t qry_uid, knot_mm_t *pool)

Add RRSet copy to a ranked RR array.

To convert to standard RRs inside, you need to call _finalize() afterwards, and the memory of rr->rrs.rdata has to remain valid until then.


array index (>= 0) or error code (< 0)

int kr_ranked_rrarray_finalize(ranked_rr_array_t *array, uint32_t qry_uid, knot_mm_t *pool)

Finalize in_progress sets - all with matching qry_uid.

int kr_ranked_rrarray_set_wire(ranked_rr_array_t *array, bool to_wire, uint32_t qry_uid, bool check_dups, bool (*extraCheck)(const ranked_rr_array_entry_t*))
char *kr_pkt_text(const knot_pkt_t *pkt)

Newly allocated string representation of packet. Caller has to free() returned string.

char *kr_rrset_text(const knot_rrset_t *rr)
static inline char *kr_dname_text(const knot_dname_t *name)
static inline char *kr_rrtype_text(const uint16_t rrtype)
char *kr_module_call(struct kr_context *ctx, const char *module, const char *prop, const char *input)

Call module property.

static inline uint16_t kr_rrset_type_maysig(const knot_rrset_t *rr)

Return the (covered) type of a nonempty RRset.

uint64_t kr_now(void)

The current time in monotonic milliseconds.


it may be outdated in case of long callbacks; see uv_now().

void kr_uv_free_cb(uv_handle_t *handle)

Call free(handle->data); it’s useful e.g. as a callback in uv_close().

int knot_dname_lf2wire(knot_dname_t *dst, uint8_t len, const uint8_t *lf)

Convert name from lookup format to wire.

See knot_dname_lf


len bytes are read and len+1 are written with normal LF, but it’s also allowed that the final zero byte is omitted in LF.


the number of bytes written (>0) or error code (<0)

static inline int kr_dname_lf(uint8_t *dst, const knot_dname_t *src, bool add_wildcard)

Patched knot_dname_lf.

LF for “.” has length zero instead of one, for consistency. (TODO: consistency?)


packet is always NULL

  • add_wildcard – append the wildcard label

static inline void kr_timer_start(kr_timer_t *start)

Start the timer, i.e. set the reference point.

static inline double kr_timer_elapsed(kr_timer_t *start)

Get elapsed time in floating-point seconds.

static inline uint64_t kr_timer_elapsed_us(kr_timer_t *start)

Get elapsed time in microseconds.

const char *kr_strptime_diff(const char *format, const char *time1_str, const char *time0_str, double *diff)

Difference between two calendar times specified as strings.

  • format[in] format for strptime

  • diff[out] result from C difftime(time1, time0)

void kr_rrset_init(knot_rrset_t *rrset, knot_dname_t *owner, uint16_t type, uint16_t rclass, uint32_t ttl)
bool kr_pkt_has_wire(const knot_pkt_t *pkt)
bool kr_pkt_has_dnssec(const knot_pkt_t *pkt)
uint16_t kr_pkt_qclass(const knot_pkt_t *pkt)
uint16_t kr_pkt_qtype(const knot_pkt_t *pkt)
uint32_t kr_rrsig_sig_inception(const knot_rdata_t *rdata)
uint32_t kr_rrsig_sig_expiration(const knot_rdata_t *rdata)
uint16_t kr_rrsig_type_covered(const knot_rdata_t *rdata)
time_t kr_file_mtime(const char *fname)
long long kr_fssize(const char *path)

Return filesystem size in bytes.

const char *kr_dirent_name(const struct dirent *de)

Simply return de->d_name.

(useful from Lua)


static const size_t KR_PKT_SIZE_NOWIRE = -1

When knot_pkt is passed from cache without ->wire, this is the ->size.

bool kr_dbg_assertion_abort

Whether kr_assert() and kr_fails_assert() checks should abort.

int kr_dbg_assertion_fork

How often kr_assert() should fork the process before issuing abort (if configured).

This can be useful for debugging rare edge-cases in production. If both kr_dbg_assertion_abort and kr_dbg_assertion_fork are set, it is possible to both obtain a coredump (from the forked child) and recover from the non-fatal error in the parent process.

== 0 (false): no forking

> 0: minimum delay between forks (in milliseconds, each instance separately, randomized ±25%)

< 0: no rate-limiting (not recommended)

const knot_dump_style_t KR_DUMP_STYLE_DEFAULT

Style used by the kr_*_text() functions.

struct kr_sockaddr_key_storage
#include <utils.h>

Used for reserving enough space for the kr_sockaddr_key function output.

Public Members

char bytes[sizeof(struct sockaddr_storage)]
struct kr_http_header_array_entry

Public Members

char *name
char *value
union kr_sockaddr
#include <utils.h>

Simple storage for an IPx address and its port, or AF_UNSPEC.

Public Members

struct sockaddr ip
struct sockaddr_in ip4
struct sockaddr_in6 ip6
union kr_in_addr
#include <utils.h>

Simple storage for IPx addresses.

Public Members

struct in_addr ip4
struct in6_addr ip6




static inline int kr_error(int x)

Generics library

This small collection of “generics” was born out of frustration that I couldn’t find such a thing for C. Everything out there is either bloated, has a poor interface, lacks null-checking, or doesn’t allow a custom allocation scheme. BSD-licensed (or compatible) code is allowed here, as long as it comes with a test case in tests/test_generics.c.

  • array - a set of simple macros to make working with dynamic arrays easier.

  • queue - a FIFO + LIFO queue.

  • pack - length-prefixed list of objects (i.e. array-list).

  • lru - LRU-like hash table

  • trie - a trie-based key-value map, taken from knot-dns


A set of simple macros to make working with dynamic arrays easier.

MIN(array_push(arr, val), other)

May evaluate the code twice, leading to unexpected behaviour. This is a price to pay for the absence of proper generics.

Example usage:
array_t(const char*) arr;
array_init(arr);

// Reserve memory in advance
if (array_reserve(arr, 2) < 0) {
    return ENOMEM;
}

// Already reserved, cannot fail
array_push(arr, "princess");
array_push(arr, "leia");

// Not reserved, may fail
if (array_push(arr, "han") < 0) {
    return ENOMEM;
}

// It does not hide what it really is
for (size_t i = 0; i < arr.len; ++i) {
    printf("%s\n", arr.at[i]);
}

// Random delete
array_del(arr, 0);


C has no generics, so this is implemented mostly using macros. Be aware of that, as direct usage of the macros inside other, evaluating macros may lead to unexpected behaviour:



Declare an array structure.


Zero-initialize the array.


Free and zero-initialize the array (plain malloc/free).

array_clear_mm(array, free, baton)

Make the array empty and free pointed-to memory.

Mempool usage: pass mm_free and a knot_mm_t* .

array_reserve(array, n)

Reserve capacity for at least n elements.


0 if success, <0 on failure

array_reserve_mm(array, n, reserve, baton)

Reserve capacity for at least n elements.

Mempool usage: pass kr_memreserve and a knot_mm_t* .


0 if success, <0 on failure

array_push_mm(array, val, reserve, baton)

Push value at the end of the array, resize it if necessary.

Mempool usage: pass kr_memreserve and a knot_mm_t* .


May fail if the capacity is not reserved.


element index on success, <0 on failure

array_push(array, val)

Push value at the end of the array, resize it if necessary (plain malloc/free).


May fail if the capacity is not reserved.


element index on success, <0 on failure


Pop value from the end of the array.

array_del(array, i)

Remove value at given index.


0 on success, <0 on failure


Return last element of the array.


Undefined if the array is empty.


static inline size_t array_next_count(size_t elm_size, size_t want, size_t have)

Choose array length when it overflows.

static inline int array_std_reserve(void *baton, void **mem, size_t elm_size, size_t want, size_t *have)
static inline void array_std_free(void *baton, void *p)


A queue, usable for FIFO and LIFO simultaneously.

Both the head and tail of the queue can be accessed and pushed to, but only the head can be popped from.

Example usage:

// define new queue type, and init a new queue instance
typedef queue_t(int) queue_int_t;
queue_int_t q;
queue_init(q);
// do some operations
queue_push(q, 1);
queue_push(q, 2);
queue_push(q, 3);
queue_push(q, 4);
queue_pop(q);
kr_require(queue_head(q) == 2);
kr_require(queue_tail(q) == 4);

// you may iterate
typedef queue_it_t(int) queue_it_int_t;
for (queue_it_int_t it = queue_it_begin(q); !queue_it_finished(it);
     queue_it_next(it)) {
    ++queue_it_val(it);
}
kr_require(queue_tail(q) == 5);

queue_push_head(q, 0);
++queue_tail(q);
kr_require(queue_tail(q) == 6);
// free it up
queue_deinit(q);

// you may use dynamic allocation for the type itself
queue_int_t *qm = malloc(sizeof(queue_int_t));
queue_init(*qm);
queue_deinit(*qm);
free(qm);


The implementation uses a singly linked list of blocks (“chunks”) where each block stores an array of values (for better efficiency).



The type for queue, parametrized by value type.


Initialize a queue.

You can malloc() it the usual way.


De-initialize a queue: make it invalid and free any inner allocations.

queue_push(q, data)

Push data to queue’s tail.

(Type-safe version; use _impl() otherwise.)

queue_push_head(q, data)

Push data to queue’s head.

(Type-safe version; use _impl() otherwise.)


Remove the element at the head.

The queue must not be empty.


Return a “reference” to the element at the head (it’s an L-value).

The queue must not be empty.


Return a “reference” to the element at the tail (it’s an L-value).

The queue must not be empty.


Return the number of elements in the queue (very efficient).


Type for queue iterator, parametrized by value type.

It’s a simple structure that owns no other resources. You may NOT use it after doing any push or pop (without _begin again).


Initialize a queue iterator at the head of the queue.

If you use this in assignment (instead of initialization), you will unfortunately need to add corresponding type-cast in front. Beware: there’s no type-check between queue and iterator!


Return a “reference” to the current element (it’s an L-value) .


Test if the iterator has gone past the last element.

If it has, you may not use _val or _next.


Advance the iterator to the next element.


A length-prefixed list of objects, also an array list.

Each object is prefixed by its length; unlike an array, this structure permits variable-length data. It is also equivalent to a forward-only list backed by an array.


If a mistake happens somewhere, iteration may end up in an infinite loop, because the end test is an equality comparison on pointers.

Example usage:
pack_t pack;
pack_init(pack);

// Reserve 2 objects, 6 bytes total
pack_reserve(pack, 2, 4 + 2);

// Push 2 objects
pack_obj_push(pack, U8("jedi"), 4);
pack_obj_push(pack, U8("\xbe\xef"), 2);

// Iterate length-value pairs
uint8_t *it = pack_head(pack);
while (it != pack_tail(pack)) {
    uint8_t *val = pack_obj_val(it);
    it = pack_obj_next(it);
}

// Remove object
pack_obj_del(pack, U8("jedi"), 4);



Maximum object size is 2^16 bytes, see pack_objlen_t



Zero-initialize the pack.


Make the pack empty and free pointed-to memory (plain malloc/free).

pack_clear_mm(pack, free, baton)

Make the pack empty and free pointed-to memory.

Mempool usage: pass mm_free and a knot_mm_t* .

pack_reserve(pack, objs_count, objs_len)

Reserve space for additional objects in the pack (plain malloc/free).


0 if success, <0 on failure

pack_reserve_mm(pack, objs_count, objs_len, reserve, baton)

Reserve space for additional objects in the pack.

Mempool usage: pass kr_memreserve and a knot_mm_t* .


0 if success, <0 on failure


Return pointer to first packed object.

Recommended way to iterate: for (uint8_t *it = pack_head(pack); it != pack_tail(pack); it = pack_obj_next(it))


Return pack end pointer.


typedef uint16_t pack_objlen_t

Packed object length type.

typedef see_source_code pack_t

Pack is defined as an array of bytes.


static inline pack_objlen_t pack_obj_len(uint8_t *it)

Return packed object length.

static inline uint8_t *pack_obj_val(uint8_t *it)

Return packed object value.

static inline uint8_t *pack_obj_next(uint8_t *it)

Return pointer to next packed object.

static inline uint8_t *pack_last(pack_t pack)

Return pointer to the last packed object.

static inline int pack_obj_push(pack_t *pack, const uint8_t *obj, pack_objlen_t len)

Push object to the end of the pack.


0 on success, negative number on failure

static inline uint8_t *pack_obj_find(pack_t *pack, const uint8_t *obj, pack_objlen_t len)

Returns a pointer to packed object.


pointer to packed object or NULL

static inline int pack_obj_del(pack_t *pack, const uint8_t *obj, pack_objlen_t len)

Delete object from the pack.


0 on success, negative number on failure

static inline int pack_clone(pack_t **dst, const pack_t *src, knot_mm_t *pool)

Clone a pack, replacing destination pack; (*dst == NULL) is valid input.


kr_error(ENOMEM) on allocation failure.


A lossy cache.

Example usage:

// Define new LRU type
typedef lru_t(int) lru_int_t;

// Create LRU
lru_int_t *lru;
lru_create(&lru, 5, NULL, NULL);

// Insert some values
int *pi = lru_get_new(lru, "luke", strlen("luke"), NULL);
if (pi)
    *pi = 42;
pi = lru_get_new(lru, "leia", strlen("leia"), NULL);
if (pi)
    *pi = 24;

// Retrieve values
int *ret = lru_get_try(lru, "luke", strlen("luke"), NULL);
if (!ret) printf("luke dropped out!\n");
    else  printf("luke's number is %d\n", *ret);

char *enemies[] = {"goro", "raiden", "subzero", "scorpion"};
for (int i = 0; i < 4; ++i) {
    int *val = lru_get_new(lru, enemies[i], strlen(enemies[i]), NULL);
    if (val)
        *val = i;
}

// We're done
lru_free(lru);


The implementation tries to keep frequent keys and evict others, even if “used recently”, so it may refuse to store a new key on lru_get_new(). It uses hashing to split the problem pseudo-randomly into smaller groups, and within each it tries to approximate relative usage counts of the several most frequent keys/hashes. This tracking is done for more keys than are actually stored.



The type for LRU, parametrized by value type.

lru_create(ptable, max_slots, mm_ctx_array, mm_ctx)

Allocate and initialize an LRU with default associativity.

The real limit on the number of slots can be a bit larger but less than double.


The pointers to memory contexts need to remain valid during the whole life of the structure (or be NULL).

  • ptable – pointer to a pointer to the LRU

  • max_slots – number of slots

  • mm_ctx_array – memory context to use for the huge array, NULL for default. If you pass your own, it needs to produce CACHE_ALIGNED allocations (ubsan).

  • mm_ctx – memory context to use for individual key-value pairs, NULL for default


Free an LRU created by lru_create (it can be NULL).


Reset an LRU to the empty state (but preserve any settings).

lru_get_try(table, key_, len_)

Find key in the LRU and return pointer to the corresponding value.

  • table – pointer to LRU

  • key_ – lookup key

  • len_ – key length


pointer to data or NULL if not found

lru_get_new(table, key_, len_, is_new)

Return pointer to value, inserting if needed (zeroed).

  • table – pointer to LRU

  • key_ – lookup key

  • len_ – key length

  • is_new – pointer to bool to store result of operation (true if entry is newly added, false otherwise; can be NULL).


pointer to data or NULL (even if memory could be allocated!)

lru_apply(table, function, baton)

Apply a function to every item in LRU.

  • table – pointer to LRU

  • function – enum lru_apply_do (*function)(const char *key, uint len, val_type *val, void *baton) See enum lru_apply_do for the return type meanings.

  • baton – extra pointer passed to each function invocation


Return the real capacity - maximum number of keys holdable within.

  • table – pointer to LRU


enum lru_apply_do

Possible actions to do with an element.





typedef void *trie_val_t

Native API of QP-tries:

  • keys are char strings, not necessarily zero-terminated, the structure copies the contents of the passed keys

  • values are void* pointers, typically you get an ephemeral pointer to it

  • key lengths are limited by 2^32-1 ATM

XXX EDITORS: trie.{h,c} are synced from Knot DNS with only simple adjustments, mostly include lines, KR_EXPORT and assertions.

Element value.

typedef struct trie trie_t

Opaque structure holding a QP-trie.

typedef struct trie_it trie_it_t

Opaque type for holding a QP-trie iterator.


trie_t *trie_create(knot_mm_t *mm)

Create a trie instance. Pass NULL to use malloc+free.

void trie_free(trie_t *tbl)

Free a trie instance.

void trie_clear(trie_t *tbl)

Clear a trie instance (make it empty).

size_t trie_weight(const trie_t *tbl)

Return the number of keys in the trie.

trie_val_t *trie_get_try(trie_t *tbl, const char *key, uint32_t len)

Search the trie, returning NULL on failure.

trie_val_t *trie_get_first(trie_t *tbl, char **key, uint32_t *len)

Return pointer to the minimum. Optionally with key and its length.

trie_val_t *trie_get_ins(trie_t *tbl, const char *key, uint32_t len)

Search the trie, inserting NULL trie_val_t on failure.

int trie_get_leq(trie_t *tbl, const char *key, uint32_t len, trie_val_t **val)

Search for less-or-equal element.

  • tbl – Trie.

  • key – Searched key.

  • len – Key length.

  • val – Must be valid; it will be set to NULL if not found or errored.


KNOT_EOK for exact match, 1 for previous, KNOT_ENOENT for not-found, or KNOT_E*.

int trie_apply(trie_t *tbl, int (*f)(trie_val_t*, void*), void *d)

Apply a function to every trie_val_t, in order.

  • d – Parameter passed as the second argument to f().


First nonzero from f() or zero (i.e. KNOT_EOK).

int trie_apply_with_key(trie_t *tbl, int (*f)(const char*, uint32_t, trie_val_t*, void*), void *d)

Apply a function to every trie_val_t, in order.

It’s like trie_apply() but additionally passes keys and their lengths.

  • d – Parameter passed as the second argument to f().


First nonzero from f() or zero (i.e. KNOT_EOK).

int trie_del(trie_t *tbl, const char *key, uint32_t len, trie_val_t *val)

Remove an item, returning KNOT_EOK if succeeded or KNOT_ENOENT if not found.

If val!=NULL and deletion succeeded, the deleted value is set.

int trie_del_first(trie_t *tbl, char *key, uint32_t *len, trie_val_t *val)

Remove the first item, returning KNOT_EOK on success.

You may optionally get the key and/or value. The key is copied, so you need to pass sufficient len, otherwise kr_error(ENOSPC) is returned.

trie_it_t *trie_it_begin(trie_t *tbl)

Create a new iterator pointing to the first element (if any).

void trie_it_next(trie_it_t *it)

Advance the iterator to the next element.

Iteration is in ascending lexicographical order. In particular, the empty string would be considered as the very first.


You may not use this function if the trie’s key-set has been modified during the lifetime of the iterator (modifying values only is OK).

bool trie_it_finished(trie_it_t *it)

Test if the iterator has gone past the last element.

void trie_it_free(trie_it_t *it)

Free any resources of the iterator. It’s OK to call it on NULL.

const char *trie_it_key(trie_it_t *it, size_t *len)

Return pointer to the key of the current element.


The optional len is uint32_t internally, but size_t is better for our usage, as it avoids an additional type conversion.

trie_val_t *trie_it_val(trie_it_t *it)

Return pointer to the value of the current element (writable).