    shared: add NMDedupMultiIndex "nm-dedup-multi.h"
    Thomas Haller authored
     Add the NMDedupMultiIndex cache. It tracks objects in doubly
     linked lists, with the addition that each object and each list
     head is also indexed by a hash table. It supports tracking
     multiple distinct lists, all indexed by an idx-type instance,
     and it deduplicates the tracked objects so that they can be
     shared.
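
     As a rough mental model (the names below are hypothetical and
     only illustrate the layout; they are not the structs from
     "nm-dedup-multi.h"), the cache can be pictured like this:

         /* Illustrative sketch only: each tracked entry is a node in a
          * doubly linked list, and both the entry and its list head are
          * also keys in a hash table, so no operation has to walk the
          * list. */

         typedef struct Entry Entry;
         typedef struct Head  Head;

         struct Entry {
             Entry      *prev;     /* doubly linked list linkage       */
             Entry      *next;
             Head       *head;     /* the list this entry belongs to   */
             const void *obj;      /* the shared, immutable object     */
             /* the entry is additionally indexed in a hash table      */
         };

         struct Head {
             Entry      *first;    /* ordered entries of one idx-type  */
             Entry      *last;
             const void *idx_type; /* the handle identifying this list */
             /* the head is additionally indexed in a hash table       */
         };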
    
      - the objects that are put into the cache must be immutable
        and ref-counted. That way, the cache can deduplicate them
        and share the references. Because these objects are immutable
        and ref-counted, it is also safe for users outside the cache
        to own them (as long as they keep them immutable and manage
        their references properly).
    
        The deduplication uses obj_id_hash_func() and obj_id_equal_func().
        These functions must cover *every* aspect of the objects when
        comparing them for equality. For example,
        nm_platform_ip4_route_cmp() would qualify as an
        obj_id_equal_func().
    
        The cache takes references to the objects as needed and
        releases them again. This happens via obj_get_ref() and
        obj_put_ref(). Note that obj_get_ref() is free to create
        a new object, for example to convert a stack-allocated
        object into a (ref-counted) heap-allocated one.
    
        The deduplication process creates NMDedupIndexBox instances,
        which are the ref-counted entities. In principle, the objects
        themselves don't need to be ref-counted, as that is handled
        by the boxing instance (see the callback/box sketch after
        this list).
    
      - The cache doesn't only do deduplication. It is also a
        multi-index: callers add objects using an index handle,
        NMDedupMultiIdxType. The NMDedupMultiIdxType instance is the
        access handle for looking up the lists and objects inside the
        cache. Note that the idx-type instance may partition the
        objects into distinct lists (see the partitioning sketch
        after this list).
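
     The callback contract of the first point can be sketched as
     follows. This is only an illustration with made-up types and
     functions (Route, RouteBox, route_*), not the actual API of
     "nm-dedup-multi.h": an id-hash/id-equal pair that covers every
     field, and a box that carries the reference count for an
     otherwise plain payload.

         #include <stdbool.h>
         #include <stdint.h>
         #include <stdlib.h>

         /* an immutable object: every field is part of its identity */
         typedef struct {
             uint32_t network;
             uint8_t  plen;
             uint32_t metric;
         } Route;

         /* in the role of obj_id_hash_func(): mixes in *every* field */
         static unsigned
         route_id_hash (const Route *r)
         {
             unsigned h = 5381;

             h = h * 33 + r->network;
             h = h * 33 + r->plen;
             h = h * 33 + r->metric;
             return h;
         }

         /* in the role of obj_id_equal_func(): compares *every* field */
         static bool
         route_id_equal (const Route *a, const Route *b)
         {
             return a->network == b->network
                    && a->plen == b->plen
                    && a->metric == b->metric;
         }

         /* the box is the ref-counted entity; the payload itself does
          * not need its own reference counter */
         typedef struct {
             int   ref_count;
             Route obj;
         } RouteBox;

         /* in the role of obj_get_ref(): may copy a stack-allocated
          * object into a heap-allocated, ref-counted box
          * (error handling omitted) */
         static RouteBox *
         route_box_get_ref (const Route *r)
         {
             RouteBox *box = malloc (sizeof (*box));

             box->ref_count = 1;
             box->obj = *r;
             return box;
         }

         /* in the role of obj_put_ref(): drops one reference and frees
          * the box together with its payload on the last one */
         static void
         route_box_put_ref (RouteBox *box)
         {
             if (--box->ref_count == 0)
                 free (box);
         }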
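
     The partitioning done by an idx-type can be pictured like this
     (again a hypothetical sketch, not the real NMDedupMultiIdxType):
     the idx-type derives a partition key from every object, and all
     objects with the same key end up on the same ordered list, so
     e.g. "all routes of one interface" is a single lookup of that
     list's head.

         #include <stdint.h>

         typedef struct {
             int      ifindex;   /* partition key: one list per interface */
             uint32_t network;
             uint8_t  plen;
         } Route;

         /* hypothetical idx-type: all it needs to know is how to map
          * an object to the partition (list) it belongs to */
         typedef struct {
             unsigned (*partition_hash) (const Route *obj);
         } IdxType;

         static unsigned
         routes_by_ifindex_hash (const Route *obj)
         {
             return (unsigned) obj->ifindex;
         }

         /* one idx-type instance is one way of slicing the cache:
          * here, routes partitioned into per-interface lists */
         static const IdxType idx_routes_by_ifindex = {
             .partition_hash = routes_by_ifindex_hash,
         };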
    
     All operations are based on cross-references and hash table
     lookups. Hence, every operation on this data structure is O(1),
     and the memory overhead for an index tracking an object is
     constant.
    
     The cache preserves ordering (thanks to the linked list) and
     exposes the list as public API. This allows users to iterate
     over the list without any additional copying of elements.
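
     Since the list is public, iteration is a plain walk over the
     linked entries that borrows the shared objects instead of
     copying them. A hypothetical sketch (illustrative names only,
     not the API of "nm-dedup-multi.h"):

         typedef struct Entry Entry;

         struct Entry {
             Entry      *next;
             Entry      *prev;
             const void *obj;   /* shared, immutable, deduplicated object */
         };

         typedef struct {
             Entry *first;
             Entry *last;
         } Head;

         /* walk the ordered list in place; the objects are borrowed
          * from the cache and nothing gets copied */
         static void
         for_each_obj (const Head *head,
                       void (*cb) (const void *obj, void *user_data),
                       void *user_data)
         {
             const Entry *entry;

             for (entry = head->first; entry; entry = entry->next)
                 cb (entry->obj, user_data);
         }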