    libnm: refactor caching of D-Bus objects in NMClient · ce0e898f
    Thomas Haller authored
    No longer use GDBusObjectManagerClient and gdbus-codegen generated classes
    for the NMClient cache. Instead, use GDBusConnection directly and a
    custom implementation (NMLDBusObject) for caching the D-Bus ObjectManager
    data.
    
    CHANGES
    -------
    
    - This is a complete rework. I think the previous implementation was
    difficult to understand. There were unfixed bugs and nobody understood
    the code well enough to fix them. Maybe somebody out there understood the
    code, but I certainly did not; at least nobody provided patches to fix those
    issues. I do believe that this implementation is more straightforward and
    easier to understand, and it removes a lot of layers of code. Whether this
    claim of simplicity is true, each reader must decide for themselves. Note
    that it is still fairly complex.
    
    - There was a lingering performance issue with a large number of D-Bus
    objects. This patch tries hard to make the implementation scale well. Of
    course, when we cache N objects that have references to each other,
    runtime and memory consumption are still fundamentally O(N*M) (with
    M being the number of references between objects). But each part should
    behave efficiently.
    
    - Play well with GMainContext. libnm code (NMClient) is generally not
    thread safe. However, it should work to use multiple instances in
    parallel, as long as each NMClient is only accessed through the caller's
    GMainContext. This follows GLib's style and effectively allows NMClient to be
    used in a multi-threaded scenario. It implies sticking to one main context
    chosen at construction time and ensuring that callbacks are only invoked while
    iterating that context. Also, NMClient itself shall never iterate the
    caller's context. This also means that libnm must never use g_idle_add() or
    g_timeout_add(), as those enqueue sources in the g_main_context_default()
    context. A sketch of the intended usage pattern follows below.
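
    A minimal sketch of that usage pattern (illustrative only, not part of this
    commit): create each NMClient while a thread-default GMainContext is pushed,
    and iterate that same context to receive the callbacks. The worker_thread()
    function below is a hypothetical example, not libnm code.

    ```
    #include <NetworkManager.h>

    /* Hypothetical example: one NMClient per thread, bound to that thread's
     * thread-default GMainContext as described above. */
    static gpointer
    worker_thread (gpointer user_data)
    {
        GMainContext *context = g_main_context_new ();
        GMainLoop *loop;
        GError *error = NULL;
        NMClient *client;

        /* All NMClient callbacks will be dispatched while iterating @context. */
        g_main_context_push_thread_default (context);

        client = nm_client_new (NULL, &error);
        if (!client) {
            g_printerr ("failed to create NMClient: %s\n", error->message);
            g_clear_error (&error);
            g_main_context_pop_thread_default (context);
            g_main_context_unref (context);
            return NULL;
        }

        /* ... connect signals here; they are only emitted from this loop ... */
        loop = g_main_loop_new (context, FALSE);
        g_main_loop_run (loop);

        g_main_loop_unref (loop);
        g_object_unref (client);
        g_main_context_pop_thread_default (context);
        g_main_context_unref (context);
        return NULL;
    }
    ```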
    
    - Get ordering of messages right. All events are consistently enqueued
    in a GMainContext and processed strictly in order. For example,
    previously "nm-object.c" tried to combine signals and emit them from an
    idle handler. That is wrong: signals must be emitted in the right order
    and at the time they happen. Note that when using GInitable's synchronous
    initialization to initialize the NMClient instance, NMClient internally still
    operates fully asynchronously; in that case NMClient uses an internal main context.
    
    - NMClient takes over most of the functionality. When using the D-Bus
    ObjectManager interface, one needs to handle basically the entire state
    of the D-Bus interface. That cannot be separated well into distinct
    parts, and even if you try, you just end up having closely related code
    in different source files. Spreading related code around does not make it
    easier to understand, on the contrary. That means NMClient is
    inherently complex, as it contains most of the logic. I think that is
    unavoidable, but it's not as bad as it sounds.
    
    - NMClient processes D-Bus messages and state changes in separate steps.
    First, NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
    keeps track of the changed data. Then we update the GObject instances
    (_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
    we emit all signals and notifications that were collected
    (_dbus_handle_changes_commit()). Note that, for example, during the initial
    GetManagedObjects() reply, NMClient receives a large amount of state at once,
    but we first apply all the changes to our GObject instances before
    emitting any signals. The result is that signals are always emitted at a moment
    when the cache is consistent. The unavoidable downside is that when you receive
    a property-changed signal, possibly many other properties already changed
    and more signals are about to be emitted. See the sketch below.
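
    The sketch below only illustrates the general idea of "apply first, notify
    once the cache is consistent". It uses plain GObject freeze/thaw notification
    as a stand-in; the actual libnm code collects and commits the signals itself
    and does not work this way.

    ```
    #include <glib-object.h>

    /* Hypothetical illustration: update all changed objects while their
     * notifications are frozen, then thaw, so no "notify::*" signal is
     * emitted against a half-updated cache. */
    static void
    apply_changes_and_commit (GPtrArray *changed_objects /* of GObject* */)
    {
        guint i;

        for (i = 0; i < changed_objects->len; i++)
            g_object_freeze_notify (changed_objects->pdata[i]);

        /* ... set the changed properties on each object here ... */

        /* The cache is consistent now; thawing emits the queued
         * notifications in order. */
        for (i = 0; i < changed_objects->len; i++)
            g_object_thaw_notify (changed_objects->pdata[i]);
    }
    ```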
    
    - NMDeviceWifi no longer modifies the content of the cache from the client
    side during poke_wireless_devices_with_rf_status(). The content of the cache
    should be determined by D-Bus alone and follow what the NetworkManager
    service exposes. Local modifications should be avoided.
    
    - This aims to bring no API/ABI change, though it does of course bring
    various subtle changes in behavior. Those should all be for the better, but the
    goal is not to break any existing clients. It does change internal
    (albeit externally visible) API, like dropping the NM_OBJECT_DBUS_OBJECT_MANAGER
    property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
    
    - Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
    and NMSecretAgentOld. These are independent of NMClient/NMObject and
    should be reworked separately.
    
    - While we no longer use generated classes from gdbus-codegen, we don't
    need more glue code than before. Also, before, we constructed NMPropertiesInfo and
    had a large amount of code to propagate properties from NMDBus* to NMObject.
    That got completely reworked, but did not fundamentally change. You still need
    about the same effort to create the NMLDBusMetaIface. Not using
    generated bindings did not make anything worse (which says something about the
    usefulness of generated code, at least in the way it was used here).
    
    - NMLDBusMetaIface and other meta data are static and immutable. This
    avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
    have compile-time checks to ensure that the property types match. It's pretty hard
    to misuse them, because mistakes won't compile. A sketch of the general
    technique follows below.
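
    The following is only a hypothetical illustration of such a compile-time
    check; it is not the actual NML_DBUS_META_PROPERTY_INIT_U() definition, and
    the EXAMPLE_* names and the sizeof-based check are made up for this sketch.

    ```
    #include <glib.h>

    typedef struct {
        guint32 version;
        char   *name;
    } ExampleState;

    typedef struct {
        const char *dbus_name;
        gsize       offset;
    } ExampleProperty;

    /* Evaluates to 0 at compile time, or breaks the build if @cond is false. */
    #define EXAMPLE_STATIC_CHECK(cond)  (0 * sizeof (char[(cond) ? 1 : -1]))

    /* Initializer for a D-Bus "u" (guint32) property that refuses to compile
     * if the struct field does not have the expected size. */
    #define EXAMPLE_PROPERTY_INIT_U(dbus_name_, struct_type, field)                 \
        {                                                                            \
            .dbus_name = (dbus_name_),                                               \
            .offset    = G_STRUCT_OFFSET (struct_type, field)                        \
                         + EXAMPLE_STATIC_CHECK (sizeof (((struct_type *) 0)->field) \
                                                 == sizeof (guint32)),               \
        }

    static const ExampleProperty example_properties[] = {
        EXAMPLE_PROPERTY_INIT_U ("Version", ExampleState, version),
        /* EXAMPLE_PROPERTY_INIT_U ("Name", ExampleState, name)  would not compile */
    };
    ```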
    
    - The meta data now explicitly encodes the expected D-Bus types and
    makes sure never to accept wrong data. That only matters when the
    server (accidentally or intentionally) exposes unexpected types on
    D-Bus. I don't think that was previously ensured in all cases.
    For example, demarshal_generic() only cared about the GObject property
    type; it didn't know the expected D-Bus type. See the sketch below.
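
    As a hypothetical illustration of such type checking (not the actual libnm
    code; apply_uint32_property() is made up), the incoming GVariant can be
    validated against the expected D-Bus type before it is applied:

    ```
    #include <gio/gio.h>

    /* Hypothetical example: reject a property value whose D-Bus type is not
     * the expected "u", instead of coercing it into the GObject property. */
    static gboolean
    apply_uint32_property (GObject *obj, const char *prop_name, GVariant *value)
    {
        if (!g_variant_is_of_type (value, G_VARIANT_TYPE_UINT32)) {
            g_debug ("property %s: unexpected D-Bus type \"%s\", expected \"u\"",
                     prop_name, g_variant_get_type_string (value));
            return FALSE;
        }

        g_object_set (obj, prop_name, g_variant_get_uint32 (value), NULL);
        return TRUE;
    }
    ```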
    
    - Previously, GDBusObjectManager would sometimes emit warnings (g_log()). Those
    probably indicated real bugs. In any case, they prevented us from running CI
    with G_DEBUG=fatal-warnings, because there would be just too many
    unrelated crashes. Now we log debug messages that can be enabled with
    "LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
    into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
    Together with G_DEBUG=fatal-warnings, this turns them into assertions.
    Note that such "assertion failures" might also happen because of a server
    bug (or change). They are thus not ordinary assertions that indicate a bug
    in libnm, and are not armed unless explicitly requested. In our CI we
    should now always run with LIBNM_CLIENT_DEBUG=warning,error and
    G_DEBUG=fatal-warnings to catch bugs. Note that currently
    NetworkManager has bugs in this regard, so enabling this will result in
    assertion failures. Those should be fixed first.
    
    - Note that this changes the order in which we emit "notify::devices" and
    "device-added" signals. I think it makes the most sense to emit
    "device-removed" first, then "notify::devices", and finally "device-added".
    This changes the behavior of commit 52ae28f6 ('libnm: queue
    added/removed signals and suppress uninitialized notifications'),
    but I don't think that users should actually rely on the order. Still,
    the new order makes the most sense to me. See the example below.
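
    A hypothetical example of observing that order from a client
    (watch_devices() is made up; the signal names are existing NMClient API):

    ```
    #include <NetworkManager.h>

    static void
    on_device_added (NMClient *client, NMDevice *device, gpointer user_data)
    {
        g_print ("device-added: %s\n", nm_device_get_iface (device));
    }

    static void
    on_device_removed (NMClient *client, NMDevice *device, gpointer user_data)
    {
        g_print ("device-removed: %s\n", nm_device_get_iface (device));
    }

    static void
    on_notify_devices (GObject *object, GParamSpec *pspec, gpointer user_data)
    {
        g_print ("notify::devices\n");
    }

    /* With this patch, a removal is expected to print "device-removed" before
     * "notify::devices", and an addition prints "notify::devices" before
     * "device-added". */
    static void
    watch_devices (NMClient *client)
    {
        g_signal_connect (client, "device-added", G_CALLBACK (on_device_added), NULL);
        g_signal_connect (client, "device-removed", G_CALLBACK (on_device_removed), NULL);
        g_signal_connect (client, "notify::devices", G_CALLBACK (on_notify_devices), NULL);
    }
    ```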
    
    - In NetworkManager, profiles can be made invisible to a user by setting
    "connection.permissions". Such profiles are hidden from NMClient's
    nm_client_get_connections() and from its "connection-added"/"connection-removed"
    signals.
    Note that NMActiveConnection's nm_active_connection_get_connection()
    and NMDevice's nm_device_get_available_connections() still expose such
    hidden NMRemoteConnection instances. This behavior was preserved.
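
    A hypothetical example of the difference (dump_connections() is made up; the
    accessors are existing libnm API): nm_client_get_connections() omits hidden
    profiles, while nm_device_get_available_connections() may still include them.

    ```
    #include <NetworkManager.h>

    static void
    dump_connections (NMClient *client, NMDevice *device)
    {
        const GPtrArray *visible = nm_client_get_connections (client);
        const GPtrArray *available = nm_device_get_available_connections (device);
        guint i;

        /* Hidden profiles do not show up here... */
        for (i = 0; i < visible->len; i++)
            g_print ("visible: %s\n",
                     nm_connection_get_id (NM_CONNECTION (visible->pdata[i])));

        /* ...but they may still show up here. */
        for (i = 0; i < available->len; i++)
            g_print ("available on %s: %s\n",
                     nm_device_get_iface (device),
                     nm_connection_get_id (NM_CONNECTION (available->pdata[i])));
    }
    ```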
    
    NUMBERS
    -------
    
    I compared 3 versions of libnm.
    
      [1] 962297f9, current tip of nm-1-20 branch
      [2] 4fad8c7c, current master, immediate parent of this patch
      [3] this patch
    
    All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
    The libraries were built with
    
      $ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
    
    Note that the RPM build already stripped the library.
    
    ---
    
    N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
      on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller, but
      in these tests it had large (and bogus) ELF notes. Anyway, the point
      is to show the relative sizes, so it doesn't matter.
    
      [1] 4075552 (102.7%)
      [2] 3969624 (100.0%)
      [3] 3705208 ( 93.3%)
    
    ---
    
    N2) `size /usr/lib64/libnm.so.0.1.0`:
    
              text             data              bss                dec               hex   filename
      [1]  1314569 (102.0%)   69980 ( 94.8%)   10632 ( 80.4%)   1395181 (101.4%)   1549ed   /usr/lib64/libnm.so.0.1.0
      [2]  1288410 (100.0%)   73796 (100.0%)   13224 (100.0%)   1375430 (100.0%)   14fcc6   /usr/lib64/libnm.so.0.1.0
      [3]  1229066 ( 95.4%)   65248 ( 88.4%)   13400 (101.3%)   1307714 ( 95.1%)   13f442   /usr/lib64/libnm.so.0.1.0
    
    ---
    
    N3) Performance test with test-client.py. With a checkout of [2], run
    
    ```
    prepare_checkout() {
        rm -rf /tmp/nm-test && \
        git checkout -B test 4fad8c7c && \
        git clean -fdx && \
        ./autogen.sh --prefix=/tmp/nm-test && \
        make -j 5 install && \
        make -j 5 check-local-clients-tests-test-client
    }
    prepare_test() {
        NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
    }
    do_test() {
      for i in {1..10}; do
          NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
      done
      echo "done!"
    }
    prepare_checkout
    prepare_test
    time do_test
    ```
    
      [1]  real 2m14.497s (101.3%)     user 5m26.651s (100.3%)     sys  1m40.453s (101.4%)
      [2]  real 2m12.800s (100.0%)     user 5m25.619s (100.0%)     sys  1m39.065s (100.0%)
      [3]  real 1m54.915s ( 86.5%)     user 4m18.585s ( 79.4%)     sys  1m32.066s ( 92.9%)
    
    ---
    
    N4) Performance. Run NetworkManager from build [2] and set up a large number
    of profiles (551 profiles and 515 devices, mostly unrealized). This
    setup is already at the edge of what NetworkManager can currently
    handle. Of course, that is a different issue. Here we just check how
    long plain `nmcli` takes on the system.
    
    ```
    do_cleanup() {
        for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
            nmcli connection delete uuid "$UUID"
        done
        for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
            nmcli device delete "$DEVICE"
        done
    }
    
    do_setup() {
        do_cleanup
        for i in {1..30}; do
            nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
            for j in $(seq $i 30); do
                nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i  ipv4.method disabled ipv6.method ignore
            done
        done
        systemctl restart NetworkManager.service
        sleep 5
    }
    
    do_test() {
        perf stat -r 50 -B nmcli 1>/dev/null
    }
    
    do_test
    ```
    
      [1]
    
       Performance counter stats for 'nmcli' (50 runs):
    
                  456.33 msec task-clock:u              #    1.093 CPUs utilized            ( +-  0.44% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
                   5,900      page-faults:u             #    0.013 M/sec                    ( +-  0.02% )
           1,408,675,453      cycles:u                  #    3.087 GHz                      ( +-  0.48% )
           1,594,741,060      instructions:u            #    1.13  insn per cycle           ( +-  0.02% )
             368,744,018      branches:u                #  808.061 M/sec                    ( +-  0.02% )
               4,566,058      branch-misses:u           #    1.24% of all branches          ( +-  0.76% )
    
                 0.41761 +- 0.00282 seconds time elapsed  ( +-  0.68% )
    
      [2]
    
       Performance counter stats for 'nmcli' (50 runs):
    
                  477.99 msec task-clock:u              #    1.088 CPUs utilized            ( +-  0.36% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
                   5,948      page-faults:u             #    0.012 M/sec                    ( +-  0.03% )
           1,471,133,482      cycles:u                  #    3.078 GHz                      ( +-  0.36% )
           1,655,275,369      instructions:u            #    1.13  insn per cycle           ( +-  0.02% )
             382,595,152      branches:u                #  800.433 M/sec                    ( +-  0.02% )
               4,746,070      branch-misses:u           #    1.24% of all branches          ( +-  0.49% )
    
                 0.43923 +- 0.00242 seconds time elapsed  ( +-  0.55% )
    
      [3]
    
       Performance counter stats for 'nmcli' (50 runs):
    
                  352.36 msec task-clock:u              #    1.027 CPUs utilized            ( +-  0.32% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
                   4,790      page-faults:u             #    0.014 M/sec                    ( +-  0.26% )
           1,092,341,186      cycles:u                  #    3.100 GHz                      ( +-  0.26% )
           1,209,045,283      instructions:u            #    1.11  insn per cycle           ( +-  0.02% )
             281,708,462      branches:u                #  799.499 M/sec                    ( +-  0.01% )
               3,101,031      branch-misses:u           #    1.10% of all branches          ( +-  0.61% )
    
                 0.34296 +- 0.00120 seconds time elapsed  ( +-  0.35% )
    
    ---
    
    N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
    
      [1]
    
            Command being timed: "nmcli"
            User time (seconds): 0.42
            System time (seconds): 0.04
            Percent of CPU this job got: 107%
            Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
            Average shared text size (kbytes): 0
            Average unshared data size (kbytes): 0
            Average stack size (kbytes): 0
            Average total size (kbytes): 0
            Maximum resident set size (kbytes): 34456
            Average resident set size (kbytes): 0
            Major (requiring I/O) page faults: 0
            Minor (reclaiming a frame) page faults: 6128
            Voluntary context switches: 1298
            Involuntary context switches: 1106
            Swaps: 0
            File system inputs: 0
            File system outputs: 0
            Socket messages sent: 0
            Socket messages received: 0
            Signals delivered: 0
            Page size (bytes): 4096
            Exit status: 0
    
      [2]
            Command being timed: "nmcli"
            User time (seconds): 0.44
            System time (seconds): 0.04
            Percent of CPU this job got: 108%
            Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
            Average shared text size (kbytes): 0
            Average unshared data size (kbytes): 0
            Average stack size (kbytes): 0
            Average total size (kbytes): 0
            Maximum resident set size (kbytes): 34452
            Average resident set size (kbytes): 0
            Major (requiring I/O) page faults: 0
            Minor (reclaiming a frame) page faults: 6169
            Voluntary context switches: 1849
            Involuntary context switches: 142
            Swaps: 0
            File system inputs: 0
            File system outputs: 0
            Socket messages sent: 0
            Socket messages received: 0
            Signals delivered: 0
            Page size (bytes): 4096
            Exit status: 0
    
      [3]
    
            Command being timed: "nmcli"
            User time (seconds): 0.32
            System time (seconds): 0.02
            Percent of CPU this job got: 102%
            Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
            Average shared text size (kbytes): 0
            Average unshared data size (kbytes): 0
            Average stack size (kbytes): 0
            Average total size (kbytes): 0
            Maximum resident set size (kbytes): 29196
            Average resident set size (kbytes): 0
            Major (requiring I/O) page faults: 0
            Minor (reclaiming a frame) page faults: 5059
            Voluntary context switches: 919
            Involuntary context switches: 685
            Swaps: 0
            File system inputs: 0
            File system outputs: 0
            Socket messages sent: 0
            Socket messages received: 0
            Signals delivered: 0
            Page size (bytes): 4096
            Exit status: 0
    
    ---
    
    N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
      the RSS size.
    
          USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
      [1] me     1492900 21.0  0.2 461348 33248 pts/10   Sl+  15:02   0:00 nmcli monitor
      [2] me     1490721  5.0  0.2 461496 33548 pts/10   Sl+  15:00   0:00 nmcli monitor
      [3] me     1495801 16.5  0.1 459476 28692 pts/10   Sl+  15:04   0:00 nmcli monitor