libnm: refactor caching of D-Bus objects in NMClient

No longer use GDBusObjectManagerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.

CHANGES
-------

- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.

- There was a lingering performance issue with a large number of D-Bus
objects. The patch tries hard to make the implementation scale well. Of
course, when we cache N objects that have N-to-M references to each other,
we are still fundamentally O(N*M) in runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.

- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to an NMClient happens through the caller's
GMainContext. This follows glib's style and effectively allows using NMClient
in a multi-threaded scenario. This implies sticking to a main context
upon construction and ensuring that callbacks are only invoked while
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
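For illustration (this is plain GLib usage, not libnm's actual internals), instead of g_idle_add() a source has to be created explicitly and attached to the context captured at construction:

```
#include <glib.h>

static gboolean
_idle_cb (gpointer user_data)
{
    g_print ("invoked while the caller iterates their context\n");
    return G_SOURCE_REMOVE;
}

static void
schedule_on_context (GMainContext *context)
{
    GSource *source;

    /* g_idle_add() would attach the source to g_main_context_default().
     * Creating the source explicitly allows attaching it to the caller's
     * context instead. */
    source = g_idle_source_new ();
    g_source_set_callback (source, _idle_cb, NULL, NULL);
    g_source_attach (source, context);
    g_source_unref (source);
}
```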

- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them from an
idle handler. That is wrong: signals must be emitted in the right order
and at the time they happen. Note that when using GInitable's synchronous
initialization to initialize the NMClient instance, we still operate
(internally) fully asynchronously. To get this right, NMClient has an
internal main context.
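As a rough sketch of that pattern (plain GIO, not libnm's code; g_bus_get() merely stands in for the real initialization work), a synchronous call can drive an internal asynchronous code path to completion without ever iterating the caller's context:

```
#include <gio/gio.h>

typedef struct {
    GDBusConnection *connection;
    GError          *error;
    gboolean         done;
} SyncData;

static void
_bus_got_cb (GObject *source, GAsyncResult *result, gpointer user_data)
{
    SyncData *data = user_data;

    data->connection = g_bus_get_finish (result, &data->error);
    data->done = TRUE;
}

static GDBusConnection *
get_bus_sync_sketch (GCancellable *cancellable, GError **error)
{
    GMainContext *internal_context = g_main_context_new ();
    SyncData data = { NULL, NULL, FALSE };

    /* Async operations started while this context is thread-default will
     * dispatch their callbacks on it, not on the caller's context. */
    g_main_context_push_thread_default (internal_context);

    g_bus_get (G_BUS_TYPE_SYSTEM, cancellable, _bus_got_cb, &data);

    /* Iterate only the internal context until the async path completes. */
    while (!data.done)
        g_main_context_iteration (internal_context, TRUE);

    g_main_context_pop_thread_default (internal_context);
    g_main_context_unref (internal_context);

    if (data.error) {
        g_propagate_error (error, data.error);
        return NULL;
    }
    return data.connection;
}
```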

- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up with closely related code
in different source files. Spreading related code around does not make it
easier to understand; on the contrary. That means NMClient is
inherently complex, as it contains most of the logic. I think that is
unavoidable, but it's not as bad as it sounds (IMO).

- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that, for example, during the initial
GetManagedObjects() reply, NMClient receives a large amount of state at once,
but we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted at a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property-changed signal, many other properties may already have changed
and more signals are about to be emitted.
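As a loose analogy (this is not how NMClient implements it, and it assumes D-Bus property names match the GObject property names, which in general they do not), GObject's freeze/thaw mechanism shows the same "apply everything first, notify afterwards" idea on a single object:

```
#include <gio/gio.h>

static void
apply_properties_then_notify (GObject *obj, GVariant *changed /* "a{sv}" */)
{
    GVariantIter iter;
    const char *name;
    GVariant *value;

    /* Queue "notify" signals instead of emitting them right away. */
    g_object_freeze_notify (obj);

    g_variant_iter_init (&iter, changed);
    while (g_variant_iter_next (&iter, "{&sv}", &name, &value)) {
        GValue gvalue = G_VALUE_INIT;

        g_dbus_gvariant_to_gvalue (value, &gvalue);
        g_object_set_property (obj, name, &gvalue);
        g_value_unset (&gvalue);
        g_variant_unref (value);
    }

    /* Only now emit all queued "notify" signals, with the object in a
     * consistent state. */
    g_object_thaw_notify (obj);
}
```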

- NMDeviceWifi no longer modifies the content of the cache from the client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what the NetworkManager
service exposes. Local modifications should be avoided.

- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should all be for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping the NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.

- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.

- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Previously we constructed NMPropertiesInfo and
had a large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse.

- NMLDBusMetaIface and other meta data are static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile-time checks to ensure the property types match. It's pretty hard
to misuse them, because a mismatch won't compile.
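A made-up illustration of the idea (the struct, macro, and check below are hypothetical and only compare sizes; the real NML_DBUS_META_PROPERTY_INIT_U() does more): a mismatched field yields a negative array size, which fails to compile.

```
#include <glib-object.h>

typedef struct {
    const char *dbus_property_name;
    const char *dbus_type;
    guint       prop_offset;
} PropertyMeta;                      /* illustrative, not libnm's meta struct */

/* Evaluates to 0, but only compiles if the field has the size of guint32. */
#define _ASSERT_FIELD_IS_U32(struct_type, field) \
    (0 * sizeof (char[sizeof (((struct_type *) NULL)->field) == sizeof (guint32) ? 1 : -1]))

#define _PROPERTY_INIT_U(dbus_name, struct_type, field) \
    { \
        .dbus_property_name = dbus_name, \
        .dbus_type          = "u", \
        .prop_offset        = G_STRUCT_OFFSET (struct_type, field) \
                              + _ASSERT_FIELD_IS_U32 (struct_type, field), \
    }

typedef struct {
    guint32 mtu;
} ExamplePrivate;

/* Compiles. Pointing the macro at a field of a different size would not. */
static const PropertyMeta meta_mtu = _PROPERTY_INIT_U ("Mtu", ExamplePrivate, mtu);
```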

- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That only matters when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type; it didn't know the expected D-Bus type.
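A minimal sketch (not the actual libnm code) of what knowing the expected D-Bus type enables, rejecting a wrongly-typed value before it is applied:

```
#include <gio/gio.h>

static gboolean
get_checked_u32 (GVariant *value, guint32 *out_val)
{
    /* The server might (accidentally or intentionally) send e.g. "i" or "s"
     * here; accept only what the meta data declares, namely "u". */
    if (!g_variant_is_of_type (value, G_VARIANT_TYPE_UINT32))
        return FALSE;

    *out_val = g_variant_get_uint32 (value);
    return TRUE;
}
```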

- Previously, GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Hence these are not ordinary assertions indicating a bug
in libnm, and they are not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures that need to be fixed first.

- Note that this changes the order in which we emit "notify::devices" and
"device-added" signals. I think it makes the most sense to emit
"device-removed" first, then "notify::devices", and finally "device-added".
This changes the behavior of commit 52ae28f6 ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
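For illustration, handlers connected to the public libnm signals as below now observe exactly that order; the handlers here just log:

```
#include <NetworkManager.h>

static void
on_device_removed (NMClient *client, NMDevice *device, gpointer user_data)
{
    g_print ("device-removed: %s\n", nm_device_get_iface (device));
}

static void
on_notify_devices (GObject *object, GParamSpec *pspec, gpointer user_data)
{
    g_print ("notify::devices (the device list is already updated)\n");
}

static void
on_device_added (NMClient *client, NMDevice *device, gpointer user_data)
{
    g_print ("device-added: %s\n", nm_device_get_iface (device));
}

static void
connect_device_signals (NMClient *client)
{
    g_signal_connect (client, "device-removed", G_CALLBACK (on_device_removed), NULL);
    g_signal_connect (client, "notify::devices", G_CALLBACK (on_notify_devices), NULL);
    g_signal_connect (client, "device-added", G_CALLBACK (on_device_added), NULL);
}
```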

- In NetworkManager, profiles can be made invisible to a user by setting
"connection.permissions". Such profiles are hidden from NMClient's
nm_client_get_connections() and from the "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still expose such
hidden NMRemoteConnection instances. This behavior was preserved.
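A small example of the resulting behavior, using only public libnm API and assuming "active" is an active connection whose underlying profile is hidden:

```
#include <NetworkManager.h>

static void
show_visibility (NMClient *client, NMActiveConnection *active)
{
    const GPtrArray *connections;
    NMRemoteConnection *remote;

    /* Hidden (connection.permissions-restricted) profiles are not listed
     * here and trigger no "connection-added"/"connection-removed" signals... */
    connections = nm_client_get_connections (client);
    g_print ("%u visible profiles\n", connections->len);

    /* ...but they are still reachable through the active connection. */
    remote = nm_active_connection_get_connection (active);
    if (remote)
        g_print ("active profile: %s\n", nm_connection_get_id (NM_CONNECTION (remote)));
}
```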

NUMBERS
-------

I compared 3 versions of libnm.

  [1] 188911ae, currently latest on nm-1-20 branch
  [2] ec37b37a, current master, before this patch
  [3] this patch

All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with

  $ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug

Note that the RPM build already stripped the library.

N1) File size of libnm.so.0.1.0 in bytes:

  [1] 2609752 ( +66816, 102.6 %)
  [2] 2542936 (      0, 100.0 %)
  [3] 2385848 (-157088,  93.8 %)

N2) `size /usr/lib64/libnm.so.0.1.0`:

          text    data     bss     dec     hex filename
  [1]  1314569   69980   10632 1395181  1549ed /usr/lib64/libnm.so.0.1.0
  [2]  1287181   73628   13224 1374033  14f751 /usr/lib64/libnm.so.0.1.0
  [3]  1228285   65104   13368 1306757  13f085 /usr/lib64/libnm.so.0.1.0

N3) Performance test with test-client.py. With a checkout of [2], run

```
prepare_checkout() {
    rm -rf /tmp/nm-test && \
    git checkout -B test ec37b37a && \
    git clean -fdx && \
    ./autogen.sh --prefix=/tmp/nm-test && \
    make -j 5 install && \
    make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
    NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
  for i in {1..10}; do
      NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
  done
  echo "done!"
}
prepare_checkout
prepare_test
time do_test
```

  [1]  real 2m43.832s    user 6m40.704s    sys  2m1.209s
  [2]  real 2m44.749s    user 6m43.496s    sys  2m2.552s
  [3]  real 2m20.938s    user 5m18.675s    sys  1m52.731s

N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (~540). Most of these profiles also create unrealized devices. This
setup is already at the edge of what NetworkManager can currently
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.

```
do_cleanup() {
    for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
        nmcli connection delete uuid "$UUID"
    done
    for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
        nmcli device delete "$DEVICE"
    done
}

do_setup() {
    do_cleanup
    for i in {1..30}; do
        nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
        for j in $(seq $i 30); do
            nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i  ipv4.method disabled ipv6.method ignore
        done
    done
    systemctl restart NetworkManager.service
    sleep 5
}

do_test() {
    perf stat -r 50 -B nmcli 1>/dev/null
}

do_test
```

  [1]

   Performance counter stats for 'nmcli' (50 runs):

              556.61 msec task-clock:u              #    1.104 CPUs utilized            ( +-  0.31% )
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
               5,877      page-faults:u             #    0.011 M/sec                    ( +-  0.02% )
       1,370,782,035      cycles:u                  #    2.463 GHz                      ( +-  0.35% )
       1,554,958,802      instructions:u            #    1.13  insn per cycle           ( +-  0.02% )
         359,382,395      branches:u                #  645.667 M/sec                    ( +-  0.02% )
           4,469,900      branch-misses:u           #    1.24% of all branches          ( +-  0.61% )

             0.50438 +- 0.00220 seconds time elapsed  ( +-  0.44% )

  [2]

   Performance counter stats for 'nmcli' (50 runs):

              579.83 msec task-clock:u              #    1.101 CPUs utilized            ( +-  0.33% )
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
               5,920      page-faults:u             #    0.010 M/sec                    ( +-  0.03% )
       1,431,887,281      cycles:u                  #    2.469 GHz                      ( +-  0.32% )
       1,614,200,290      instructions:u            #    1.13  insn per cycle           ( +-  0.02% )
         372,971,277      branches:u                #  643.237 M/sec                    ( +-  0.02% )
           4,604,624      branch-misses:u           #    1.23% of all branches          ( +-  0.56% )

             0.52655 +- 0.00238 seconds time elapsed  ( +-  0.45% )

  [3]

   Performance counter stats for 'nmcli' (50 runs):

              435.00 msec task-clock:u              #    1.010 CPUs utilized            ( +-  0.48% )
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
               4,679      page-faults:u             #    0.011 M/sec                    ( +-  0.25% )
       1,045,216,415      cycles:u                  #    2.403 GHz                      ( +-  0.27% )
       1,162,741,531      instructions:u            #    1.11  insn per cycle           ( +-  0.01% )
         270,745,448      branches:u                #  622.398 M/sec                    ( +-  0.01% )
           2,952,768      branch-misses:u           #    1.09% of all branches          ( +-  0.51% )

             0.43077 +- 0.00144 seconds time elapsed  ( +-  0.33% )

N5) Same setup as N4), but run `nmcli monitor` and look at `ps aux`:

      USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
  [1] thom     1597096  6.1  0.2 461212 33424 pts/6    Sl+  09:58   0:00 nmcli monitor
  [2] thom     1598090  3.5  0.2 461452 33028 pts/6    Sl+  09:59   0:00 nmcli monitor
  [3] thom     1595883  5.1  0.1 459396 28208 pts/6    Sl+  09:57   0:00 nmcli monitor