1. 12 Feb, 2019 2 commits
  2. 05 Feb, 2019 2 commits
  3. 01 Dec, 2018 1 commit
  4. 18 Sep, 2018 1 commit
  5. 02 Aug, 2018 2 commits
  6. 27 Jun, 2018 6 commits
  7. 10 May, 2018 1 commit
    • all: use the elvis operator wherever possible · e69d3869
      Lubomir Rintel authored
      Coccinelle:
      
        @@
        expression a, b;
        @@
        -a ? a : b
        +a ?: b
      
      Applied with:
      
        spatch --sp-file ternary.cocci --in-place --smpl-spacing --dir .
      
      With some manual adjustments on spots that Cocci didn't catch for
      reasons unknown.
      
      Thanks to the marvelous effort of the GNU compiler developers we can now
      spare a couple of bits that could be used for more important things,
      like this commit message. Standards committees have yet to catch up.
      e69d3869
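The transformation the semantic patch applies relies on the GNU C conditional-with-omitted-middle-operand extension. A minimal sketch, where the helper name and sample values are illustrative and not from the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* GNU C "elvis" operator: "a ?: b" yields 'a' if it is non-NULL/non-zero,
 * otherwise 'b'. It is equivalent to "a ? a : b", except that 'a' is
 * evaluated only once. fallback_name() is an illustrative helper, not
 * NetworkManager code. */
static const char *
fallback_name (const char *name, const char *def)
{
    return name ?: def;
}
```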
  8. 24 Apr, 2018 2 commits
    • core/dbus: stop NMDBusManager and reject future method calls · 3d2da8cd
      Thomas Haller authored
      During shutdown, we will still need to iterate the main loop
      to do a coordinated shutdown. Currently we do not; we just
      exit, leaving a lot of objects hanging.
      
      If we are going to fix that, we need to tell NMDBusManager
      during shutdown to reject all future operations.
      
      Note that property getters and the "GetManagedObjects" call are not
      blocked; they continue to work.
      
      Certainly for some operations, we want to allow them to be called even
      during shutdown. However, these have to opt in.
      
      This also fixes an ugliness where nm_dbus_manager_start() would
      get the set-property-handler and the @manager as user-data. However,
      NMDBusManager will always outlive NMManager; hence, after NMManager
      is destroyed, the user-data would be a dangling pointer. Currently
      that is not an issue, because
        - we always leak NMManager
        - we don't run the mainloop during shutdown
      3d2da8cd
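The opt-in idea described above can be sketched in plain C, assuming a global shutdown flag and a per-method opt-in flag; both names are hypothetical, not the actual NMDBusManager API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch: once shutdown starts, the dispatcher rejects
 * every method call that did not explicitly opt in to being allowed
 * during shutdown. */

static bool shutting_down = false;

static void
begin_shutdown (void)
{
    shutting_down = true;
}

/* Returns true if the call may proceed, false if it is rejected.
 * 'allow_during_shutdown' is the per-method opt-in flag. */
static bool
dispatch_allowed (bool allow_during_shutdown)
{
    if (shutting_down && !allow_during_shutdown)
        return false;
    return true;
}
```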
  9. 23 Apr, 2018 1 commit
    • core: fix bus initialization order · 4672499b
      Beniamino Galvani authored
      We currently start the bus manager only after the creation of the
      NMManager because the NMManager is needed to handle set-property bus
      calls. However, objects created by NMManager
      (e.g. NMDnsSystemdResolved) need a bus connection, and so their
      initialization currently fails.
      
      To fix this, split nm_dbus_manager_start() in two parts: first only
      create the connection and acquire the bus.  After this step the
      NMManager can be set up. In the second step, set NMManager as the
      set-property handler and start exporting objects on the bus.
      
      Fixes: 297d4985
      4672499b
  10. 10 Apr, 2018 2 commits
    • connectivity: schedule connectivity timers per-device and probe for short outages · 0a62a0e9
      Thomas Haller authored
      It might happen that connectivity is lost only for a moment and
      returns soon after. Based on that assumption, when we lose connectivity
      we want to have a probe interval where we check for returning
      connectivity more frequently.
      
      For that, we handle tracking of the timeouts per-device.
      
      The interval shall start at 1 second and double until the full
      interval is reached. Actually, due to the implementation, it's unlikely
      that we already perform the second check 1 second later. That is because
      commonly the first check returns before the one-second timeout is
      reached and bumps the interval to 2 seconds right away.
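The doubling schedule described above can be sketched as follows; the function and parameter names are mine, not NetworkManager's actual API:

```c
#include <assert.h>

/* Sketch of the probe backoff: start at 1 second after losing
 * connectivity, double on every subsequent check, and cap at the
 * configured full interval (all values in seconds). Illustrative
 * names, not NetworkManager code. */
static unsigned
next_probe_interval (unsigned cur_interval, unsigned full_interval)
{
    unsigned next;

    if (cur_interval == 0)
        return 1;                /* first probe after losing connectivity */
    next = cur_interval * 2;
    return next < full_interval ? next : full_interval;
}
```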
      
      Also, we go to extra lengths so that manual connectivity checks
      delay the periodic checks. By being smarter about that, we can reduce
      the number of connectivity checks while still keeping the promise to
      check at least within the requested interval.
      
      The complexity of bookkeeping the timeouts is remarkable. But I think
      it is worth the effort, and we should try hard to
      
       - have a connectivity state that is as accurate as possible. Clearly,
         connectivity checking means that we are probing, so being more intelligent
         about timeout and backoff timers can result in a better connectivity
         state. The connectivity state is important because we use it for
         the default-route penalty and because the GUI indicates bad connectivity.
      
       - be intelligent about avoiding redundant connectivity checks. While
         we want to check often to get an accurate connectivity state, we
         also want to minimize the number of HTTP requests, in case the
         connectivity is established and supposedly stable.
      
      Also, perform connectivity checks in every state of the device.
      Even if a device is disconnected, it still might have connectivity,
      for example if the user externally adds an IP address on an unmanaged
      device.
      
      https://bugzilla.gnome.org/show_bug.cgi?id=792240
      0a62a0e9
    • connectivity: rework async connectivity check requests · d8a31794
      Thomas Haller authored
      An asynchronous request should either be cancellable or not keep
      the target object alive. Preferably both.
      
      Otherwise, it is impossible to do a controlled shutdown when terminating
      NetworkManager. Currently, when NetworkManager is about to terminate,
      it just quits the mainloop and essentially leaks everything. That is a
      bug. If we ever want to fix that, every asynchronous request must be
      cancellable in a controlled way (or it must not prevent objects from
      getting disposed, where disposing the object automatically cancels the
      callback).
      
      Rework the asynchronous request for connectivity check to
      
      - return a handle that can be used to cancel the operation.
        Cancelling is optional. The caller may choose to ignore the handle
        because the asynchronous operation does not keep the target object
        alive. That means, it is still possible to shutdown, by everybody
        giving up their reference to the target object. In which case the
        callback will be invoked during dispose() of the target object.
      
      - also, the callback will always be invoked exactly once, and never
        synchronously from within the asynchronous start call. But during
        cancel(), the callback is invoked synchronously from within cancel().
        Note that it's only allowed to cancel an action at most once, and
        never after the callback is invoked (also not from within the callback
        itself).
      
      - also, NMConnectivity already supports a fake handler, in case
        connectivity check is disabled via configuration. Hence, reuse
        the same code paths also when compiling without --enable-concheck.
        That means, instead of having #if WITH_CONCHECK at various callers,
         move them into NMConnectivity. The downside is that if you build
         without concheck, there is a small overhead compared to before. The
        upside is, we reuse the same code paths when compiling with or without
        concheck.
      
       - also, the patch synchronizes the connectivity states. For example,
         previously `nmcli networking connectivity check` would schedule
         requests in parallel, and return the accumulated result of the individual
         requests.
         However, the global connectivity state of the manager might not have
         been the same as the answer to the explicit connectivity check,
         because while the answer for the manual check is waiting for all
         pending checks to complete, the global connectivity state could
         already change. That is just wrong. There are not multiple global
         connectivity states at the same time; there is just one. A manual
         connectivity check should have the meaning of ensuring that the global
         state is up to date, but it still should return the global
         connectivity state -- not the answers for several connectivity checks
         issued in parallel.
        This is related to commit b799de28
        (libnm: update property in the manager after connectivity check),
        which tries to address a similar problem client side.
        Similarly, each device has a connectivity state. While there might
        be several connectivity checks per device pending, whenever a check
        completes, it can update the per-device state (and return that device
        state as result), but the immediate answer of the individual check
         might not matter. This is especially the case when a later request
         returns earlier and obsoletes all earlier requests. In that case,
         earlier requests return with the result of the current device's
         connectivity state.
      
      This patch cleans up the internal API and gives a better-defined behavior
      to the user (thus, a simple API which simplifies implementation for the
      caller). However, the implementation of getting this API right and properly
      handling cancel and destruction of the target object is more complicated and
      complex. But this is not just for the sake of a nicer API. This fixes
      actual issues explained above.
      
      Also, get rid of GAsyncResult to track information about the pending request.
      Instead, allocate our own handle structure, which ends up being nicer
      because it's strongly typed and has exactly the properties that are
      useful to track the request. Also, it gets rid of the awkward
      _finish() API by passing the relevant arguments to the callback
      directly.
      d8a31794
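The contract described above -- a handle, a callback invoked exactly once and never from within start, and a synchronous cancel -- can be sketched in plain C. All names are illustrative; the real code is NMConnectivity's internal API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct CheckHandle CheckHandle;
typedef void (*CheckCallback) (CheckHandle *handle,
                               bool         cancelled,
                               void        *user_data);

struct CheckHandle {
    CheckCallback callback;
    void         *user_data;
    bool          completed;
};

static CheckHandle *
check_start (CheckCallback callback, void *user_data)
{
    CheckHandle *handle = calloc (1, sizeof (*handle));

    handle->callback = callback;
    handle->user_data = user_data;
    /* A real implementation would now schedule the asynchronous work;
     * the callback is never invoked from within the start call. */
    return handle;
}

/* Invoked by the (elided) asynchronous machinery when the check is done. */
static void
check_complete (CheckHandle *handle, bool cancelled)
{
    assert (!handle->completed);   /* the callback fires exactly once */
    handle->completed = true;
    handle->callback (handle, cancelled, handle->user_data);
    free (handle);
}

/* Cancelling invokes the callback synchronously; cancelling twice, or
 * after completion, is a caller bug. */
static void
check_cancel (CheckHandle *handle)
{
    check_complete (handle, true);
}

/* Demo callback used to observe the contract. */
static int  demo_invocations;
static bool demo_cancelled;

static void
demo_cb (CheckHandle *handle, bool cancelled, void *user_data)
{
    (void) handle;
    (void) user_data;
    demo_invocations++;
    demo_cancelled = cancelled;
}
```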
  11. 12 Mar, 2018 2 commits
    • core/dbus: rework D-Bus implementation to use lower layer GDBusConnection API · 297d4985
      Thomas Haller authored
      Previously, we used the generated GDBusInterfaceSkeleton types and glued
      them via the NMExportedObject base class to our NM types. We also used
      GDBusObjectManagerServer.
      
      Don't do that anymore. The resulting code was more complicated despite (or
      because?) using generated classes. It was hard to understand, complex, had
      ordering-issues, and had a runtime and memory overhead.
      
      This patch refactors this entirely and uses the lower layer API GDBusConnection
      directly. It replaces the generated code, GDBusInterfaceSkeleton, and
      GDBusObjectManagerServer. All this is now done by NMDbusObject and NMDBusManager
      and static descriptor instances of type GDBusInterfaceInfo.
      
      This adds a net plus of more than 1300 lines of hand-written code. I claim
      that this implementation is easier to understand. Note that previously we
      also required extensive and complex glue code to bind our objects to the
      generated skeleton objects. Instead, we now glue our objects directly to
      GDBusConnection. The result is more immediate and gets rid of layers of
      code in between.
      Now that the D-Bus glue is more under our control, we can address issues and
      bottlenecks better, instead of adding code to bend the generated skeletons
      to our needs.
      
      Note that the current implementation now only supports one D-Bus connection.
      That was effectively the case already, although there were places (and still are)
      where the code pretends it could also support connections from a private socket.
      We dropped private socket support mainly because it was unused, untested and
      buggy, but also because GDBusObjectManagerServer could not export the same
      objects on multiple connections. Now, it would be rather straightforward to
      fix that and re-introduce ObjectManager on each private connection. But this
      commit doesn't do that yet, and the new code intentionally supports only one
      D-Bus connection.
      Also, the D-Bus startup was simplified. There is no retry: either nm_dbus_manager_start()
      succeeds, or it detects the initrd case. In the initrd case, the bus manager never tries to
      connect to D-Bus. Since the initrd scenario is not yet used/tested, this is good enough
      for the moment. It could be easily extended later, for example with polling whether the
      system bus appears (as was done previously). Also, restart of the D-Bus daemon isn't
      supported either -- just like before.
      
      Note how NMDBusManager now implements the ObjectManager D-Bus interface
      directly.
      
      Also, this fixes race issues in the server by no longer delaying
      PropertiesChanged signals. NMExportedObject would collect changed
      properties and send the signal out in idle_emit_properties_changed()
      on idle. This messes up the ordering of change events w.r.t. other
      signals and events on the bus. Note that not only NMExportedObject
      messed up the ordering. The generated code would also hook into
      notify() and process change events in an idle handler, exhibiting the
      same ordering issue too.
      No longer do that. PropertiesChanged signals will be sent right away
      by hooking into dispatch_properties_changed(). This means that changing
      a property in quick succession will no longer be combined and is
      guaranteed to emit signals for each individual state. Quite possibly
      we now emit more PropertiesChanged signals than before.
      However, we are now able to group a set of changes by using standard
      g_object_freeze_notify()/g_object_thaw_notify(). We probably should
      make more use of that.
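The grouping that g_object_freeze_notify()/g_object_thaw_notify() provides can be modeled as a freeze counter plus a deduplicated set of pending notifications. This plain-C sketch is illustrative only, not GObject's implementation:

```c
#include <assert.h>

/* While the freeze count is non-zero, notifications are queued and
 * coalesced per property; the final thaw flushes one signal per
 * distinct changed property. Illustrative model, not GObject code. */

#define MAX_PROPS 8

typedef struct {
    int freeze_count;
    int pending[MAX_PROPS];   /* 1 if a notify for prop i is queued */
    int emitted;              /* number of signals actually sent */
} Notifier;

static void
notifier_freeze (Notifier *n)
{
    n->freeze_count++;
}

static void
notifier_notify (Notifier *n, int prop)
{
    if (n->freeze_count > 0)
        n->pending[prop] = 1;  /* coalesce repeated changes */
    else
        n->emitted++;          /* unfrozen: emit immediately */
}

static void
notifier_thaw (Notifier *n)
{
    int i;

    if (--n->freeze_count > 0)
        return;                /* still frozen by an outer freeze */
    for (i = 0; i < MAX_PROPS; i++) {
        if (n->pending[i]) {
            n->pending[i] = 0;
            n->emitted++;
        }
    }
}
```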
      
      Also, now that our signals are all handled in the right order, we
      might find places where we still emit them in the wrong order. But that
      is then due to the order in which our GObjects emit signals, not due
      to an ill behavior of the D-Bus glue. Possibly we need to identify
      such ordering issues and fix them.
      
      Numbers (for contrib/rpm --without debug on x86_64):
      
      - the patch changes the code size of NetworkManager by
        - 2809360 bytes
        + 2537528 bytes (-9.7%)
      
      - Runtime measurements are harder because there is a large variance
        during testing. In other words, the numbers are not reproducible.
        Currently, the implementation performs no caching of GVariants at all,
        but it would be rather simple to add it, if that turns out to be
        useful.
        Anyway, without strong claim, it seems that the new form tends to
        perform slightly better. That would be no surprise.
      
        $ time (for i in {1..1000}; do nmcli >/dev/null || break; echo -n .;  done)
        - real    1m39.355s
        + real    1m37.432s
      
        $ time (for i in {1..2000}; do busctl call org.freedesktop.NetworkManager /org/freedesktop org.freedesktop.DBus.ObjectManager GetManagedObjects > /dev/null || break; echo -n .; done)
        - real    0m26.843s
        + real    0m25.281s
      
      - Regarding RSS size, just looking at the processes under similar
        conditions doesn't show a large difference. On my system they
        consume about 19MB RSS. It seems that the new version has a
        slightly smaller RSS size.
        - 19356 RSS
        + 18660 RSS
      297d4985
    • core: rename "nm-bus-manager.h" to "nm-dbus-manager.h" · a1f37964
      Thomas Haller authored
      The next commit will completely rework NMBusManager and replace
      NMExportedObject by a new type NMDBusObject.
      
      Originally, NMDBusObject was added alongside NMExportedObject to ease
      the rework and have compilable, intermediate stages of refactoring. Now,
      I think the new name is better, because NMDBusObject is very strongly related
      to the bus manager and the old name NMExportedObject didn't make that
      clear.
      
      I also slightly prefer the name NMDBusObject over NMBusObject; hence,
      for consistency, also rename NMBusManager to NMDBusManager.
      
      This commit only renames the file for a nicer diff in the next commit.
      It does not actually update the type name in sources. That will be done
      later.
      a1f37964
  12. 18 Jan, 2018 1 commit
  13. 15 Dec, 2017 1 commit
  14. 31 Oct, 2017 1 commit
    • config: remove nm_config_data_get_value_cached() · 447dc874
      Thomas Haller authored
      It has almost no callers, and it is a bit of a strange API. Let's
      not cache the last accessed value inside NMConfigData. Instead, free
      it right after use. It was not reused anyway; it only hung around
      as a convenience for the caller.
      447dc874
  15. 24 May, 2017 1 commit
  16. 23 Apr, 2017 1 commit
  17. 20 Apr, 2017 2 commits
  18. 18 Apr, 2017 2 commits
    • core: add NMNetns to bundle platform and route managers · d37b9d79
      Thomas Haller authored
      NMPlatform, NMRouteManager and NMDefaultRouteManager are singleton
      instances. Users of those are, for example, NMDevice, which registers
      to GObject signals of both NMPlatform and NMRouteManager.
      
      Hence, as NMDevice:dispose() disconnects the signal handlers, it must
      ensure that those singleton instances live longer than the NMDevice
      instance. That is usually accomplished by having users of singleton
      instances own a reference to those instances.
      For NMDevice that effectively means that it shall own a reference to
      several singletons.
      
      NMPlatform, NMRouteManager, and NMDefaultRouteManager are all
      per-namespace. In general it doesn't make sense to have more than
      one instance of these per namespace. Note that currently we don't
      support multiple namespaces yet. If we ever support multiple
      namespaces, then an NMDevice would have a reference to all of these
      manager instances. Hence, introduce a new class NMNetns which bundles
      them together.
      
      (cherry picked from commit 0af2f5c2)
      d37b9d79
    • core: add NMNetns to bundle platform and route managers · 0af2f5c2
      Thomas Haller authored
      NMPlatform, NMRouteManager and NMDefaultRouteManager are singleton
      instances. Users of those are, for example, NMDevice, which registers
      to GObject signals of both NMPlatform and NMRouteManager.
      
      Hence, as NMDevice:dispose() disconnects the signal handlers, it must
      ensure that those singleton instances live longer than the NMDevice
      instance. That is usually accomplished by having users of singleton
      instances own a reference to those instances.
      For NMDevice that effectively means that it shall own a reference to
      several singletons.
      
      NMPlatform, NMRouteManager, and NMDefaultRouteManager are all
      per-namespace. In general it doesn't make sense to have more than
      one instance of these per namespace. Note that currently we don't
      support multiple namespaces yet. If we ever support multiple
      namespaces, then an NMDevice would have a reference to all of these
      manager instances. Hence, introduce a new class NMNetns which bundles
      them together.
      0af2f5c2
  19. 28 Mar, 2017 1 commit
    • core: make connectivity checking per-device · 9d43869e
      Lubomir Rintel authored
      This moves tracking of connectivity to NMDevice and makes the NMManager
      negotiate the best of the known connectivity states of the devices. The
      NMConnectivity singleton handles its own configuration and scheduling of
      the permission checks, but is otherwise greatly simplified.
      
      This will be useful to determine correct metrics for multiple default routes
      depending on actual internet connectivity.
      
      The per-device connection checks are not yet exposed on D-Bus, since they
      probably should be per-address-family as well.
      9d43869e
  20. 24 Mar, 2017 1 commit
  21. 15 Mar, 2017 1 commit
  22. 17 Feb, 2017 1 commit
    • core: use define for atomic-section-prefix setting for NMConfig · 9e5319db
      Thomas Haller authored
      main() should pass the same atomic-section-prefix setting to its
      NMConfig instances. Currently both are NULL, but make it a define
      to make this explicit.
      
      Also, make the static array @default_values const, and sanitize the value
      when setting the PROP_ATOMIC_SECTION_PREFIXES property.
      9e5319db
  23. 10 Feb, 2017 1 commit
    • dns: fix shutdown to restore non-cached DNS config · ecd3263e
      Thomas Haller authored
      The DNS manager and other singletons have the problem that
      they are not properly destroyed on exit, that is, we leak
      most of the instances. That should be eventually fixed and
      all resources/memory should be released.
      
      Anyway, fix the shutdown procedure by adding an explicit command
      nm_dns_manager_shutdown(). We should not rely on cleanup actions
      taking place when the last reference is dropped, because then
      we get complex interactions where we must ensure that everybody
      drops the references at the right point.
      
      Since the previous shutdown action was effectively never performed,
      it is not quite clear what we actually want to do on shutdown.
      For now, move the code to nm_dns_manager_stop(). We will see if
      that is the desired behavior.
      ecd3263e
  24. 25 Nov, 2016 3 commits
  25. 21 Nov, 2016 1 commit
    • build: don't add subdirectories to include search path but require qualified include · 44ecb415
      Thomas Haller authored
      Keep the include paths clean and separate. We use directories to group source
      files together. That makes sense (I guess), but then we should use this
      grouping also when including files. Thus, require files to be #included
      with their path relative to "src/".
      
      Also, we build various artifacts from the "src/" tree. Instead of having
      individual CFLAGS for each artifact in Makefile.am, the CFLAGS should be
      unified. Previously, the CFLAGS for each artifact differed and were inconsistent
      in which paths they added to the search path. Fix the inconsistency by just
      not adding the paths at all.
      44ecb415