1. 09 Dec, 2018 2 commits
    • core: improve and fix keeping connection active based on "connection.permissions" · b635b4d4
      Thomas Haller authored
      By setting "connection.permissions", a profile is restricted to a
      particular user. That means, for example, that another user cannot
      see, modify, delete, activate or deactivate the profile. It also
      means that the profile will only autoconnect when the user is
      logged in (has a session).
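      
      For example, such a restriction might be set up like this (an
      illustrative nmcli invocation, not part of this commit; "alice" is a
      placeholder user name):
      
         $ nmcli connection modify "$PROFILE" connection.permissions user:alice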
      
      Note that root is always able to activate the profile. Likewise, the
      user is also allowed to manually activate their own profile, even if no
      session currently exists (which can easily happen with `sudo`).
      
      When the user logs out (the session goes away), we want to disconnect
      the profile. However, there are conflicting goals here:
      
      1) if the profile was activated by the root user, then logging out the
         user should not disconnect the profile. The patch fixes that by not
         binding the activation to the connection if the activation is done
         by the root user.
      
      2) if the profile was activated by the owner when it had no session,
         then it should stay alive until the user logs in (once) and logs
         out again. This is already handled by the previous commit.
      
         Yes, this point is odd. If you first do
      
            $ sudo -u $OTHER_USER nmcli connection up $PROFILE
      
         the profile activates despite not having a session. If you then
      
            $ ssh guest@localhost nmcli device
      
         you'll still see the profile active. However, the moment the SSH session
         ends, a session closes and the profile disconnects. It's unclear how to
         solve that any better. I think a user who cares about this should not
         activate the profile without having a session in the first place.
      
      There are quite a few special cases, in particular with internal
      activations. In those cases we need to decide whether to bind the
      activation to the profile's visibility.
      
      Also, expose the "bind" setting in the D-Bus API. Note that in the future
      this flag may become modifiable via the D-Bus API. Likewise, we may add
      related API that allows tweaking the lifetime of the activation.
      
      Also, I think we broke handling of connection visibility with 37e8c53e
      "core: Introduce helper class to track connection keep alive". This
      should be fixed now too, with improved behavior.
      
      Fixes: 37e8c53e
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1530977
    • core: add nm_manager_for_each_active_connection_safe() and nm_manager_for_each_device_safe() · 936c6d0b
      Thomas Haller authored
      Analogous to c_list_for_each_safe(), these allow deleting the current
      element while iterating. Note that modifying the list in any other way
      during iteration is unsafe.
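      
      A minimal usage sketch of the underlying c-list macro (the member and
      helper names here are assumptions for illustration, not the actual
      NMManager code):
      
         CList *iter, *safe;
         
         c_list_for_each_safe (iter, safe, &priv->active_connections_lst_head) {
             NMActiveConnection *ac = c_list_entry (iter, NMActiveConnection,
                                                    active_connections_lst);
         
             /* the "safe" variant caches the next pointer up front, so
              * unlinking the current element does not break the loop. */
             if (should_deactivate (ac))  /* hypothetical predicate */
                 c_list_unlink (&ac->active_connections_lst);
         }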
  2. 03 Dec, 2018 1 commit
  3. 08 Aug, 2018 1 commit
    • core: extend nm_manager_get_activatable_connections() for autoconnect and multi-connect · 07a42191
      Thomas Haller authored
      In general, an activatable connection is one that is currently not
      active, or that supports being activated multiple times according to
      its multi-connect setting. In addition, during autoconnect, a profile
      which is marked as multi-connect=manual-multiple is not available.
      Hence, add an argument "for_auto_activation".
      
      The code is mostly unused but will be used next (except for connections
      which set connection.multi-connect=multiple).
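      
      The assumed call shape after this change (the exact argument list is an
      assumption for illustration):
      
         NMSettingsConnection **connections;
         guint len;
         
         /* TRUE filters out multi-connect=manual-multiple profiles,
          * as appropriate during autoconnect. */
         connections = nm_manager_get_activatable_connections (manager,
                                                               TRUE /* for_auto_activation */,
                                                               &len);
         ...
         g_free (connections);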
  4. 02 Aug, 2018 1 commit
  5. 30 Apr, 2018 1 commit
  6. 18 Apr, 2018 1 commit
  7. 08 Apr, 2018 1 commit
  8. 04 Apr, 2018 1 commit
  9. 27 Mar, 2018 1 commit
    • core: track devices in manager via embedded CList · 4a705e1a
      Thomas Haller authored
      Instead of using a GSList for tracking the devices, use a CList.
      I think a CList is in most cases a more suitable data structure
      than GSList:
      
       - you can find out in O(1) whether the object is linked. That
         is nice, for example, to assert in NMDevice's destructor that
         the object was unlinked, and we will use that later in
         nm_manager_get_device_by_path().
       - you can unlink the element in O(1), and you can unlink the
         element without having access to the list's head.
       - contrary to GSList, this does not require an extra slice
         allocation for the link node. It quite possibly consumes
         slightly less memory because the CList structure is embedded
         in a struct that we already allocate. Even if slice allocation
         were perfect and consumed only 2*sizeof(gpointer) for the link
         node, it would at most be as good as CList. Quite possibly
         there is an overhead though.
       - CList possibly has better memory locality, because the link
         structure and the data are close to each other.
      
      Something which could be seen as a disadvantage is that with CList
      one device can only be tracked in one NMManager instance at a time.
      But that is fine. There exists only one NMManager instance for now,
      and even if we ever introduced multiple managers, we probably
      would not associate one NMDevice instance with multiple managers.
      
      The advantages are arguably not huge, but CList is IMHO clearly the
      more suitable data structure. There is no need to stick with a
      suboptimal data structure for the job. Refactor it.
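      
      A minimal sketch of the embedded-CList pattern (type and member names
      here are illustrative, not the actual NetworkManager code):
      
         typedef struct {
             CList devices_lst_head;  /* list head, embedded in the manager */
         } Manager;
         
         typedef struct {
             CList devices_lst;       /* link node, embedded in the device */
             char *iface;
         } Device;
         
         /* O(1) insert and unlink, no separate link-node allocation: */
         c_list_link_tail (&manager->devices_lst_head, &device->devices_lst);
         c_list_unlink (&device->devices_lst);
         
         /* e.g. in the device's destructor, assert it was unlinked,
          * an O(1) check: */
         g_assert (!c_list_is_linked (&device->devices_lst));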
  10. 12 Mar, 2018 1 commit
    • core/dbus: rework D-Bus implementation to use lower layer GDBusConnection API · 297d4985
      Thomas Haller authored
      Previously, we used the generated GDBusInterfaceSkeleton types and glued
      them via the NMExportedObject base class to our NM types. We also used
      GDBusObjectManagerServer.
      
      Don't do that anymore. The resulting code was more complicated despite (or
      because of?) using generated classes. It was hard to understand, complex, had
      ordering issues, and had a runtime and memory overhead.
      
      This patch refactors this entirely and uses the lower layer API GDBusConnection
      directly. It replaces the generated code, GDBusInterfaceSkeleton, and
      GDBusObjectManagerServer. All this is now done by NMDbusObject and NMDBusManager
      and static descriptor instances of type GDBusInterfaceInfo.
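      
      Roughly, this means using GIO's low-level registration API directly (a
      sketch, not NM's actual glue code; the handler and descriptor names are
      illustrative):
      
         static const GDBusInterfaceVTable vtable = {
             .method_call = method_call_cb,  /* hypothetical handler */
         };
         
         guint id;
         
         /* interface_info points to a static GDBusInterfaceInfo descriptor */
         id = g_dbus_connection_register_object (connection,
                                                 "/org/freedesktop/NetworkManager",
                                                 interface_info,
                                                 &vtable,
                                                 NULL, NULL, &error);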
      
      This adds a net plus of more than 1300 lines of hand-written code. I claim
      that this implementation is easier to understand. Note that previously we
      also required extensive and complex glue code to bind our objects to the
      generated skeleton objects. Instead, we now glue our objects directly to
      GDBusConnection. The result is more immediate and gets rid of layers of
      code in between.
      Now that the D-Bus glue is more under our control, we can address issues and
      bottlenecks better, instead of adding code to bend the generated skeletons
      to our needs.
      
      Note that the current implementation now only supports one D-Bus connection.
      That was effectively the case already, although there were places (and still are)
      where the code pretends it could also support connections from a private socket.
      We dropped private socket support mainly because it was unused, untested and
      buggy, but also because GDBusObjectManagerServer could not export the same
      objects on multiple connections. Now, it would be rather straightforward to
      fix that and re-introduce ObjectManager on each private connection. But this
      commit doesn't do that yet, and the new code intentionally supports only one
      D-Bus connection.
      
      Also, the D-Bus startup was simplified. There is no retry: either nm_dbus_manager_start()
      succeeds, or it detects the initrd case. In the initrd case, the bus manager never tries to
      connect to D-Bus. Since the initrd scenario is not yet used/tested, this is good enough
      for the moment. It could be easily extended later, for example with polling whether the
      system bus appears (like was done previously). Also, restart of the D-Bus daemon isn't
      supported either -- just like before.
      
      Note how NMDBusManager now implements the ObjectManager D-Bus interface
      directly.
      
      Also, this fixes race issues in the server by no longer delaying
      PropertiesChanged signals. NMExportedObject would collect changed
      properties and send the signal out in idle_emit_properties_changed()
      on idle. This messes up the ordering of change events w.r.t. other
      signals and events on the bus. Note that not only NMExportedObject
      messed up the ordering. The generated code would also hook into
      notify() and process change events in an idle handler, exhibiting the
      same ordering issue too.
      No longer do that. PropertiesChanged signals will be sent right away
      by hooking into dispatch_properties_changed(). This means that changing
      a property in quick succession will no longer be combined and is
      guaranteed to emit signals for each individual state. Quite possibly
      we now emit more PropertiesChanged signals than before.
      However, we are now able to group a set of changes by using standard
      g_object_freeze_notify()/g_object_thaw_notify(). We probably should
      make more use of that.
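      
      For reference, the standard GObject batching pattern mentioned above (a
      sketch with illustrative property names):
      
         g_object_freeze_notify (G_OBJECT (device));
         g_object_set (device,
                       "autoconnect", FALSE,
                       "managed", TRUE,
                       NULL);
         g_object_thaw_notify (G_OBJECT (device));
         /* the queued (and deduplicated) notifications are dispatched
          * together at thaw time, so the PropertiesChanged emission hooked
          * into dispatch_properties_changed() covers the whole group. */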
      
      Also, now that our signals are all handled in the right order, we
      might find places where we still emit them in the wrong order. But that
      is then due to the order in which our GObjects emit signals, not due
      to ill behavior of the D-Bus glue. Possibly we need to identify
      such ordering issues and fix them.
      
      Numbers (for contrib/rpm --without debug on x86_64):
      
      - the patch changes the code size of NetworkManager by
        - 2809360 bytes
        + 2537528 bytes (-9.7%)
      
      - Runtime measurements are harder because there is a large variance
        during testing. In other words, the numbers are not reproducible.
        Currently, the implementation performs no caching of GVariants at all,
        but it would be rather simple to add it, if that turns out to be
        useful.
        Anyway, without making a strong claim, it seems that the new form
        tends to perform slightly better. That would be no surprise.
      
        $ time (for i in {1..1000}; do nmcli >/dev/null || break; echo -n .;  done)
        - real    1m39.355s
        + real    1m37.432s
      
        $ time (for i in {1..2000}; do busctl call org.freedesktop.NetworkManager /org/freedesktop org.freedesktop.DBus.ObjectManager GetManagedObjects > /dev/null || break; echo -n .; done)
        - real    0m26.843s
        + real    0m25.281s
      
      - Regarding RSS size, just looking at the processes under similar
        conditions doesn't show a large difference. On my system they
        consume about 19MB RSS. It seems that the new version has a
        slightly smaller RSS size.
        - 19356 RSS
        + 18660 RSS
  11. 20 Dec, 2017 1 commit
    • core: persist aspired default route-metric in device's state file · 4277bc0e
      Thomas Haller authored
      NMManager tries to assign unique route-metrics in an increasing manner,
      so that the device which activates first keeps having the best routes.
      
      This information is also persisted in the device's state file. However,
      we not only need to persist the effective route-metric which was
      eventually chosen by NMManager, but also the aspired metric.
      
      The reason is that when a metric is chosen for a device, the entire
      range between the aspired and the effective route-metric is reserved
      for that device. We must remember the entire range so that after restart
      the entire range is still considered to be in use.
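      
      Illustratively, the per-device state file (a keyfile under
      /run/NetworkManager/devices/; the group and key names here are
      assumptions) would then record both values:
      
         [device]
         route-metric-default-aspired=100
         route-metric-default-effective=102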
      
      Fixes: 6a32c64d
  12. 15 Dec, 2017 1 commit
    • device: generate unique default route-metrics per interface · 6a32c64d
      Thomas Haller authored
      In the past we had NMDefaultRouteManager, which would coordinate adding
      the default-route with identical metrics. That especially happened when
      activating two devices of the same type without explicitly specifying
      ipv4.route-metric. For example, with ethernet devices, the routes on
      both interfaces would get a metric of 100.
      
      Coordinating routes was especially necessary because we added
      routes with the NLM_F_EXCL flag, akin to `ip route replace`. We not
      only had to avoid that activating two devices in NetworkManager would
      result in a fight over the default-route, but more importantly we had
      to preserve externally added default-routes on unmanaged interfaces.
      
      NMDefaultRouteManager would ensure that, in case of duplicate
      metrics, the device that activated first would keep the
      best default-route. It would do so by bumping the metric
      of the second device to find an unused metric. The bumping itself
      was not very important -- NMDefaultRouteManager could also just not
      configure any default-routes that show up second, and the result
      would be quite similar. More important was to keep the best
      default-route on the first activating device until the device
      deactivates or a device activates that really has a better
      default-route.
      
      Likewise, NMRouteManager would globally manage non-default-routes.
      It would not do any bumping of metrics, but it would also ensure that the routes
      of the device that activates first are not overwritten by a device activating
      later.
      
      However, the `ip route replace` approach has downsides, especially
      that it messes with routes on other interfaces, interfaces that are
      possibly not managed by NetworkManager. Another downside is that
      binding a socket to an interface might not result in correct
      routes, because the route might just not be there (in the case of
      NMRouteManager, which wouldn't configure duplicate routes by bumping
      their metric).
      
      Since commit 77ec3027 we no longer
      use NLM_F_EXCL, but add routes akin to `ip route append`. When
      activating, for example, two ethernet devices with no explicit route
      metric configuration, there are two routes like
      
         default via 10.16.122.254 dev eth0 proto dhcp metric 100
         default via 192.168.100.1 dev eth1 proto dhcp metric 100
      
      This does not only affect default routes. In case of a multi-homing
      setup you'd get
      
        192.168.100.0/24 dev eth0 proto kernel scope link src 192.168.100.1 metric 100
        192.168.100.0/24 dev eth1 proto kernel scope link src 192.168.100.1 metric 100
      
      but it is most visible for default-routes.
      
      Note that we would append the routes that are activated later, as the order
      of `ip route show` confirms. One might hence expect that the kernel selects
      a route based on the order in the routing tables. However, that isn't
      the case, and activating the second interface will non-deterministically
      re-route traffic via the new interface. That will interfere badly with
      NAT, stateful firewalls, and existing connections (like TCP).
      
      The solution is to have NMManager keep a global index of the default route-metrics
      currently in use. So, instead of determining the default-route metric based solely
      on the device-type, we now in addition generate default metrics that do not
      overlap. For example, if you activate eth0 first, it gets route-metric 100,
      and if you then activate eth1, it gets 101. Note that if you deactivate
      and re-activate eth0, then it will get route-metric 102, because the
      best route should stick on eth1 (which reserves the range 100 to 101).
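      
      A toy sketch of the reservation idea described above (not the actual
      NMManager code; it assumes all devices of one type share the same
      aspired metric):
      
         /* reserved: maps ifindex -> effective metric currently in use.
          * Everything from the aspired metric up to the highest effective
          * metric counts as reserved, so pick the next free value. */
         static guint32
         reserve_default_route_metric (GHashTable *reserved, guint32 aspired)
         {
             GHashTableIter iter;
             gpointer value;
             guint32 effective = aspired;
         
             g_hash_table_iter_init (&iter, reserved);
             while (g_hash_table_iter_next (&iter, NULL, &value)) {
                 guint32 used = GPOINTER_TO_UINT (value);
         
                 if (used >= effective)
                     effective = used + 1;
             }
             return effective;
         }
      
      With eth0 active (effective 100), eth1 gets 101; after deactivating and
      re-activating eth0 it gets 102, matching the example above.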
      
      Note that when a connection explicitly selects a particular metric, then that
      choice is honored (contrary to NMDefaultRouteManager, which was more concerned
      with avoiding conflicts than with keeping the exact metric).
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1505893
  13. 27 Nov, 2017 1 commit
    • core: track NMActiveConnection in manager with CList · 3a907377
      Thomas Haller authored
      Using CList, we embed the list element in the NMActiveConnection struct
      itself. That means, for example, that you couldn't track an
      NMActiveConnection more than once. But we never want that anyway.
      
      The advantage is that removing an active connection from the list
      is O(1), and we save an additional GSlice allocation for each node
      element.
  14. 09 Nov, 2017 3 commits
  15. 30 Oct, 2017 1 commit
  16. 17 Aug, 2017 1 commit
  17. 05 Aug, 2017 1 commit
    • core: implement activation of PPP devices · 6c319593
      Beniamino Galvani authored
      Add code to NMPppDevice to activate new-style PPPoE connections. This
      is a bit tricky because we can't create the link as usual in
      create_and_realize(). Instead, we create a device without an ifindex and
      start pppd in stage2; when pppd reports a new configuration, we rename
      the platform link to the correct name and set the ifindex on the
      device.
      
      This mechanism is inherently racy, but there is no way to tell pppd to
      use an arbitrary interface name.
  18. 12 May, 2017 1 commit
    • hostname: cache hostname-manager's hostname property · 54f5407a
      Thomas Haller authored
      A property should preferably only emit a notify-changed signal when
      the value actually changes, and it should cache the value (so that
      between property-changed signals the value is guaranteed not to change).
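      
      A minimal sketch of that caching pattern (illustrative, not the actual
      NMHostnameManager code):
      
         static void
         _set_hostname (NMHostnameManager *self, const char *hostname)
         {
             if (g_strcmp0 (self->hostname, hostname) == 0)
                 return;  /* unchanged: emit no property-changed signal */
         
             g_free (self->hostname);
             self->hostname = g_strdup (hostname);
             g_object_notify (G_OBJECT (self), "hostname");
         }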
      
      NMSettings and NMManager both already cache the hostname, because
      NMHostnameManager didn't guarantee this basic concept.
      
      Implement it and rely on it from NMSettings and NMPolicy,
      and remove the copy of the property from NMManager.
      
      Move the call for nm_dispatcher_call_hostname() from NMHostnameManager
      to NMManager. Note that NMPolicy also has a call to the dispatcher
      when set-transient-hostname returns. This should be cleaned up later.
  19. 16 Mar, 2017 1 commit
  20. 10 Feb, 2017 1 commit
  21. 27 Jan, 2017 1 commit
  22. 15 Dec, 2016 1 commit
  23. 21 Nov, 2016 1 commit
    • build: don't add subdirectories to include search path but require qualified include · 44ecb415
      Thomas Haller authored
      Keep the include paths clean and separate. We use directories to group source
      files together. That makes sense (I guess), but then we should use this
      grouping also when including files. Thus, require that files be #include'd
      with their path relative to "src/", as in the example below.
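      
      For example (an illustrative header path from the source tree):
      
         /* before: relies on per-artifact -I search paths */
         #include "nm-device.h"
         
         /* after: qualified relative to "src/" */
         #include "devices/nm-device.h"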
      
      Also, we build various artifacts from the "src/" tree. Instead of having
      individual CFLAGS for each artifact in Makefile.am, the CFLAGS should be
      unified. Previously, the CFLAGS for each artifact differed and were
      inconsistent in which paths they added to the search path. Fix the
      inconsistency by just not adding the paths at all.
  24. 26 Sep, 2016 3 commits
  25. 23 Sep, 2016 2 commits
  26. 17 Aug, 2016 1 commit
  27. 28 Apr, 2016 2 commits
  28. 07 Apr, 2016 1 commit
  29. 04 Apr, 2016 1 commit
  30. 17 Feb, 2016 1 commit
  31. 18 Jan, 2016 1 commit
  32. 07 Dec, 2015 1 commit
  33. 04 Dec, 2015 1 commit