1. 13 Jul, 2020 1 commit
  2. 28 Jun, 2020 2 commits
  3. 15 Jun, 2020 3 commits
  4. 03 Jun, 2020 1 commit
  5. 29 May, 2020 2 commits
  6. 15 May, 2020 1 commit
      device: use the nm-shared firewalld zone in shared mode · 3e2b7235
      Beniamino Galvani authored
      When the interface is in IPv4 or IPv6 shared mode and the user didn't
      specify an explicit zone, use the nm-shared one.
      
      Note that masquerading is still done through direct iptables calls,
      because at the moment it is not possible for a firewalld zone to do
      masquerading based on the input interface.
      
      The firewalld zone is needed on systems where firewalld is using the
      nftables backend and the 'iptables' binary uses the iptables API
      (instead of the nftables one). On such systems, even if the traffic is
      allowed in iptables by our direct rules, it can still be dropped in
      nftables by firewalld.
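
      A minimal sketch (not part of the commit) of how the resulting split can
      be inspected on a running system, assuming a hypothetical shared
      interface eth1 and the default 10.42.0.0/24 shared subnet:

      ```
      # Filtering is handled by the firewalld zone assigned to the interface:
      firewall-cmd --info-zone=nm-shared
      firewall-cmd --get-zone-of-interface=eth1

      # Masquerading still comes from direct iptables rules; list them with:
      iptables -t nat -S POSTROUTING | grep MASQUERADE
      # (expected to contain something roughly like
      #  "-A POSTROUTING -s 10.42.0.0/24 ! -d 10.42.0.0/24 -j MASQUERADE")
      ```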
      3e2b7235
  7. 08 May, 2020 1 commit
  8. 02 May, 2020 1 commit
  9. 24 Apr, 2020 1 commit
  10. 22 Apr, 2020 2 commits
  11. 21 Apr, 2020 2 commits
  12. 10 Apr, 2020 1 commit
  13. 04 Apr, 2020 1 commit
  14. 17 Dec, 2019 1 commit
  15. 29 Nov, 2019 1 commit
  16. 28 Nov, 2019 1 commit
  17. 27 Nov, 2019 1 commit
      all: add support for "scope" attribute for IPv4 routes · b9f1beb0
      Thomas Haller authored
      - systemd-networkd and initscripts both support it.
      
      - configuring routes with scope "link" appears to be recommended on AWS.
      
      - the scope is only supported for IPv4 routes. Kernel ignores the
        attribute for IPv6 routes.
      
      - we don't support aliases like "link" or "global". Instead,
        only the numeric value is supported (see the sketch after this
        list). This is different from systemd-networkd, which accepts names
        like "global" and "link", but no numerical values. I think restricting
        ourselves to the aliases unnecessarily limits what is possible on
        netlink. The alternative would be to allow both aliases and numbers,
        but that creates multiple ways to define the same thing and thus has
        downsides. So, only numeric values.
      
      - when setting rtm_scope to RT_SCOPE_NOWHERE (0, the default), kernel
        will coerce that to RT_SCOPE_LINK. This ambiguity of nowhere vs. link
        is a problem, but we don't do anything about it.
      
      - The other problem is that when deleting a route with scope RT_SCOPE_NOWHERE,
        this acts as a wildcard and removes the first route that matches (given the
        other route attributes). That means NetworkManager has no meaningful
        way to delete a route with scope zero; there is always the danger that
        we might delete the wrong route. But this is nothing new with this
        patch. The problem already existed previously, except that
        NetworkManager could only add routes with scope nowhere (i.e. link).
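
      A hypothetical usage sketch (not part of the commit message), assuming
      the attribute is exposed as "scope=" in nmcli's ipv4.routes syntax and
      using the numeric kernel values (0 = universe/global, 253 = link, 254 = host):

      ```
      # Add a direct route with scope "link" (numeric 253); aliases such as
      # "link" are not accepted, only the number. Profile and device names
      # ("eth0-profile", "eth0") are made up for the example.
      nmcli connection modify eth0-profile +ipv4.routes "192.0.2.0/24 0.0.0.0 scope=253"
      nmcli connection up eth0-profile

      # Verify the scope on the activated device:
      ip -4 route show dev eth0
      ```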
      b9f1beb0
  18. 25 Nov, 2019 1 commit
      libnm: refactor caching of D-Bus objects in NMClient · ce0e898f
      Thomas Haller authored
      No longer use GDBusObjectManagerClient and gdbus-codegen generated classes
      for the NMClient cache. Instead, use GDBusConnection directly and a
      custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
      data.
      
      CHANGES
      -------
      
      - This is a complete rework. I think the previous implementation was
      difficult to understand. There were unfixed bugs and nobody understood
      the code well enough to fix them. Maybe somebody out there understood the
      code, but I certainly did not. At least nobody provided patches to fix those
      issues. I do believe that this implementation is more straightforward and
      easier to understand. It removes a lot of layers of code. Whether this claim
      of simplicity is true, each reader must decide for himself/herself. Note
      that it is still fairly complex.
      
      - There was a lingering performance issue with a large number of D-Bus
      objects. The patch tries hard to make the implementation scale well. Of
      course, when we cache N objects that have N-to-M references to each other,
      we are still fundamentally O(N*M) in runtime and memory consumption (with
      M being the number of references between objects). But each part should
      behave efficiently on its own.
      
      - Play well with GMainContext. libnm code (NMClient) is generally not
      thread safe. However, it should work to use multiple instances in
      parallel, as long as each access to an NMClient goes through the caller's
      GMainContext. This follows glib's style and effectively allows using
      NMClient in a multi-threaded scenario. This implies sticking to one main
      context upon construction and ensuring that callbacks are only invoked
      while iterating that context. Also, NMClient itself shall never iterate
      the caller's context. This also means libnm must never use g_idle_add() or
      g_timeout_add(), as those enqueue sources in the g_main_context_default()
      context.
      
      - Get ordering of messages right. All events are consistently enqueued
      in a GMainContext and processed strictly in order. For example,
      previously "nm-object.c" tried to combine signals and emit them on an
      idle handler. That is wrong; signals must be emitted in the right order,
      at the moment they happen. Note that when using GInitable's synchronous initialization
      to initialize the NMClient instance, NMClient internally still operates fully
      asynchronously. In that case NMClient has an internal main context.
      
      - NMClient takes over most of the functionality. When using D-Bus'
      ObjectManager interface, one needs to handle basically the entire state
      of the D-Bus interface. That cannot be separated well into distinct
      parts, and even if you try, you just end up having closely related code
      in different source files. Spreading related code does not make it
      easier to understand, on the contrary. That means, NMClient is
      inherently complex as it contains most of the logic. I think that is
      not avoidable, but it's not as bad as it sounds.
      
      - NMClient processes D-Bus messages and state changes in separate steps.
      First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
      keeps track of the changed data. Then we update the GObject instances
      (_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
      we emit all signals and notifications that were collected
      (_dbus_handle_changes_commit()). Note that for example during the initial
      GetManagedObjects() reply, NMClient receives a large amount of state at once.
      But we first apply all the changes to our GObject instances before
      emitting any signals. The result is that signals are always emitted in a moment
      when the cache is consistent. The unavoidable downside is that when you receive
      a property changed signal, possibly many other properties changed
      already and more signals are about to be emitted.
      
      - NMDeviceWifi no longer modifies the content of the cache from client side
      during poke_wireless_devices_with_rf_status(). The content of the cache
      should be determined by D-Bus alone and follow what NetworkManager
      service exposes. Local modifications should be avoided.
      
      - This aims to bring no API/ABI change, though it does of course bring
      various subtle changes in behavior. Those should all be for the better, but the
      goal is not to break any existing clients. This does change internal
      (albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
      property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
      
      - Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
      and NMSecretAgentOld. These are independent of NMClient/NMObject and
      should be reworked separately.
      
      - While we no longer use generated classes from gdbus-codegen, we don't
      need more glue code than before. Also before, we constructed NMPropertiesInfo
      and had a large amount of code to propagate properties from NMDBus* to NMObject.
      That got completely reworked, but did not fundamentally change. You still need
      about the same effort to create the NMLDBusMetaIface. Not using
      generated bindings did not make anything worse (which says something about the
      usefulness of generated code, at least in the way it was used).
      
      - NMLDBusMetaIface and other meta data are static and immutable. This
      avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
      have compile-time checks to ensure the property types match. It's pretty hard
      to misuse them, because mistakes won't compile.
      
      - The meta data now explicitly encodes the expected D-Bus types and
      makes sure never to accept wrong data. That would only matter when the
      server (accidentally or intentionally) exposes unexpected types on
      D-Bus. I don't think that was previously ensured in all cases.
      For example, demarshal_generic() only cared about the GObject property
      type, it didn't know the expected D-Bus type.
      
      - Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
      probably indicated real bugs. In any case, it prevented us from running CI
      with G_DEBUG=fatal-warnings, because there would be just too many
      unrelated crashes. Now we log debug messages that can be enabled with
      "LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
      into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
      Together with G_DEBUG=fatal-warnings, this turns them into assertions.
      Note that such "assertion failures" might also happen because of a server
      bug (or change). Thus these are not common assertions that indicate a bug
      in libnm and are thus not armed unless explicitly requested. In our CI we
      should now always run with LIBNM_CLIENT_DEBUG=warning,error and
      G_DEBUG=fatal-warnings to catch bugs. Note that currently
      NetworkManager has bugs in this regard, so enabling this will result in
      assertion failures. That should be fixed first.
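
      A usage sketch (using nmcli, which links against libnm):

      ```
      # Verbose client-side tracing of the D-Bus object cache:
      LIBNM_CLIENT_DEBUG=trace nmcli device status

      # CI-style run: promote libnm client-side warnings to fatal assertions:
      LIBNM_CLIENT_DEBUG=warning,error G_DEBUG=fatal-warnings nmcli general status
      ```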
      
      - Note that this changes the order in which we emit "notify:devices" and
      "device-added" signals. I think it makes the most sense to emit first
      "device-removed", then "notify:devices", and finally "device-added"
      signals.
      This changes behavior for commit 52ae28f6 ('libnm: queue
      added/removed signals and suppress uninitialized notifications'),
      but I don't think that users should actually rely on the order. Still,
      the new order makes the most sense to me.
      
      - In NetworkManager, profiles can be made invisible to a user by setting
      "connection.permissions". Such profiles are hidden from NMClient's
      nm_client_get_connections() and from its "connection-added"/"connection-removed"
      signals.
      Note that NMActiveConnection's nm_active_connection_get_connection()
      and NMDevice's nm_device_get_available_connections() still expose such
      hidden NMRemoteConnection instances. This behavior was preserved.
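
      For instance (a sketch with made-up profile and user names), a profile can
      be restricted to a single user, which hides it from other users' NMClient:

      ```
      # Restrict "home-wifi" to user "alice"; for other users it no longer shows
      # up in nm_client_get_connections() / `nmcli connection show`:
      nmcli connection modify home-wifi connection.permissions "user:alice"
      nmcli -g connection.permissions connection show home-wifi
      ```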
      
      NUMBERS
      -------
      
      I compared 3 versions of libnm.
      
        [1] 962297f9, current tip of nm-1-20 branch
        [2] 4fad8c7c, current master, immediate parent of this patch
        [3] this patch
      
      All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
      The libraries were built with
      
        $ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
      
      Note that RPM build already stripped the library.
      
      ---
      
      N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
        on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller, but
        in these tests it had large (and bogus) ELF notes. Anyway, the point
        is to show the relative sizes, so it doesn't matter.
      
        [1] 4075552 (102.7%)
        [2] 3969624 (100.0%)
        [3] 3705208 ( 93.3%)
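
      The exact measurement command is not shown; the numbers above can
      presumably be reproduced with something like:

      ```
      stat -c '%s %n' /usr/lib64/libnm.so.0.1.0      # size in bytes
      echo 'scale=1; 100 * 3705208 / 3969624' | bc   # relative size of [3] vs. [2] -> 93.3
      ```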
      
      ---
      
      N2) `size /usr/lib64/libnm.so.0.1.0`:
      
                text             data              bss                dec               hex   filename
        [1]  1314569 (102.0%)   69980 ( 94.8%)   10632 ( 80.4%)   1395181 (101.4%)   1549ed   /usr/lib64/libnm.so.0.1.0
        [2]  1288410 (100.0%)   73796 (100.0%)   13224 (100.0%)   1375430 (100.0%)   14fcc6   /usr/lib64/libnm.so.0.1.0
        [3]  1229066 ( 95.4%)   65248 ( 88.4%)   13400 (101.3%)   1307714 ( 95.1%)   13f442   /usr/lib64/libnm.so.0.1.0
      
      ---
      
      N3) Performance test with test-client.py. With checkout of [2], run
      
      ```
      prepare_checkout() {
          rm -rf /tmp/nm-test && \
          git checkout -B test 4fad8c7c && \
          git clean -fdx && \
          ./autogen.sh --prefix=/tmp/nm-test && \
          make -j 5 install && \
          make -j 5 check-local-clients-tests-test-client
      }
      prepare_test() {
          NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
      }
      do_test() {
        for i in {1..10}; do
            NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
        done
        echo "done!"
      }
      prepare_checkout
      prepare_test
      time do_test
      ```
      
        [1]  real 2m14.497s (101.3%)     user 5m26.651s (100.3%)     sys  1m40.453s (101.4%)
        [2]  real 2m12.800s (100.0%)     user 5m25.619s (100.0%)     sys  1m39.065s (100.0%)
        [3]  real 1m54.915s ( 86.5%)     user 4m18.585s ( 79.4%)     sys  1m32.066s ( 92.9%)
      
      ---
      
      N4) Performance. Run NetworkManager from build [2] and set up a large number
      of profiles (551 profiles and 515 devices, mostly unrealized). This
      setup is already at the edge of what NetworkManager can currently
      handle. Of course, that is a different issue. Here we just check how
      long plain `nmcli` takes on the system.
      
      ```
      do_cleanup() {
          for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
              nmcli connection delete uuid "$UUID"
          done
          for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
              nmcli device delete "$DEVICE"
          done
      }
      
      do_setup() {
          do_cleanup
          for i in {1..30}; do
              nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
              for j in $(seq $i 30); do
                  nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i  ipv4.method disabled ipv6.method ignore
              done
          done
          systemctl restart NetworkManager.service
          sleep 5
      }
      
      do_test() {
          perf stat -r 50 -B nmcli 1>/dev/null
      }
      
      do_test
      ```
      
        [1]
      
         Performance counter stats for 'nmcli' (50 runs):
      
                    456.33 msec task-clock:u              #    1.093 CPUs utilized            ( +-  0.44% )
                         0      context-switches:u        #    0.000 K/sec
                         0      cpu-migrations:u          #    0.000 K/sec
                     5,900      page-faults:u             #    0.013 M/sec                    ( +-  0.02% )
             1,408,675,453      cycles:u                  #    3.087 GHz                      ( +-  0.48% )
             1,594,741,060      instructions:u            #    1.13  insn per cycle           ( +-  0.02% )
               368,744,018      branches:u                #  808.061 M/sec                    ( +-  0.02% )
                 4,566,058      branch-misses:u           #    1.24% of all branches          ( +-  0.76% )
      
                   0.41761 +- 0.00282 seconds time elapsed  ( +-  0.68% )
      
        [2]
      
         Performance counter stats for 'nmcli' (50 runs):
      
                    477.99 msec task-clock:u              #    1.088 CPUs utilized            ( +-  0.36% )
                         0      context-switches:u        #    0.000 K/sec
                         0      cpu-migrations:u          #    0.000 K/sec
                     5,948      page-faults:u             #    0.012 M/sec                    ( +-  0.03% )
             1,471,133,482      cycles:u                  #    3.078 GHz                      ( +-  0.36% )
             1,655,275,369      instructions:u            #    1.13  insn per cycle           ( +-  0.02% )
               382,595,152      branches:u                #  800.433 M/sec                    ( +-  0.02% )
                 4,746,070      branch-misses:u           #    1.24% of all branches          ( +-  0.49% )
      
                   0.43923 +- 0.00242 seconds time elapsed  ( +-  0.55% )
      
        [3]
      
         Performance counter stats for 'nmcli' (50 runs):
      
                    352.36 msec task-clock:u              #    1.027 CPUs utilized            ( +-  0.32% )
                         0      context-switches:u        #    0.000 K/sec
                         0      cpu-migrations:u          #    0.000 K/sec
                     4,790      page-faults:u             #    0.014 M/sec                    ( +-  0.26% )
             1,092,341,186      cycles:u                  #    3.100 GHz                      ( +-  0.26% )
             1,209,045,283      instructions:u            #    1.11  insn per cycle           ( +-  0.02% )
               281,708,462      branches:u                #  799.499 M/sec                    ( +-  0.01% )
                 3,101,031      branch-misses:u           #    1.10% of all branches          ( +-  0.61% )
      
                   0.34296 +- 0.00120 seconds time elapsed  ( +-  0.35% )
      
      ---
      
      N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
      
        [1]
      
              Command being timed: "nmcli"
              User time (seconds): 0.42
              System time (seconds): 0.04
              Percent of CPU this job got: 107%
              Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
              Average shared text size (kbytes): 0
              Average unshared data size (kbytes): 0
              Average stack size (kbytes): 0
              Average total size (kbytes): 0
              Maximum resident set size (kbytes): 34456
              Average resident set size (kbytes): 0
              Major (requiring I/O) page faults: 0
              Minor (reclaiming a frame) page faults: 6128
              Voluntary context switches: 1298
              Involuntary context switches: 1106
              Swaps: 0
              File system inputs: 0
              File system outputs: 0
              Socket messages sent: 0
              Socket messages received: 0
              Signals delivered: 0
              Page size (bytes): 4096
              Exit status: 0
      
        [2]
              Command being timed: "nmcli"
              User time (seconds): 0.44
              System time (seconds): 0.04
              Percent of CPU this job got: 108%
              Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
              Average shared text size (kbytes): 0
              Average unshared data size (kbytes): 0
              Average stack size (kbytes): 0
              Average total size (kbytes): 0
              Maximum resident set size (kbytes): 34452
              Average resident set size (kbytes): 0
              Major (requiring I/O) page faults: 0
              Minor (reclaiming a frame) page faults: 6169
              Voluntary context switches: 1849
              Involuntary context switches: 142
              Swaps: 0
              File system inputs: 0
              File system outputs: 0
              Socket messages sent: 0
              Socket messages received: 0
              Signals delivered: 0
              Page size (bytes): 4096
              Exit status: 0
      
        [3]
      
              Command being timed: "nmcli"
              User time (seconds): 0.32
              System time (seconds): 0.02
              Percent of CPU this job got: 102%
              Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
              Average shared text size (kbytes): 0
              Average unshared data size (kbytes): 0
              Average stack size (kbytes): 0
              Average total size (kbytes): 0
              Maximum resident set size (kbytes): 29196
              Average resident set size (kbytes): 0
              Major (requiring I/O) page faults: 0
              Minor (reclaiming a frame) page faults: 5059
              Voluntary context switches: 919
              Involuntary context switches: 685
              Swaps: 0
              File system inputs: 0
              File system outputs: 0
              Socket messages sent: 0
              Socket messages received: 0
              Signals delivered: 0
              Page size (bytes): 4096
              Exit status: 0
      
      ---
      
      N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
        the RSS size.
      
            USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
        [1] me     1492900 21.0  0.2 461348 33248 pts/10   Sl+  15:02   0:00 nmcli monitor
        [2] me     1490721  5.0  0.2 461496 33548 pts/10   Sl+  15:00   0:00 nmcli monitor
        [3] me     1495801 16.5  0.1 459476 28692 pts/10   Sl+  15:04   0:00 nmcli monitor
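
      A sketch of how such a snapshot can be taken (any equivalent ps invocation
      works as well):

      ```
      nmcli monitor & MONITOR_PID=$!
      sleep 2
      ps -o pid,vsz,rss,comm -p "$MONITOR_PID"   # RSS (kB) is the column compared above
      kill "$MONITOR_PID"
      ```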
      ce0e898f
  19. 23 Nov, 2019 1 commit
      dhcp: switch IPv4 "internal" DHCP client to use "nettools" backend instead of "systemd" · d3c7083f
      Thomas Haller authored
      Previously, our "internal" DHCPv4 client was based on a fork of
      systemd code. Maintaining the fork in this manner is problematic.
      The solution is to use a proper library: n-dhcp4 from the nettools
      project.
      
      We already have both of these available as undocumented plugins, selected by
      setting either "dhcp=systemd" or "dhcp=nettools". This is only for
      testing. Users are only supposed to use the "internal" plugin.
      
      Up until now, the "internal" DHCPv4 plugin was based on "systemd" code.
      Change that to use "nettools" instead.
      
      Possibly this breaks something, and we need to fix it. But do this
      early so we have time to test the nettools plugin and identify issues.
      
      For the user, this change should be entirely transparent.
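
      For testing, the plugin can still be pinned explicitly in
      NetworkManager.conf (a sketch; users should normally keep the default
      "internal" plugin):

      ```
      # Select the DHCP plugin via a config drop-in (values per the text above):
      printf '[main]\ndhcp=systemd\n' | sudo tee /etc/NetworkManager/conf.d/90-dhcp.conf
      sudo systemctl restart NetworkManager

      # Show the effective configuration (needs sufficient privileges):
      sudo NetworkManager --print-config | grep -A5 '^\[main\]'
      ```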
      
      !302
      d3c7083f
  20. 23 Oct, 2019 1 commit
      libnm: retire deprecated WiMAX NMObject types · dab1d780
      Thomas Haller authored
      WiMAX has been deprecated since NetworkManager 1.2.0. Note that NetworkManager
      on the server side also no longer supports this type, hence
      the server's D-Bus API will never expose devices of this type.
      
      Note that NMDeviceWimax and NMWimaxNsp are NMObject types. That means,
      they are instantiated by NMClient to represent information on the D-Bus
      interface. As NetworkManager no longer exposes WiMAX devices, such
      devices are never created. Note that it makes no sense that a user would
      directly instantiate NMObject types, because they only work together with
      NMClient.
      
      Don't drop the related symbols and definitions from libnm, so that there
      is no API/ABI change (as far as building and linking are concerned). But
      make the types non-functional (which of course is a behavioral API change).
      Calling the API now triggers a g_return_*() warning.
      
      Also belatedly mark the WimaxNsp API as deprecated. It should have been
      done in 1.2. Note that here we deprecate the API and retire it at the
      same time. Ideally, we would have deprecated it a few releases ago,
      before retiring it. However, marking something for deprecation is no
      excuse anyway: removing or retiring API is usually painful, regardless
      of whether it was marked for deprecation or not. In this case, there is
      no way for a libnm user to get hold of an NMDeviceWimax or NMWimaxNsp
      instance, because NMClient simply no longer instantiates them. Hence,
      this change should not affect any user in practice.
      
      !316
      dab1d780
  21. 22 Oct, 2019 1 commit
      libnm: hide GObject structs from public API and embed private data · 57aa5e2a
      Thomas Haller authored
      These types are all subclasses of NMObject. These instances are commonly
      created by NMClient itself. It makes no sense for a user to
      instantiate these types, much less to subclass them.
      
      Hide the object and class structures from the public API.
      
      This is an API and ABI break, but of something that is very likely
      unused.
      
      This is mainly done to embed the private structure in the object itself.
      This has benefits for performance and debuggability. But most
      importantly, we can obtain a static offset at which to access the private data.
      That means we can use this information to access the data pointer
      generically, as we will need later.
      
      This is not done for the internal types NMManager, NMRemoteSettings,
      and NMDnsManager. These types will be dropped later.
      57aa5e2a
  22. 14 Oct, 2019 1 commit
      device: don't delay startup complete for pending-actions "autoconf", "dhcp4" and "dhcp6" · 1e520641
      Thomas Haller authored
      These "pending-actions" only have one purpose: to mark the device
      as busy and thereby delay "startup complete" to be reached. That
      in turn delays "NetworkManager-wait-online" service.
      
      Of course, "NetworkManager-wait-online" waits for some form of readiness
      and is not extensively configurable (e.g. you cannot exclude devices from
      being waited). However, the intent is to wait that all devices are "settled".
      That means among others, that the timeouts waiting for carrier and Wi-Fi scan
      results passed, and devices either don't have a connection profile to autoactivate,
      or they autoactivated profiles and are in state "connected".
      
      A major point here is that the device is considered ready once it
      reaches the state "connected". Note that if you configure both IPv4 and
      IPv6 addressing modes, then "ipv4.may-fail=yes" and "ipv6.may-fail=yes"
      mean that the device is considered fully activated once one address
      family completes. Again, this is not very configurable, but by setting
      "ipv6.may-fail=no" you can require that IPv6 addressing has indeed
      completed (see the sketch below).
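
      For example (a sketch with a made-up profile name):

      ```
      # Require IPv6 addressing to complete before the device counts as fully
      # activated, and thus before startup-complete / NetworkManager-wait-online:
      nmcli connection modify my-profile ipv6.may-fail no

      # Check how long wait-online took on the current boot:
      systemd-analyze blame | grep NetworkManager-wait-online
      ```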
      
      Now, the determining factor for declaring "startup complete" is whether the
      device is in state "connected". That may or may not mean that DHCPv4,
      autoconf or DHCPv6 completed, as it depends on the overall state of the
      device. So, it is wrong to have distinct pending actions for these operations.
      
      Remove them.
      
      This fixes cases where we would wrongly wait too long before declaring startup
      complete. But it is also a change in behavior.
      1e520641
  23. 12 Aug, 2019 1 commit
  24. 10 Aug, 2019 1 commit
  25. 06 Aug, 2019 1 commit
  26. 29 Jul, 2019 2 commits
  27. 26 Jul, 2019 2 commits
  28. 20 Jun, 2019 1 commit
      settings: drop ibft settings plugin · 74641be8
      Thomas Haller authored
      The functionality of the ibft settings plugin is now handled by
      nm-initrd-generator. There is no need for it anymore; drop it.
      
      Note that ibft called iscsiadm, which requires CAP_SYS_ADMIN to work
      ([1]). We really want to drop this capability, so the current solution
      of a settings plugin (as it is implemented) is wrong. The solution
      instead is nm-initrd-generator.
      
      Also, the ibft plugin was disabled on Fedora, and probably on most other
      distributions as well. It was only used on RHEL.
      
      [1] https://bugzilla.redhat.com/show_bug.cgi?id=1371201#c7
      74641be8
  29. 11 Jun, 2019 1 commit
  30. 28 May, 2019 1 commit
      settings: drop deprecated NetworkManager.conf option "main.monitor-connection-files" · 1ae5e646
      Thomas Haller authored
      It has been deprecated and off by default for a long time.
      
      It is bad to automatically reload connection profiles. For example, an ifcfg
      profile may consist of multiple files, so there is no guarantee that we
      pick up the profile only once it has been fully written.
      
      Just don't do this anymore.
      
      Users should use the D-Bus API, `nmcli connection reload`, or `nmcli
      connection load $FILENAME` to reload profiles from disk (see the sketch
      below).
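
      A minimal busctl sketch of the D-Bus route, going through the Settings
      object's ReloadConnections() method:

      ```
      # Needs sufficient privileges (same as `nmcli connection reload`):
      busctl call org.freedesktop.NetworkManager \
          /org/freedesktop/NetworkManager/Settings \
          org.freedesktop.NetworkManager.Settings \
          ReloadConnections
      ```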
      1ae5e646
  31. 03 May, 2019 1 commit
  32. 21 Apr, 2019 1 commit