libnice issues
https://gitlab.freedesktop.org/libnice/libnice/-/issues

Issue #151: nondeterministic test-bsd failure
https://gitlab.freedesktop.org/libnice/libnice/-/issues/151
Reported by Apteryks; last updated 2022-02-04.

Seen while building at commit 47a96334448838c43d7e72f4ef51b317befbfae1 with GNU Guix (nondeterministic):
```
1/40 test-parse OK 0.01s
2/40 test-format OK 0.00s
3/40 test-conncheck OK 0.00s
4/40 test-hmac OK 0.00s
5/40 nice-random OK 0.00s
6/40 libnice-doc-check OK 0.07s
7/40 test-pseudotcp OK 0.01s
8/40 test-bsd FAIL 0.01s killed by signal 6 SIGABRT
>>> MALLOC_PERTURB_=81 /tmp/guix-build-libnice-0.1.18-0.47a9633.drv-0/build/tests/nice-test-bsd
――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
stdout:
Bail out! libnice-tests:ERROR:../source/tests/test-bsd.c:120:test_simple_send_recv: assertion failed (sock
et_recv (server, &tmp, 5, buf) == 5): (0 == 5)
stderr:
**
libnice-tests:ERROR:../source/tests/test-bsd.c:120:test_simple_send_recv: assertion failed (socket_recv (s
erver, &tmp, 5, buf) == 5): (0 == 5)
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
9/40 test OK 0.01s
10/40 test-address OK 0.00s
11/40 test-add-remove-stream OK 0.01s
12/40 test-build-io-stream OK 0.01s
13/40 test-io-stream-thread OK 0.03s
14/40 test-io-stream-closing-write OK 0.03s
15/40 test-io-stream-closing-read OK 0.03s
16/40 test-io-stream-cancelling OK 0.11s
17/40 test-io-stream-pollable OK 0.03s
18/40 test-send-recv OK 0.39s
19/40 test-socket-is-based-on OK 0.01s
20/40 test-udp-turn-fragmentation OK 0.03s
21/40 test-priority OK 0.01s
22/40 test-fullmode OK 10.35s
23/40 test-fullmode-with-stun OK 11.62s
24/40 test-different-number-streams OK 0.05s
25/40 test-restart OK 0.09s
26/40 test-fallback OK 0.01s
27/40 test-thread OK 0.03s
28/40 test-trickle OK 2.54s
29/40 test-tcp OK 0.01s
30/40 test-icetcp OK 0.29s
31/40 test-credentials OK 0.03s
32/40 test-turn OK 0.01s
33/40 test-drop-invalid OK 0.05s
34/40 test-nomination OK 0.17s
35/40 test-interfaces OK 0.00s
36/40 test-consent OK 4.96s
37/40 test-pseudotcp-fin OK 0.02s
38/40 test-new-trickle OK 12.19s
39/40 test-gstreamer OK 0.12s
40/40 test-pseudotcp-random OK 3.96s
Summary of Failures:
8/40 test-bsd FAIL 0.01s killed by signal 6 SIGABRT
Ok: 39
Expected Fail: 0
Fail: 1
Unexpected Pass: 0
Skipped: 0
Timeout: 0
Full log written to /tmp/guix-build-libnice-0.1.18-0.47a9633.drv-0/build/meson-logs/testlog.txt
error: in phase 'check': uncaught exception:
%exception #<&invoke-error program: "meson" arguments: ("test" "--print-errorlogs" "-t" "0") exit-status: 1 term-signal: #f stop-signal: #f>
phase `check' failed after 56.7 seconds
command "meson" "test" "--print-errorlogs" "-t" "0" failed with status 1
```

Issue #150: non-deterministic test-bind test failure
https://gitlab.freedesktop.org/libnice/libnice/-/issues/150
Reported by Apteryks; last updated 2022-02-04.

Hello,

I'm building this fairly recent commit of libnice on GNU Guix, and encountering the following build failure:
```
[...]
starting phase `unpack'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/.gitignore' -> `./.gitignore'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/.gitlab-ci.yml' -> `./.gitlab-ci.yml'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/AUTHORS' -> `./AUTHORS'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/COPYING' -> `./COPYING'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/COPYING.MPL' -> `./COPYING.MPL'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/NEWS' -> `./NEWS'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/README' -> `./README'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/TODO' -> `./TODO'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/meson.build' -> `./meson.build'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/meson_options.txt' -> `./meson_options.txt'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/COPYING.LGPL' -> `./COPYING.LGPL'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/tests/libnice.supp' -> `./tests/libnice.supp'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/tests/meson.build' -> `./tests/meson.build'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/tests/test-add-remove-stream.c' -> `./tests/test-add-remove-stream.c'
`/gnu/store/dbg8nk16m1a76fdpjhfddr177lxr5fhp-libnice-0.1.18-0.47a9633-checkout/tests/test-address.c' -> `....skipping...
Error: no ID for constraint linkend: "shutdown".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "int".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "guint64".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "guint16".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "gchar".
Error: no ID for constraint linkend: "guint32".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "gint".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "gsize".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "guint32".
Error: no ID for constraint linkend: "g-get-monotonic-time".
Error: no ID for constraint linkend: "gpointer".
Error: no ID for constraint linkend: "shutdown".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "gpointer".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "gboolean".
[1/2] Running all tests.
1/41 test-parse OK 0.01s
2/41 test-format OK 0.01s
3/41 test-bind FAIL 0.07s killed by signal 6 SIGABRT
>>> MALLOC_PERTURB_=122 /tmp/guix-build-libnice-0.1.18-0.47a9633.drv-0/build/stun/tests/test-bind
――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
stderr:
test-bind: ../source/stun/tests/test-bind.c:234: bad_responses: Assertion `len >= 20' failed.
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
4/41 test-conncheck OK 0.01s
5/41 test-hmac OK 0.01s
Error: no ID for constraint linkend: "size-t".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "unsigned".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "int".
Error: no ID for constraint linkend: "int".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "int".
Error: no ID for constraint linkend: "unsigned".
Error: no ID for constraint linkend: "stun-usage-timer-refresh".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "gint".
Error: no ID for constraint linkend: "gint".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "int".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "gint".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "gsize".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "gpointer".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "GEnum".
Error: no ID for constraint linkend: "GObject".
Error: no ID for constraint linkend: "guint32".
Error: no ID for constraint linkend: "NULL:CAPS".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "TCP-LISTEN:CAPS".
Error: no ID for constraint linkend: "gint".
Error: no ID for constraint linkend: "char".
Error: no ID for constraint linkend: "size-t".
Error: no ID for constraint linkend: "gint".
Error: no ID for constraint linkend: "char".
Error: no ID for constraint linkend: "guint32".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "shutdown".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "int".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "guint64".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "guint16".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "gchar".
Error: no ID for constraint linkend: "guint32".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "gint".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "gsize".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "TRUE:CAPS".
Error: no ID for constraint linkend: "FALSE:CAPS".
Error: no ID for constraint linkend: "void".
Error: no ID for constraint linkend: "guint32".
Error: no ID for constraint linkend: "g-get-monotonic-time".
Error: no ID for constraint linkend: "gpointer".
Error: no ID for constraint linkend: "shutdown".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "gpointer".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "gboolean".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "guint".
Error: no ID for constraint linkend: "gboolean".
[1/2] Running all tests.
1/41 test-parse OK 0.01s
2/41 test-format OK 0.01s
3/41 test-bind FAIL 0.07s killed by signal 6 SIGABRT
>>> MALLOC_PERTURB_=122 /tmp/guix-build-libnice-0.1.18-0.47a9633.drv-0/build/stun/tests/test-bind
――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
stderr:
test-bind: ../source/stun/tests/test-bind.c:234: bad_responses: Assertion `len >= 20' failed.
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
4/41 test-conncheck OK 0.01s
5/41 test-hmac OK 0.01s
6/41 nice-random OK 0.00s
7/41 libnice-doc-check OK 0.07s
8/41 test-pseudotcp OK 0.01s
9/41 test-bsd OK 0.01s
10/41 test OK 0.01s
11/41 test-address OK 0.00s
12/41 test-add-remove-stream OK 0.01s
13/41 test-build-io-stream OK 0.01s
14/41 test-io-stream-thread OK 0.08s
15/41 test-io-stream-closing-write OK 0.08s
16/41 test-io-stream-closing-read OK 0.08s
17/41 test-io-stream-cancelling OK 0.14s
18/41 test-io-stream-pollable OK 0.09s
19/41 test-send-recv OK 0.74s
20/41 test-socket-is-based-on OK 0.01s
21/41 test-udp-turn-fragmentation OK 0.03s
22/41 test-priority OK 0.01s
23/41 test-fullmode OK 10.37s
24/41 test-fullmode-with-stun OK 11.63s
25/41 test-different-number-streams OK 0.05s
26/41 test-restart OK 0.09s
27/41 test-fallback OK 0.01s
28/41 test-thread OK 0.06s
29/41 test-trickle OK 2.55s
30/41 test-tcp OK 0.01s
31/41 test-icetcp OK 0.30s
32/41 test-credentials OK 0.03s
33/41 test-turn OK 0.01s
34/41 test-drop-invalid OK 0.05s
35/41 test-nomination OK 0.17s
36/41 test-interfaces OK 0.00s
37/41 test-consent OK 5.95s
38/41 test-pseudotcp-fin OK 0.02s
39/41 test-new-trickle OK 12.20s
40/41 test-gstreamer OK 0.10s
41/41 test-pseudotcp-random OK 3.96s
Summary of Failures:
3/41 test-bind FAIL 0.07s killed by signal 6 SIGABRT
Ok: 40
Expected Fail: 0
Fail: 1
Unexpected Pass: 0
Skipped: 0
Timeout: 0
Full log written to /tmp/guix-build-libnice-0.1.18-0.47a9633.drv-0/build/meson-logs/testlog.txt
FAILED: meson-test
/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/bin/meson test --no-rebuild --print-errorlogs
ninja: build stopped: subcommand failed.
error: in phase 'check': uncaught exception:
%exception #<&invoke-error program: "ninja" arguments: ("test") exit-status: 1 term-signal: #f stop-signal: #f>
phase `check' failed after 59.4 seconds
command "ninja" "test" failed with status 1
```
The package has the following direct inputs:
```
dependencies: glib-networking@2.70.rc glib@2.70.0 gnutls@3.7.2 gobject-introspection@1.66.1
+ graphviz@2.49.0 gst-plugins-base@1.18.5 gstreamer@1.18.5 gtk-doc@1.33.2 libnsl@1.3.0 pkg-config@0.29.2
```
Thank you!

Issue #148: Getting segfault on master
https://gitlab.freedesktop.org/libnice/libnice/-/issues/148
Reported by dnish; last updated 2021-09-18.

Hey,

we are using the Janus WebRTC server for our video application. I've noticed that the app crashes very often due to a problem with libnice. The logs are showing the following details:

`kernel: [139206.554923] hloop 820286986[32263]: segfault at 40 ip 00007f7d75ccd090 sp 00007f7c15fa23e8 error 4 in libnice.so.10.11.0[7f7d75ca0000+50000]`
Is this problem known? Any idea what's going wrong?

Issue #147: Issue on running the test.
https://gitlab.freedesktop.org/libnice/libnice/-/issues/147
Reported by Kumail N; last updated 2021-09-08.

![image](/uploads/1e26a5bdf8f081cd3454b02b36625383/image.png)

Issue #142: segfault with TCP sockets
https://gitlab.freedesktop.org/libnice/libnice/-/issues/142
Reported by Gwenael FOURRE; last updated 2021-09-09.

After a sudden burst of packets I get a segfault regularly because send_queue length is 1 but head and tail are NULL.

I cannot get my head around why the length is wrong when arriving at nice_socket_queue_send_with_callback.

The callstack is (on master):
```
nice_socket_queue_send_with_callback (socket.c)
nice_socket_flush_send_queue_to_socket (socket.c => inside G_IO_ERROR_WOULD_BLOCK condition)
socket_send_more (tcp-bsd.c)
```
In what condition can this happen?

Issue #136: Support for Consent Freshness (RFC7675) breaks ICE
https://gitlab.freedesktop.org/libnice/libnice/-/issues/136
Reported by Alessandro Amirante; last updated 2021-02-17.

!164 breaks ICE when using binding requests as keepalives instead of binding indications, without enabling consent freshness (`keepalive-conncheck=true` and `consent-freshness=false`).

The issue is that `p->remote_consent.last_received` is only updated [here](https://gitlab.freedesktop.org/libnice/libnice/-/blob/4a378b880ee18eb74b3054c1f2148aea3f3e9e6e/agent/conncheck.c#L1462) when consent freshness is enabled. So, [this check](https://gitlab.freedesktop.org/libnice/libnice/-/blob/4a378b880ee18eb74b3054c1f2148aea3f3e9e6e/agent/conncheck.c#L1257) makes ICE disconnect after 50 seconds.

Issue #135: libnice not changing selected-pair when multiple pairs are available
https://gitlab.freedesktop.org/libnice/libnice/-/issues/135
Reported by Lorenzo Miniero; last updated 2024-03-29.

Hi all,

we encountered an issue that, after a bit of research, we managed to replicate fairly easily, and so I wanted to figure out how this could be addressed.

To give some context before diving into the details, we first noticed this issue when trying to use an iPhone to establish a WebRTC PeerConnection with Janus. In this specific instance, the iPhone was connected both on 5G and WiFi, and so would gather candidates for both. Due to how the client network worked, Janus got a `prflx` candidate for the 5G network, and a `srflx` candidate for the WiFi: connectivity checks succeeded for both, and since `prflx` has priority, the 5G pair ended up being used for media delivery. So far so good.
What happened, though, was that when the 5G network went down on the iPhone, the browser stack would automatically start using the WiFi network (and keep on sending connectivity checks via STUN there, that libnice would answer to), while libnice would keep on trying to send packets to the (now unreachable) 5G address. This resulted in a broken session, as while ICE was working as expected (Chrome kept on sending STUN checks, and libnice kept on answering), media would go to the wrong place.
After studying traffic and traces, we managed to replicate this fairly easily in our local setup as well, and discovered what we consider a potential issue in libnice when testing Chromium-based clients with multiple network interfaces.
In our testbed, nice agents are being created with `keepalive-conncheck` set to `TRUE`, so libnice will indeed send STUN requests to check the connection liveness. Since the testbed includes multiple network interfaces on the client side, when an "ice-controlled" Chromium client establishes a connection, it considers as valid both the available pairs that are established with the server (let's call them P1 and P2). Only the highest priority one (P1) is picked on both sides (as was the `prflx` one in my example above) and so initially everything works as expected: periodic checks are being sent from libnice for P1, while the client sends periodic checks for both P1 and P2. As anticipated, though, when the NIC associated to P1 local candidate goes down on the client side, the following happens:
1. after some seconds, Chromium (ice-controlled) switches the connection to P2;
2. Chromium increases the STUN check frequency for P2, and libnice replies correctly, as you can see from this log snippet:
```
libnice-DEBUG: 17:32:18.027: Agent 0x7f8a94009320: inbound STUN packet for 1/1 (stream/component) from [192.168.1.64]:57830 (96 octets) :
libnice-stun-DEBUG: 17:32:18.027: STUN demux: OK!
libnice-stun-DEBUG: 17:32:18.027: Comparing username/ufrag of len 9 and 4, equal=0
libnice-stun-DEBUG: 17:32:18.027: username: 0x4e7350553a6a41646b
libnice-stun-DEBUG: 17:32:18.027: ufrag: 0x4e735055
libnice-stun-DEBUG: 17:32:18.027: Found valid username, returning password: 'FscWdiTItNtxtJFWA5wcOz'
libnice-stun-DEBUG: 17:32:18.028: Message HMAC-SHA1 fingerprint:
libnice-stun-DEBUG: 17:32:18.028: key : 0x4673635764695449744e7478744a4657413577634f7a
libnice-stun-DEBUG: 17:32:18.028: expected: 0x02c59d3d0442b6a80687f1500b16e977d6070714
libnice-stun-DEBUG: 17:32:18.028: received: 0x02c59d3d0442b6a80687f1500b16e977d6070714
libnice-stun-DEBUG: 17:32:18.028: STUN auth: OK!
libnice-stun-DEBUG: 17:32:18.028: STUN unknown: 0 mandatory attribute(s)!
libnice-stun-DEBUG: 17:32:18.028: STUN Reply (buffer size = 1300)...
libnice-stun-DEBUG: 17:32:18.028: Message HMAC-SHA1 message integrity:
libnice-stun-DEBUG: 17:32:18.028: key : 0x4673635764695449744e7478744a4657413577634f7a
libnice-stun-DEBUG: 17:32:18.028: sent : 0x7ca414e24eb109c7b1aa54e1ee3c6400954a83f0
libnice-stun-DEBUG: 17:32:18.028: Message HMAC-SHA1 fingerprint: 0x999ec91f
libnice-stun-DEBUG: 17:32:18.028: All done (response size: 80)
libnice-DEBUG: 17:32:18.028: Agent 0x7f8a94009320 : STUN-CC RESP to '192.168.1.64:57830', socket=34, len=80, cand=0x7f8a68007960 (c-id:1), use-cand=0.
libnice-DEBUG: 17:32:18.028: Agent 0x7f8a94009320 : scheduling triggered check with socket=0x7f8a940158f0 and remote cand=0x7f8a68007960.
libnice-DEBUG: 17:32:18.028: Agent 0x7f8a94009320 : Adding a triggered check to conn.check list (local=0x7f8a940133e0).
libnice-DEBUG: 17:32:18.028: Agent 0x7f8a94009320 : do not create a pair that would have a priority 782000ff:7e7c1eff:0 lower than selected pair priority 782000ff:7e7e1eff:0.
```
3. libnice does not detect the failure on the now gone pair yet, and so still sends media through P1;
4. eventually libnice starts receiving ICMP for the periodic checks to P1, as shown below:
```
libnice-DEBUG: 17:32:28.795: Agent 0x7f8a94009320: conncheck created 92 - 0x7f8a9400e8c8
libnice-DEBUG: 17:32:29.295: Agent 0x7f8a94009320 : Retransmitting keepalive conncheck
libnice-DEBUG: 17:32:30.296: Agent 0x7f8a94009320 : Retransmitting keepalive conncheck
libnice-DEBUG: 17:32:30.796: Agent 0x7f8a94009320 : Keepalive conncheck timed out!! peer probably lost connection
libnice-DEBUG: 17:32:30.796: Agent 0x7f8a94009320 : stream 1 component 1 STATE-CHANGE ready -> failed.
```
5. after some seconds, since it did not receive any response for the connchecks, despite P2 being available libnice marks the component as `FAILED`, thus shutting down the PeerConnection.
In this scenario, we'd expect libnice to switch to P2 too (as Chromium did before), since it is a valid nominated pair and the next in line priority-wise, but that doesn't seem to be happening. Unfortunately, this is not a really unusual scenario, especially with mobile endpoints, so it might be problematic to handle: we tried to see if it could somehow be detected on the client side first, e.g., to force an ICE restart there, but couldn't find anything that might help (the client briefly goes from `disconnected` to `connected` again, since Chromium does switch to P2 successfully).
What do you think would be the right way to address this in libnice? We'd of course be interested to help here: pinging @atoppi and @amirante, as they're the ones who first found the issue and dug into the libnice code, and so may have some ideas on where to fix things.
Thanks!

Issue #134: support for lite ice?
https://gitlab.freedesktop.org/libnice/libnice/-/issues/134
Reported by chuan huang; last updated 2021-02-05.

Is there any support for lite ICE? I initialize an agent like this:

```
agent = nice_agent_new_full (NULL, NICE_COMPATIBILITY_RFC5245, NICE_AGENT_OPTION_LITE_MODE);
```

However, when I capture UDP packets during the ICE process, I found it still issues a connectivity check request, instead of only responding to them as RFC 8445 describes.

Issue #133: Datachannel with NICE_RELAY_TYPE_TURN_TLS
https://gitlab.freedesktop.org/libnice/libnice/-/issues/133
Reported by zhang7788; last updated 2021-01-16.

I want to use TLS + certificate authentication to realize a TLS communication function. Which methods should I use in libnice? I don't see such functions in the existing API.

Issue #132: ABI breakage in `NiceCandidate`
https://gitlab.freedesktop.org/libnice/libnice/-/issues/132
Reported by Sebastian Dröge; last updated 2020-12-03.

833c1aa4cc442f2e25216dc411651a674d36c09a moved some "private" fields from the `NiceCandidate` struct into an actual private header.

As `candidate.h` is a public header this is ABI breakage and indeed broke the [Nim](https://discourse.gnome.org/t/size-of-nicecandidate-of-libnice/4913) bindings.

Issue #131: conncheck_list removed when pair is selected but need use in renomination
https://gitlab.freedesktop.org/libnice/libnice/-/issues/131
Reported by leaffei; last updated 2020-12-02.

1. The stream's conncheck_list is removed when a pair is selected (priv_prune_pending_checks());
2. But the conncheck_list is needed during renomination (conn_check_handle_renomination());
3. So during renomination, how can the stream's conncheck_list be obtained? [checklist_removed.txt](/uploads/23208006cc0a39ad61a3ec30762eebad/checklist_removed.txt)
@ocrete please help me, thanks

Issue #130: Crash when free resource
https://gitlab.freedesktop.org/libnice/libnice/-/issues/130
Reported by mightZhong; last updated 2023-01-11.

Hi, I encounter a crash when ICE frees resources. Below is the core info; would you have a look? Thanks.
```
(gdb) bt
#0 0x0000000000000001 in ?? ()
#1 0x00007f759c4a51c8 in g_source_callback_unref (cb_data=0x7f752004e3d0) at gmain.c:1566
#2 0x00007f759c4a5bb7 in g_source_destroy_internal (source=0x7f7520022f20, context=0x7f7520055810, have_lock=0) at gmain.c:1255
#3 0x00007f759c4a6385 in g_source_destroy (source=<optimized out>) at gmain.c:1304
#4 0x00007f759dd063aa in nice_component_finalize (obj=0x7f7520023800) at component.c:1227
#5 0x00007f759c98e4d1 in g_object_unref (_object=0x7f7520023800) at gobject.c:3330
#6 0x00007f759c4c5b68 in g_slist_foreach (list=<optimized out>, list@entry=0x7f752c0021c0, func=0x7f759c98e330 <g_object_unref>, user_data=user_data@entry=0x0) at gslist.c:856
#7 0x00007f759c4c5b8b in g_slist_free_full (list=0x7f752c0021c0, free_func=<optimized out>) at gslist.c:174
#8 0x00007f759dd13d75 in nice_stream_finalize (obj=0x7f752001fc90) at stream.c:174
#9 0x00007f759c98e4d1 in g_object_unref (_object=0x7f752001fc90) at gobject.c:3330
#10 0x00007f759dd09936 in nice_agent_dispose (object=0x7f7518007780) at agent.c:5411
#11 0x00007f759c98e443 in g_object_unref (_object=0x7f7518007780) at gobject.c:3293
#12 0x0000000000439cbb in janus_ice_webrtc_free (handle=0x7f7534009bc0) at ice.c:1439
#13 0x000000000044e919 in janus_ice_outgoing_traffic_handle (handle=0x7f7534009bc0, pkt=0x70bac0 <janus_ice_hangup_peerconnection>) at ice.c:3858
#14 0x000000000042ecde in janus_ice_outgoing_traffic_dispatch (source=0x7f7534007530, callback=0x0, user_data=0x0) at ice.c:381
#15 0x00007f759c4a8544 in g_main_dispatch (context=0x7f7534009cf0) at gmain.c:3182
#16 g_main_context_dispatch (context=context@entry=0x7f7534009cf0) at gmain.c:3847
#17 0x00007f759c4a8798 in g_main_context_iterate (context=0x7f7534009cf0, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3920
#18 0x00007f759c4a8a5a in g_main_loop_run (loop=0x7f7534001640) at gmain.c:4116
#19 0x0000000000436a6b in janus_ice_handle_thread (data=0x7f7534009bc0) at ice.c:1172
#20 0x00007f759c4cea05 in g_thread_proxy (data=0x7f75280329e0) at gthread.c:784
#21 0x00007f759b5c6ea5 in start_thread () from /lib64/libpthread.so.0
#22 0x00007f759a27d8dd in clone () from /lib64/libc.so.6
(gdb) f 4
#4 0x00007f759dd063aa in nice_component_finalize (obj=0x7f7520023800) at component.c:1227
1227 component.c: No such file or directory.
(gdb) p cmp
$1 = (NiceComponent *) 0x7f7520023800
(gdb) p cmp->stop_cancellable_source
$2 = (GSource *) 0x7f7520022f20
(gdb) p *cmp->stop_cancellable_source
$3 = {callback_data = 0x0, callback_funcs = 0x0, source_funcs = 0x7f759d5b3b60 <cancellable_source_funcs>, ref_count = 2, context = 0x7f7520055810, priority = 0, flags = 0, source_id = 1, poll_fds = 0x0, prev = 0x0, next = 0x0,
name = 0x7f7520020f60 "GCancellable", priv = 0x7f7528035000}
```

Issue #129: Windows 7 stuck to close agent
https://gitlab.freedesktop.org/libnice/libnice/-/issues/129
Reported by alnet82; last updated 2020-11-17.

I see a strange behavior I can't explain on Windows 7 (Windows 10 is working fine).
When I'm trying to close the agent, g_main_loop_thread is stuck and never joins.
Does anyone see any problem with the code below?
~~~c++
g_main_loop_thread = std::thread(g_main_loop_run, this->loop.get());
if (!(loop = std::unique_ptr<GMainLoop, void (*)(GMainLoop *)>(g_main_loop_new(context.get(), FALSE),
g_main_loop_unref))) {
printf("Failed to initialize GMainLoop");
return false;
}
agent = std::unique_ptr<NiceAgent, decltype(&g_object_unref)>(
nice_agent_new_full(g_main_loop_get_context(loop.get()), NICE_COMPATIBILITY_RFC5245, NiceAgentOption(
NICE_AGENT_OPTION_REGULAR_NOMINATION | NICE_AGENT_OPTION_ICE_TRICKLE)),
g_object_unref);
if (!agent) {
printf(" Failed to initialize nice agent");
return false;
}
.
.
.
nice_agent_close_async(agent.get(), AgentClosed, &agent_closed);
while (!agent_closed) {
g_main_context_iteration (NULL, TRUE);
}
agent.reset();
g_main_loop_quit(loop.get());
if (g_main_loop_thread.joinable()) {
g_main_loop_thread.join(); //<---- Join never returns
}
~~~

---

https://gitlab.freedesktop.org/libnice/libnice/-/issues/125
SegFault: Missing file descriptor when using turn server
2020-12-07T16:07:32Z SystemKeeper

The following is based on current libnice master (804a4c0) in connection with janus-gateway 0.10.16 and was previously reported there ([see here](https://github.com/meetecho/janus-gateway/issues/2416)).
When I enable the use of a turn server in janus-gateway, I can reliably reproduce the following segfault in libnice:
<details>
```
(process:1074): libnice-DEBUG: 19:49:26.111: Agent 0x55982e113980: inbound STUN packet for 1/1 (stream/component) from [172.31.2.63]:64095 (96 octets) :
(process:1074): libnice-stun-DEBUG: 19:49:26.111: STUN demux: OK!
(process:1074): libnice-stun-DEBUG: 19:49:26.111: Comparing username/ufrag of len 9 and 4, equal=0
(process:1074): libnice-stun-DEBUG: 19:49:26.111: username: 0x61512f463a4a4d7866
(process:1074): libnice-stun-DEBUG: 19:49:26.111: ufrag: 0x61512f46
(process:1074): libnice-stun-DEBUG: 19:49:26.111: Found valid username, returning password: 'X....u'
(process:1074): libnice-stun-DEBUG: 19:49:26.111: Message HMAC-SHA1 fingerprint:
(process:1074): libnice-stun-DEBUG: 19:49:26.111: key : 0x5858....
(process:1074): libnice-stun-DEBUG: 19:49:26.111: expected: 0xc536955676f160a9802092a3a7bd858c3bb4d752
(process:1074): libnice-stun-DEBUG: 19:49:26.111: received: 0xc536955676f160a9802092a3a7bd858c3bb4d752
(process:1074): libnice-stun-DEBUG: 19:49:26.111: STUN auth: OK!
(process:1074): libnice-stun-DEBUG: 19:49:26.111: STUN unknown: 0 mandatory attribute(s)!
(process:1074): libnice-stun-DEBUG: 19:49:26.111: STUN Reply (buffer size = 1300)...
(process:1074): libnice-stun-DEBUG: 19:49:26.111: Message HMAC-SHA1 message integrity:
(process:1074): libnice-stun-DEBUG: 19:49:26.111: key : 0x5858.....
(process:1074): libnice-stun-DEBUG: 19:49:26.111: sent : 0x52ee7f8d3819393c8a906addba52fc31423222d8
(process:1074): libnice-stun-DEBUG: 19:49:26.111: Message HMAC-SHA1 fingerprint: 0x8796f9c3
(process:1074): libnice-stun-DEBUG: 19:49:26.111: All done (response size: 80)
(process:1074): libnice-DEBUG: 19:49:26.111: Agent 0x55982e113980 : STUN-CC RESP to '172.31.2.63:64095', socket=4294967295, len=80, cand=0x55982e180c90 (c-id:1), use-cand =0, transactionId=2112a44259352b2b646a4266584a7646
Thread 42 "hloop 429041650" received signal SIGSEGV, Segmentation fault.
[Switching to LWP 1118]
0x00007f2da5e53561 in socket_send_message (to=0x7f2da3bca480, message=0x7f2da3bc97c0, reliable=0, sock=<optimized out>) at ../socket/udp-turn.c:785
785 socket_send_message (NiceSocket *sock, const NiceAddress *to,
(gdb) backtrace
#0 0x00007f2da5e53561 in socket_send_message (to=0x7f2da3bca480, message=0x7f2da3bc97c0, reliable=0, sock=<optimized out>) at ../socket/udp-turn.c:785
#1 0x00007f2da5e53df0 in socket_send_messages (sock=0x55982e3156e0, to=0x7f2da3bca480, messages=<optimized out>, n_messages=1) at ../socket/udp-turn.c:990
#2 0x00007f2da5e4d7be in nice_socket_send (sock=sock@entry=0x55982e3156e0, to=to@entry=0x7f2da3bca480, len=len@entry=80, buf=buf@entry=0x7f2da3bc9ea0 "\001\001") at ../socket/socket.c:226
#3 0x00007f2da5e37747 in agent_socket_send (sock=sock@entry=0x55982e3156e0, addr=addr@entry=0x7f2da3bca480, len=len@entry=80, buf=0x7f2da3bc9ea0 "\001\001") at ../agent/agent.c:7012
#4 0x00007f2da5e412ed in priv_reply_to_conn_check (use_candidate=<optimized out>, msg=0x7f2da3bc99f0, rbuf_len=80, sockptr=0x55982e3156e0, toaddr=0x7f2da3bca480, rcand=<optimized out>, lcand=0x55982e2920b0, component=0x55982e170640, stream=0x55982e1dd6c0, agent=0x55982e113980) at ../agent/conncheck.c:3244
#5 0x00007f2da5e412ed in conn_check_handle_inbound_stun (agent=agent@entry=0x55982e113980, stream=stream@entry=0x55982e1dd6c0, component=component@entry=0x55982e170640, nicesock=0x55982e3156e0, from=0x7f2da3bca480, buf=buf@entry=0x55982e3a47c0 "", len=96) at ../agent/conncheck.c:4785
#6 0x00007f2da5e35791 in agent_recv_message_unlocked (agent=agent@entry=0x55982e113980, stream=stream@entry=0x55982e1dd6c0, component=component@entry=0x55982e170640, nicesock=<optimized out>, message=message@entry=0x7f2da3bca540) at ../agent/agent.c:4430
#7 0x00007f2da5e35def in component_io_cb (gsocket=<optimized out>, condition=<optimized out>, user_data=0x55982e3626c0) at ../agent/agent.c:5753
#8 0x00007f2da5d162f4 in () at /usr/lib/libgio-2.0.so.0
#9 0x00007f2da5b88703 in g_main_context_dispatch () at /usr/lib/libglib-2.0.so.0
#10 0x00007f2da5b8896b in () at /usr/lib/libglib-2.0.so.0
#11 0x00007f2da5b88cb1 in g_main_loop_run () at /usr/lib/libglib-2.0.so.0
#12 0x000055982cea6786 in janus_ice_handle_thread (data=0x55982e202aa0) at ice.c:1165
#13 0x00007f2da5ba5df3 in () at /usr/lib/libglib-2.0.so.0
#14 0x00007f2da5ed471e in () at /lib/ld-musl-x86_64.so.1
#15 0x0000000000000000 in ()
(gdb)
```
</details>
The reported socket (socket=4294967295) looks suspicious, as it equals 0xFFFFFFFF. I also found 2 STUN-REQs with this socket:
<details>
```
(process:1074): libnice-stun-DEBUG: 19:49:26.042: STUN demux: OK!
(process:1074): libnice-stun-DEBUG: 19:49:26.042: STUN unknown: 0 mandatory attribute(s)!
(process:1074): libnice-stun-DEBUG: 19:49:26.042: STUN error message received (code: 401)
(process:1074): libnice-DEBUG: 19:49:26.042: Agent 0x55982e113340 : stun_turn_process/disc for 0x55982e464840 res 2.
(process:1074): libnice-DEBUG: 19:49:26.042: agent_recv_message_unlocked: Valid STUN packet received.
(process:1074): libnice-DEBUG: 19:49:26.046: Agent 0x55982e113340: inbound STUN packet for 1/1 (stream/component) from [<...>]:7869 (108 octets) :
(process:1074): libnice-stun-DEBUG: 19:49:26.046: STUN demux: OK!
(process:1074): libnice-DEBUG: 19:49:26.046: Agent 0x55982e113340 : Valid STUN response for which we don't have a request, ignoring
(process:1074): libnice-DEBUG: 19:49:26.046: agent_recv_message_unlocked: Valid STUN packet received.
(process:1074): libnice-DEBUG: 19:49:26.054: Agent 0x55982e113980 : pair 0x55982e1393a0 state IN_PROGRESS (priv_conn_check_initiate)
(process:1074): libnice-DEBUG: 19:49:26.054: Agent 0x55982e113980 : STUN-CC REQ [172.30.150.100]:52897 --> [172.31.2.63]:64095, socket=4294967295, pair=0x55982e1393a0 (c-id:1), tie=3521818653514815344, username='<...>' (9), password='<...>' (24), prio=1e2000ff, controlling.
(process:1074): libnice-DEBUG: 19:49:26.054: Agent 0x55982e113980 : conn_check_send: set cand_use=1 (aggressive nomination).
(process:1074): libnice-stun-DEBUG: 19:49:26.054: Message HMAC-SHA1 message integrity:
(process:1074): libnice-stun-DEBUG: 19:49:26.054: key : 0x31654e50482b774733753453654772456571724f45517574
(process:1074): libnice-stun-DEBUG: 19:49:26.054: sent : 0x3ce16101a03de707d023c83976db923d67b4fab2
(process:1074): libnice-stun-DEBUG: 19:49:26.054: Message HMAC-SHA1 fingerprint: 0x45854073
(process:1074): libnice-DEBUG: 19:49:26.054: Agent 0x55982e113980: conncheck created 92 - 0x55982e13a9a8
(process:1074): libnice-DEBUG: 19:49:26.054: Agent 0x55982e113980 : timer set to 500ms, waiting+in_progress=6
```
</details>
The devs over at janus-gateway suggested that this is more likely a problem in libnice, as the `0xffffffff` socket only happens when the socket does not have a file descriptor. [See here](https://github.com/libnice/libnice/blob/804a4c095ff689939e2f21fbede5f1ced18667a6/agent/conncheck.c#L3236)
That's as far as I was able to debug this issue. Please let me know what you need.

---

https://gitlab.freedesktop.org/libnice/libnice/-/issues/120
Setting NICE_AGENT_OPTION_ICE_TRICKLE option doesn't postpone changing component state to FAILED
2020-08-26T17:52:57Z Michał Śledź

Hi,
it seems that setting `NICE_AGENT_OPTION_ICE_TRICKLE` to TRUE doesn't postpone changing the component state to FAILED.
I didn't invoke the `nice_agent_peer_candidate_gathering_done()` function.
I tried setting `NICE_AGENT_OPTION_ICE_TRICKLE` in the following ways:
```c
g_object_set (G_OBJECT (agent), "ice-trickle", TRUE, NULL);
or
nice_agent_new_full(g_main_loop_get_context(state->gloop),
NICE_COMPATIBILITY_RFC5245,
NICE_AGENT_OPTION_ICE_TRICKLE);
```
Here are the logs. I am using `libnice` with Elixir in a CNode, so these lines are a little longer. I replaced my IP addresses with `xxxx`.
```
19:02:08.368 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.367: Agent 0x5625f3ed70a0: set_remote_candidates 1 1
19:02:08.368 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.368: Agent 0x5625f3ed70a0 : Adding UDP remote candidate with addr xxxx for s1/c1. U/P '(null)'/'(null)' prio: 642000ff
19:02:08.369 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.368: Agent 0x5625f3ed70a0 : creating a new pair
19:02:08.369 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Agent 0x5625f3ed70a0 : pair 0x5625f3eea030 state FROZEN (priv_add_new_check_pair)
19:02:08.369 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Agent 0x5625f3ed70a0 : new pair 0x5625f3eea030 : xxxx --> xxxx
19:02:08.369 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Unknown sockaddr family: 17
19:02:08.369 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Failed to convert address to string for interface ?lo?.
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Unknown sockaddr family: 17
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Failed to convert address to string for interface ?eth0?.
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Interface: lo
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: IP Address: 127.0.0.1
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Interface: eth0
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: IP Address: xxxx
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Interface: lo
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: IP Address: ::1
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Interface: eth0
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: IP Address: xxxx
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Interface: eth0
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: IP Address: xxxx
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Agent 0x5625f3ed70a0 : added a new pair 0x5625f3eea030 with foundation '4:7' and transport udp:udp to stream 1 component 1
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Agent 0x5625f3ed70a0 : stream 1 component 1 STATE-CHANGE gathering -> connecting.
19:02:08.370 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.369: Agent 0x5625f3ed70a0 : conn_check_remote_candidates_set 1 1
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.390: Agent 0x5625f3ed70a0 : Pair 0x5625f3eea030 with s/c-id 1/1 (4:7) unfrozen.
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.390: Agent 0x5625f3ed70a0 : pair 0x5625f3eea030 state WAITING (priv_conn_check_unfreeze_next)
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.390: Agent 0x5625f3ed70a0 : *** conncheck list DUMP (called from priv_conn_check_unfreeze_next)
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.390: Agent 0x5625f3ed70a0 : *** agent nomination mode regular, controlled
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.390: Agent 0x5625f3ed70a0 : *** sc=1/1 : pair 0x5625f3eea030 : f=4:7 t=host:srflx sock=udp udp:xxxx > udp:xxxx prio=642000ff:782001ff:0/6e2001ff state=W
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.390: Agent 0x5625f3ed70a0 : pair 0x5625f3eea030 state IN_PROGRESS (priv_conn_check_initiate)
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.390: Agent 0x5625f3ed70a0 : STUN-CC REQ xxxx --> xxxx, socket=10, pair=0x5625f3eea030 (c-id:1), tie=13681306306651731150, username='Qpr8:bguS' (9), password='b7QDMYYnyfFJArfQC+oUyx' (22), prio=6e2001ff, controlled.
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.390: Agent 0x5625f3ed70a0 : conn_check_send: set cand_use=0 (regular nomination).
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-stun-DEBUG: 19:02:08.390: Message HMAC-SHA1 message integrity:
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-stun-DEBUG: 19:02:08.391: key : 0x623751444d59596e7966464a41726651432b6f557978
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-stun-DEBUG: 19:02:08.391: sent : 0x011e1fccb2d7a56c25de0f0d3fbb71f25fa037bf
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-stun-DEBUG: 19:02:08.391: Message HMAC-SHA1 fingerprint: 0x80c239c4
19:02:08.391 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.391: Agent 0x5625f3ed70a0: conncheck created 88 - 0x7f3f540086a8
19:02:08.392 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.391: Agent 0x5625f3ed70a0 : timer set to 500ms, waiting+in_progress=1
19:02:08.392 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.391: Agent 0x5625f3ed70a0 : *** conncheck list DUMP (called from priv_conn_check_ordinary_check, initiated an ordinary connection check)
19:02:08.392 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.391: Agent 0x5625f3ed70a0 : *** agent nomination mode regular, controlled
19:02:08.392 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.391: Agent 0x5625f3ed70a0 : *** sc=1/1 : pair 0x5625f3eea030 : f=4:7 t=host:srflx sock=udp udp:xxxx > udp:xxxx prio=642000ff:782001ff:0/6e2001ff state=I
19:02:08.392 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.391: Agent 0x5625f3ed70a0 : *** sc=1/1 : pair 0x5625f3eea030 : stun#=0 timer=1/3 1/500ms buf=0x7f3f540086a8 (R)
19:02:08.392 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.391: Agent 0x5625f3ed70a0 : stream 1: timer tick #1: 0 frozen, 1 in-progress, 0 waiting, 0 succeeded, 0 discovered, 0 nominated, 0 waiting-for-nom, 0 valid
19:02:08.393 [info] cnode#PID<0.302.0>: (process:202248): libnice-stun-DEBUG: 19:02:08.392: Message HMAC-SHA1 fingerprint: 0x04bef0ac
19:02:08.393 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.392: Agent 0x5625f3ed70a0 : resending STUN to keep the local candidate xxxx alive in s1/c1.
19:02:08.413 [info] cnode#PID<0.302.0>: (process:202248): libnice-stun-DEBUG: 19:02:08.413: Message HMAC-SHA1 fingerprint: 0x2131d496
19:02:08.428 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.427: agent_recv_message_unlocked: Agent 0x5625f3ed70a0 : Packet received on local socket 0x5625f3ee5560 (fd 10) from [64.233.161.127]:19302 (32 octets).
19:02:08.428 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.427: compact_input_message: **WARNING: SLOW PATH**
19:02:08.428 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.427: Message 0x7f3f5c1b8bf0 (from: 0x7f3f5c1b8b30, length: 32)
19:02:08.428 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.427: Buffer 0x7f3f5c1b8c50 (length: 65535)
19:02:08.428 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.427: Agent 0x5625f3ed70a0: inbound STUN packet for 1/1 (stream/component) from [64.233.161.127]:19302 (32 octets) :
19:02:08.428 [info] cnode#PID<0.302.0>: (process:202248): libnice-stun-DEBUG: 19:02:08.427: STUN demux error: no FINGERPRINT attribute!
19:02:08.428 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.427: Agent 0x5625f3ed70a0 : Incorrectly multiplexed STUN message ignored.
19:02:08.428 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.427: agent_recv_message_unlocked: Packet passed fast STUN validation but failed slow validation.
19:02:08.428 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.427: Agent 0x5625f3ed70a0 : 1:1 DROPPING packet from unknown source 64.233.161.127:19302 sock-type: 0
19:02:08.428 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.427: agent_recv_message_unlocked: Agent 0x5625f3ed70a0: no message available on read attempt
19:02:08.428 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.427: component_io_cb: 0x5625f3ed70a0: no message available on read attempt
19:02:08.896 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:08.896: Agent 0x5625f3ed70a0 :STUN transaction retransmitted on pair 0x5625f3eea030 (timer=2/3 0/1000ms).
19:02:09.401 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:09.401: Agent 0x5625f3ed70a0 : stream 1: timer tick #51: 0 frozen, 1 in-progress, 0 waiting, 0 succeeded, 0 discovered, 0 nominated, 0 waiting-for-nom, 0 valid
19:02:09.907 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:09.906: Agent 0x5625f3ed70a0 :STUN transaction retransmitted on pair 0x5625f3eea030 (timer=3/3 0/500ms).
19:02:10.412 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:10.412: Agent 0x5625f3ed70a0 : Retransmissions failed, giving up on pair 0x5625f3eea030
19:02:10.413 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:10.412: Agent 0x5625f3ed70a0 : Failed pair is xxxx --> xxxx
19:02:10.413 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:10.413: Agent 0x5625f3ed70a0 : pair 0x5625f3eea030 state FAILED (candidate_check_pair_fail)
19:02:10.413 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:10.413: Agent 0x5625f3ed70a0 : conn.check list status: 0 nominated, 0 valid, c-id 1.
19:02:10.414 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:10.413: Agent 0x5625f3ed70a0 : *** conncheck list DUMP (called from priv_conn_check_tick_stream, retransmission failed)
19:02:10.414 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:10.414: Agent 0x5625f3ed70a0 : *** agent nomination mode regular, controlled
19:02:10.414 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:10.414: Agent 0x5625f3ed70a0 : *** sc=1/1 : pair 0x5625f3eea030 : f=4:7 t=host:srflx sock=udp udp:xxxx > udp:xxxx prio=642000ff:782001ff:0/6e2001ff state=F
19:02:10.414 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:10.414: Agent 0x5625f3ed70a0 : stream 1: timer tick #101: 0 frozen, 0 in-progress, 0 waiting, 0 succeeded, 0 discovered, 0 nominated, 0 waiting-for-nom, 0 valid
19:02:10.415 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:10.415: Agent 0x5625f3ed70a0 : waiting 5000 msecs before checking for failed components.
19:02:11.423 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:11.422: Agent 0x5625f3ed70a0 : stream 1: timer tick #151: 0 frozen, 0 in-progress, 0 waiting, 0 succeeded, 0 discovered, 0 nominated, 0 waiting-for-nom, 0 valid
19:02:12.434 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:12.433: Agent 0x5625f3ed70a0 : stream 1: timer tick #201: 0 frozen, 0 in-progress, 0 waiting, 0 succeeded, 0 discovered, 0 nominated, 0 waiting-for-nom, 0 valid
19:02:13.445 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:13.444: Agent 0x5625f3ed70a0 : stream 1: timer tick #251: 0 frozen, 0 in-progress, 0 waiting, 0 succeeded, 0 discovered, 0 nominated, 0 waiting-for-nom, 0 valid
19:02:14.455 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:14.455: Agent 0x5625f3ed70a0 : stream 1: timer tick #301: 0 frozen, 0 in-progress, 0 waiting, 0 succeeded, 0 discovered, 0 nominated, 0 waiting-for-nom, 0 valid
19:02:15.449 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:15.449: Agent 0x5625f3ed70a0 : checking for failed components now.
19:02:15.450 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:15.450: Agent 0x5625f3ed70a0 : stream 1 component 1 STATE-CHANGE connecting -> failed.
19:02:15.450 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:15.450: Agent 0x5625f3ed70a0 : priv_conn_check_tick_agent_locked: stopping conncheck timer
19:02:15.450 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:15.450: Agent 0x5625f3ed70a0 : *** conncheck list DUMP (called from priv_conn_check_tick_agent_locked, conncheck timer stopped)
19:02:15.450 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:15.450: Agent 0x5625f3ed70a0 : *** agent nomination mode regular, controlled
19:02:15.451 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:15.451: Agent 0x5625f3ed70a0 : *** sc=1/1 : pair 0x5625f3eea030 : f=4:7 t=host:srflx sock=udp udp:xxxx > xxxx prio=642000ff:782001ff:0/6e2001ff state=F
19:02:15.451 [info] cnode#PID<0.302.0>: (process:202248): libnice-DEBUG: 19:02:15.451: Agent 0x5625f3ed70a0 : changing conncheck state to COMPLETED.
```
Thanks in advance!

---

https://gitlab.freedesktop.org/libnice/libnice/-/issues/108
libnice proposes candidate IP addresses which are unreachable to peer (VPN with corporate server)
2020-05-07T22:18:24Z Alain K.

Recently I came upon an issue in Pidgin with SIPE, where communications were disrupted, or could not be established in the first place, because the software was announcing IP addresses which the peer (a corporate Skype for Business server) could not reach.
Indeed, communication with this corporate server is supposed to be done solely via VPN; however, libnice was picking my LAN address.
I was told in https://sourceforge.net/p/sipe/bugs/362/ that this was not a bug in SIPE, but in libnice. So I'm reporting it here.
Attached is a stacktrace of the call to libnice where this happens.
This happened with libnice 0.1.14-1, the version which is used by my distribution with the most recent SIPE.
The addresses involved were the following:
192.168.178.42: LAN address. Not reachable by the server. Even a STUN server would not help, as this would only get through the NAT, without accounting for the fact that our corporate Skype server would not route its stream over the public network.
10.202.77.9: VPN address. This is the address that should be used. Even though it is syntactically in a private network (10.x.y.z), it is actually reachable from our corporate Skype for Business server, without NAT.
The VPN software sets a default route through its own tunnel, removes the previous default route (to the Fritz box), and just adds a specific host route to the VPN server via the Fritz box:
```
$ /sbin/route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 0.0.0.0 0.0.0.0 U 0 0 0 tun0
10.202.77.9 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
185.106.27.46 192.168.178.1 255.255.255.255 UGH 0 0 0 br0
192.168.178.0 0.0.0.0 255.255.255.0 U 0 0 0 br0
```
So, if libnice only proposed candidate addresses on interfaces which have a default route, it might solve this issue? Or alternatively, allow the user to "blacklist" addresses using an environment variable or a configuration file.

---

https://gitlab.freedesktop.org/libnice/libnice/-/issues/107
CPU pegged on thread in Janus likely due to libnice
2020-11-18T17:44:00Z Greg Fodor

Copied from: https://github.com/meetecho/janus-gateway/issues/2015 (was asked to escalate here)
Hi all, I am an engineer working on the same project as @mqp was when this issue was opened: https://github.com/meetecho/janus-gateway/issues/1260
At the time, that issue was closed because the CPU-pegging behavior we had seen seemed to have stopped, but in fact it hadn't, and we never followed up. In the interim, we decided to wait and see how things would fare when we upgraded Janus and its various dependencies.
Unfortunately, after the upgrade, we still see the behavior of pegged threads. Some relevant info:
- We have upgraded to janus 0.7.6 - we cannot fully upgrade to the latest version due to plugin incompatibilities that need to be addressed. I realize it's frowned upon to open bugs for older versions, but at least would like to know if there is reason to believe this problem is fixed in a newer version so we can prioritize the plugin migration process.
- We have upgraded the various dependencies, notably, we have upgraded libnice to master HEAD as of a few days ago.
The behavior we see is occasionally an hloop thread gets pegged, reporting ~50% CPU utilization in ps and one of the CPUs becomes pegged at 100%. Attached is a perf trace [perf.report.txt](/uploads/cc4dd49f801867d8249b7bbaf0538acd/perf.report.txt), showing that the time is spent polling in g_poll. These events seem relatively rare, and are hard to reproduce. We managed to reproduce it once under load testing but in general there's no systematic way to do it - it seems to be a very low probability event, though it does seem to happen more frequently against production traffic vs artificial load testing (our load test uses headless Chrome browsers.)
On one of our production nodes, we managed to grab all the relevant metadata from the admin API re: handles. The only fishy thing was one session had a relatively large number of handles (25 or so) compared to others and all of them but one had false flags around offer negotiation. Other sessions had similar non-negotiated handles but only this one 'degenerate' session had so many handles, and only a single successfully negotiated handle. Hard to say if this is useful info or not. Here is a dump of the handles: [all_handles.json](/uploads/f2fea695fd3b81c45a0689420a54418b/all_handles.json) I did a bunch of ad hoc analysis to see if there were any telltale signs that a session was 'corrupted' based upon its metadata - the best I could come up with is that there is a session with only a single handle successfully negotiated with a lot of other non-negotiated ones - the other sessions all had at least two successfully negotiated handles.
Some things we tried/discovered:
- Forcibly detaching the handles and/or destroying the sessions from the admin API failed to resolve the CPU being pegged.
- The CPU pegging does not result in any other negative side effects other than capacity reduction - of course, once all the CPUs become pegged, the server becomes unavailable, but we are mitigating this problem for now by running servers with many CPUs and doing nightly restarts.
- The problem manifests if a thread is spawned for each handle, or if we enable the new fixed thread feature (we are running 128 threads in production.)
- Running lsof shows that each thread has the same number of open file handles, and the non-eventd ones are identical (files, etc.)
- Digging through older issues, with the caveat that I realize these may be long resolved (could not determine it), I found a few relevant things:
- This issue with libnice: https://gitlab.freedesktop.org/libnice/libnice/issues/14 - there did not seem to be a way to determine if/when this degenerate case occurs, though our stack traces from perf seem consistent. We are not running this under call grind so there's not a clear way to see non-sampled call counts.
- This issue from a year ago seems consistent as well, though again I can't be 100% certain: https://gitlab.freedesktop.org/libnice/libnice/issues/72
Thanks for any assist!

---

https://gitlab.freedesktop.org/libnice/libnice/-/issues/103
Why discover relay and server reflexive candidates from *all* local interfaces ?
2020-04-14T08:45:08Z Fabrice Bellet <fabrice@bellet.info>

Even if it's clearly stated in the RFC (*), I cannot imagine a real-world example where it makes sense to discover our server-reflexive address by sending a STUN request from ***all*** our local addresses, and to obtain a relay address from a TURN server by sending discovery requests from ***all*** our local addresses too.
In a setup with a single STUN server and a single TURN server, we can expect to discover a unique server-reflexive address, whatever local interface we send the STUN request from. The same applies to the TURN server. These STUN requests are all supposed to leave the host via the default route, so the local network interface used to send the packet shouldn't matter much. I can imagine a setup with several default routes, but the metric should prefer one, and all STUN requests should use that one?
What I observe from my tests is always the same server-reflexive address added N times as a different local candidate: the same server-reflexive address, with a different base for each local IP address available on the host. The same candidate duplication applies to relay candidates. This wastes a lot of resources, and pairing all these similar local candidates with remote ones amplifies the problem further.
Ignoring some local (virtual) interfaces is a possibility, but I think it doesn't address the root of the problem.
Would it make sense to stop the discovery process after the first server-reflexive candidate is obtained, and after the first relayed candidate? Would we miss pair combinations that would be the only possibility to establish a connection?
(*) Section B.2 of RFC 8445, "Candidates with Multiple Bases", gives an example of a network topology where it makes sense to keep candidates with the same transport address and different bases. But from this example, I have difficulty seeing how the STUN server on the B:net10 network can be reached from the initiator, because outgoing packets for this network should normally go to c:net10 instead: the net10 network directly reachable from the initiator.

---

https://gitlab.freedesktop.org/libnice/libnice/-/issues/102
TURN TCP: No relay candidates gathered
2021-02-09T21:41:53Z Sebastian Schmid

I switched from TURN-UDP to TURN-TCP and realised that libnice just stops gathering relay candidates at that point.
What I got so far:
1.) TURN UDP is working (libnice gathers relay candidates)
2.) libnice stops gathering relay candidates after using `NICE_RELAY_TYPE_TURN_TCP` in `nice_agent_set_relay_info ()`
3.) The TURN server works with Google's Trickle-ICE example:
https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
URL: `turn:myturnserver.com:80?transport=tcp`
Gathers: `rtp relay 2376455387 udp x.x.x.x`
4.) libnice accepts the TURN config:
`libnice-DEBUG: 12:17:58.438: Agent 0x7fdb24015140: added relay server [x.x.x.x]:80 of type 1 to s/c 1/1 with user/pass : xxx -- ****`
5.) Wireshark shows that my machine is not starting any communication to the TURN server at all after switching from turn-udp to turn-tcp (there are also no STUN packets at all).
I am using the newest libnice master 0.1.16.

Milestone: 0.1.17. Assignee: Fabrice Bellet <fabrice@bellet.info>.

----------------------------------------------------------------------
Issue #101: drop old nice compatibility modes?
https://gitlab.freedesktop.org/libnice/libnice/-/issues/101
Reported 2020-10-06 by Fabrice Bellet <fabrice@bellet.info>
What about the removal of the old nice compatibility modes?
In ``agent.h``, we have:
```
typedef enum
{
NICE_COMPATIBILITY_RFC5245 = 0,
NICE_COMPATIBILITY_DRAFT19 = NICE_COMPATIBILITY_RFC5245,
NICE_COMPATIBILITY_GOOGLE,
NICE_COMPATIBILITY_MSN,
NICE_COMPATIBILITY_WLM2009,
NICE_COMPATIBILITY_OC2007,
NICE_COMPATIBILITY_OC2007R2,
NICE_COMPATIBILITY_LAST = NICE_COMPATIBILITY_OC2007R2,
} NiceCompatibility;
```
* For example, is it still useful to continue maintaining everything except NICE_COMPATIBILITY_RFC5245 and NICE_COMPATIBILITY_OC2007R2? ``conncheck.c``, for instance, contains several code paths and pieces of logic for these other modes that are difficult, if not impossible, to test outside the test suite. As we make more changes without the possibility of testing the legacy modes in the real world, the risk is high that we silently break them.
* What consequences would a cleanup of the nice compatibility modes have on the stun compatibility modes?