commit | 8f1975e31d8ed0c021a1993a4d9123dd39c740ea | |
---|---|---|
author | Eric Dumazet <edumazet@google.com> | Mon Sep 25 09:14:14 2017 -0700 |
committer | David S. Miller <davem@davemloft.net> | Thu Sep 28 09:40:59 2017 -0700 |
tree | c6a89ea132e322e203ae6cd3817819c71062f5fd | |
parent | a2e4a21906e1ff7d2db9ce2e446e76abc905b79f | |
inetpeer: speed up inetpeer_invalidate_tree()

As measured in my prior patch ("sch_netem: faster rb tree removal"), rbtree_postorder_for_each_entry_safe() is nice looking but much slower than using rb_next() directly, except when the tree is small enough to fit in CPU caches (then the cost is the same).

From: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
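The pattern the message describes is replacing the postorder macro with a direct rb_first()/rb_next()/rb_erase() loop when tearing the tree down. Below is a minimal sketch of that pattern against the kernel rbtree API; the struct peer type and the put_peer() helper are hypothetical stand-ins (the real code in net/ipv4/inetpeer.c uses struct inet_peer and inet_putpeer()), not the commit's exact diff.

```c
#include <linux/rbtree.h>

/* Illustrative node type; the real code embeds the rb_node in
 * struct inet_peer. */
struct peer {
	struct rb_node rb_node;
	/* ... payload ... */
};

/* Hypothetical release helper standing in for inet_putpeer(). */
static void put_peer(struct peer *p);

/* Destroy the whole tree with rb_next()/rb_erase() rather than
 * rbtree_postorder_for_each_entry_safe(): per the commit message,
 * the direct in-order walk is much faster once the tree no longer
 * fits in CPU caches. */
static void invalidate_tree(struct rb_root *root)
{
	struct rb_node *p = rb_first(root);

	while (p) {
		struct peer *peer = rb_entry(p, struct peer, rb_node);

		p = rb_next(p);		/* advance before erasing */
		rb_erase(&peer->rb_node, root);
		put_peer(peer);
	}
}
```

Note the ordering: rb_next() is called while the current node is still linked into the tree, and only then is the node erased, so the saved successor pointer stays valid even though rb_erase() may rebalance the tree.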