[Intel-wired-lan] [REGRESSION] Intel ICE Ethernet driver in linux >= 6.6.9 triggers extra memory consumption and causes continuous kswapd* usage and continuous swapping

Igor Raits igor at gooddata.com
Sun Jan 14 12:05:52 UTC 2024


Hi Jesse,

On Wed, Jan 10, 2024 at 7:08 PM Jesse Brandeburg
<jesse.brandeburg at intel.com> wrote:
>
> Also, I'm curious if your problem goes away if you change / reduce the
> number of queues per port. use ethtool -L eth0 combined 4 ?

I've tried something similar on our servers, and it does reclaim a lot
of memory.
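Side note in case someone wants to reproduce this: the current and
pre-set maximum channel counts of a port can be checked first with the
lowercase variant of the same ethtool option, e.g.:

# ethtool -l em1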

# nodestats | awk '/MemFree:/ { for (i = 2; i <= NF; i++) total += $i } END { print "Total MemFree: " total " MiB" }'
Total MemFree: 53934 MiB
# for i in p3p2 em2; do ethtool -L $i combined 2; done  # <--- these are the ports we don't use at all (they are DOWN, not part of any LAG, etc.)
# nodestats | awk '/MemFree:/ { for (i = 2; i <= NF; i++) total += $i } END { print "Total MemFree: " total " MiB" }'
Total MemFree: 55279 MiB
# echo "Hey, here is my 1.4GiB"
# for i in p3p1 em1; do ethtool -L $i combined 2; done  # <--- these are the ports that we do use
# nodestats | awk '/MemFree:/ { for (i = 2; i <= NF; i++) total += $i } END { print "Total MemFree: " total " MiB" }'
Total MemFree: 58371 MiB
# echo "Hey, here is another 3GiB"

P.S. I ran these tests on a slightly different server that has more
memory (hence more HPs and a different amount of memory per NUMA node),
but we have the same problem on both setups.

Maybe an important point: we pin IRQ affinity to one specific NUMA
node, since that improved performance in our tests. So for us the
better setup would probably be combined 6 for the ports we do use and
combined 2 for the ports we do not use.
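
The pinning itself is nothing special, just the usual smp_affinity_list
writes, roughly along these lines (a sketch that assumes the ice vectors
show up in /proc/interrupts as ice-<ifname>-TxRx-<n> and that CPUs 0-5
are on the NUMA node the NIC is attached to; irqbalance has to be
disabled or told to skip these IRQs, otherwise it moves them back):

# grep ice-em1-TxRx /proc/interrupts | awk '{ sub(":", "", $1); print $1 }' | \
      while read irq; do echo 0-5 > /proc/irq/$irq/smp_affinity_list; done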

