ssh – How can a 1gig link be slower to WAN than a 10gig link, on the same machine?

I am using the following command to test network speed to my.remote.server:

dd if=/dev/random | pv -L 39M | ssh my.remote.server "dd of=/dev/null"

When I set en0 to 10Gb, performance drops considerably (15MB/s upload).
But when I set en0 to 1Gb, I get much better results (56MB/s upload).

This happens despite the 10Gb link working perfectly in every other scenario!

  • 10Gb performs reliably, giving me around 900MB/s via Samba, and over 10Gbps to a local server using iperf
  • netstat -ibn | grep -i en0 shows no errors or collisions on either 1Gb or 10Gb
  • iperf to my.remote.server shows the expected performance (56MB/s upload); it’s only SSH that’s slow! (rough command below)
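
For completeness, the iperf check above was roughly the following (a sketch; it assumes an iperf3 server is already listening on my.remote.server, and the exact flags aren’t critical):

# raw TCP throughput to the WAN host, upload direction (client -> server)
iperf3 -c my.remote.server -t 30
# same test in the reverse direction (server -> client)
iperf3 -c my.remote.server -t 30 -R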

I’ve analyzed tcpdump captures, and here’s what I found when using 10Gb (capture command below):

  • SACK packets appear → packets are arriving out of order at the server (this doesn’t happen on 1Gb)
  • packet size is smaller (1440 bytes with 10Gb vs 8640 with 1Gb)
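
Roughly how I captured this, in case it matters (a sketch from my setup; interface name and host filter would obviously differ elsewhere):

# capture the SSH upload on the client while the dd | pv | ssh test runs
sudo tcpdump -i en0 -n -s 0 -w ssh-10g.pcap 'tcp port 22 and host my.remote.server'
# quick look at segment sizes and SACK blocks without opening Wireshark
tcpdump -n -v -r ssh-10g.pcap | head -50

(As I understand it, segment sizes larger than the MTU in a sender-side capture usually reflect TSO coalescing rather than actual jumbo frames on the wire.)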

The client is macOS. I compared settings using sysctl -a and ifconfig en0 for both 10Gb and 1Gb, and confirmed there were no unexpected differences when switching between them (the comparison was basically the diff sketched below).
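
A sketch of that comparison (file names are arbitrary):

# with en0 negotiated at 10Gb
sysctl -a > /tmp/sysctl-10g.txt; ifconfig en0 > /tmp/ifconfig-10g.txt
# after switching en0 to 1Gb, repeat
sysctl -a > /tmp/sysctl-1g.txt; ifconfig en0 > /tmp/ifconfig-1g.txt
# expect some noise from counters that change over time
diff /tmp/sysctl-10g.txt /tmp/sysctl-1g.txt
diff /tmp/ifconfig-10g.txt /tmp/ifconfig-1g.txt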

I’ve tested several Mac Studios (including a brand-new M3 Ultra) using both built-in Ethernet and an OWC external Thunderbolt 10Gb adapter. All of them appear to use the same driver:
AppleEthernetAquantiaAqtionInterface
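
(How I checked the driver, in case it’s useful; exact tooling varies by macOS version, so treat this as a sketch:)

# list loaded kernel extensions and look for the Aquantia driver
kmutil showloaded | grep -i aquantia
# or search the I/O Registry for the driver class attached to the interface
ioreg -l | grep -i AppleEthernetAquantiaAqtion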

Could this kind of behavior be caused by something other than a buggy network driver? Are there any ideas or workarounds I could try?

(For now, my plan is to use two different NICs: one 10GbE for LAN and one 1GbE for WAN.)

Some additional weirdness from testing:

  • When connecting via the VPN IP of the remote server, 10Gb performance improves slightly (25MB/s vs 15MB/s), but it’s still worse than 1Gb
  • When I use Cyberduck (SFTP over port 22) to connect to the same my.remote.server, I consistently get the expected speeds (50+MB/s)

Things I’ve tried:

  • HPN-SSH enabled on both client and server (yes, I get the “warning: enabled none cipher” message showing it’s active)
  • throttling SSH using pv -L (sketch after this list)
  • ensuring dd uses symmetric bs=… options on both ends
  • unfortunately, I don’t have access to non-Apple machines with 10Gb NICs for testing
  • tweaking every relevant sysctl setting I could find or that was recommended in macOS performance tuning guides
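
For clarity, the throttled / symmetric-bs variant of the test mentioned above looked roughly like this (the block size is just what I happened to use):

# throttle the pipe to ~39MB/s and use the same block size on both ends
dd if=/dev/random bs=1m | pv -L 39M | ssh my.remote.server "dd of=/dev/null bs=1m"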
