Recommended TCP Profile Settings for Full Tunnel VPN/ICAProxy from NetScaler Gateway 11.1 Onwards


Article ID: CTX232321


Description

This article describes the recommended TCP profile settings for full tunnel VPN/ICAProxy from NetScaler Gateway 11.1 onwards.

Background

From NetScaler 11.1 onwards, a number of improvements and optimizations have been made to the NetScaler TCP stack to cater to new requirements, especially in the area of Selective Acknowledgement (SACK) recovery. Based on this, Citrix has re-evaluated the TCP profiles generally used for ICAProxy and full VPN tunnels. Today, for ICAProxy cases, the TCP profile “nstcp_default_XA_XD_profile” is used on the WAN side of the VPN vserver.
 

Instructions

  1. Using the nstcp_default_profile should suffice for both Full Tunnel VPN and ICAProxy cases.

  2. Set the following parameters on nstcp_default_profile:
    set ns tcpprofile nstcp_default_profile -nagle DISABLED -flavor BIC -SACK ENABLED -WS ENABLED -WSVal 8 -minRTO 600 -bufferSize 600000 -sendBuffsize 600000

  3. Do not bind any other TCP profile to the VPN vserver. This ensures that nstcp_default_profile takes effect on the VPN vserver (verification commands are shown at the end of this section).

  4. Apart from the above NetScaler settings, Citrix also optionally recommends disabling ‘TCP Slow Start after idle’ on the back-end server (a sketch for persisting this setting follows this list).
    For example, on a Linux machine this is done using sysctl:
    - sysctl -w net.ipv4.tcp_slow_start_after_idle=0
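
The sysctl command above applies only until the next reboot. Below is a minimal sketch, for a typical Linux system, of checking the current value and making the change persistent; the file name under /etc/sysctl.d/ is only an example.

    # Check the current value (1 = slow start after idle enabled, 0 = disabled)
    sysctl net.ipv4.tcp_slow_start_after_idle

    # Apply the change immediately
    sysctl -w net.ipv4.tcp_slow_start_after_idle=0

    # Persist the change across reboots (file name is an example)
    echo "net.ipv4.tcp_slow_start_after_idle = 0" > /etc/sysctl.d/90-tcp-slow-start.conf
    sysctl -p /etc/sysctl.d/90-tcp-slow-start.conf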

If the above recommendations do not help in achieving the desired performance, additional tweaks can be proposed after analyzing network traces from your setup.
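
Before capturing traces, it is also worth confirming from the NetScaler CLI that the profile parameters have taken effect and that no other TCP profile is bound to the VPN vserver; the vserver name below is a placeholder for your own.

    - show ns tcpProfile nstcp_default_profile   (the output should reflect the values set in step 2)
    - show vpn vserver <vserver-name>   (confirm that no separate TCP profile is bound, as per step 3)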


Additional Information

Please note that disabling the TCP Slow Start after idle option causes traffic to be sent to a server at the full rate, according to the configured load balancing method, as soon as the server state is UP, instead of increasing gradually.

Also note that increasing the buffer size on the NetScaler requires the appliance to hold more data in its buffers if the other end is unable to process the data. This can cause NetScaler TCP buffer usage to rise, and in rare cases it can impact packet CPU if the NetScaler has too much data in the buffer.