Socket Level Options
What are Socket-Level Options?
Socket-level options are configurable parameters applied to a socket that control the behavior of a communication endpoint in a TCP or UDP connection.
Why are Socket-Level Options useful?
They allow fine-tuning of socket behavior, such as enabling keepalive messages, setting timeouts, or allowing address reuse. This enhances flexibility, performance, and reliability of network applications.
How do Socket-Level Options work?
Applications use system calls like setsockopt() and getsockopt() to set or retrieve these options at runtime. These affect the socket’s behavior at the OS level.
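For example, here is a minimal sketch (assuming a Linux/POSIX environment; the option chosen is only illustrative) of setting an option with setsockopt() and reading it back with getsockopt():

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP socket */
    int on = 1;

    /* Enable periodic keepalive probes on this socket */
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        perror("setsockopt(SO_KEEPALIVE)");

    /* Read the option back to confirm the kernel accepted it */
    int value = 0;
    socklen_t len = sizeof(value);
    if (getsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &value, &len) == 0)
        printf("SO_KEEPALIVE = %d\n", value);

    close(fd);
    return 0;
}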
Where are Socket-Level Options used?
They are used in network applications across operating systems — such as web servers, database clients, or messaging services — to control things like buffering, timeout behavior, and keepalive.
Which OSI layer do Socket-Level Options belong to?
Socket options primarily affect Transport Layer (Layer 4) behavior but may influence Session Layer (Layer 5) aspects depending on the application logic.
Are Socket-Level Options Windows specific?
No, socket-level options are supported across all major platforms including Windows, but with OS-specific APIs (e.g., setsockopt in Winsock).
Are Socket-Level Options Linux specific?
No, while Linux provides extensive control through socket options, they are part of the POSIX standard and available on most Unix-like systems.
Which Transport Protocols use Socket-Level Options?
Both TCP and UDP can use socket-level options to configure behavior such as timeouts, buffer sizes, and more.
Are Socket-Level Options used in the client-server model?
Yes, socket-level options are commonly configured in client-server applications to optimize connection reliability, manage timeouts, or handle concurrency efficiently.
In this section, you are going to learn
Terminology
Version Info
TCP SO_REUSEADDR C Code (Client-Server Model with SO_REUSEADDR Option)
- This C program demonstrates the use of the SO_REUSEADDR socket option:
SO_REUSEADDR allows binding a socket to an address/port even if a previous connection on that port is still in the TIME_WAIT state.
Without it, restarting a TCP server quickly may fail with bind: Address already in use.
With it, the server can immediately reuse the same port.
The TCP server listens on TCP port 8080 and sets SO_REUSEADDR before binding.
The TCP client connects to the server, sends messages, and the server echoes them back.
This setup is useful to demonstrate how to restart a server quickly without waiting for the TIME_WAIT timeout.
Step-1: Download C source code for TCP server and client (with SO_REUSEADDR).
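The downloadable files are the reference implementation; the following is only a minimal sketch of what the server side (so_reuseaddr_server.c) might look like, assuming port 8080 and an echo loop as in the run below, with most error handling omitted:

/* so_reuseaddr_server.c -- minimal echo server with SO_REUSEADDR (sketch) */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    /* Allow bind() to succeed even if the port still has sockets in TIME_WAIT */
    int on = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(srv, 5);
    printf("TCP server listening on port 8080 (SO_REUSEADDR enabled)...\n");

    int cli = accept(srv, NULL, NULL);
    char buf[1024];
    ssize_t n;
    while ((n = recv(cli, buf, sizeof(buf), 0)) > 0)
        send(cli, buf, n, 0);               /* echo the data back */

    close(cli);
    close(srv);
    return 0;
}

The client side is an ordinary connect()/send()/recv() sequence; the call that matters for this exercise is the setsockopt() made on the server before bind().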
Step-2: Compile the programs.
test:~$ gcc so_reuseaddr_server.c -o so_reuseaddr_server
test:~$ gcc so_reuseaddr_client.c -o so_reuseaddr_client
Step-3: Run TCP server and client.
# Start TCP server
test:~$ ./so_reuseaddr_server
TCP server listening on port 8080 (SO_REUSEADDR enabled)...

# Run TCP client
test:~$ ./so_reuseaddr_client
Sent test message to server, received echo back
Step-4: Observe Effect of SO_REUSEADDR
Where can we observe SO_REUSEADDR?
- Stop the server and immediately restart it:
Without SO_REUSEADDR: Server may fail to bind → bind: Address already in use.
With SO_REUSEADDR: Server binds successfully on restart.
Use socket state commands to confirm:
# Show TIME_WAIT sockets (replace 8080 with your port number)
test:~$ ss -tan state TIME-WAIT | grep 8080

# Show listening server socket
test:~$ ss -tulnp | grep 8080
- In Wireshark, you will still see normal TCP packets (SYN, ACK, FIN, PSH).
Important: SO_REUSEADDR does not appear in Wireshark since it is a kernel-level option, not part of TCP headers.
- Wireshark will only show the regular connection traffic:
3-way handshake (SYN, SYN-ACK, ACK)
Data exchange (PSH, ACK with message payload)
Connection teardown (FIN, ACK sequence)
The difference is visible in server behavior and socket states, not in the packet trace.
Step-5: To observe the effect of SO_REUSEADDR, download the screenshots here.
Download the picture
Download the picture
Download the picture
Download the picture
Step-6: Wireshark Capture & Explanation
Expected Wireshark Observations:
Normal TCP 3-way handshake, data transfer, and connection teardown.
No explicit indication of SO_REUSEADDR in packets.
The only observable effect is that the server can restart quickly and re-bind to port 8080 while old sockets are still in TIME_WAIT.
Use ss or netstat alongside Wireshark to confirm multiple sockets on the same port.
Demonstrates how kernel socket options affect binding/reuse, not the TCP header format.
TCP SO_REUSEPORT C Code (Client-Server Model with SO_REUSEPORT Option)
- This C program demonstrates the use of the SO_REUSEPORT socket option:
SO_REUSEPORT allows multiple sockets to bind to the same IP/port combination.
It is mainly used for load balancing: multiple server processes can listen on the same port, and the kernel distributes incoming connections among them.
Unlike SO_REUSEADDR, which allows re-binding after TIME_WAIT, SO_REUSEPORT enables true parallel listening sockets.
The TCP server uses SO_REUSEPORT before binding.
Multiple server instances can be started on the same port, each receiving different client connections.
The TCP client connects, sends a message, and receives an echo back.
This setup is useful to demonstrate multi-process socket sharing.
Step-1: Download C source code for TCP server and client (with SO_REUSEPORT).
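Again, the downloadable sources are the reference; a rough sketch of the server side (so_reuseport_server.c) could look like the following, assuming port 9090 and one echo reply per connection. Note that SO_REUSEPORT requires Linux 3.9 or later:

/* so_reuseport_server.c -- sketch: several instances may bind port 9090 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    /* Let several processes bind the same IP:port; the kernel
       distributes incoming connections between them. */
    int on = 1;
    if (setsockopt(srv, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)) < 0) {
        perror("setsockopt(SO_REUSEPORT)");   /* needs Linux >= 3.9 */
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9090);
    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(srv, 50);
    printf("TCP server (PID %d) listening on port 9090 (SO_REUSEPORT enabled)...\n",
           (int)getpid());

    for (;;) {
        int cli = accept(srv, NULL, NULL);
        char buf[1024];
        ssize_t n = recv(cli, buf, sizeof(buf), 0);
        if (n > 0)
            send(cli, buf, n, 0);   /* echo the message back */
        close(cli);
    }
}

Every instance makes the same setsockopt() call before bind(); the kernel then load-balances new connections across all listening sockets on the port.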
Step-2: Compile the programs.
test:~$ gcc so_reuseport_server.c -o so_reuseport_server
test:~$ gcc so_reuseport_client.c -o so_reuseport_client
Step-3: Run multiple TCP servers and then clients.
# Start two server instances
test:~$ ./so_reuseport_server
TCP server (PID 1234) listening on port 9090 (SO_REUSEPORT enabled)...
test:~$ ./so_reuseport_server
TCP server (PID 1235) listening on port 9090 (SO_REUSEPORT enabled)...

# Run client multiple times
test:~$ ./so_reuseport_client
Sent test message to server, received echo back
test:~$ ./so_reuseport_client
Sent test message to server, received echo back

# Observe: different server PIDs receive the connections
Step-4: Observe Effect of SO_REUSEPORT
Where can we observe SO_REUSEPORT?
- Start multiple servers bound to the same port:
Without SO_REUSEPORT: The second bind fails with bind: Address already in use.
With SO_REUSEPORT: Both servers bind successfully, and the kernel load balances client connections.
Use socket state commands to confirm:
# Show listening server sockets
test:~$ ss -tulnp | grep 9090
tcp   LISTEN   0   50   0.0.0.0:9090   0.0.0.0:*   users:("so_reuseport_server",pid=1234)
tcp   LISTEN   0   50   0.0.0.0:9090   0.0.0.0:*   users:("so_reuseport_server",pid=1235)
- In Wireshark, you will still see normal TCP packets (SYN, ACK, FIN, PSH).
Important: SO_REUSEPORT does not appear in Wireshark since it is a kernel-level option, not part of TCP headers.
- Wireshark will only show the regular connection traffic:
3-way handshake (SYN, SYN-ACK, ACK)
Data exchange (PSH, ACK with message payload)
Connection teardown (FIN, ACK sequence)
The difference is visible in server behavior and socket states, not in the packet trace.
Step-5: To observe the effect of SO_REUSEPORT, download the screenshots here.
Step-6: Wireshark Capture & Explanation
Expected Wireshark Observations:
Normal TCP 3-way handshake, data transfer, and connection teardown.
No explicit indication of SO_REUSEPORT in packets.
The only observable effect is that different server processes on the same port handle different connections.
Use ss or netstat alongside Wireshark to confirm multiple listening sockets.
Demonstrates how kernel socket options affect binding and load balancing, not the TCP header format.
TCP SO_SNDBUF/SO_RCVBUF C Code (Client-Server Model with SO_SNDBUF and SO_RCVBUF Options)
- This C program demonstrates the use of the SO_SNDBUF and SO_RCVBUF socket options:
SO_SNDBUF sets the size of the send buffer for outgoing data.
SO_RCVBUF sets the size of the receive buffer for incoming data.
These buffers control how much data can be queued in the kernel before being sent/received by the application.
Larger buffers can improve throughput for high-latency/high-bandwidth networks.
Smaller buffers reduce memory usage but may cause throttling.
The TCP server sets the receive buffer size using SO_RCVBUF.
The TCP client sets the send buffer size using SO_SNDBUF.
Both client and server exchange a simple test message to demonstrate functionality.
Step-1: Download C source code for TCP server and client (with SO_SNDBUF / SO_RCVBUF).
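As a quick illustration of the relevant calls (not the exact downloadable source), the snippet below requests 64 KB buffers and then reads back what the kernel actually granted; on Linux the returned value is typically double the requested one (see socket(7)):

/* Sketch: request 64 KB send/receive buffers and print the granted sizes */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int size = 64 * 1024;

    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));  /* send buffer */
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));  /* receive buffer */

    int snd = 0, rcv = 0;
    socklen_t len = sizeof(snd);
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &snd, &len);
    len = sizeof(rcv);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcv, &len);
    printf("SO_SNDBUF = %d bytes, SO_RCVBUF = %d bytes\n", snd, rcv);

    close(fd);
    return 0;
}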
Step-2: Compile the programs.
test:~$ gcc so_send_recv_buf_server.c -o so_rcvbuf_server
test:~$ gcc so_send_recv_buf_client.c -o so_sndbuf_client
Step-3: Run TCP server and then client.
# Start server (with SO_RCVBUF)
test:~$ ./so_rcvbuf_server
TCP server listening on port 9090 (SO_RCVBUF set to 64 KB)...

# Run client (with SO_SNDBUF)
test:~$ ./so_sndbuf_client
TCP client connected (SO_SNDBUF set to 64 KB)...
Sent test message to server, received echo back
Step-4: Observe Effect of SO_SNDBUF / SO_RCVBUF
Where can we observe SO_SNDBUF and SO_RCVBUF?
Check buffer sizes via getsockopt inside the code (program prints configured values).
Use socket state commands:
# Show socket buffer sizes in detail
test:~$ ss -tulnmi | grep 9090
tcp   LISTEN   0   128   0.0.0.0:9090   0.0.0.0:*   skmem:(r...,rb...,t...,tb...,...)
# rb = receive buffer size (SO_RCVBUF), tb = send buffer size (SO_SNDBUF)
- In Wireshark, you will still see normal TCP packets (SYN, ACK, FIN, PSH).
Important: Buffer sizes do not appear in Wireshark (they are kernel-level socket options).
- Indirect effects may be seen:
Larger send buffer → client may push more data before waiting for ACKs.
Larger receive buffer → server can accept more incoming data without applying backpressure.
Best verified via ss -tulnmi or getsockopt output, not packet headers.
Step-5: Wireshark Capture & Explanation
Expected Wireshark Observations:
Normal TCP 3-way handshake, data transfer, and connection teardown.
No explicit indication of SO_SNDBUF or SO_RCVBUF in packets.
Indirect effect: larger buffers may allow bigger bursts of packets before ACKs throttle transmission.
Use ss -tulnmi or program logs to confirm buffer sizes.
Demonstrates how buffer tuning affects performance, not TCP header format.
TCP SO_SNDLOWAT/SO_RCVLOWAT C Code (Client-Server Model with SO_RCVLOWAT and Emulated SO_SNDLOWAT)
- This C program demonstrates:
SO_RCVLOWAT: Minimum number of bytes in receive buffer before recv() returns.
SO_SNDLOWAT: Minimum number of bytes required before data is sent (emulated in application code, since Linux does not allow changing SO_SNDLOWAT).
Non-blocking TCP sockets using select() to ensure the handshake completes before sending.
The TCP server sets SO_RCVLOWAT = 10 bytes.
The TCP client skips sending messages smaller than 10 bytes (emulating SO_SNDLOWAT) and sends larger messages once the socket is writable.
Both client and server exchange a test message “HELLO_WORLD” to demonstrate functionality.
Step-1: Download C source code for TCP server and client.
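For orientation, a stripped-down sketch of the server side is shown below (port 9090 assumed; the downloadable version additionally uses non-blocking sockets and select() as described above):

/* Sketch: recv() does not return until at least SO_RCVLOWAT (10) bytes arrive */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9090);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    int cli = accept(srv, NULL, NULL);

    int lowat = 10;                 /* wake recv() only after >= 10 bytes queued */
    setsockopt(cli, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat));
    printf("[Server] SO_RCVLOWAT = %d\n", lowat);
    printf("[Server] Waiting for at least %d bytes...\n", lowat);

    char buf[64];
    ssize_t n = recv(cli, buf, sizeof(buf) - 1, 0);
    if (n > 0) {
        buf[n] = '\0';
        printf("[Server] Received %zd bytes: %s\n", n, buf);
    }
    close(cli);
    close(srv);
    return 0;
}

The SO_SNDLOWAT emulation on the client side is simply a length check before calling send(), e.g. skip the send when the message is shorter than 10 bytes.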
Step-2: Compile the programs.
test:~$ gcc so_snd_rcv_lowat_server.c -o server
test:~$ gcc so_snd_rcv_lowat_client.c -o client
Step-3: Run TCP server and then client.
# Start server (SO_RCVLOWAT = 10)
test:~$ ./server
[Server] SO_RCVLOWAT = 10
[Server] Waiting for at least 10 bytes...

# Run client (emulated SNDLOWAT = 10)
test:~$ ./client
[Client] Connected (non-blocking)
[Client] Skipping send for small message (2 bytes < 10)
[Client] Sent 11 bytes: HELLO_WORLD
Step-4: Observe Effect of SO_RCVLOWAT & SNDLOWAT
Where can we observe the effect?
Server recv() blocks until ≥10 bytes arrive.
Client skips sending “Hi” (2 bytes < 10), only “HELLO_WORLD” is sent.
Emulation ensures small messages are not sent prematurely.
Verify with ss -tulnmi or getsockopt logs.
Step-5: Wireshark Capture & Explanation
Expected Observations:
TCP handshake completes normally.
Only “HELLO_WORLD” is seen in Wireshark; “Hi” is skipped (SNDLOWAT emulation).
Server recv() unblocks after 10 bytes arrive (SO_RCVLOWAT effect).
Demonstrates controlled send & receive behavior in Linux.
Use server logs and Wireshark capture to validate behavior.
Notes:
Linux does not allow setting SO_SNDLOWAT; the application code emulates it.
SO_RCVLOWAT works normally.
This example is useful for testing TCP behavior and analyzing Wireshark captures.
TCP SO_RCVTIMEO/SO_SNDTIMEO C Code (Client-Server Model with SO_RCVTIMEO and SO_SNDTIMEO)
- This C program demonstrates:
SO_RCVTIMEO: Maximum time recv() waits for incoming data.
SO_SNDTIMEO: Maximum time send() waits to send data.
Handles timeout errors (EAGAIN / EWOULDBLOCK) gracefully.
The TCP server sets receive timeout = 5 seconds and send timeout = 5 seconds.
The TCP client sets send timeout = 5 seconds and receive timeout = 5 seconds.
The client and server exchange the test messages “Hello from client” and “Hello from server” to demonstrate functionality.
Step-1: Download C source code for TCP server and client.
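A minimal sketch of the client side is shown below (server at 127.0.0.1:9090 assumed); the server side sets the same two options on its accepted socket:

/* Sketch: 5-second send/receive timeouts on a TCP client socket */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));  /* recv() timeout */
    setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));  /* send() timeout */

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9090);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    const char *msg = "Hello from client";
    send(fd, msg, strlen(msg), 0);

    char buf[128];
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        printf("[Client] Receive timeout occurred\n");   /* no data within 5 s */
    } else if (n > 0) {
        buf[n] = '\0';
        printf("[Client] Received from server: %s\n", buf);
    }
    close(fd);
    return 0;
}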
Step-2: Compile the programs.
test:~$ gcc so_snd_rcv_timeo_server.c -o server
test:~$ gcc so_snd_rcv_timeo_client.c -o client
Step-3: Run TCP server and then client.
* Normal Scenario

# Start server (SO_RCVTIMEO = 5s, SO_SNDTIMEO = 5s)
test:~$ ./server
[Server] Listening on port 9090
[Server] Waiting for data (5 sec timeout)...
[Server] Received: Hello from client
[Server] Sending response (5 sec timeout)...

# Run client
test:~$ ./client
[Client] Connected to server
[Client] Sending message (5 sec timeout)...
[Client] Received from server: Hello from server

* Timeout Scenario

test:~$ ./server
[Server] Listening on port 9090
[Server] Connected to client
[Server] Receive timeout occurred

# Run client
test:~$ ./client
[Client] Connected to server
[Client] Receive timeout occurred
Step-4: Observe Normal Operation vs Timeout
Normal Output:
TCP handshake completes successfully.
Client sends “Hello from client”.
Server receives data within 5 seconds and responds.
Client receives “Hello from server” within timeout.
Timeout Scenario:
- If the client delays sending data for more than 5 seconds:
Server recv() prints: “Receive timeout occurred”.
- If the server delays its response for more than 5 seconds:
Client recv() prints: “Receive timeout occurred”.
Timeout prevents indefinite blocking on recv() / send().
Step-5: Wireshark Capture & Explanation
Download Wireshark capture
Download Wireshark capture
Expected Observations:
TCP handshake (SYN, SYN-ACK, ACK) completes normally.
- Data packets:
Client → Server: “Hello from client”
Server → Client: “Hello from server”
If a timeout occurs, no extra packet appears; only the program logs the timeout.
Sequence and ACK numbers match normal TCP behavior.
Timeout is socket-level, not visible in Wireshark.
Server and client logs help verify timeout handling.
Notes:
SO_RCVTIMEO / SO_SNDTIMEO are useful for avoiding indefinite blocking in recv() and send().
Wireshark shows actual TCP packets, but timeout is seen only in logs.
Adjust tv_sec / tv_usec to test different timeout intervals.
This example is useful for testing TCP timeout behavior and analyzing packet captures.
TCP SO_LINGER C Code (Client-Server Model with SO_LINGER Options)
- This C program demonstrates the use of the SO_LINGER socket option in TCP connections:
Linger Disabled: l_onoff = 0 → default TCP close behavior.
Linger Enabled with Delay: l_onoff = 1, l_linger > 0 → TCP tries to send remaining data for l_linger seconds.
Linger Enabled, Immediate Abort: l_onoff = 1, l_linger = 0 → TCP aborts immediately, sends RST.
The TCP server listens on port 9090 and responds with a simple message.
The TCP client sets SO_LINGER options and sends a test message to the server.
Step-1: Download C source code for TCP server and client (with SO_LINGER).
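The downloadable client takes l_onoff and l_linger from the command line; a minimal sketch (server at 127.0.0.1:9090 assumed) might look like this:

/* so_linger_client.c -- sketch: e.g. ./so_linger_client 1 0 sends RST on close() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <l_onoff> <l_linger>\n", argv[0]);
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct linger lg;
    lg.l_onoff  = atoi(argv[1]);   /* 0 = default close, 1 = linger active */
    lg.l_linger = atoi(argv[2]);   /* seconds to wait; 0 with l_onoff=1 => RST */
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9090);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    const char *msg = "test message";
    send(fd, msg, strlen(msg), 0);
    printf("Client sent message\n");

    close(fd);                     /* close behavior depends on SO_LINGER settings */
    printf("Client socket closed\n");
    return 0;
}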
Step-2: Compile the programs.
test:~$ gcc so_linger_server.c -o so_linger_server
test:~$ gcc so_linger_client.c -o so_linger_client
Step-3: Run TCP server and then client.
# Start server
test:~$ ./so_linger_server
TCP server listening on port 9090...

# Run client (example: linger disabled)
test:~$ ./so_linger_client 0 0
Client sent message
Client socket closed

# Run client (example: linger enabled, 5 seconds)
test:~$ ./so_linger_client 1 5
Client sent message
Client socket closed

# Run client (example: linger enabled, immediate abort)
test:~$ ./so_linger_client 1 0
Client sent message
Client socket closed
Step-4: Observe Effect of SO_LINGER
Where can we observe SO_LINGER behavior?
- Linger Disabled (`l_onoff=0`)
Normal TCP FIN/ACK handshake.
Server receives all data and closes gracefully.
- Linger Enabled with Delay (`l_onoff=1, l_linger>0`)
TCP waits up to l_linger seconds to send remaining data before closing.
FIN may be delayed; observe in Wireshark.
- Linger Enabled, Immediate Abort (`l_onoff=1, l_linger=0`)
Connection closes immediately.
Client sends RST; server may see “connection reset by peer”.
Step-5: Wireshark Capture & Explanation
Download Wireshark capture
Download Wireshark capture
Download Wireshark capture
Expected Wireshark Observations:
Linger Disabled: Normal FIN/ACK sequence; graceful connection teardown.
Linger Enabled with Delay: FIN packet appears after a short delay; remaining data is sent before close.
Linger Enabled, Immediate Abort: RST packet sent by client; server may drop unsent data.
Use Wireshark filter tcp.port==9090 to observe packets.
Demonstrates how SO_LINGER controls TCP close behavior, not data content.
Reference links