HTTP/S DoS

HTTP/S Ping

When performing stress testing or DoS simulations, the following commands are used to make HTTP requests and measure website performance. Their main objectives include:

  1. Sending HTTP/HTTPS requests to a web server.

  2. Measuring response times and performance metrics.

  3. Obtaining HTTP status codes and response sizes.

  4. Performing performance tests and comparisons between websites.

While curl can be used for stress testing or DoS simulations, it's crucial to emphasize that such activities without authorization are illegal and unethical. However, for authorized load testing, curl can be employed in the following ways:

  1. Performing multiple requests in a loop to simulate heavy traffic.

  2. Using the --limit-rate option to test server behavior under different connection speeds.

  3. Combining curl with tools like "ntimes" to execute a specific number of requests and analyze response time percentiles.
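As a sketch of points 1 and 3, the loop below issues a fixed number of curl requests, collects each request's total time, and reports rough percentiles. This is a hypothetical example for a target you are authorized to test; the URL, the request count, and the helper names (`percentile`, `run_test`) are placeholders, not part of curl itself:

```shell
#!/usr/bin/env bash

# percentile <p> <file of numerically sorted times>
# Returns the value at the ceiling rank for percentile p (1-based line index).
percentile() {
  local p=$1 file=$2
  local n; n=$(wc -l < "$file")
  local idx=$(( (p * n + 99) / 100 ))
  sed -n "${idx}p" "$file"
}

# run_test <url> <count>
# Sends <count> sequential requests and prints p50/p95 of curl's total time.
run_test() {
  local url=$1 count=$2 tmp
  tmp=$(mktemp)
  for _ in $(seq "$count"); do
    curl -s -o /dev/null -w '%{time_total}\n' "$url" >> "$tmp"
  done
  sort -n "$tmp" -o "$tmp"
  echo "p50: $(percentile 50 "$tmp")s  p95: $(percentile 95 "$tmp")s"
  rm -f "$tmp"
}

# run_test "https://site.com" 100   # uncomment only for an authorized target
```

Tools like "ntimes" do this bookkeeping for you; the sketch above shows the same idea with plain curl and coreutils.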

Using cURL

Using curl can provide a more accurate measurement of round-trip time compared to the wget method (refer to Using wget). curl offers built-in timing options that can give you precise information about various stages of the HTTP request.

One-line Command

Here's a one-line command using curl to measure the round-trip time:

while true; do response=$(curl -s -o /dev/null -w "Status:%{http_code}; Time:%{time_total}; DNS:%{time_namelookup}; Connect:%{time_connect}; TTFB:%{time_starttransfer}" https://site.com); echo "$(date '+%Y-%m-%d %H:%M:%S'); $response"; sleep 0; done

This command:

  1. Uses curl's -w option to format the output, showing the HTTP status code and the timing metrics.

  2. Silences curl's progress meter with the -s option.

  3. Redirects the response body to /dev/null with -o, as we're only interested in timing information.

The timing fields in the output are:

  1. DNS lookup time: Time taken for DNS resolution.

  2. Connect time: Time to establish the TCP connection.

  3. TTFB (Time to First Byte): Time until the first byte is received.

  4. Total time: Overall time for the entire request.
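The same -w variables can also be kept in a separate template file and passed to -w with an @ prefix, which is easier to maintain than a long inline format string. The file name below is arbitrary, and %{time_appconnect} (TLS handshake completion time) is an extra field not used in the one-liner above:

```shell
# Write the -w template to a file; "@filename" tells curl to read the format from it.
cat > curl-format.txt <<'EOF'
DNS:     %{time_namelookup}s
Connect: %{time_connect}s
TLS:     %{time_appconnect}s
TTFB:    %{time_starttransfer}s
Total:   %{time_total}s
EOF

# Then run (against a target you are authorized to test):
# curl -s -o /dev/null -w "@curl-format.txt" https://site.com
```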

The output will look something like this:

2025-01-27 11:15:39; Status:200; Time:0.841107; DNS:0.025360; Connect:0.141437; TTFB:0.841054

Using wget

One-line command

url="https://site.com"; while true; do start=$(date +%s%N); status=$(wget -qS --spider "${url}" 2>&1 | grep "HTTP/" | awk '{print $2}'); end=$(date +%s%N); duration=$(( (end - start) / 1000000 )); echo "$(date '+%Y-%m-%d %H:%M:%S') - Site status: $status - Response time: ${duration}ms"; sleep 60; done

Script

while true; do
  start=$(date +%s%N)
  status=$(wget -qS --spider http://example.com 2>&1 | grep "HTTP/" | awk '{print $2}')
  end=$(date +%s%N)
  duration=$(( (end - start) / 1000000 ))
  echo "$(date '+%Y-%m-%d %H:%M:%S') - Site status: $status - Response time: ${duration}ms"
  sleep 60
done

This script does the following:

  1. start=$(date +%s%N): Captures the start time in nanoseconds.

  2. The wget command is executed and the status code is stored in the status variable.

  3. end=$(date +%s%N): Captures the end time in nanoseconds.

  4. duration=$(( (end - start) / 1000000 )): Calculates the duration in milliseconds.
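The duration arithmetic in step 4 can be checked in isolation. The timestamps below are made-up nanosecond values roughly 123 ms apart:

```shell
# Integer division converts the nanosecond delta to whole milliseconds.
start=1738000000000000000   # example start timestamp in nanoseconds
end=1738000000123456789     # example end timestamp, ~123 ms later
duration=$(( (end - start) / 1000000 ))
echo "${duration}ms"        # prints 123ms
```

Note that %N (nanoseconds) requires GNU date; on systems without it, the script's timestamps fall back to lower precision.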

This script will continuously check the website's status and response time, printing a line like the following at each interval (60 seconds here):

2025-01-27 11:30:45 - Site status: 200 - Response time: 123ms

Remember, you can adjust the sleep interval (currently set to 60 seconds) as needed. If set to 0, the script won't pause between checks. To stop the script, use Ctrl+C in the terminal.

Note: The response time measured this way includes the time taken by wget to process the response, not just the network round-trip time. For more precise network timing, you might want to consider using specialized tools like curl with its timing options.
