HTTP/S DoS
HTTP/S Ping
When performing stress testing or DoS simulations, the following commands are used to make HTTP requests and measure website performance. Their main objectives include:
Sending HTTP/HTTPS requests to a web server.
Measuring response times and performance metrics.
Obtaining HTTP status codes and response sizes.
Performing performance tests and comparisons between websites.
While curl can be used for stress testing or DoS simulations, it's crucial to emphasize that such activities without authorization are illegal and unethical. However, for authorized load testing, curl can be employed in the following ways:
Performing multiple requests in a loop to simulate heavy traffic (see the sketch after this list).
Using the --limit-rate option to test server behavior under different connection speeds.
Combining curl with tools like "ntimes" to execute a specific number of requests and analyze response time percentiles.
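The first two approaches can be combined. The following is a minimal sketch of an authorized load test; https://site.com, the request count (100), and the rate cap (50 KB/s) are placeholder values to adapt to your engagement:

# Fire 100 concurrent requests, each capped at 50 KB/s, and print
# the status code and total time of every request.
for i in $(seq 1 100); do
  curl -s -o /dev/null --limit-rate 50k -w "%{http_code} %{time_total}\n" https://site.com &
done
wait  # block until all background requests have finished

Each request runs in the background (&), so they overlap and generate concurrent load; wait keeps the shell open until the last one completes.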
Using cURL
curl can provide a more accurate measurement of round-trip time than the wget method (see Using wget below). curl offers built-in timing options that give precise information about the various stages of the HTTP request.
One-line Command
Here's a one-line command using curl to measure the round-trip time:
while true; do response=$(curl -s -o /dev/null -w "Status:%{http_code}; Time:%{time_total}; DNS:%{time_namelookup}; Connect:%{time_connect}; TTFB:%{time_starttransfer}" https://site.com); echo "$(date '+%Y-%m-%d %H:%M:%S'); $response"; sleep 0; done

This command:
Uses curl's -w option to format the output, showing the HTTP status code and total time.
The -s option silences curl's progress meter.
-o /dev/null redirects the response body to /dev/null, as we're only interested in timing information.
DNS Lookup time: Time taken for DNS resolution.
Connect time: Time to establish the TCP connection.
TTFB (Time to First Byte): Time until the first byte is received.
Total time: Overall time for the entire request.
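To keep these samples for later analysis, a possible variation of the one-liner (a sketch; https://site.com and ping_log.csv are placeholder names) appends each measurement as a CSV row:

# Append one CSV row per sample: timestamp,status,total,dns,connect,ttfb
while true; do
  curl -s -o /dev/null -w "$(date '+%Y-%m-%d %H:%M:%S'),%{http_code},%{time_total},%{time_namelookup},%{time_connect},%{time_starttransfer}\n" https://site.com >> ping_log.csv
  sleep 60
done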
The output of the original one-liner will look something like this:
2025-01-27 11:15:39; Status:200; Time:0.841107; DNS:0.025360; Connect:0.141437; TTFB:0.841054

Using wget
One-line command
url="https://site.com"; while true; do start=$(date +%s%N); status=$(wget -qS --spider "${url}" 2>&1 | grep "HTTP/" | awk '{print $2}'); end=$(date +%s%N); duration=$(( (end - start) / 1000000 )); echo "$(date '+%Y-%m-%d %H:%M:%S') - Site status: $status - Response time: ${duration}ms"; sleep 60; doneScript
while true; do
start=$(date +%s%N)
status=$(wget -qS --spider http://example.com 2>&1 | grep "HTTP/" | awk '{print $2}')
end=$(date +%s%N)
duration=$(( (end - start) / 1000000 ))
echo "$(date '+%Y-%m-%d %H:%M:%S') - Site status: $status - Response time: ${duration}ms"
sleep 60
done

This script does the following:
start=$(date +%s%N): Captures the start time in nanoseconds.
The wget command is executed and the status code is stored in the status variable.
end=$(date +%s%N): Captures the end time in nanoseconds.
duration=$(( (end - start) / 1000000 )): Calculates the duration in milliseconds.
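Note that the %N (nanoseconds) specifier requires GNU date and is not available on macOS/BSD. If you need the script there, one possible workaround (a sketch relying on Perl's Time::HiRes module, which ships with most Perl installations) takes millisecond timestamps instead:

# now_ms prints the current Unix time in milliseconds without GNU date
now_ms() { perl -MTime::HiRes=time -e 'printf "%d\n", time() * 1000'; }
start=$(now_ms)
status=$(wget -qS --spider http://example.com 2>&1 | grep "HTTP/" | awk '{print $2}')
end=$(now_ms)
echo "Response time: $((end - start))ms"

Since both timestamps are already in milliseconds, no division is needed when computing the duration.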
This script will continuously check the website's status and response time, printing a line like the following every 60 seconds (the sleep interval):
2025-01-27 11:30:45 - Site status: 200 - Response time: 123ms

Remember, you can adjust the sleep interval (currently set to 60 seconds) as needed; setting it to 0 removes the pause entirely. To stop the script, use Ctrl+C in the terminal.
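For long-running monitoring you may prefer to detach the loop from your terminal. A usage sketch, assuming the script above is saved as monitor.sh (both monitor.sh and site_status.log are placeholder names):

# Run the monitor in the background, appending its output to a log file;
# stop it later with: kill <PID>
nohup bash monitor.sh >> site_status.log 2>&1 &
echo "monitor PID: $!"

nohup keeps the loop running after you log out, and the recorded PID lets you stop it without hunting through ps output.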
Note: The response time measured this way includes the time taken by wget to process the response, not just the network round-trip time. For more precise network timing, you might want to consider using specialized tools like curl with its timing options.