My internet service provider is Vodafone. I have cable running into the house. It's the Clear.net cable that was acquired by TelstraClear and finally by Vodafone.
This rant is in two parts - the first is the flakiness of the internet service recently; the second is the poor customer service being provided by Vodafone.
Okay, part 1.
The connectivity has been excessively patchy over the last 12 months or so. I've reported outages probably half a dozen times during that period. I got so infuriated that I decided to write a small Python script to help me make sense of the connectivity issues. It's very rough but might also be useful for others out there.
Data collection
I looked at a bunch of different ways to collect data about the internet connection, but they were all way overkill for what I wanted. I thought about writing something myself, but I wanted a working prototype up and running as quickly as possible. The simplest way, I decided, was just to use ping. Repeatedly. In the end I created a cron job to ping the Vodafone domain name server every minute:
*/1 * * * * date >> /var/log/pinger/pinger.log && ping -c4 203.97.78.43 | grep 'packets\|rtt' >> /var/log/pinger/pinger.log && echo -- >> /var/log/pinger/pinger.log
Here's a breakdown of the cron job:

*/1 * * * *
    Run every minute.

date >> /var/log/pinger/pinger.log
    Append the output of the unix date command to the log file we want to record everything in.

&& ping -c4 203.97.78.43 | grep 'packets\|rtt' >> /var/log/pinger/pinger.log
    Once that has successfully completed, ping Vodafone's DNS with 4 packets, filter the lines that ping produces, and write only the lines that contain 'packets' or 'rtt' to the log file.

&& echo -- >> /var/log/pinger/pinger.log
    Once that successfully completes, write a line with two dashes to signify that the ping record is complete.
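If you'd rather keep the collection logic in one place, the cron one-liner can be approximated in Python. This is just a sketch of mine, not part of the original setup - the function names are made up, and note that datetime.now() omits the timezone field the date command emits (the parsing script strips it anyway):

```python
import subprocess
from datetime import datetime

def filter_ping_output(text):
    # Mirror grep 'packets\|rtt': keep only ping's summary lines.
    return [l for l in text.splitlines() if "packets" in l or "rtt" in l]

def record_sample(host="203.97.78.43", log_path="/var/log/pinger/pinger.log"):
    # One cron-equivalent sample: a date line, the ping summary lines, then "--".
    out = subprocess.run(["ping", "-c4", host],
                         capture_output=True, text=True).stdout
    with open(log_path, "a") as log:
        log.write(datetime.now().strftime("%a %b %d %H:%M:%S %Y") + "\n")
        for line in filter_ping_output(out):
            log.write(line + "\n")
        log.write("--\n")
```

You could then call record_sample() from a loop with a sleep, or still drive it from cron.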
This data collection method effectively produces four types of record:
Everything is fine:

Tue Oct 20 13:59:01 NZDT 2015
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 7.989/8.456/9.015/0.401 ms
--

Things are patchy:

Tue Oct 20 14:00:01 NZDT 2015
4 packets transmitted, 3 received, 25% packet loss, time 3002ms
rtt min/avg/max/mdev = 8.876/9.414/9.952/0.439 ms
--

Things are poked - either 100% of packets are lost:

Mon Oct 19 15:45:01 NZDT 2015
4 packets transmitted, 0 received, 100% packet loss, time 3023ms
--

or we have some errors going on related to the destination computer not being available:

Tue Oct 20 17:33:01 NZDT 2015
4 packets transmitted, 0 received, +2 errors, 100% packet loss, time 2999ms
--
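Those four cases can be told apart mechanically from the summary line alone. A rough classifier sketch (the function and its labels are my own illustration, borrowing the post's terminology):

```python
def classify(summary):
    # e.g. "4 packets transmitted, 3 received, 25% packet loss, time 3002ms"
    if "errors" in summary:
        return "poked"
    # Pull the percentage out of the "N% packet loss" field rather than
    # substring-matching, since "0% packet loss" is a substring of both
    # "10% packet loss" and "100% packet loss".
    loss_field = [f for f in summary.split(",") if "packet loss" in f][0]
    loss = int(loss_field.strip().split("%")[0])
    if loss == 100:
        return "poked"
    if loss == 0:
        return "fine"
    return "patchy"
```

The parsing script below does essentially this, just with the date and rtt lines folded in as well.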
This data collection is powerful in itself but not easily usable. You could grep for the packet-loss lines, but you wouldn't easily see when the losses were occurring.
I wrote the following small python script to assist with that:
from datetime import datetime

"""
Changes the following records into a pipe-delimited output for easier use in Calc, etc:

Sun Sep 20 02:20:01 NZST 2015
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 7.761/8.618/9.321/0.608 ms
--
Thu Oct 22 17:57:01 NZDT 2015
4 packets transmitted, 0 received, +2 errors, 100% packet loss, time 3014ms
--

You get results like:

dt_object:2015-10-24 16:47:01|date:Sat Oct 24 16:47:01 2015|lost:0|avg:9.081|errors:0|borked:n
dt_object:2015-10-24 16:48:01|date:Sat Oct 24 16:48:01 2015|lost:100|avg:0.0|errors:0|borked:y
dt_object:2015-10-24 16:53:01|date:Sat Oct 24 16:53:01 2015|lost:0|avg:0.0|errors:4|borked:y

You can manipulate this in Calc quite easily using pipe, colon and space delimiters to alter granularity.

This utilises the following cron job:

*/1 * * * * date >> /var/log/pinger/pinger.log && ping -c4 203.97.78.43 | grep 'packets\|rtt' >> /var/log/pinger/pinger.log && echo -- >> /var/log/pinger/pinger.log

You can modify the cron job to grep for 'icmp\|packets\|rtt' for more verbosity.

Also, it's advisable to set up a logrotate definition, e.g.:

/var/log/pinger/pinger.log {
    weekly
    rotate 52
    compress
    delaycompress
    missingok
    notifempty
    create 644 root root
}
"""

log_file = open('/var/log/pinger/pinger.log')  # change this to point to the log file that needs to be parsed

days_of_week = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
dt = ""
dt_object = ""
lst = 0
errors = "0"
avg = 0.0
borked = "n"

def find_nth(string, delimiter, num):
    """Take a string and break it into parts based on a specified delimiter.
    Return the num'th part.

    Args:
        string: the text to be broken up into parts
        delimiter: a string that separates each part
        num: the num'th item to be returned. Must be greater than zero.

    Returns:
        The num'th part as a string

    Raises:
        # TypeError: if num is not an integer - TODO
        # ValueError: if num is less than 1 - TODO
    """
    parts = string.split(delimiter)
    return parts[num - 1]

for line in log_file:
    if line[0:3] in days_of_week:
        dt = line.replace(" NZST ", " ").strip()  # remove the NZST part for ease of parsing
        dt = dt.replace(" NZDT ", " ").strip()    # remove the NZDT part for ease of parsing
        dt_object = datetime.strptime(dt, "%a %b %d %H:%M:%S %Y")
    if line.startswith("64 "):  # a bit redundant as I modified the cron job to not record individual packets to reduce log file size
        continue
    if "packets transmitted" in line and "+" not in line:  # no errors reported
        x = find_nth(line, ",", 3).strip()
        lst = find_nth(x, "%", 1).strip()
        if lst != "0":
            borked = "y"
    if "packets transmitted" in line and "errors" in line:  # errors reported
        x = find_nth(line, ",", 3).strip()
        errors = find_nth(x, " ", 1).strip()[1:]  # the slice on the end removes the leading + on the error count
        borked = "y"
    if line.startswith("rtt"):
        avg = find_nth(line, "/", 5).strip()
    if line.startswith("--"):
        # Both date objects and date strings offer a little more flexibility when doing filtering in Calc
        print("dt_object:{}|date:{}|lost:{}|avg:{}|errors:{}|borked:{}".format(dt_object, dt, lst, avg, errors, borked))
        dt_object = ""
        dt = ""
        lst = 0
        avg = 0.0
        errors = "0"
        borked = "n"
The script runs over the log file data and flattens it out for easier use with Calc or even grep.
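In fact, once the output is flat you don't strictly need Calc for quick summaries. For example, here's a sketch of mine (the helper name is made up) that counts borked minutes per hour straight from the script's pipe-delimited output:

```python
from collections import Counter

def borked_minutes_per_hour(lines):
    # Each line looks like:
    # dt_object:2015-10-24 16:48:01|date:...|lost:100|avg:0.0|errors:0|borked:y
    counts = Counter()
    for line in lines:
        fields = dict(f.split(":", 1) for f in line.strip().split("|"))
        if fields.get("borked") == "y":
            hour = fields["dt_object"][:13]  # e.g. '2015-10-24 16'
            counts[hour] += 1
    return counts
```

Pipe the script's output to a file and feed it in, and you get a quick per-hour picture of how bad the connection was.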
Part 2 to follow later.