Comstream Monitoring

ComMon is a set of tools for monitoring performance on Comstream satellite modems.

A version updated for FreeBSD is available.



ComMon is a data logging and monitoring tool set developed for Comstream satellite modems. It is part of the AI3 network monitoring working group.

ComMon is the newer version of the Satellite Link Performance Analysis Package.

It is divided into three major parts: data collection, data archiving and data displaying. Each part is separate and can run on a different server.

System requirements

Module: comstream
Where:  any machine with a serial connection to the modem
Software needed:
  - MySQL client library (./configure --without-server)

Module: collect
Where:  any machine with SNMP read access to the AI3 router
Software needed:
  - MySQL client library (./configure --without-server)
  - Perl modules for MySQL: Bundle::DBI and Bundle::DBD::mysql
  - Perl module for SNMP: Net::SNMP

Module: dump and netmon
Where:  a web server
Software needed:
  - MySQL client library (./configure --without-server)
  - Perl modules for MySQL: Bundle::DBI and Bundle::DBD::mysql
  - RRD-tool

All passwords for connection to MySQL have been hidden in the distribution files.

Data collection

Data are collected by a set of programs, written in Perl and C, that run on a box attached to the modem by a serial connection. This box should have SNMP access to the AI3 router. In practice, the box is the AI3 router itself.

The first program (comstream), written in C, connects to the modem through its control port. It reads the Eb/N0 level and the Automatic Gain Control (AGC) level. Both values vary constantly. For better accuracy, data are collected every second and averaged over a 5-minute period. This should reduce the risk of reading aberrant data while still reflecting rapid changes of condition within the 5-minute interval.

The program keeps the average, the minimum and the maximum over the 5-minute period, as well as the number of misreads over the period. The time stamp and interval duration are also kept.
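
The following is a minimal Perl sketch of that sampling logic, for illustration only: the real comstream program is written in C, and read_modem() is a hypothetical stand-in for the routine that queries the modem's control port.

#!/usr/bin/perl
# Illustrative sketch only: the real comstream program is written in C.
use strict;
use warnings;

# Hypothetical stand-in for the routine that reads one Eb/N0 sample from the
# modem's control port; it would return undef on a misread.
sub read_modem { return 7.5 }

my $interval = 300;                       # 5-minute averaging period
my $start    = time();
my ($sum, $count, $misread) = (0, 0, 0);
my ($min, $max);

for (1 .. $interval) {
    my $ebno = read_modem();              # one sample per second
    if (defined $ebno) {
        $sum += $ebno;
        $count++;
        $min = $ebno if !defined($min) || $ebno < $min;
        $max = $ebno if !defined($max) || $ebno > $max;
    }
    else {
        $misread++;                       # count samples that could not be read
    }
    sleep 1;
}

# Keep the average, minimum, maximum, number of misreads, time stamp and
# interval duration for this 5-minute period.
my $avg = $count ? $sum / $count : 0;
printf "%d %d avg=%.2f min=%.2f max=%.2f misreads=%d\n",
    $start, $interval, $avg, ($min || 0), ($max || 0), $misread;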

The MySQL client library must be available on the machine, and comstream must be compiled against it:
ai3gate<23>: gcc -O2 comstream.c -o comstream -lmysqlclient -lm -I/usr/local/include -L/usr/local/lib/mysql

By default, comstream accesses the modem via the device /dev/cuaa0 (port COM1:); this can be changed with the TTY macro.

The serial port on the modem must be set to 9600 baud, 8 bits, no parity. If these settings need to be changed, the C code has to be adapted accordingly.

The program comstream is started at boot time (in /etc/rc.local).

The second program (collect, with a configuration file ai3mon.conf) is a small Perl script that reads a few SNMP values: the number of IP packets and bytes received and the number of erroneous packets received. The time stamp and interval duration are also kept. Data are collected on a 5-minute interval. As the purpose is to monitor the down link, only received data are taken into account.

Perl will need the MySQL and SNMP modules. This program is started at boot time (in /etc/rc.local).

The program collect now takes the configuration file name as its first argument.

The time stamp is rounded to an exact 5-minute boundary.
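
As an illustration, here is a minimal sketch of this collection step using Net::SNMP; the host name, community string and interface index are assumptions, not values taken from the actual collect script or ai3mon.conf.

#!/usr/bin/perl
# Minimal sketch of the SNMP collection step; host, community and interface
# index are placeholders, not values from the real configuration.
use strict;
use warnings;
use Net::SNMP;

my ($session, $error) = Net::SNMP->session(
    -hostname  => 'ai3-router.example',   # assumed router address
    -community => 'public',               # assumed read community
    -version   => 'snmpv1',
);
die "SNMP session error: $error\n" unless defined $session;

# Standard interface counters for the received direction (ifInOctets,
# ifInUcastPkts, ifInErrors); the trailing ".1" is a placeholder interface index.
my %oid = (
    in_octets  => '1.3.6.1.2.1.2.2.1.10.1',
    in_packets => '1.3.6.1.2.1.2.2.1.11.1',
    in_errors  => '1.3.6.1.2.1.2.2.1.14.1',
);

my $result = $session->get_request(-varbindlist => [ values %oid ]);
die 'SNMP get error: ' . $session->error . "\n" unless defined $result;
$session->close();

# Round the time stamp down to an exact 5-minute boundary.
my $interval  = 300;
my $timestamp = int(time() / $interval) * $interval;

printf "%d %d octets=%s packets=%s errors=%s\n",
    $timestamp, $interval,
    $result->{ $oid{in_octets} },
    $result->{ $oid{in_packets} },
    $result->{ $oid{in_errors} };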

Data archiving

Archiving is the easiest part, as data are simply pushed into a MySQL database. Two tables are used, one for each set of data gathered by the two collection programs. In each table, the time stamp is the primary key.
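
As a rough illustration, the table layouts could look like the sketch below; the table and column names are assumptions, since the actual schema is not shown here. Only the idea is real: one table per collection program, with the time stamp as primary key.

#!/usr/bin/perl
# Hypothetical sketch of the two tables; table and column names are assumed.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:ai3mon;host=database.cs.ait.ac.th;port=3306',
                       'ai3', '******', { RaiseError => 1 });

# Table fed by the comstream program (Eb/N0 and AGC statistics).
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS comstream (
        timestamp INT UNSIGNED NOT NULL,   -- UNIX time, 5-minute boundary
        duration  INT UNSIGNED NOT NULL,   -- interval length in seconds
        ebno_avg FLOAT, ebno_min FLOAT, ebno_max FLOAT,
        agc_avg  FLOAT, agc_min  FLOAT, agc_max  FLOAT,
        misreads INT UNSIGNED,
        PRIMARY KEY (timestamp)
    )
});

# Table fed by the collect script (received traffic counters).
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS collect (
        timestamp  INT UNSIGNED NOT NULL,
        duration   INT UNSIGNED NOT NULL,
        in_octets  BIGINT UNSIGNED,
        in_packets BIGINT UNSIGNED,
        in_errors  INT UNSIGNED,
        PRIMARY KEY (timestamp)
    )
});

$dbh->disconnect;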

The database server can run on a different machine from the data collection, so special care must be taken when connecting to the database: data must be kept locally when the database server is temporarily unavailable. The algorithm is as follows:

  1. collect the data
  2. connect to the database server
  3. if the connection is successful
    1. add the new data to the database
    2. if there are data kept locally
      1. store them one by one
      2. until the end or the database cannot be accessed
  4. else store the data locally

One such algorithm would be, for example in Perl:

  ## collect the data

  # save data to MySQL
  use DBI;

  my $dberror = 0;
  my $dbh = DBI->connect('DBI:mysql:ai3mon;host=database.cs.ait.ac.th;port=3306',
                         'ai3', '******', { PrintError => 0 });
  if (! defined($DBI::err)) {
      ## We managed to connect to the database
      # prepare and execute the MySQL command
      my $statement = "insert into $Table values ($timestamp, $Interval, " .
                      "\"$val1\", \"$val2\", \"$val3\")";
      my $sth = $dbh->prepare($statement);
      my $rc  = $sth->execute;
      if (defined($DBI::err)) {
          ## We cannot save in the database
          $dberror = 1;
      } else {
          ## Do we have some locally saved data?
          # if yes, export them to the database
          if (-r $Tempfile) {
              open(IN, "$Tempfile");
              my $ok = 1;
              while (($val = <IN>) && $ok) {
                  chop $val;
                  $val =~ s/ /, /g;
                  # prepare and execute the MySQL command;
                  # data should be stored in the file in a format compatible
                  # with the MySQL command
                  my $statement = "insert into $Table values ($val)";
                  my $sth = $dbh->prepare($statement);
                  my $rc  = $sth->execute;
                  if (defined($DBI::err)) {
                      ## We could not put that line in the database for some
                      # reason, maybe the database became inactive while we
                      # were using it; we need a copy of the temporary file,
                      # without the records that were already pushed in the
                      # database
                      close(IN);
                      $ok = 0;
                      my $firstcopy = $.;   # remember record number (line number)
                      # first copy the Tempfile
                      link("$Tempfile", "/tmp/ai3mon.$$");
                      unlink "$Tempfile";
                      my $rc = open(TMPOUT, ">$Tempfile");
                      if (! defined($rc)) {
                          die "Cannot open temporary local file $Tempfile\n";
                      }
                      open(IN, "/tmp/ai3mon.$$");
                      while (<IN>) {
                          if ($. >= $firstcopy) {
                              # only copy the records from the point of
                              # failure onward
                              print TMPOUT $_;
                          }
                      }
                      close(IN);
                      close(TMPOUT);
                      unlink "/tmp/ai3mon.$$";
                  } # if DBI::err
              } # while <IN> && $ok
              if ($ok) {
                  ## The whole temporary file has been transferred to
                  # the database
                  unlink $Tempfile;
              }
          }
      } # else: do we have a local file?
      my $rc = $dbh->disconnect;
  } else {
      # we could not connect to the database,
      # we will have to save locally
      $dberror = 1;
  }
  if ($dberror) {
      ## We could not save in the database, save locally
      my $rc = open(TMPOUT, ">>$Tempfile");
      if (! defined($rc)) {
          die "Cannot open temporary local file $Tempfile\n";
      }
      print TMPOUT "$timestamp $Interval $val1 $val2 $val3\n";
      close(TMPOUT);
  }

Data displaying

Displaying the data is made simple by using RRD-tool. One script (dump) retrieves the data from the database and updates the corresponding RRD files; the other script (netmon) generates the graphs.

Two RRD files are created, one for the Eb/N0 and AGC values and one for the IP traffic values. The RRD files reflect the database tables. The resolution chosen corresponds to that of MRTG.

The first script, dump, is run through cron(8) every five minutes. It copies the latest values from the database into the RRD files. Only the database records that are newer than the last value in the RRD file are copied. This keeps the database server and the RRD files totally disconnected yet synchronized.
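
A minimal sketch of this step is shown below, assuming the RRDs Perl binding; the RRD file path and the table and column names are assumptions, not necessarily those used by the real dump script.

#!/usr/bin/perl
# Sketch of the dump step: copy database records newer than the last RRD
# sample into the RRD file. Paths and table/column names are placeholders.
use strict;
use warnings;
use DBI;
use RRDs;

my $rrdfile = '/usr/local/ai3mon/traffic.rrd';   # assumed path
my $last = RRDs::last($rrdfile);                 # time of the newest sample in the RRD file
die 'RRD error: ' . RRDs::error . "\n" if RRDs::error;

my $dbh = DBI->connect('DBI:mysql:ai3mon;host=database.cs.ait.ac.th;port=3306',
                       'ai3', '******', { RaiseError => 1 });

# Fetch only the records newer than the last RRD update.
my $sth = $dbh->prepare(
    'SELECT timestamp, in_octets, in_packets, in_errors ' .
    'FROM collect WHERE timestamp > ? ORDER BY timestamp');
$sth->execute($last);

while (my ($ts, $octets, $pkts, $errs) = $sth->fetchrow_array) {
    RRDs::update($rrdfile, "$ts:$octets:$pkts:$errs");
    warn 'RRD update error: ' . RRDs::error . "\n" if RRDs::error;
}
$dbh->disconnect;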

The second script, netmon, draws the RRD graphs for the day, week, month and year. This script is called from the web page via an SSI <!-- #exec --> command and returns the date of the last sample collected, for display purposes. That way, the graphs are only generated when they are actually needed, i.e. when they are displayed, which should save some computation time.
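
For illustration, a stripped-down version of such a graphing script could look like the following; the file names, data source name and colour are assumptions, and the real netmon also handles the Eb/N0/AGC graph and the scaling described below.

#!/usr/bin/perl
# Stripped-down sketch of the graphing step; paths, data source name and
# colour are placeholders, not those of the real netmon script.
use strict;
use warnings;
use RRDs;

my $rrdfile = '/usr/local/ai3mon/traffic.rrd';           # assumed path
my %span = (day => '-1d', week => '-1w', month => '-1m', year => '-1y');

for my $period (sort keys %span) {
    RRDs::graph("/usr/local/www/data/ai3mon-$period.png",   # assumed output path
        '--start', $span{$period},
        '--title', "AI3 down link ($period)",
        "DEF:octets=$rrdfile:in_octets:AVERAGE",             # assumed data source name
        'CDEF:bits=octets,8,*',
        'LINE1:bits#0000FF:bits per second');
    warn 'RRD graph error: ' . RRDs::error . "\n" if RRDs::error;
}

# Print the date of the last sample so the SSI call can display it.
print scalar localtime(RRDs::last($rrdfile)), "\n";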

Some scaling factors must be applied to the data because the data ranges are not compatible. The scaling factors are controlled by a few variables located at the beginning of the script and are also recalled in the legend of the graph. As the minimum and maximum values are included in the legend of the graph, the web page need not be updated at every interval.

InError does not display the number of erroneous packets, but rather indicates when one or several errors occurred. If the InError stripe is wide, it means that errors occurred over a longer period of time.

Powered by: Net-SNMP and RRD-tool

Contact: Olivier Nicole. Last update: Apr 2003.