Saturday, December 17, 2016

Hamburg Hacker Friday - MotionEyeOS

Many thanks to JörgC for taking the initiative to organize our Friday afternoon get-together.

Here are my quick notes and tips for setting up a security camera with a Raspberry:

MotionEyeOS - Hardware Recommendations

  • get the Raspi3
    • Wifi already onboard
      (no need to buy a Wifi dongle)
    • more CPU performance (RPi1 was constantly at 90+% load, RPi3 only 25%)
  • get the new 8MP camera modules (I bought the normal one for day shots and the InfraRed one - without the IR filter - for night shots)
  • power cord, SD card, and a suitable case

MotionEyeOS - Basic Installation

  • download the image from
    (it makes a difference... e.g. the RPi3 image is configured to use all 4 CPU cores)
  • write the image to the SD card
    (on Win10 make sure to launch with admin rights - shift-ctrl-click)
  • connect the Raspi with a LAN cable first, check its IP address on your WLAN router
  • browse to http://ip-address and log in with user admin (no password)
  • Basic configuration
  • MotionEyeOS configuration
    • Hostname
    • secure user+password (admin + viewer)
    • Wifi credentials
    • Resolution/Framerate (on the RPi you can safely go to max settings)
    • Do not use "automatic brightness" (it did not work at all for me...)
    • now properly shut down
  • Now power up without the LAN cable (it should connect to your wifi - check on your Wifi router)
    • set fixed IP address for your MotionEyeCam
      (caution: LAN and WLAN have different MAC addresses...)
  • Access the recorded pictures/videos via the Web GUI, per FTP, or per fileshare (\\ipaddress\)
  • More configuration
    • Email notification
    • upload to Dropbox or Google Drive
    • carefully tweak the sensitivity of the Motion detection...
    • create mask for motion detection (e.g. for trees or other "false triggers")

MotionEyeOS - Advanced Config

MotionEyeOS - Next Steps

  • use GPIO pins to launch actions when motion is detected (siren, LEDs, 1000-W-Laser...)
  • create scripts to automatically download/process the Pictures
  • create script to provide daily digest (with timeline and thumbnails)
  • add InfraRed lighting for better night shots
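For the GPIO idea above, here is a minimal sketch of a motion hook (motionEye can run a custom command when motion is detected; the log path and the GPIO pin number are my own assumptions, not from the original setup):

```shell
#!/bin/sh
# Hypothetical motion hook - point motionEye's "run a command on motion" at it.
# The log path and GPIO pin below are illustrative assumptions.
LOGFILE="/tmp/motion-events.log"

# keep a simple event log with a timestamp per motion event
echo "$(date +"%Y-%m-%d %H:%M:%S") motion detected" >> "$LOGFILE"

# example action: switch a siren/LED on GPIO 17 via the sysfs interface
# (commented out - the pin must be exported first and needs root permissions)
#echo 1 > /sys/class/gpio/gpio17/value
```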

Further Reference

Good luck and fun with your new toy!

Sunday, December 4, 2016

My first steps with AWS Lightsail

It is Sunday evening, it is quiet in the house.
Perfect time to play with my new toys.

I just spun up my own virtual server on Amazon AWS Lightsail.
Just amazing how fast and easy that is...literally takes just a few minutes:
  • go to AWS Lightsail (with a valid AWS account)
  • pick your instance image (plain OS, or with a stack like Wordpress, NGINX...)
  • choose the plan (I picked the 5 US$ one)
  • select a Name for your instance
  • click on CREATE
  • around a minute later you can access your own VPS via SSH in the browser. Whoa!

As I wanted to set up a simple web server and reverse proxy, I chose to manually install NGINX (the pre-packaged image comes with PHP and many other goodies I don't need. Yet.).
  • install NGINX (sudo apt-get install nginx)
  • update the HTML pages in the default location
  • install GOACCESS.IO to analyze the web logs
  • setup a cron-job to update the GOACCESS html report every 10 minutes
  • create a DNS record in my own domain
  • boom - ready: my own static website hosted on my own virtual private server!
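The report-refresh step above boils down to a single cron entry (after installing the packages with sudo apt-get install nginx goaccess). The log path and web root below are assumed Debian defaults - adjust them to your setup:

```shell
# crontab -e: rebuild the GoAccess HTML report every 10 minutes
# /var/log/nginx/access.log and /var/www/html are assumed default paths
*/10 * * * * goaccess /var/log/nginx/access.log -o /var/www/html/report.html --log-format=COMBINED
```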
After a few hours I can see the activity in the Lightsail Dashboard. It is alive!

And what took the longest? Exactly, the Content:

Picking a new HTML5 template and adding some text/content.

Setting up a reverse proxy with NGINX and port forwarding to another NGINX instance...

Finetuning the subdomain/Proxy configuration...

Let's see how this works out...maybe time to retire my Homepage that is still hosted on Google App Engine.

But then... it's just hosting a few static pages (and reverse proxying).

What's next?

Now, what else can I do with this little server? My own proxy/VPN to circumvent geo-fencing?

Terminating my ADS-B experiments....for good reason

I've been running my own ADS-B logger for some months now and I learned many things. And while I am excited about the "endless" possibilities, I have started to reach some limits with my experiments:

The Limits of Radio Reception

Hamburg Low Altitude out of range...
Yes, I was surprised how much reception I did achieve with that tiny antenna on my USB dongle. But it became very clear that I missed most of the arriving/departing flights over Hamburg at low altitude. To improve coverage I would have had to install a bigger antenna or simply joined/used the data feed from or And yes, I was too cheap to go for a 1090 amplifier/filter or that famous FR24 dongle.

The Limits of the Free-Tier on AWS RDS

I did upload my aggregated ADS-B data to a free Amazon mySQL instance (free tier, free for one year). Great learning experience, but as the data accumulated the CPU utilization went up. I would have had to clean up the data, or switch to a bigger instance. Again, I was too cheap to spend money on a bigger DB that would essentially host only my data.

The Limits of my own Attention Span

The other effect I experienced was the lack of new challenges.

Yes, the ADS-B logging worked fine, the mySQL upload as well, the analytics with is fun.  So where do I go from here?

At work, I have access to a full 2 years of FR24 data and some powerful analytics tools. That allows me to tinker and explore with massive aircraft data - much, much more than I could ever handle (the 2-week data set for testing alone has 200 million entries).

My old Raspberry Pi was re-purposed as a surveillance camera with the fabulous MotionEyeOS package. Highly recommended... sure beats all the commercially available products out there (especially when you use the new 8MP camera module on an RPi3).

And Amazon AWS? I deleted my RDS instance (before I forget and incur cost).  Now I'm experimenting with AWS Lightsail...articles to follow shortly.

Keep on learning...!

Sunday, March 20, 2016

Little Error in my script...remembers too much data.

Remember my approach to store aircraft data (flight number, altitude,...) in an array?
Works well to "remember" such data to complete ADS-B records.

Now I found out that it works a little too well - especially when it remembers the last flight number from hours ago. It takes quite a few ADS-B blips to update the current flight number.

What's the impact? You can see it below.... I record 2..3 blips with the old flight number EWG16U from 13:43 when the current flight at 17:29 is now EWG95U. 

I guess I have to improve my script to "forget" this data after a few minutes of having received no blips. Stay posted for new version of my scripts...
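A possible "forget" mechanism could keep a last-seen timestamp next to each callsign and treat entries past a timeout as unknown. This is my own sketch, not the original script; MAX_AGE and the array names are assumptions:

```shell
#!/bin/bash
# Sketch: remember the last callsign per aircraft, but "forget" it after a
# timeout so a stale flight number from hours ago is not carried over.
MAX_AGE=300   # seconds without a blip before the callsign is dropped (assumed)

declare -a arr_call   # callsign per aircraft (index = decimal HexIdent)
declare -a arr_seen   # epoch seconds of the last blip per aircraft

remember() {  # remember <ident> <callsign>
  arr_call[$1]="$2"
  arr_seen[$1]=$(date +%s)
}

recall() {    # recall <ident> - prints callsign, or "unknown" if stale
  local age=$(( $(date +%s) - ${arr_seen[$1]:-0} ))
  if [ -n "${arr_call[$1]}" ] && [ "$age" -le "$MAX_AGE" ]; then
    echo "${arr_call[$1]}"
  else
    echo "unknown"
  fi
}

remember 4735210 "EWG95U"
recall 4735210                               # prints EWG95U
arr_seen[4735210]=$(( $(date +%s) - 400 ))   # simulate 400s of silence
recall 4735210                               # prints unknown
```

The same remember/recall pair would wrap the existing array writes in the main loop, so a blip older than MAX_AGE no longer contaminates a new flight.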

Carry-Over of Flight Number

Monday, March 14, 2016

Geographical Distance Calculation - Haversine

My first quick&dirty distance calculation was okay...but not good:

After a bit of research on converting Latitude/Longitude I finally found some simple code for the Haversine function at

I pressed the code into a query like this and executed it in the mySQL workbench:

#Haversine SQL function pressed into a query
#code adapted from

SET @myLat := '53.465';
SET @myLon := '09.697';

select logdate, loghexid, logsign, altitude, latitude, longitude,

#3959 = Earth radius in miles (use 6371 for km)
round(3959 * 2 * ATAN2( SQRT(
pow(sin((RADIANS(latitude) - RADIANS(@myLat))/2),2) +
pow(sin((RADIANS(longitude) - RADIANS(@myLon))/2),2) * COS(RADIANS(latitude)) * COS(RADIANS(@myLat))
), SQRT( 1 - (
pow(sin((RADIANS(latitude) - RADIANS(@myLat))/2),2) +
pow(sin((RADIANS(longitude) - RADIANS(@myLon))/2),2) * COS(RADIANS(latitude)) * COS(RADIANS(@myLat))
)))) as HaversineDistance

from flightlog
where logdate=curdate();
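To sanity-check the query results, the same haversine formula can be evaluated in the shell with awk. This helper is my own addition, not part of the original setup (3959 is the Earth radius in miles, matching the query):

```shell
#!/bin/sh
# Haversine distance in awk - handy to spot-check single rows from the query.
# Usage: haversine lat1 lon1 lat2 lon2   (decimal degrees, result in miles)
haversine() {
  awk -v lat1="$1" -v lon1="$2" -v lat2="$3" -v lon2="$4" 'BEGIN {
    r = atan2(0, -1) / 180                 # degrees -> radians
    dlat = (lat2 - lat1) * r / 2
    dlon = (lon2 - lon1) * r / 2
    a = sin(dlat)^2 + cos(lat1 * r) * cos(lat2 * r) * sin(dlon)^2
    printf "%.1f\n", 3959 * 2 * atan2(sqrt(a), sqrt(1 - a))
  }'
}

haversine 53.465 9.697 53.465 9.697   # same point -> prints 0.0
```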

Exported the data to a CSV and mapped it on - voilà!

ADS-B blips color-coded by distance

Next step: create a mySQL function so the code in the query is a bit less complex...

Good night!

Sunday, March 6, 2016

ADS-B Filtering by Latitude/Longitude SQL query

Now that my scripts are collecting ADS-B data to mySQL it's time to learn more SQL queries.

How can I find the flights furthest away from me?
I simply let mySQL calculate the distance - with simplified Pythagoras (a² + b² = c²), assuming that Latitude and Longitude are somewhat similar...


select logsign, altitude, 
latitude, (latitude-53.46) as LatDiff, 
longitude, (longitude-9.69) as LonDiff,
sqrt((latitude-53.46)*(latitude-53.46) + (longitude-9.69)*(longitude-9.69)) as LatLonDistance
from flightlog
order by sqrt((latitude-53.46)*(latitude-53.46) + (longitude-9.69)*(longitude-9.69)) desc
limit 100;

Resulting Data  (using mySQL workbench)

Query Result

Resulting Map (using

Now mapping the resulting data on a map gives me this nice picture. As you can see, Latitude and Longitude are not perfectly square - resulting in a nice oval cut out in the middle....
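The oval is easy to quantify: at this latitude, a degree of longitude spans only cos(latitude) as much ground as a degree of latitude. A quick awk check (the latitude value is the receiver position used in the query above):

```shell
# one degree of longitude at 53.46 N, measured in "latitude degrees"
awk 'BEGIN {
  pi = atan2(0, -1)
  printf "cos(53.46 deg) = %.3f\n", cos(53.46 * pi / 180)
}'
# prints cos(53.46 deg) = 0.595 - the east-west axis is squashed to ~60%
```

So a better planar approximation would scale the LonDiff term by roughly 0.6 before squaring it.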


New Aircraft Flight Tracks

Just trying the CSV files from my revised ADS-B data collection scripts.
Went to CARTODB.COM and had a quick map with yesterday's data...

Had to name the field columns, as my CSV doesn't have a nice header anymore...

Flights by Altitude...colors multiplied

NEW & IMPROVED: ADS-B Data Collection

My first scripts worked well - see post at

But...after installing Dump1090 Mutability the script often stopped working.
Also noticed some SQL errors. Time for some debugging and better error handling.

Here are my revised DUMP2SQL scripts

Topology - Data Flow

Main improvements:

  • Original script often stopped when Netcat did not receive data - fixed with another simple loop around it. Added some logging to see how often this happens....
  • Now include data without a callsign - and immediately tracked a sail plane in the area...  :-)
  • Fixed SQL errors from empty fields - now passing "NULL" as value
  • Segregated the data collection from the mySQL upload - to avoid missing ADS-B messages while the script is busy with the database insert. This way you can trigger the mySQL insert every few minutes by Crontab and not for every single message.
  • Added mySQL error handling and logging. SQL files are deleted when insert was successful (CSV data remains untouched).
  • Changed the SQL table layout (not a good idea to use reserved names for columns such as "index")

All the files...

Results? Working absolutely fine!

No more missed ADS-B messages (averaging now 120.000 per day instead of 80.000).
No more SQL errors.
No more missed sail planes or helicopters (that fly without a callsign).

Collected ADS-B data in both CSV and SQL format

(main data aggregation script)

The main purpose of this script is to collect all ADS-B messages, aggregate them to complete missing fields (e.g. to carry over the call-sign), and save data only when the location is known.

Launch this script at startup with crontab (sudo crontab -e) and insert a line:
@reboot /home/pi03/ >/dev/null &

#Shell script to listen parse and aggregate ADSB data
#Run in foreground to see progress bar
#run  in background with: >/dev/null &

#variables DUMP1090 server
#variables  mySQL credentials...use certificates in next version
#now declare the arrays
counter=0  #for loop control
declare -a arr_call    #collect call_signs per aircraft
declare -a arr_alti    #collect altitude per aircraft
declare -a arr_velo    #collect groundspeed per aircraft
declare -a arr_trck    #collect track per aircraft
declare -a arr_vert    #collect vertical speed per aircraft
TimestampDaily=$(date +"%Y-%m-%d")        #separate logfile every day
TimestampMinut=$(date +"%Y-%m-%d-%H-%M")   #separate SQL file every minute
ADSB_log="/home/pi03/adsb-log.txt"   #status log -
ADSB_csv="/home/pi03/adsb-csv-"   #data capture - collected per day - append TimestampDaily
ADSB_sql="/home/pi03/adsb-sql-"   #SQL statements - append TimestampMinut

#Startup Messages
echo "DUMP1090 Aggregator2mySQL by Matthias Gemelli 2016"
echo "Listening to DUMP1090 on $ADSBhost port $ADSBport"
echo "Writing Logs to $ADSB_log"
echo "Writing Data to $ADSB_csv"
echo "Writing SQL  to $ADSB_sql"
echo "SQL insert to $SQLsrv"
echo "--------------------------------"

#Startup log
Timestamp=$(date +"%Y-%m-%d %H:%M:%S")
echo "Dump2SQL launched with PID $$ at $Timestamp" >> $ADSB_log
echo "MsgCount,HexIdent,Date,Time,Lat,Long,Callsign,Altitude,Speed,Track,Vertical" >> "$ADSB_csv$TimestampDaily.csv"
echo "select count(*) from flights;" >> "$ADSB_sql$TimestampMinut.sql"

#----------------------LOOP starts------------------------
while true; do #outer loop - because netcat stops too often with Dump1090 mutability
#inner loop - netcat listener
nc -d $ADSBhost $ADSBport | while IFS="," read -r f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15 f16 f17 f18 f19 f20 f21 f22
do     #loop until a break is thrown or netcat stops

#first update the timestamp for log files
TimestampDaily=$(date +"%Y-%m-%d")        #separate logfile every day
TimestampMinut=$(date +"%Y-%m-%d-%H-%M")  #separate SQL file every minute

#relevant data fields in every ADSB record
#echo "Field 05 HexIdent         :$f5"
#echo "Field 07 Date message gen :$f7"
#echo "Field 08 Time message gen :$f8"
#echo "Field 11 Callsign         :$f11"
#echo "Field 12 Altitude         :$f12"
#echo "Field 13 GroundSpeed      :$f13"
#echo "Field 14 Track            :$f14"
#echo "Field 15 Latitude         :$f15"
#echo "Field 16 Longitude        :$f16"
#echo "Field 17 Vertical Rate    :$f17"

#now save the data into array, using HexIdent as index
#overwrite only if field is not empty
ident=$((0x${f5}))   #convert hex to decimal
if [ "$f11" != "" ];  then arr_call[ident]="$f11"; fi
if [ "$f12" != "" ];  then arr_alti[ident]="$f12"; fi
if [ "$f13" != "" ];  then arr_velo[ident]="$f13"; fi
if [ "$f14" != "" ];  then arr_trck[ident]="$f14"; fi
if [ "$f17" != "" ];  then arr_vert[ident]="$f17"; fi
#write default values - important for SQL insert
if [ "${arr_call[ident]}" = "" ]; then arr_call[ident]="unknown"; fi
if [ "${arr_alti[ident]}" = "" ]; then arr_alti[ident]="NULL"; fi
if [ "${arr_velo[ident]}" = "" ]; then arr_velo[ident]="NULL"; fi
if [ "${arr_trck[ident]}" = "" ]; then arr_trck[ident]="NULL"; fi
if [ "${arr_vert[ident]}" = "" ]; then arr_vert[ident]="NULL"; fi

#debugging and progress bar - display RAW-before - AGG-after aggregation data
#echo "Debug for $f5 Altitude $f12-${arr_alti[ident]} Flight $f11-${arr_call[ident]}"
#progress bar  X for every location, DOT for every ADSB-message
#if [ "$f15" != "" ]; then  printf "x"; else printf "."; fi

#----------save ADSB data only if location is known--------
#----------pointless to write speed...altitude without location
if [ "$f15" != "" ]; then  #if Latitude is not empty
echo "Position Point received for $f5 ${arr_call[ident]} at alt ${arr_alti[ident]}"

#---------compose CSV line (fields as in the header), write CSV and SQL to file-------
CSV="$counter,$f5,$f7,$f8,$f15,$f16,${arr_call[ident]},${arr_alti[ident]},${arr_velo[ident]},${arr_trck[ident]},${arr_vert[ident]}"
echo "$CSV" >> "$ADSB_csv$TimestampDaily.csv"

QUERY="INSERT INTO esp8266data.flightlog "
QUERY="$QUERY (logmsg, loghexid, logdate, logtime, latitude, longitude, "
QUERY="$QUERY logsign, altitude, speed, track, vertical) VALUES "
QUERY="$QUERY ($counter,\"$f5\",\"$f7\",\"$f8\",\"$f15\",\"$f16\","
QUERY="$QUERY \"${arr_call[ident]}\",${arr_alti[ident]},${arr_velo[ident]},"
QUERY="$QUERY ${arr_trck[ident]},${arr_vert[ident]});"
echo "$QUERY" >> "$ADSB_sql$TimestampMinut.sql"

#instead of inserting every separate blip I collect into a SQL file and run in batches
#mysql -h $SQLsrv -u $SQLusr -p$SQLpwd $SQLdbs <$SQLqueries
#if [ $? != 0 ]; then echo "error: $QUERY" >> $ADSB_log ; fi

fi #if position is not empty

#reset the array if it is midnight (fewer planes)
((counter++))   #increase the loop counter
done            #netcat listener loop
Timestamp=$(date +"%Y-%m-%d %H:%M:%S")
echo "Dump2SQL Netcat stopped...Re-Launch with PID $$ at $Timestamp" >> $ADSB_log

done            #outer loop
#-----------------end of the loops----------------
#attention: variables set within the loop stay in the loop

echo "Done for the day..."
echo "Dump2sql done" >> $ADSB_log
echo $(date +"%Y-%m-%d %H:%M:%S") >> $ADSB_log

(insert the data to mySQL)

Main purpose of this script is to insert the flight data into a mySQL database using the mysql-client software package. It automatically looks for the SQL files generated with the dump2sql script and only processes the ones older than one minute (to prevent processing files that are still being written to).
Launch this script every 5 minutes via crontab (sudo crontab -e) and insert a line:
*/5 * * * * /home/pi03/ >/dev/null

#variables  mySQL credentials...use certificates in next version
SQL_log="/home/pi03/mysql-log.txt"   #status log
FILES=/home/pi03/adsb-sql-*.sql      #SQL files written by the collector script

#Now cycle through all SQL files and process them with mySQL
for f in $FILES
do
  #echo "Processing $f file..."
  #only process files older than one minute (younger ones may still be written to)
  if [[ $(find $f -mmin +1) != "" ]]
  then  #file is older than one minute
        SQLout=$(mysql -h $SQLsrv -u $SQLusr -p$SQLpwd $SQLdbs <$f 2>&1)
        if [ $? != 0 ]
        then    #SQL error - log it and keep a copy for debugging
                echo "Error in $f with $SQLout" >>$SQL_log
                cp $f /home/pi03/err$(basename $f)   #copy the error SQL file
        else    #insert was successful
                echo "Successfully processed $f" >>$SQL_log
        fi
        rm $f   #delete the processed SQL file
  else  #file is fresh - don't touch it
        echo "Ignored fresh file $f" >>$SQL_log
  fi
done

Good Luck! Feedback and improvement ideas very welcome!

Sunday, February 21, 2016

The magic of SQL queries on my flight data collection....

If you have read my other post you know that I am logging all received ADS-B "squirts" into a mySQL database. On average I collect around 50.000 complete entries per day (with callsign, altitude, speed, position, track, date/time).

I can retrieve all data with a simple "select * from flights;"

Table Structure & Data
After one week of data collection I have collected 400.000 entries in my database.
Now let's explore this data and learn some SQL query language at the same time.

How many Aircraft and Flights each day?

SELECT date, count(distinct hexident), count(distinct sign) FROM flights GROUP BY date;

Unique/Distinct Aircraft and Flights
You can clearly see that Aircraft are serving multiple flights each day - e.g. arriving with Lufthansa 001 and departing with Lufthansa 002.

Which Aircraft are racking up most flights?

Select distinct hexident, count(distinct sign) from flights
group by hexident order by count(distinct sign) desc;
Number of flights per Aircraft HexID

Who is running these busy planes? 

Apparently it's German Wings (GWI):

select distinct sign, hexident from flights
where hexident like '3C5EF%' order by hexident;

German Wings keeps its planes busy...

The fastest/slowest/highest/lowest each day?

Let's focus on the highest/lowest/fastest/slowest flights with:
Select date, count(*), count(distinct hexident), count(distinct sign),
max(alti), min(alti), max(speed), min(speed) from flights group by date;

And I can look for the specifics:

  • Flights below 0 feet altitude? Just a few blips from the nearby Airbus plant Finkenwerder...pilots are either diving into the river or have not set a proper altitude...
  • Flights above 44.000 feet altitude? AOJ87D a Swiss Jet? , FPG111 - a Dassault Falcon? , MMD6424 - another Dassault Falcon? It must be nice to look down from your own private jet.
  • Flights slower than 100 (knots?) - Who is CHX26? Looks like I tracked a helicopter?
  • Flights faster than 600 (knots?) - Looks like aircraft passing by (Air India, Air Canada...)

The furthest away?

Let's look for the flights that are furthest away... with this query:
select date, count(*), min(lat), max(lat), min(lon), max(lon) from flights group by date;

Furthest Flights
I could do some math (difference in Lat/Long to my position), but I am trying to keep it simple.

What interesting SQL queries can you think of?

If you have some ideas, let me know - you can also get access to the mySQL data and experiment yourself...

Sunday, February 14, 2016

ADS-B Data Visualization (separate by Speed/Altitude)

Just playing around with my ADS-B data that is collected in a mySQL database.
Competing a little with someone in Hannover doing the same...

Unfortunately cartodb does not allow direct mySQL data feeding, so I have to export to CSV first and then upload. Oh well....

Find below my latest visualizations with cartodb:

All Positions (unfiltered)

This picture nicely shows the reception area of my indoor antenna.
The spike to the Left/West is the windows of my office...
Unfiltered data...a bit busy...

High Altitude = Just passing over....

High Altitude Flights Only (> 12.000 feet)

Low Altitude = Departing or Landing

Low Altitude Flights Only (<= 10.000 feet)

Groundspeed (low speed=green=landing/departing)

I like this visualization....the ground speed clearly shows the flights departing/landing...while the overflights are at higher speed.
Groundspeed (slow=green, fast=red)


More experiments:
  • create KML files for Google Earth with flight-tracks from connected positions
  • create 3D rendering in Blender with the data
  • daily status reports (highest/lowest/fastest/slowest/furthest)

Tuesday, February 9, 2016

ADS-B Aircraft Data Collection & Aggregation

Understanding and aggregating ADSB message data

After installing DUMP1090 I was looking for ways to collect the position data for later analysis.
You can easily display the dump1090 data feed with netcat: nc -d localhost 30003. And of course you can simply run netcat in the background and write the data to a file with nc -d localhost 30003 >> adsb.csv
raw data from DUMP1090
This "base station format" is well explained here. You will quickly realize this raw data needs to be aggregated, as different message types send different data - but never at the same time.

ADS-B data before aggregation


  • continuously pipes the netcat raw data through a simple parser (IFS):
    nc -d $ADSBhost $ADSBport | while IFS="," read -r f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15 f16 f17 f18 f19 f20 f21 f22
    do #loop until a break is thrown
  • lets me read each individual value in each individual message:
    #echo "Field 05 HexIdent         :$f5"
    #echo "Field 07 Date message gen :$f7"
    #echo "Field 08 Time message gen :$f8"
    #echo "Field 11 Callsign         :$f11"
    #echo "Field 12 Altitude         :$f12"
    #echo "Field 13 GroundSpeed      :$f13"
    #echo "Field 14 Track            :$f14"
    #echo "Field 15 Latitude         :$f15"
    #echo "Field 16 Longitude        :$f16"
    #echo "Field 17 Vertical Rate    :$f17"
  • Saves the relevant data into an array
    ident=$((0x${f5})) #convert Aircraft hex ID to an number
    if [ "$f11" != "" ];  then arr_call[ident]="$f11"; fi
    if [ "$f12" != "" ];  then arr_alti[ident]="$f12"; fi
    if [ "$f13" != "" ];  then arr_velo[ident]="$f13"; fi
    if [ "$f14" != "" ];  then arr_trck[ident]="$f14"; fi
    if [ "$f17" != "" ];  then arr_vert[ident]="$f17"; fi
  • And finally write the data only when both position and callsign are known
    #if position and if callsign is broadcast
    if [ "$f15" != "" ]; then   #if f15 Latitude not empty
    if [ "${arr_call[ident]}" != "" ]; then #if callsign is already known
    echo "$MapPoint" >> "$ADSBlog"      #write to log file
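The parsing steps above can be tried standalone by piping one made-up SBS-1 line through the same IFS read, instead of the live netcat stream (the message content below is invented for illustration):

```shell
#!/bin/sh
# One fabricated "base station format" message instead of live nc output
msg='MSG,3,1,1,47875D,1,2016/02/09,21:12:35.282,2016/02/09,21:12:35.282,IBK5411,38000,418,225,53.53898,8.41868,0,,,,,'

echo "$msg" | while IFS="," read -r f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15 f16 f17 f18 f19 f20 f21 f22
do
  echo "HexIdent : $f5"    # aircraft hex ID -> 47875D
  echo "Callsign : $f11"   # flight number   -> IBK5411
  echo "Latitude : $f15"
  echo "Longitude: $f16"
done
```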

The Result? 

Now, instead of the original dump1090 messages like this: 

I have a complete data set like this:
250,47875D,2016/02/09,21:12:35.282,53.53898,8.41868,IBK5411 ,38000,418,225,0
365,47875D,2016/02/09,21:12:53.369,53.51395,8.37762,IBK5411 ,38000,419,225,0

Before and after data aggregation

And the best: instead of 100MB per day my new aggregated log file is only 3 MB/day. Hooray!

The resulting CSV file can now easily be geo-plotted – e.g. with 
If you filter by altitude, you can nicely see the departing/landing flights without the fly-overs….

Quick visualization using 
Have a look at some of my maps at 


Yes, my script is fairly basic. But it works quite well so far. Known limitations:

  • No input sanitizing... (I hope there is no callsign "rm *" or "drop table;")
  • Memory management: the array should be cleaned up from time to time, otherwise it accumulates ALL aircraft.
  • Drops aircraft data without a callsign (I noticed some military aircraft do send position data but no callsign).
  • Carries over the last callsign - even if the same aircraft now departs with a different flight number.

Here is the Script - Feel free to improve

pi@gemelli-pi03:~ $ cat
#Shell script to listen parse and aggregate ADSB data
#Run in foreground to see progress bar
#run  in background with: >/dev/null &

Timestamp=$(date +"%Y-%m-%d")   #separate logfile every day
#now the mySQL credentials...use certificates in next version
counter=0       #for loop control
countmax=200    #script can stop after so many loops

echo "DUMP1090 Aggregator2mySQL by Matthias Gemelli 2016"
echo "listening to server: $ADSBhost writing to log $ADSBlog"
echo "mySQL as $SQLusr to $SQLsrv"
echo "progress bar: . for every message, X for every location, Y for missing callsign"
echo "exit with Ctrl-C or set message limit in countmax"
echo "-------"
echo "MsgCount,HexIdent,Date,Time,Lat,Long,Callsign,Altitude,Speed,Track,Vertical" >> "$ADSBlog"

#now declare the arrays
declare -a arr_call
declare -a arr_alti
declare -a arr_velo
declare -a arr_trck
declare -a arr_vert

#loop through the Netcat data
nc -d $ADSBhost $ADSBport | while IFS="," read -r f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15 f16 f17 f18 f19 f20 f21 f22
do     #loop until a break is thrown

#first update the filename
Timestamp=$(date +"%Y-%m-%d")   #separate logfile every day

#now read the relevant data fields in every ADSB record
#echo "Field 05 HexIdent         :$f5"
#echo "Field 07 Date message gen :$f7"
#echo "Field 08 Time message gen :$f8"
#echo "Field 11 Callsign         :$f11"
#echo "Field 12 Altitude         :$f12"
#echo "Field 13 GroundSpeed      :$f13"
#echo "Field 14 Track            :$f14"
#echo "Field 15 Latitude         :$f15"
#echo "Field 16 Longitude        :$f16"
#echo "Field 17 Vertical Rate    :$f17"

#now save the data into array, using HexIdent as index
ident=$((0x${f5}))   #convert Aircraft hex ID to a number
if [ "$f11" != "" ];  then arr_call[ident]="$f11"; fi
if [ "$f12" != "" ];  then arr_alti[ident]="$f12"; fi
if [ "$f13" != "" ];  then arr_velo[ident]="$f13"; fi
if [ "$f14" != "" ];  then arr_trck[ident]="$f14"; fi
if [ "$f17" != "" ];  then arr_vert[ident]="$f17"; fi

#if position and if callsign is broadcast
if [ "$f15" != "" ]; then  #if f15 not empty
if [ "${arr_call[ident]}" != "" ]; then #if callsign is already known

#compose the aggregated CSV line (fields as in the header above)
punkt="$counter,$f5,$f7,$f8,$f15,$f16,${arr_call[ident]},${arr_alti[ident]},${arr_velo[ident]},${arr_trck[ident]},${arr_vert[ident]}"
echo "$punkt" >> "$ADSBlog"      #write to log file

#now compose SQL statement
QUERY="INSERT INTO flights "
QUERY="$QUERY(msgcount,hexident,date,time,lat,lon,sign,alti,speed,trck,vert) VALUES"
QUERY="$QUERY ($counter,\"$f5\",\"$f7\",\"$f8\",\"$f15\",\"$f16\","
QUERY="$QUERY \"${arr_call[ident]}\",${arr_alti[ident]},${arr_velo[ident]},"
QUERY="$QUERY ${arr_trck[ident]},${arr_vert[ident]});"
echo "$QUERY" >$SQLqueries  #write SQL to a file before executing
#echo "$QUERY"
mysql -h $SQLsrv -u $SQLusr -p$SQLpwd $SQLdbs <$SQLqueries

#progress bar on shell - X for position, Y for pos without callsign
printf "X"
else  #if no callsign is known at position
printf "Y $f5"
fi #if callsign is aleady known
else  #what to do if no position is given
printf "."  #progress bar
fi #if f15 not empty

#reset the array if it is midnight (fewer planes)
#if reached max counter then break from loop
#if [ "$counter" -gt "$countmax" ]; then break; fi
((counter++))   #increase counter

done            #netcat listener loop
#attention: variables set within the loop stay in the loop
echo "done"

Friday, January 29, 2016

Raspberry and Amazon-hosted mySQL


Let me share my latest adventure, exploring the world of cloud-hosted databases.

Turned out to be quite simple to get started…with plenty of room for improvement.

I’ll show you how easy it is to store log data from your Raspberry (temperature, DHCP logs)
to a MySQL database hosted on Amazon AWS RDS.

  • Raspberry or similar Linux-powered device (also works from Mac/PC)
  • Amazon AWS account (free)
  • 2..3 hrs of time
  • Basic SQL and shell scripting experience


Just head over to and sign up for free. 
Then setup a MySQL database instance, fire it up, create user accounts.

Check in the AWS RDS Dashboard that your instance is up and running, and that you know the full endpoint server name (screenshot below)

AWS RDS Console


Let’s familiarize ourselves with SQL again: set up some tables and users, run SQL queries, insert & delete data…

Download the mySQL workbench from and connect to your AWS database.

First head over to “Users and Privileges” to setup additional users (for your remote queries).

mySQL Workbench - Users and Privileges

Then go to the tables and create some tables, insert data, familiarize yourself with basic SQL queries. The queries you see in the workbench can later be executed by scripts. There are plenty of SQL tutorials out there to get you started…

mySQL Workbench - Table


Now that we have a running mySQL database, and we know we can access it remotely, and we know some basic SQL queries, it’s time to setup the scripts on the Raspberry PI.

  • Install the mySQL client (with sudo apt-get install mysql-client).
    We don’t need the PHP libraries or the sql-server package!
  • Now, let’s check if we can access mySQL over the Internet by visiting a public database. 
  • From your prompt run
    mysql --user=genome
    and then select now(); Should look like the screenshot. 
  • Remember: commands have to be finished with a semicolon!
mySQL cient - first connection
  • Now we can start accessing our own AWS instance with the following command:
    mysql -h -u USER -pPASS database
    • Insert the full instance name behind the -h
    • No space between -p and the password!
    • If your user has access permissions it should look like this:
mySQL client - accessing our own AWS RDS instance

Does it work? 
Great, now let’s setup the actual database tables and scripts to populate the data.


  • First, setup the database table from the mysql shell with
    create table heatlog (Nummer INT NOT NULL AUTO_INCREMENT, Zeitpunkt datetime NOT NULL DEFAULT NOW(), machine VARCHAR(40), heat FLOAT,PRIMARY KEY(Nummer));
  • Then insert a test record into the table with
    insert into heatlog (machine, heat) VALUES ("RASPI02",48.7);
  • With select * from heatlog; you should see the newly created record.
  • Now create a shell script (don’t forget to make it executable with chmod +x):
#Extract Board Temp and save into mySQL
#echo "Logging Raspi Temperature"
#mySQL connection variables (SQLsrv, SQLusr, SQLpwd) not shown here
SQLdbs="database name"
SQLfile="heat.sql"    #temp file for the SQL command (see cat heat.sql below)
heat=$(/opt/vc/bin/vcgencmd measure_temp|cut -c6-9)
echo "insert into heatlog (machine, heat) VALUES (\"$(hostname)\",$heat);" > $SQLfile
#cat $SQLfile
#now pipe the SQL commands into mySQL
mysql -h $SQLsrv -u $SQLusr -p$SQLpwd $SQLdbs <$SQLfile

  • Now execute the script (a few times), have a look at the SQL commands (cat heat.sql) and test if the data was actually inserted. 
  • Launch mySQL again with
    mysql -h INSTANCE -u USER -pPASS database
    and then run select * from heatlog;
Hooray - our first data entries are online!
  • If it works well, then launch the script every hour via crontab.
  • Type sudo crontab -e and then add the following line:
    @hourly /home/pi/
crontab entry
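For reference, a sketch of such a crontab line; heat.sh is just a name I chose for the script above, and redirecting the output makes failed runs visible:

```conf
# run hourly; append stdout/stderr to a log so errors are not silently lost
@hourly /home/pi/heat.sh >> /home/pi/heat.log 2>&1
```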


Whoa! That was easy, wasn’t it? You now have your own mySQL database on AWS and your Raspi is logging its temperature every hour. Cool shit!

What’s next?

  • Securing your AWS RDS with proper credentials and not just username/password
  • Sanitizing input (error handling, etc...)
  • Upload other data (DHCP lease info, DNS queries...)
  • Query and analyse the mySQL data, trigger actions, reports, etc….
  • Scripts to query the data and launch specific actions….
  • Setup AWS IoT with the same data, etc…
  • Endless possibilities!
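The "query and analyse" item could start small. A sketch, reusing the SQLsrv/SQLusr/SQLpwd/SQLdbs placeholders from the logging script (report.sql is a name I made up):

```shell
# Write a daily min/avg/max report query to a file...
cat > report.sql <<'EOF'
select machine, date(Zeitpunkt) as day,
       min(heat), avg(heat), max(heat)
from heatlog
group by machine, day;
EOF
# ...then pipe it into mysql exactly like the logging script does:
# mysql -h $SQLsrv -u $SQLusr -p$SQLpwd $SQLdbs < report.sql
```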

Tuesday, January 26, 2016

TIFU - when a dash is not a dash.

Originally posted on and on

Let me share my silly little story.
Took me two nights of ever increasing frustration to figure out.
Makes me feel a little stupid.   ;-)

So I am doing multiple small projects on my Raspberry, playing around on my home network with DNSMASQ, recording aircraft positions with Dump1090/ADS-B, etc. Now I started using a cloud-hosted MySQL database (AWS RDS) to store and analyze my data. Setting up the DB on Amazon was quick, and connecting to it with the MySQL Workbench worked fine. But I wanted to insert data from the Raspberry shell, using the mysql-client software. Again, installing it was easy (apt-get…yada yada..).

But then…launching the mysql monitor shell on a remote DB server…what exactly was the right syntax again?
                        mysql -h HOSTNAME -u username -ppassword

Easy...but mysql just displayed the help info. Or „Access denied“. I googled and googled, and tried all the tips (yes, I reset the password; yes, I created a new DB user; yes, I granted all the privileges; yes, I checked the DB access rules...). Password in quotation marks and in ticks. Without ticks. Still no success. Reinstalled mysql. Reimaged the Raspberry. Still no success. WTF??

Then it hit me. I am using MS Word to keep notes. Cut& paste commands. Screenshots.
Copy in Raspberry, paste in Word. Copy in Word, paste in Raspberry.

Word is very nice and helps me format my text to make it look good.
And it subtly changes the text. Like changing a short dash (-) to a long dash (–).

And the evil bit: when you copy the long or short dash to the prompt you don’t see any difference.

Do you spot a difference between the following lines?  In Word you can...on the shell you cannot:
mysql -h -u espmaster –p   
mysql -h -u espmaster -p

So, I learned that a –  is not the same as a  -.  Even if it looks identical.
long and short dash are not the same...
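If you ever suspect the same trap, the shell can make the invisible visible. A plain hyphen is the single byte 2d, while Word's en dash is the UTF-8 sequence e2 80 93:

```shell
# dump the raw bytes of each dash variant
printf '%s' '-' | od -An -tx1    # 2d (hyphen-minus)
printf '%s' '–' | od -An -tx1    # e2 80 93 (en dash)
# scan a pasted command for any byte outside printable ASCII;
# the line is printed only if it contains a hidden non-ASCII character
echo 'mysql -h HOST -u espmaster –p' | LC_ALL=C grep '[^ -~]'
```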

Monday, January 18, 2016

Who is your smart home talking to?

SmartTVs are using Google Analytics. Amazon Alexa/FireTV listens to your voice. Security cameras watch your living room. Temperature sensors detect your presence. Your HP printer orders the next cartridge once you have printed enough pages.

All of these devices are now on your home network. Connected to the internet. Calling home. Day and night.

Do you know what is going on in your home network? 

I certainly did not. I wanted to find out. I wanted to learn, to play, to experiment. I did not want to spend a lot of money.

Still with me? Then let’s go exploring the shadow world of the IoT — Internet of Things.

Introducing my Home Network

My home network is fairly average; I guess you’ll find a pretty similar setup in your home as well. It is amazing how many devices end up online, even if your fridge and TV are not “smart” yet.

My Initial Home Network — Fairly Similar To Yours?

If you want to analyze all Internet traffic, you need to do it somewhere between the cable/DSL modem and your local network.

Learning to love Linux (and hate Wifi routers)

I wanted to log all Internet traffic. I had a Raspberry Pi available, so I set up a firewall/proxy on the Raspberry Pi, but performance simply sucked. Okay for isolated analysis, but certainly not good enough for everyday usage.

The next step was to install Linux/OpenWRT on a cheap 40€ Wifi router (TP-Link 3600). It worked pretty well; I even managed to sniff network traffic with tshark. But setting up a transparent proxy was tricky, it did not work for HTTPS, and performance was not acceptable.

Then I remembered Domain Name Services (DNS), the tiny service that translates internet names into their IP addresses. Pretty much all Internet transactions require name resolution via DNS, and logging these requests is quite simple, without impacting network performance (I’ll describe how further down).

Logging DNS requests — even when I am sleeping

Just one tiny problem: Network Address Translation (NAT)

All consumer grade Wifi routers have NAT built in — which makes it impossible to identify which of your devices are talking, because they all show up with the same IP address.

Additionally, most Wifi routers proxy all DNS requests themselves instead of telling the devices to contact your customized DNS server directly. Again, your own DNS server cannot tell the clients apart, as they all use the router’s IP.

Learning to love real Wifi Access Points (AP)

Finally, ARStechnica motivated me to ditch my Wifi router for a real Wifi Access Point. The cost was a bit steep at first (200€ for a dual-band Ubiquiti AP), but the features and performance rocked. Wireless performance is much, much better, all over our house. Opening a guest network with an access code is a breeze. And I did not miss all these other useless features (USB storage on a Wifi router? DECT telephones when I already have a base station?).

Home Network — with added Raspberry PI and new WiFi AP

Besides improving my WiFi reception throughout the house, the new AP is not messing with my network traffic, allowing me to track and analyze each individual device, wired or wireless.

Learning to love Linux (and DNSMASQ)

Now, let’s get back to Internet traffic logging. As mentioned earlier, I want to log every internet interaction for every client on my home network.

DNSMASQ to the rescue — a simple DNS+DHCP server that can easily be installed on a Raspberry Pi. With DHCP I can control which client gets what IP address, which gateway and DNS servers to use. And with the built-in DNS server I can log (and manipulate) every DNS request.

dnsmasq.log detailing every DNS query

With DNSMASQ DHCP options I can select which devices send their DNS requests directly to the Internet (labelled "GREEN" network) or to my Raspberry Pi logger (labelled "RED" network). There is no need to change the settings manually on every device!

Green and Red network settings provided via DHCP options
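A sketch of the relevant dnsmasq.conf entries; the MAC addresses and IPs are made-up examples, not my real setup:

```conf
log-queries                                  # log every DNS query
log-facility=/var/log/dnsmasq.log
dhcp-range=192.168.1.100,192.168.1.200,12h
# "GREEN" clients get the router as DNS server (DHCP option 6)...
dhcp-host=aa:bb:cc:dd:ee:01,set:green
dhcp-option=tag:green,option:dns-server,192.168.1.1
# ...while "RED" clients get the Raspberry Pi logger
dhcp-host=aa:bb:cc:dd:ee:02,set:red
dhcp-option=tag:red,option:dns-server,192.168.1.2
```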

And the winners are…

Through this journey I have learned a lot, spent a few evenings not watching TV and spent some money (hey, it’s always fun to play with new toys!).

What did I learn throughout this journey?
  • Consumer grade equipment is for consumers.
    (if you want more, you need to spend more...time)
  • Linux and the Internet are truly powerful. And complex.
    (Always compare tutorials…some of them are not correct).
  • Network protocols are actually quite easy.
    (Have fun exploring DHCP options!)
  • Goofing around with a Raspberry PI is like grown-up LEGO
    (assembly instructions on the Internet)

…advertising and content networks.

Now, who did you expect your devices to talk to?  How are Facebook and Google making money? Come on….don’t be surprised. 

Here is aggregated data from my home network over the last couple of days (I manually categorized the domains):

DNS Queries by category
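The aggregation itself can start as a one-liner. A sketch, assuming the stock dnsmasq "query[A] domain from client" log line format; the sample lines are made up for the demo, on the real system point the awk at /var/log/dnsmasq.log:

```shell
# a couple of sample lines in dnsmasq's log format
cat > dnsmasq.sample <<'EOF'
Dec 17 10:00:01 dnsmasq[123]: query[A] www.google.com from 192.168.1.50
Dec 17 10:00:02 dnsmasq[123]: query[A] ads.tracker.net from 192.168.1.51
Dec 17 10:00:03 dnsmasq[123]: query[AAAA] www.google.com from 192.168.1.50
EOF
# count queries per domain, most frequent first
awk '/query\[/ {print $(NF-2)}' dnsmasq.sample | sort | uniq -c | sort -rn
```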

Where to go from here?

Now that I have spent plenty of time on understanding my home network, what shall I do next?

  • Block Advertising/Tracking networks
    (just add the domains to /etc/hosts)
  • Sniff & manipulate network traffic
    (what data is actually sent?
    replace all advertising with cornflower blue pictures?)
  • Publish detailed technical step-by-step tutorial
    (cut & paste from the linked articles)
  • …?
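The /etc/hosts idea from the first bullet could look like this on the Raspberry Pi (dnsmasq serves /etc/hosts entries to all clients; the domains here are placeholders, pick real ones from your own dnsmasq log):

```conf
# resolve known ad/tracking domains to nowhere
0.0.0.0   ads.example.com
0.0.0.0   tracker.example.net
```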
Your feedback, questions and ideas are appreciated!