
PeopleCounter part one: Counting People

Intro

Internet of Things (IoT) means connecting devices to the internet so that they can communicate with each other. In our project, the PeopleCounter, we use a mini-computer with intelligent software to count the number of people in front of a camera. We send that number to the Oracle IoT Cloud, where a business rule checks whether the number is higher than a specific value. If it is, an electric device is turned on. We use a red light tube to make it visible when the business rule is activated (see image one). Our project consists of two parts: the PeopleCounter itself (part one) and the cloud (part two). In this blog post I describe how we created the PeopleCounter and its parts.

(Image one: the PeopleCounter in action)

Use Case

We describe a use case to show that we, as a company, can develop applications in which IoT plays a part. We presented this use case at a conference of the nlOUG, the Nederlandse Oracle User Group, where companies give presentations about techniques that use Oracle technology.

Our use case was the following:

  • We have a room where we have a Raspberry Pi mounted with a camera.
  • The Pi films this room.
  • The images are passed through a library or tool that counts the people in them.
  • We send this number to the Oracle IoT Cloud.
  • If the number is higher than a specified value, the cloud sends a signal that activates an external system (a minimal sketch of this loop follows below).
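
The steps above boil down to a simple loop on the device side. Below is a minimal sketch of that loop in Python; the function names count_people and send_count are placeholders for illustration and are not part of the actual implementation described later in this post.

import time

def count_people():
    """Placeholder: take a photo and return the number of people detected on it."""
    raise NotImplementedError

def send_count(count):
    """Placeholder: send the count to the Oracle IoT Cloud."""
    raise NotImplementedError

# The device only counts and reports. The business rule (count above a
# threshold -> activate an external system) is evaluated in the cloud (part two).
while True:
    send_count(count_people())
    time.sleep(10)  # example polling interval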

Hardware

We use a Raspberry Pi Model 3B+, on which all of the computation takes place. It is the newest version of the Pi and relatively cheap. This model has connections for Wi-Fi, Ethernet, HDMI and, most importantly for us, the camera. We use the second generation of the camera module (Camera Board – V2). It has an 8MP sensor and can shoot video in Full HD. As casing we use the Camera Box Bundle, which is specifically designed to hold a Pi with a camera mounted. We bought our parts at https://www.modmypi.com/. When everything is assembled it looks like this:

(Image: the assembled Raspberry Pi with camera module and casing)

OpenCV

In our first version we use a library called OpenCV. OpenCV stands for Open Source Computer Vision Library and is an open source computer vision and machine learning library. It contains hundreds of algorithms to detect faces or movement, remove backgrounds and much more. We used a Java-based version, but the original is written in C and C++. The Java-based version can be found at this repository.

The following code shows how the OpenCV library is used. The sketch reads a video frame and passes it to OpenCV with opencv.loadImage(video). With opencv.loadCascade(...) we tell OpenCV what to look for; the sketch below loads OpenCV.CASCADE_UPPERBODY to detect upper bodies, and a cascade such as OpenCV.CASCADE_FRONTALFACE works the same way for faces. Every detection is then pointed out by drawing a rectangle around it.

import gohai.glvideo.*;
import gab.opencv.*;
import java.awt.*;

GLCapture video;
OpenCV opencv;
int framesSinceDetection = 0;
Rectangle[] faces = new Rectangle[0];
int numFaces = 0;

void setup() {
  size(640, 480, P2D);
  frameRate(5);

  // Open the first available camera at the sketch resolution
  String[] devices = GLCapture.list();
  video = new GLCapture(this, devices[0], width, height);
  video.start();

  opencv = new OpenCV(this, width, height);
  // The cascade determines what is detected; here: upper bodies
  opencv.loadCascade(OpenCV.CASCADE_UPPERBODY);
}

void draw() {
  background(0);

  // Grab the latest frame and hand it to OpenCV
  if (video.available()) {
    video.read();
    opencv.loadImage(video);
  }
  image(video, 0, 0);

  // Running the cascade every frame is too heavy for the Pi,
  // so we only detect once every 50 frames
  if (framesSinceDetection > 50) {
    faces = opencv.detect();
    numFaces = faces.length;
    framesSinceDetection = 0;
  }
  framesSinceDetection++;

  // Draw a rectangle around every detection
  stroke(255, 0, 0);
  strokeWeight(2);
  noFill();
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }

  // Show the current count in the middle of the screen
  fill(255);
  textSize(30);
  text(numFaces, width/2, height/2);
}


We soon found out that the Pi isn't very powerful. Our application uses a video input with a resolution of 640 x 480, which is modest considering the camera can shoot Full HD. Even at 640 x 480 the program ran very slowly: the frame rate dropped to 2 to 3 frames per second. Shooting at a lower resolution helps, but then the image is hard to see on a screen, which doesn't give a good user experience.

Because of these performance issues we chose to take photos instead of shooting video, and to analyze those. Another option was to send the video to the cloud and analyze it there. That is considerably faster because of better hardware and software, but it means the video is sent over the internet, with the risk that someone intercepts it and misuses it. In our solution this is less of an issue: we don't store the photo, we delete it after it has been analyzed.
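
A minimal sketch of that photo-based flow is shown below: take a still with raspistill (the Raspberry Pi camera tool we also use later on), run the analysis on it and remove the file afterwards. The analyse() function is only a placeholder here; the real analysis with YOLO follows in the next section.

import os
import subprocess

def analyse(photo_path):
    """Placeholder for the actual analysis; the YOLO version follows below."""
    raise NotImplementedError

photo = '/tmp/snapshot.jpg'

# Take a single 1280 x 720 still with the Pi camera
subprocess.check_call(['raspistill', '-o', photo, '-w', '1280', '-h', '720', '-t', '1000'])

try:
    people = analyse(photo)
finally:
    # The photo never leaves the device and is removed once it has been analyzed
    os.remove(photo)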

YOLO

In our second version we use a library called YOLO. YOLO stands for You Only Look Once and, as the name suggests, the network analyzes the photo in a single pass. It splits the photo into regions that are all analyzed in that one pass. The result is a list of predictions, each with the detected object and a percentage that indicates how certain the network is.
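
The detections come back as plain text, one line per object. Assuming the detector prints lines such as "person: 87%" (the exact output format can differ per version, so treat this as an illustration), a small sketch like the one below turns that output into (label, confidence) pairs:

def parse_detections(output):
    """Parse 'label: confidence%' lines from the detector output."""
    detections = []
    for line in output.splitlines():
        if line.endswith('%') and ':' in line:
            label, confidence = line.rsplit(':', 1)
            detections.append((label.strip(), int(confidence.strip().rstrip('%'))))
    return detections

sample = "data/dog.jpg: Predicted in 12.3 seconds.\nperson: 87%\ndog: 99%"
print(parse_detections(sample))  # [('person', 87), ('dog', 99)]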

We use pre-trained weights to show how accurately the library recognizes objects. The library comes with two kinds of weights: a normal, full-size model and a smaller Tiny YOLO model. We chose the smaller one for performance, at the cost of some accuracy. We also use a modified version of the library that is optimized to run faster on the Pi; it can be found at https://github.com/digitalbrain79/darknet-nnpack.

(Image: YOLO predictions drawn on a sample photo)

We run the following command to start the analysis:

./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg

The output is then piped into a script. The script counts the number of persons detected by the library and saves that number to a text file.

#!/bin/bash
# Reads the detector output from stdin, counts the "person" detections
# and writes the count to a text file.
count=0

while read -r line; do
    # The detector prints detections as "label: confidence", so split on ':'
    IFS=':' read -r -a object <<< "$line"
    if [[ "${object[0]}" = "person" ]]; then
        (( count++ ))
        echo "${object[0]}"
    fi
done

echo "$count" > numberOfObjects.txt

Python

We expanded the script and rewrote it in Python, so that one script takes a photo, analyzes it and saves the number of persons counted to a text file. The most important function in the script is analysePhoto. The whole Python script is shown below:

import json
import subprocess
import time
import timeit
import urllib
import urllib2

PeopleCounter_ON = 'https://maker.ifttt.com/trigger/PeopleCounter_ON/with/key/g2VNF0mp-fFyk4RYbPRK0ZjmZjjorjaFQ2LvjkL2GFC'
PeopleCounter_OFF = 'https://maker.ifttt.com/trigger/PeopleCounter_OFF/with/key/g2VNF0mp-fFyk4RYbPRK0ZjmZjjorjaFQ2LvjkL2GFC'
PeopleCounter_FallBack = 'https://eu-wap.tplinkcloud.com/?token=f58b1ba2-B46gYJulcdt9rX1QCjdclUv'

status = 'off'  # on or off
requestLink = PeopleCounter_OFF
personCount = 0
timeElapsed = time.time()
timeout = None
threshold = None

# Reads the threshold and timeout from a file, so they can be changed without restarting the loop
def getMetaData():
    global timeout, threshold
    with open('iotapp/threshold_timeout.json', 'r') as f:
        jsonFile = json.load(f)
        threshold = jsonFile["threshold"]
        timeout = jsonFile["timeout"]
        print ('Threshold is {}'.format(threshold))
        print ("timeout is: {}".format(timeout))

def analysePhoto():
    global personCount
    res = subprocess.check_output(['raspistill', '-o', 'iotapp/data/snapshot.jpg',
                                   '-w', '1280', '-h', '720', '-t', '1000',
                                   '-p', '0,0,200,200'])
    for line in res.splitlines():
      print (line) 

    # Analyse part with YOLO Library
    res = subprocess.check_output(['./darknet', 'detector', 'test',
                                   'cfg/voc.data', 'cfg/tiny-yolo-voc.cfg',
                                   'tiny-yolo-voc.weights', 'iotapp/data/snapshot.jpg'])
    
    # Check whether a person was detected on this line; if so, increment the counter
    for line in res.splitlines():
      if 'person' in line.decode('utf-8'):
        personCount += 1
    
    timestamp = int(time.time())
    file_path = 'iotapp/data/numberOfObjects_'+str(timestamp)+'.txt'
    file_stream = open(file_path,'w')
    
    message = '{ "person" : ' + str(personCount) + ' }'
    file_stream.write(message)
    file_stream.close()
    
    res = subprocess.check_output(['cp', 'predictions.png',
                                   'iotapp/data/predictions.jpg'])
    for line in res.splitlines():
      print (line)

# sends the specific request which is needed to turn on/off the smart link plug
def sendRequest(threshold, peopleCounted, timestamp):
    global requestLink, status, timeElapsed
    htmlResponse = None
    timeToCompareWith = timestamp
    
    if(peopleCounted >= threshold and status == 'off'):
        requestLink = PeopleCounter_ON    
        status = 'on'
        
    if(peopleCounted < threshold and status == 'on'):
        requestLink = PeopleCounter_OFF
        status = 'off'
    
    print (requestLink)
        
    body = urllib.urlencode({'value1' : str(peopleCounted)})
    
    
    # Only send a request when at least `timeout` seconds have passed since the previous one
    if(timeToCompareWith - timeElapsed > timeout):
        request = urllib2.Request(requestLink, body)
        response = urllib2.urlopen(request)
        htmlResponse = response.read()
        timeElapsed = timeToCompareWith
        response.close()
    
    return htmlResponse

# Whole loop to keep the program running
while True:
    personCount = 0
    getMetaData()
    start = timeit.default_timer()
    analysePhoto()
    stop = timeit.default_timer()
    print ('People counted: {}'.format(personCount))
    print (stop - start) # Shows how long it takes to analyze the photo
    html = sendRequest(threshold, personCount, time.time())
    if html:
       print (html)
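
The getMetaData function above expects a small JSON file, iotapp/threshold_timeout.json, with the keys threshold and timeout. A sketch of how such a file could be created is shown below; the values are just examples, not the ones we actually used:

import json

# Example values only: react at 2 or more people and wait at least
# 60 seconds between requests to the smart plug.
meta = {"threshold": 2, "timeout": 60}

with open('iotapp/threshold_timeout.json', 'w') as f:
    json.dump(meta, f)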

Next to the Python script we ran a Node.js script. This script grabs the most recent file with the number of persons counted and sends it to the Oracle IoT Cloud.

var fs = require('fs');
var path = require('path');
var _ = require('underscore');

// Filter used below: keep only the .txt files written by the Python script
function extension(file) {
    return path.extname(file) === '.txt';
}

// Return only base file name without dir
function getMostRecentFileName(dir) {
    var allFiles = fs.readdirSync(dir);
    var files = allFiles.filter(extension);

    if (files.length > 0) {
        // use underscore for max()
        return _.max(files, function (f) {
            var fullpath = path.join(dir, f);

            // ctime = creation time is used
            // replace with mtime for modification time
            return fs.statSync(fullpath).ctime;
        });
    }
    return '';
}

We have created a webpage to show the output of the library as you can see in the image below.

(Image: the PeopleCounter webpage)

Final word

There are a lot of possibilities. For example, you can scan queues for their length and open or close counters accordingly. Or you can count the number of animals passing by, so foresters know how many of each kind live in that part of the forest. Another possibility is to measure how crowded the lunch room is, so colleagues know how busy it is and can choose to come later if it is too busy.

I want to thank the people who contributed to this project: Robert van Mölken, Michael van Gastel and Corien Gruppen. Without them it would not have been possible to present this project at nlOUG.

This is the end of part one. In part two I will show you how we implemented the cloud and activated the red tube. See you in part two!