Finally: here is a Python script that you can use, for example, in your Raspberry Pi garden sprinkler controller to watch your beautiful garden from anywhere in the world, provided you have access permissions…

In Part 1 of this Home Surveillance post I laid out the basis of a home-surveillance system: Video capturing with a camera.

A good home-surveillance system should consist of several parts:

  • You should be able to see live what is going on at home.
  • The video system should react to changes or movements and save a video/photo when something relevant is detected.
  • You should be able to access the live stream and the videos/photos of detected intrusions from anywhere, using a mobile phone or computer.

In this post I want to take this a little further. The code shown here can be used for a real home surveillance system and covers:

  • the detection of motion in a video stream.
  • the streaming of the live video to the browser, accessible to computers in the same network.
  • accessing the live video stream from external computers.

The Required Parts

For this system to work you will first need the following hardware:

  • Raspberry Pi (preferably 3).
  • A USB camera such as this one.

Regarding the software, install Flask on the Raspberry Pi:

$ sudo apt-get install python3-flask
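Before going further, it can be handy to confirm that the required Python packages are actually importable in your environment. Here is a small sketch of such a check (the helper `check_packages` is my own, not part of the post's code):

```python
import importlib.util

def check_packages(names):
    # Map each package name to True if Python can import it in this environment...
    return {name: importlib.util.find_spec(name) is not None for name in names}

# On the Raspberry Pi you would check the packages this post relies on:
#   check_packages(["flask", "cv2"])
print(check_packages(["json"]))   # stdlib example, prints {'json': True}
```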


The File Structure of the Code

The file structure of the code is shown in the diagram below. The camera class lives in camera.py (this name must match the import in the Flask script); the name of the Flask application file is my choice here, I refer to it as app.py:

./camera.py             ---> The main class library for camera activation and image/video capture...
./app.py                ---> The Flask application that streams captured images to the web browser...
./templates/index.html  ---> The demo webpage for video streaming to a browser...

The main code and class structure is in the file camera.py. This file contains the Camera class, which activates the camera, performs image processing, streams the images to a website, saves them to a file and displays them on the screen.
The file app.py contains the Flask code that serves the images to the website.
Finally the file index.html (in ‘templates’) contains the implementation of the website.

Image Capture and Motion Visualization with Python

import cv2
import time

class Camera():
	# Constructor...
	def __init__(self):
		w     = 640                     # Frame width...
		h     = 480                     # Frame height...
		fps   = 20.0                    # Frames per second...
		resolution = (w, h)             # Frame size/resolution...
		self.cap = cv2.VideoCapture(0)  # Prepare the camera...
		print("Camera warming up ...")
		time.sleep(1)                   # Give the camera a moment to warm up...
		# Prepare Capture
		self.ret, self.frame = self.cap.read()
		# Prepare output window...
		self.winName = "Motion Indicator"
		cv2.namedWindow(self.winName, cv2.WINDOW_AUTOSIZE)
		# Read three images first...
		self.prev_frame     = cv2.cvtColor(self.cap.read()[1], cv2.COLOR_RGB2GRAY)
		self.current_frame  = cv2.cvtColor(self.cap.read()[1], cv2.COLOR_RGB2GRAY)
		self.next_frame     = cv2.cvtColor(self.cap.read()[1], cv2.COLOR_RGB2GRAY)
		# Define the codec and create VideoWriter object
		self.fourcc = cv2.VideoWriter_fourcc(*'H264')   # You can also use (*'XVID')
		self.out = cv2.VideoWriter('output.avi', self.fourcc, fps, (w, h), True)

	# Frame generation for browser streaming with Flask...
	def get_frame(self):
		s, img = self.cap.read()
		if s:	# frame captured without errors...
			cv2.imwrite("stream.jpg", img)	# Save image...
		# Return the jpg bytes so Flask can stream them to the browser...
		with open("stream.jpg", 'rb') as f:
			return f.read()

	def diffImg(self, tprev, tc, tnex):
		# Generate the 'difference' from the 3 captured images...
		Im1 = cv2.absdiff(tnex, tc)
		Im2 = cv2.absdiff(tc, tprev)
		return cv2.bitwise_and(Im1, Im2)

	def captureVideo(self):
		# Read in a new frame...
		self.ret, self.frame = self.cap.read()
		# Image manipulations come here...
		# This line displays the image resulting from calculating the difference between
		# consecutive images...
		diffe = self.diffImg(self.prev_frame, self.current_frame, self.next_frame)
		cv2.imshow(self.winName, diffe)
		# Put images in the right order...
		self.prev_frame     = self.current_frame
		self.current_frame  = self.next_frame
		self.next_frame     = cv2.cvtColor(self.frame, cv2.COLOR_RGB2GRAY)

	def saveVideo(self):
		# Write the frame to the output file...
		self.out.write(self.frame)

	def __del__(self):
		# Release the camera and the video writer, close all windows...
		self.cap.release()
		self.out.release()
		cv2.destroyAllWindows()
		print("Camera disabled and all output windows closed...")

def main():
	# Create a camera instance...
	cam1 = Camera()
	while True:
		# Display the resulting frames...
		cam1.captureVideo()    # Live stream of video on screen...
		cam1.saveVideo()       # Save video to file 'output.avi'...
		if cv2.waitKey(1) & 0xFF == ord('q'):
			break

if __name__ == '__main__':

The class Camera defines the implemented features of the camera. In the ‘constructor’ __init__(), after a few constant declarations, the camera is prepared/started using the OpenCV VideoCapture() function. Then three consecutive images are captured, which are used later for motion detection. This is achieved simply by subtracting consecutive images from each other: wherever the result is not equal or close to zero, something has moved or changed. Such an area is called a blob.
The last two lines of the constructor define the codec and the container used to save the video file. The right combination of codec and file format can be tricky to find, but the one shown worked well on my Raspberry Pi 3.

The function get_frame() reads single images from the camera and saves them in a jpg image called ‘stream.jpg’. Each image overwrites the previous one. The images of this jpg file are picked up by the Flask application and streamed to the website index.html, which can be viewed in a browser.

The function diffImg() calculates the difference between two consecutive image pairs and ‘AND’s them, in an attempt to make the detection of changes between three consecutive frames more robust against false blob detections.
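The same three-frame differencing can be sketched with plain NumPy, using np.abs and np.bitwise_and as stand-ins for cv2.absdiff and cv2.bitwise_and (the tiny 4x4 "frames" below are made up for illustration: a single bright pixel moving one step to the right per frame):

```python
import numpy as np

def diff_img(prev, cur, nxt):
    # NumPy equivalents of cv2.absdiff and cv2.bitwise_and...
    d1 = np.abs(nxt.astype(np.int16) - cur.astype(np.int16)).astype(np.uint8)
    d2 = np.abs(cur.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)
    return np.bitwise_and(d1, d2)

# Three synthetic grayscale frames: a bright 'object' moves right one pixel per frame...
prev = np.zeros((4, 4), dtype=np.uint8); prev[1, 0] = 255
cur  = np.zeros((4, 4), dtype=np.uint8); cur[1, 1] = 255
nxt  = np.zeros((4, 4), dtype=np.uint8); nxt[1, 2] = 255

blob = diff_img(prev, cur, nxt)
print(blob[1, 1])   # 255: the only pixel where both differences overlap
```

Only the pixel that changed in *both* consecutive differences survives the AND, which is exactly what suppresses one-off noise in a single frame.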

The function captureVideo() reads an image from the camera and saves it in the variable next_frame. The variables current_frame and prev_frame receive the frames captured earlier (rotation). These three variables are used in the function diffImg() to calculate the blob.
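This rotation of the three frame variables is the classic rolling-buffer pattern; a collections.deque with maxlen=3 sketches the same idea (the string labels below stand in for image arrays):

```python
from collections import deque

# A rolling buffer of the last three frames (labels stand in for images)...
frames = deque(maxlen=3)
for label in ["f0", "f1", "f2", "f3", "f4"]:
    frames.append(label)    # the oldest frame drops out automatically
print(list(frames))         # ['f2', 'f3', 'f4']
```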

saveVideo() simply saves the captured video in the file ‘output.avi’, which was declared in the __init__() function.

Finally __del__() releases the camera and the video writer and closes all opened windows. Closing the camera properly is important to leave it in a defined state so that it is possible to re-connect.

The main() function creates a Camera instance and then calls captureVideo() and saveVideo() in a loop until the ‘q’ key is pressed.

Running the Video Capture and Motion Visualization Script

Please make sure you run this script in the openCV virtual environment we built in the Part 1 post, by typing:

$ source ~/.profile
$ workon cv 

You can start the script with

(cv)$ python camera.py

The image below shows the result of building the difference between consecutive frames (blobs): everything is black (= 0) except the places where motion occurred.

Streaming the Video to a Browser with Flask

The following script allows you to stream the live video to a browser. This script is based on a post from Miguel Grinberg, with some modifications to allow the live video stream from the webcam.

#!/usr/bin/env python
from flask import Flask, render_template, Response

# emulated camera
from camera import Camera

# If you are using a webcam -> no need for changes
# if you are using the Raspberry Pi camera module (requires picamera package)
# from camera_pi import Camera

app = Flask(__name__)

@app.route('/')
def index():
    """Video streaming home page."""
    return render_template('index.html')

def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + bytearray(frame) + b'\r\n')

@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True, threaded=True)

The Camera class listed above is imported at the top of the script and instantiated inside video_feed(). In the function gen(), the get_frame() method of the camera instance is called, which generates jpg frames read from the web camera.
These jpg frames are then streamed by the route function video_feed() to the website index.html in the folder ./templates, which is served by the function index().
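To see what the multipart response actually carries, here is a self-contained sketch of the generator with dummy byte strings standing in for real jpg data (no camera needed):

```python
def gen(frames):
    # Wrap each (dummy) jpg frame in the multipart boundary used by the Flask route...
    for frame in frames:
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

chunks = list(gen([b'JPG1', b'JPG2']))
print(chunks[0])   # b'--frame\r\nContent-Type: image/jpeg\r\n\r\nJPG1\r\n'
```

Each yielded chunk is one complete part of a multipart/x-mixed-replace response: the browser replaces the content of the <img> tag every time a new part arrives, which is what turns a sequence of jpg files into an apparent live video.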

The Browser Website Template

The design of the website which is displayed in the browser is specified in the file index.html.

<html>
  <head>
    <title>Video Streaming Demonstration</title>
  </head>
  <body>
    <h1>Video Streaming Demonstration</h1>
    <img src="{{ url_for('video_feed') }}">
  </body>
</html>

The <title> tag specifies the name of the tab in the browser, and the <h1> tag the headline of the displayed website.
Finally, the <img> tag establishes the connection to the video_feed() route of the Flask application.

Video Streaming to a Browser

Now we are ready to stream the camera output to a website.
For this, type the following command at the prompt (again assuming the Flask application is saved as app.py):

$ python app.py

The output should be similar to what is shown in the image below:
Type the address shown in the output (Flask's development server listens on port 5000 by default) into the Raspberry Pi browser. This should activate the camera and you should see the live stream.
If you used the above version of the Flask application, you should now also be able to see the live stream on any computer in your home-router network.
For this, find out the IP address of your Raspberry Pi by typing ifconfig. In my case the address is as shown in the screenshot below:

Open up a browser on your laptop or smartphone and type in the following address, substituting your Raspberry Pi's IP address: http://<your-pi-ip>:5000

…and you should see the live stream of your Raspberry Pi webcam.


Note: You will only be able to see the stream in one browser at a time. I am still trying to figure out how to stream in multiple browsers at the same time.

Accessing the Live Stream from the Outside World

The holy grail of live video surveillance is certainly being able to access the video stream from anywhere you are.

You can do this using e.g. port forwarding (see for example here), but I personally don’t like this solution as it opens up your router to outside attacks.

A solution which seems safer to me is to use a specialized service such as Weaved (which seems to have changed owners in the meantime).

The installation of the required software is simple and can be done following this tutorial. This service allows you to see your video stream from anywhere in the world if you have the addresses and the login credentials needed to log into weaved.

Code Repository

The entire code presented here can be downloaded from GitHub by typing the following at the Raspberry Pi command prompt:

$ git clone

or click on this link to open the GitHub page.

… and as usual, I am looking forward to hearing your comments and questions.
Please leave your feedback in the form below.