Finally, here is a Python script that you can use, for example, in your Raspberry Pi garden sprinkler controller to watch your beautiful garden from anywhere in the world, provided you have the access permissions…
In Part 1 of this Home Surveillance post I laid out the basis of a home-surveillance system: Video capturing with a camera.
A good home-surveillance system should consist of several parts:
- You should be able to see live what is going on at home.
- The video system should react to changes or movement and save a video/photo when something relevant is detected.
- You should be able to access a live stream and videos/photos of detected intrusions from anywhere using a mobile phone or computer.
In this post I want to take this a little further. The code shown here can be used for a real home-surveillance system; it covers:
- the detection of motion in a video stream,
- the streaming of the live video to the browser, accessible to computers in the same network,
- accessing the live video stream from external computers.
The Required Parts
For this system to work you will first need the following hardware:
- Raspberry Pi (preferably 3).
- A USB camera such as this one.
Regarding the software on the Raspberry Pi:
- Install Python (preferably 3 or higher).
- Install OpenCV environment as shown in my post Simple Home-Surveillance System (Part 1).
- Install Flask with the following command (for more information on Flask see e.g. this post); a quick check that the installation works is sketched right after the command:
$ sudo apt-get install python3-flask
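If you want to make sure Flask is installed correctly before wiring up the camera, a minimal test application like the one below should be enough. The file name hello.py and the greeting text are just my own choices for this check; this is not part of the surveillance code:

#!/usr/bin/python
# hello.py - minimal Flask sanity check (illustrative only, not part of the surveillance code)
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    # If you can read this text in a browser at http://<IP-of-your-Pi>:5000/,
    # Flask is installed and working.
    return "Flask is up and running!"

if __name__ == '__main__':
    # Listen on all interfaces so other computers in your network can reach it...
    app.run(host='0.0.0.0', port=5000)

Run it with python hello.py and open the address shown in the terminal in a browser.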
The File Structure of the Code
The file structure of the code is shown below:
./camera.py            ---> The main class library for camera activation and image/video capture...
./web.py               ---> The Flask application that streams captured images to the web browser...
./templates/index.html ---> The demo webpage for video streaming to a browser...
The main code and class structure are in the file camera.py. This file contains the class that activates the camera, performs the image processing, provides the images for the web stream, saves them to a file, and displays them on the screen.
The file web.py contains the Flask code that serves the frames to the website.
Finally, the file index.html (in ./templates) contains the implementation of the website.
Image Capture and Motion Visualization with Python
#!/usr/bin/python
import cv2
import time


class Camera():

    # Constructor...
    def __init__(self):
        w = 640              # Frame width...
        h = 480              # Frame height...
        fps = 20.0           # Frames per second...
        resolution = (w, h)  # Frame size/resolution...

        self.cap = cv2.VideoCapture(0)  # Prepare the camera...
        print("Camera warming up ...")
        time.sleep(1)

        # Prepare Capture
        self.ret, self.frame = self.cap.read()

        # Prepare output window...
        self.winName = "Motion Indicator"
        cv2.namedWindow(self.winName, cv2.WINDOW_AUTOSIZE)

        # Read three images first...
        self.prev_frame = cv2.cvtColor(self.cap.read()[1], cv2.COLOR_RGB2GRAY)
        self.current_frame = cv2.cvtColor(self.cap.read()[1], cv2.COLOR_RGB2GRAY)
        self.next_frame = cv2.cvtColor(self.cap.read()[1], cv2.COLOR_RGB2GRAY)

        # Define the codec and create VideoWriter object
        self.fourcc = cv2.VideoWriter_fourcc(*'H264')  # You also can use (*'XVID')
        self.out = cv2.VideoWriter('output.avi', self.fourcc, fps, (w, h), True)

    # Frame generation for browser streaming with Flask...
    def get_frame(self):
        s, img = self.cap.read()
        if s:  # frame captured without errors...
            cv2.imwrite("stream.jpg", img)  # Save image...
        # Read the jpg back in binary mode and return the raw bytes...
        with open("stream.jpg", 'rb') as f:
            return f.read()

    def diffImg(self, tprev, tc, tnex):
        # Generate the 'difference' from the 3 captured images...
        Im1 = cv2.absdiff(tnex, tc)
        Im2 = cv2.absdiff(tc, tprev)
        return cv2.bitwise_and(Im1, Im2)

    def captureVideo(self):
        # Read in a new frame...
        self.ret, self.frame = self.cap.read()
        # Image manipulations come here...
        # This line displays the image resulting from calculating the difference between
        # consecutive images...
        diffe = self.diffImg(self.prev_frame, self.current_frame, self.next_frame)
        cv2.imshow(self.winName, diffe)
        # Put images in the right order...
        self.prev_frame = self.current_frame
        self.current_frame = self.next_frame
        self.next_frame = cv2.cvtColor(self.frame, cv2.COLOR_RGB2GRAY)
        return()

    def saveVideo(self):
        # Write the frame...
        self.out.write(self.frame)
        return()

    def __del__(self):
        self.cap.release()
        cv2.destroyAllWindows()
        self.out.release()
        print("Camera disabled and all output windows closed...")
        return()


def main():
    # Create a camera instance...
    cam1 = Camera()

    while(True):
        # Display the resulting frames...
        cam1.captureVideo()  # Live stream of video on screen...
        cam1.saveVideo()     # Save video to file 'output.avi'...
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    return()


if __name__ == '__main__':
    main()
The class Camera() defines the implemented features of the camera. In the constructor __init__(), after a few constant declarations, the camera is prepared/started using the OpenCV VideoCapture() function. Then three consecutive frames are captured, converted to grayscale, and stored; these are used later for motion detection. The detection works simply by subtracting consecutive images from each other: wherever the result is not zero, or close to it, something has moved or changed. Such an area is called a blob.
The last two lines of __init__() define the codec and the format used to save the video file. The right combination of codec and file format can be tricky to find, but the one shown worked well on my Raspberry Pi 3.
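If that combination does not work with your OpenCV build, one option I would try is to check whether the writer actually opened and fall back to a different codec. This is only a sketch of the idea, not code from the repository:

# Sketch: try the H264 codec first and fall back to XVID if the writer
# could not be opened (codec availability depends on your OpenCV/FFmpeg build).
import cv2

w, h, fps = 640, 480, 20.0

fourcc = cv2.VideoWriter_fourcc(*'H264')
out = cv2.VideoWriter('output.avi', fourcc, fps, (w, h), True)

if not out.isOpened():
    # H264 is not available in this build; try the XVID codec instead...
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter('output.avi', fourcc, fps, (w, h), True)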
The function get_frame() reads a single image from the camera and saves it as a jpg file called 'stream.jpg'; each new image overwrites the previous one. The bytes of this jpg file are picked up by the script web.py and streamed to the website index.html, which can be viewed in a browser.
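As a side note: if writing and re-reading stream.jpg ever causes problems on your setup (a few readers report decode errors in the comments below), an alternative I would consider is encoding the frame to JPEG in memory with cv2.imencode() and skipping the file altogether. The helper name grab_jpeg_frame() is my own; this is a sketch, not the version used in the repository:

# Sketch: return the current camera frame as raw JPEG bytes, encoded in memory
# with cv2.imencode() instead of writing/reading stream.jpg on disk.
import cv2

def grab_jpeg_frame(cap):
    s, img = cap.read()
    if not s:
        return b''                        # no frame captured...
    ok, jpeg = cv2.imencode('.jpg', img)  # encode the frame as JPEG in memory...
    return jpeg.tobytes() if ok else b''

# Inside the Camera class, get_frame() could then simply be:
#     return grab_jpeg_frame(self.cap)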
The function diffImg() calculates the difference between two consecutive image pairs and combines the two results with a bitwise AND, in an attempt to make the detection of changes across three consecutive frames more robust against false blob detections.
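So far the blob image is only visualized. If you want to turn it into an actual motion trigger, one simple idea is to threshold the difference image and count how many pixels changed. The function name and the threshold values below (25 and 500) are my own guesses and would need tuning for your camera and scene; this is a sketch, not part of the class above:

# Sketch: decide whether motion occurred, based on the difference (blob) image.
import cv2

def motion_detected(diff_image, pixel_threshold=25, min_changed_pixels=500):
    # Keep only the pixels that changed clearly between frames...
    _, mask = cv2.threshold(diff_image, pixel_threshold, 255, cv2.THRESH_BINARY)
    # Count the changed pixels and compare against a minimum blob size...
    return cv2.countNonZero(mask) > min_changed_pixels

# Usage idea inside captureVideo():
#     if motion_detected(diffe):
#         self.saveVideo()   # or save a snapshot, send a notification, etc.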
The function captureVideo() reads a new image from the camera and stores its grayscale version in the variable next_frame, while current_frame and prev_frame are shifted along to hold the frames captured earlier. These three variables are passed to diffImg() to calculate the blob image, which is then displayed in the output window.
saveVideo() simply appends the current frame to the video file 'output.avi', which was set up in the __init__() function.
Finally, __del__() closes the camera connection and also closes all opened windows. Closing the camera properly is important to leave it in a defined state so that it is possible to re-connect.
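Since __del__() is only called when Python garbage-collects the object, an explicit cleanup method that you call yourself can be a more predictable option. The method name close() is my own suggestion and is not part of the code above; this is just a sketch:

# Sketch: explicit cleanup method that could be added to the Camera class.
def close(self):
    if self.cap.isOpened():
        self.cap.release()      # free the camera device...
    self.out.release()          # finalize the video file 'output.avi'...
    cv2.destroyAllWindows()     # close any open OpenCV windows...

# In main() you could then call cam1.close() right after leaving the while loop.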
The main() function ties everything together: it creates a Camera instance and repeatedly calls captureVideo() and saveVideo() until 'q' is pressed.
Running the Video Capture and Motion Visualization Script
Please make sure you run this script in the OpenCV virtual environment we built in the Part 1 post, by typing:
$ source ~/.profile
$ workon cv
You can start the script with
(cv)$ python camera.py
The image below shows the result of taking the difference between consecutive frames (blobs). Everything is black (= 0) except the places where motion occurred.
Streaming the Video to a Browser with Flask
The following script allows you to stream the live video to a browser. This script is based on a post from Miguel Grinberg, with some modifications to allow the live video stream from the webcam.
#!/usr/bin/env python
from flask import Flask, render_template, Response

# Camera class (here: the webcam-based Camera from camera.py)
from camera import Camera
# If you are using a webcam -> no need for changes
# If you are using the Raspberry Pi camera module (requires picamera package):
# from camera_pi import Camera

app = Flask(__name__)


@app.route('/')
def index():
    """Video streaming home page."""
    return render_template('index.html')


def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + bytearray(frame) + b'\r\n')


@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')


if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True, threaded=True)
The Camera() class listed above is imported from camera.py at the top of the script and instantiated in the route video_feed(). In the generator function gen(), the get_frame() method of the camera instance is called, which returns the jpg frames read from the web camera.
These jpg frames are then streamed by the function video_feed() as a multipart response to the website index.html in the folder ./templates, which is rendered by the function index().
The Browser Website Template
The design of the website which is displayed in the browser is specified in the file index.html.
<html>
  <head>
    <title>Video Streaming Demonstration</title>
  </head>
  <body>
    <h1>Video Streaming Demonstration</h1>
    <img src="{{ url_for('video_feed') }}">
  </body>
</html>
The <title> tag specifies the name of the tab in the browser, and the <h1> tag specifies the headline of the displayed website.
Finally, the <img> tag establishes the connection to the video_feed() route in web.py.
Video Streaming to a Browser
Now we are ready to stream the camera output to a website.
To do this, type the following command at the prompt:
$ python web.py
The output should be similar to what is shown in the image below:
Type the address shown in the output (here http://0.0.0.0:5000) into the Raspberry Pi browser. This should activate the camera, and you should see the live stream.
If you used the above version of web.py, you should now also be able to see the live stream on any computer in your home network.
To do this, find out the IP address of your Raspberry Pi by typing ifconfig. In my case the address is 192.168.1.110, as shown in the screenshot below:
Open a browser on your laptop or smartphone and type in the following address (assuming your IP address is 192.168.1.110):
http://192.168.1.110:5000
…and you should see the live stream of your Raspberry Pi webcam.
Note: You will only be able to see the stream in one browser at a time. I am still trying to figure out how to stream to multiple browsers at the same time.
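This is probably related to the fact that video_feed() creates a brand-new Camera() instance for every request, so every additional browser tab tries to open the webcam again. One idea, which I have not fully tested, is to share a single Camera instance across all requests, roughly like this (the helper name get_camera() is my own):

# Sketch: reuse one Camera instance for all requests instead of creating a new
# one in every call to video_feed(). Untested idea; concurrent readers may
# still interfere with each other.
from camera import Camera

camera = None  # module-level, shared instance

def get_camera():
    global camera
    if camera is None:
        camera = Camera()   # open the webcam only once...
    return camera

# The route in web.py would then become:
#
# @app.route('/video_feed')
# def video_feed():
#     return Response(gen(get_camera()),
#                     mimetype='multipart/x-mixed-replace; boundary=frame')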
Accessing the Live Stream from the Outside World
The holy grail of live video surveillance is certainly being able to access the video stream from anywhere you are.
You can do this using e.g. port forwarding (see for example here), but I personally don’t like this solution as it opens up your router to outside attacks.
A solution which seems safer to me is to use a specialized service such as weaved (which seems to have changed its owner to the company remote3.it).
The installation of the required software is simple and can be done by following this tutorial. This service allows you to see your video stream from anywhere in the world, provided you have the addresses and the login credentials needed to log into weaved.
Code Repository
The entire code presented here can be downloaded from GitHub by typing the following at the Raspberry Pi command prompt:
$ git clone https://github.com/Arri/Py_WebSurveillance.git
or click on this link to open the GitHub page.
… and as usual – I am looking forward to hearing your comments and questions.
Please leave your feedback in the form below.
April 12, 2017 at 3:41 am
Very interesting post.
I was looking for something like this to play with my RPi.
Unfortunately I can't save a readable video. All I can obtain is a small (5 or 6 kB) "output.avi" file, not playable in VLC or any other video player on my Linux machine.
Maurizio
April 18, 2017 at 2:24 am
Hi Arri, you are right, it was an incorrect frame width setting.
Before switching to the Raspberry I tested your code on my notebook with a built-in webcam, but I didn't know that it was acquiring at a 848×480 resolution, so there was an inconsistency between the acquired frame resolution and the resolution of the generated frame (640×480). After that, all went off without a hitch.
May 23, 2017 at 10:25 pm
I was looking for something like this to play with my RPi.
Unfortunately I can't stream the video to a browser with Flask.
pi@raspberrypi:~ $ source ~/.profile
pi@raspberrypi:~ $ workon cv
(cv) pi@raspberrypi:~ $ python web.py
* Running on http://172.24.53.137:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 257-579-858
172.24.53.135 – – [24/May/2017 12:44:22] “GET / HTTP/1.1” 200 –
Camera warming up …
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
File “/home/pi/.virtualenvs/cv/lib/python3.4/site-packages/werkzeug/wsgi.py”, line 704, in __next__
return self._next()
File “/home/pi/.virtualenvs/cv/lib/python3.4/site-packages/werkzeug/wrappers.py”, line 81, in _iter_encoded
for item in iterable:
File “/home/pi/web.py”, line 23, in gen
frame = camera.get_frame()
File “/home/pi/camera.py”, line 39, in get_frame
return self.frames.read()
File “/usr/lib/python3.4/codecs.py”, line 313, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: ‘utf-8’ codec can’t decode byte 0xff in position 0: invalid start byte
172.24.53.135 – – [24/May/2017 12:44:24] “GET /video_feed HTTP/1.1” 200 –
September 30, 2017 at 6:48 am
I got the same error… did you manage to solve it?
October 1, 2017 at 11:05 am
I am getting that same error. I had it working, then put a new image on my SD card and reinstalled everything, and now I get that error. Did you ever figure out how to fix it?
August 18, 2017 at 2:25 am
When I run web.py, I get TemplateNotFound: index.html.
Which IP should I use to see the streaming video? Please help me out here…
August 21, 2018 at 2:08 pm
You need to put index.html in templates/ directory
August 21, 2018 at 3:11 pm
Hi Steve,
Please see under “The File Structure of the Code”, regarding the location of the files.
September 19, 2018 at 4:24 am
much useful article, thanks a million mate <3
November 16, 2018 at 5:59 am
Hi!
Good project!
But it does not work on macOS at all; I get this problem:
WARNING: nextEventMatchingMask should only be called from the Main Thread! This will throw an exception in the future.
Also, I don't understand why you write return Response(gen(Camera()), …) in the video_feed function, with a Camera() object initialized every time. Can we init it only once at start?
April 25, 2019 at 12:42 am
Hi, I am getting the following error.
Please help me out as soon as possible.
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
172.17.234.1 – – [25/Apr/2019 13:08:02] “GET / HTTP/1.1” 200 –
Camera warming up …
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
File “C:\Users\Mir\Anaconda3\lib\site-packages\werkzeug\wsgi.py”, line 870, in __next__
return self._next()
File “C:\Users\Mir\Anaconda3\lib\site-packages\werkzeug\wrappers.py”, line 82, in _iter_encoded
for item in iterable:
File “C:\Users\Mir\Desktop\TAsk\Py_WebSurveillance-master\web.py”, line 23, in gen
frame = camera.get_frame()
File “C:\Users\Mir\Desktop\TAsk\Py_WebSurveillance-master\camera.py”, line 43, in get_frame
return self.frames.read()
File “C:\Users\Mir\Anaconda3\lib\encodings\cp1252.py”, line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: ‘charmap’ codec can’t decode byte 0x81 in position 251: character maps to
172.17.234.1 – – [25/Apr/2019 13:08:04] “GET /video_feed HTTP/1.1” 200 –
April 5, 2020 at 1:38 am
Wonderful, helped me a lot, executed fine.