The goal here is to have multiple threads running in C++, each doing only one (or a few) image processing functions, and in this way to get high frame rates combined with a large image size.
Reading a frame from a camera is generally a time-consuming procedure (especially with a USB-connected webcam, as in this post). So this step should definitely get its own thread to reach the highest possible frame rate at the largest possible video size.



The code was developed on a Raspberry Pi (2) with the standard Raspbian OS. The C++ code is compiled with a Makefile such as the following:

CC = g++
ODIR = .

LIBS = `pkg-config --cflags --libs opencv`

DEPS = ./camLib.hpp
_OBJ = camLib.o camera.o
OBJ = $(patsubst %,$(ODIR)/%,$(_OBJ))

%.o: %.cpp $(DEPS)
	$(CC) -c -std=c++14 -pthread -o $@ $< $(LIBS) $(CFLAGS)

camera: $(OBJ)
	$(CC) -std=c++14 -pthread -o $@ $^ $(LIBS) $(CFLAGS)

.PHONY: clean
clean:
	rm -f *.o camera

Note: In this Makefile I am assuming a flat hierarchy - i.e. the three files I am implementing are all in the same folder.
To get the highest frame rate at the largest frame size on the now relatively "old" Raspberry Pi 2 it is really necessary to use threads, so that all available resources (i.e. the 4 cores of the Raspberry Pi processor) are used. This requires the C++11 or C++14 (or later) standard, which introduced threading support into the C++ standard library. In the Makefile this is achieved with the flag -std=c++11 or -std=c++14 (together with -pthread) in both the compile and the link rule. The Makefile also links the OpenCV libraries via the LIBS variable, which is filled by pkg-config.
I usually clean the folder from previous compilations with the call make clean and then start a new compilation and linkage with the make call.

With this Makefile it is now possible to compile the code.


Below is the header file, which defines the main camera class and its member functions. I am calling this header file camLib.hpp:


#include "opencv2/highgui/highgui.hpp"
#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

class Camera {
	public:
		Camera(void);
		~Camera(void);
		Mat captureVideo(void);
	private:
		Mat frame;
		double dWidth;
		double dHeight;
		double fps;
};


This header declares the constructor and destructor of the class Camera and one member function called captureVideo, which takes no arguments. It also declares a few private variables, such as the OpenCV matrix frame, which holds the latest captured frame.
The member functions are defined in the file camLib.cpp:

#include "./camLib.hpp"
#include <cstdlib>

using namespace std;

VideoCapture cap(0);		// The OpenCV camera object...

// The Constructor...
Camera::Camera(void) {
	// Check if opening the camera worked...
	cout << " Camera warming up..." << endl;
	int isrunning = 0;
	if (!cap.isOpened()) {	// if not success, exit program
		cout << "Cannot open the video cam" << endl;
	} else {
		isrunning = 1;
	}
	if (isrunning == 0) {
		cout << "Camera did not start up - Exiting..." << endl;
		exit(1);
	}
	// Set the camera output size and frame rate...
	cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
	cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);
	cap.set(CV_CAP_PROP_FPS, 30);
	dWidth  = cap.get(CV_CAP_PROP_FRAME_WIDTH);		// get the width of frames of the video
	dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT);	// get the height of frames of the video
	fps     = cap.get(CV_CAP_PROP_FPS);				// get frames-per-second of the video device
	// Print values out...
	cout << "Frame size : " << dWidth << " x " << dHeight << " --- fps: " << fps << endl;
	// Read in a first frame...
	cap >> frame;		// This frame is further processed for the motion detection...
	//cout << "FIRST Height = " << frame.rows << " .. Width = " << frame.cols << endl;
}

// The Destructor...
Camera::~Camera(void) {
	cout << "Shutting down camera and closing files..." << endl;
}

// The camera access function...
Mat Camera::captureVideo(void) {
	cap >> frame;		// This frame is further processed for the motion detection...
	//cout << "In VideoCapture Height = " << frame.rows << " .. Width = " << frame.cols << endl;
	return frame;
}

In this file the OpenCV VideoCapture instance cap is declared at file scope, outside the class. This allows access to cap from anywhere in the file. This has advantages and disadvantages. One advantage is simplicity. A big disadvantage is that when opening the camera fails (which can happen), the program needs to be restarted manually, possibly even multiple times, until the camera finally starts up.

Main file camera.cpp

The following, finally, is the main code camera.cpp. To get the highest possible frame rate, while applying some OpenCV functions on a Raspberry Pi 2, using threads is a good way to go.
The file camera.cpp contains three functions:

  • main()
  • grabFrame()
  • processFrame()

Some important variables are declared global in this code:

  • cam1, which is an instance of the camera class and needs to be accessible from main() and from grabFrame().
  • frameBuffer, which is the maximum number of frames that can go on an OpenCV Mat stack before the whole stack gets erased.
  • frameStack, which is the OpenCV Mat frame stack containing the frames read from the camera (up to frameBuffer frames).
  • contourStack, which is the OpenCV Mat frame stack containing the frames taken from the frameStack stack and processed using OpenCV functions.
  • stopSig, which is a flag that, when set to 1, signals all threads to stop and return to the main routine.

#include "./camLib.hpp"
#include <thread>
#include <vector>

using namespace cv;

// The camera instance and the global frame stacks...
Camera cam1;
const int frameBuffer = 50;		// Max number of frames kept on a stack...
vector<Mat> frameStack;			// Frames read from the camera...
vector<Mat> contourStack;		// Frames processed with OpenCV functions...
int stopSig = 0;				// Global stop signal...

void processFrame(void) {
	Mat frame;
	Mat gauss;
	Mat gray;
	Mat contour;
	// Check if there is data in the frame buffer...
	while(!::stopSig) {
		// If the frame stack is not empty grab a frame w/o removing it for further processing...:
		if(!::frameStack.empty()) {				// If the original video stack is not empty...
			frame = ::frameStack.front();		// --> take the first frame from the original stack w/o removing it...
			//---------------------OpenCV image manipulations---------------------
			//contour = frame;							// --> Just pass the original frame through...
			GaussianBlur(frame, gauss, Size(5,5), 0, 0);	// Smooth the image to suppress noise...
			cvtColor(gauss, gray, CV_RGB2GRAY);				// Convert the image from RGB to gray-scale...
			//threshold(gauss, contour, 50, 255, THRESH_BINARY);	// Applies a fixed-level threshold to each array element.
			//Laplacian(gray, contour, 165, 3, 1, 0, BORDER_DEFAULT);
			Canny(gray, contour, 50, 150, 3);				// Canny edge detection...
		}
		// 1. If the contour-stack has more than 2 frames remove the last frame (at the back of the stack)...
		if(::contourStack.size() > 2) {
			::contourStack.pop_back();			// Remove the last frame from the stack...
		}
		// 2. If a new processed frame is available and the stack is not yet full..:
		if(!contour.empty() && ::contourStack.size() < ::frameBuffer) {
			::contourStack.push_back(contour);	// Put the new processed frame on the stack...
		} else if(::contourStack.size() >= ::frameBuffer) {	// only in case the stack has run full...
			::contourStack.clear();				// Clear the entire stack...
		}
	}
	cout << "processFrame: esc key is pressed by user" << endl;
}

void grabFrame(void) {
	Mat frame;
	while(!::stopSig) {
		frame = ::cam1.captureVideo();		// Capture a frame from the live stream of camera...
		// 1. Remove one frame from the back, if the stack has more than 2 frames...
		if(::frameStack.size() > 2) {
			::frameStack.pop_back();		// This line removes the last frame from the stack...
		}
		// 2. Add a frame at the back of the stack if the stack is not full...
		if(::frameStack.size() < ::frameBuffer) {
			::frameStack.push_back(frame);	// Put new frame on the stack in the computer's RAM...
		} else {
			::frameStack.clear();			// This line clears the stack when it is full...
		}
	}
	cout << "grabFrame: esc key is pressed by user" << endl;
}

int main(int argc, char* argv[]) {
	Mat frame;					// Captured single frames...
	Mat contour;				// Video stream showing contours of objects...

	// Make sure both stacks start out empty...
	::frameStack.clear();
	::contourStack.clear();
	// Start the two worker threads...
	thread t1(grabFrame);
	thread t2(processFrame);
	// Endless loop to display frames; stopped by the user pressing the ESC key...
	while(1) {
		if(::contourStack.size() >= 2) {
			contour = ::contourStack.back();
			imshow("Contour Video", contour);
		}
		if(waitKey(1) == 27) {		// Wait 1 ms for the 'esc' key. If 'esc' is pressed, break the loop...
			cout << "Main: esc key is pressed by user" << endl;
			::stopSig = 1;			// Signal to threads to end their run...
			break;
		}
	}
	// Wait for both threads to join back in main...
	t1.join();
	t2.join();
	return 0;
}

In camera.cpp the two OpenCV stacks frameStack and contourStack (defined as globals at the top of the file) are first cleared at the beginning of main(). Then the two threads t1 and t2 are started.


In the picture below I try to visualize the sequence of image processing through the three threads:

  1. grabFrame() is symbolized as a conveyor belt whose only purpose is to grab frames from the camera and store them on the frameStack stack. This function also makes sure that the stack does not overflow: in case the stack is full, it is cleared and filled anew. Note: A great place to look up the functions that can be used with such a stack is the GeeksForGeeks website.
  2. The function processFrame() is symbolized as a conveyor belt as well. It reads the oldest frame from the stack frameStack and applies some OpenCV functions to the image. In the code above you can see a few different test functions I ran on the frames. The latest one that is not commented out is the Canny edge detection. To get better results, before applying the Canny function I apply a Gaussian blur and convert the image from RGB to gray-scale. The function then saves the resulting frames in a second stack, called contourStack.
  3. The third thread is the main() function. It starts the two other threads and waits for the user to press the Esc key, at which point it sets stopSig=1, which signals the two other threads to stop and to join back in main(). The other job of main() is to display the resulting images from the threads. Note: The OpenCV functions imshow() and waitKey() don't seem to work reliably when called from the worker threads. Stable results can be achieved only by using these two functions in the main() function.


The image below shows the result of the Canny edge detection running at 720p, 30 fps on the Raspberry Pi 2 using a Logitech QuickCam Pro 9000.

This is already pretty nice. However, it is also pretty much at the limit of what is possible with this hardware.
One major bottleneck is the USB 2.0 throughput of the Raspberry Pi 2 at 480 Mb/s. The camera produces HD video with up to 1280 x 720 = 921,600 pixels, each with 8 bits, at 30 frames per second. That is 1280 x 720 x 8 x 30 = 221.184 Mb/s, which is close to the maximum of what the USB port can handle.
For this reason, in the next step I am going to use the Raspberry Pi camera, which has a CSI-2 MIPI bus with 2 lanes allowing up to 1 Gb/s, and which should allow 1080p frames at higher frame rates.