IoT Connected Computer Vision

tassAI is computer vision software that can communicate autonomously with applications and devices via the Internet of Things. There are several versions of tassAI and several different projects that evolved from the concept. Each version of tassAI uses different techniques and technologies to accomplish facial recognition and other computer vision tasks.

The projects are now fully open source and have led to the creation of a number of non-facial-recognition projects, including computer vision projects for detecting breast cancer, recognizing American Sign Language, and classifying white blood cells.

At the Intel® / Microsoft / IoT Solutions World Congress Hackathon 2016 in Barcelona, a version of tassAI was presented as Project H.E.R and won the Intel® Experts Award. Since then tassAI has evolved immensely, and the network version now uses Intel® hardware and software for the local A.I. server.









A Look Back At Web Summit

We recently demoed tassAI at Web Summit in Lisbon; check out our look-back video to see what went on at the event.

View Event Page






Intel Technologies







Open Source










The History


During the ongoing development of TASS, around 10 different A.I. solutions have been used and tested.





The concept of tassAI began when Adam Milton-Barker submitted a project idea for an IoT security system to the then new Windows Developer Program for IoT. The idea was awarded an Intel® Galileo developer kit, which was used to build the first version of what was then called the LinGalileo Security System.





The first solution used OpenCV Haar cascades to detect faces and an Eigenfaces model to identify them. Users could upload their training data, which was sent to the device via MQTT for training. This solution was good as a proof of concept, but identification was not accurate enough. It has since been open sourced as an example for the IoT JumpWay Developer Program.
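The Eigenfaces technique behind that first solution projects face images onto a small set of principal components and identifies a probe by nearest neighbour in that subspace. A minimal numpy sketch of the idea, using synthetic vectors in place of real face crops (all shapes and data here are illustrative, not TASS's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "faces": 20 flattened 8x8 grayscale crops of 2 people.
faces = rng.normal(size=(20, 64))
labels = np.repeat([0, 1], 10)
faces[labels == 1] += 2.0          # give person 1 a distinct appearance

# Eigenfaces = principal components of the mean-centred training set.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:5]                # keep the top 5 components

train_proj = centred @ eigenfaces.T

def project(img):
    return eigenfaces @ (img - mean_face)

def identify(img):
    # Nearest neighbour in eigenface space.
    dists = np.linalg.norm(train_proj - project(img), axis=1)
    return labels[np.argmin(dists)]

probe = rng.normal(size=64) + 2.0  # a new image resembling person 1
print(identify(probe))
```

In OpenCV this whole pipeline is packaged as `cv2.face.EigenFaceRecognizer_create()` (in the opencv-contrib package), with Haar cascades handling the face detection step before recognition.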





At the IoT Solutions World Congress Hackathon in Barcelona, the second version of TASS won the Intel® Experts Award for building a deep learning neural network on the Intel® Joule. This solution used OpenCV to detect faces and Caffe to identify them. Although we managed to build the network on the Joule, we were unfortunately unable to complete the full functionality, but we had a great time working on the project and were honoured to win the award.




The third solution used OpenCV to detect faces and pass them through a custom-trained Inception V3 model using TensorFlow, applying transfer learning directly on the device (a Raspberry Pi). Users could upload their training data, which was sent to the device via MQTT for training. This solution was a massive improvement, and accuracy for detecting trained people was almost 100%. Unfortunately, we identified an issue which we now know to be a common one: the network would identify anyone unknown as one of the trained people. This version has been released as InceptionFlow.
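The issue described above is a general property of softmax classifiers: the output probabilities always sum to 1 over the trained classes, so an unknown face is still forced into one of them. A small numpy sketch of the failure mode, and the common confidence-threshold mitigation (the names and values here are illustrative):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

people = ["adam", "alice", "bob"]

# Logits a network might produce for a complete stranger: no class
# fits well, yet softmax still distributes all the probability mass.
stranger_logits = np.array([0.4, 0.3, 0.2])
print(dict(zip(people, softmax(stranger_logits).round(2))))

def identify(logits, threshold=0.8):
    """Reject low-confidence predictions instead of forcing a match."""
    probs = softmax(logits)
    best = int(np.argmax(probs))
    return people[best] if probs[best] >= threshold else "unknown"

print(identify(stranger_logits))            # near-uniform probs → "unknown"
print(identify(np.array([6.0, 0.1, 0.2])))  # confident match → "adam"
```

Thresholding helps but does not fully solve open-set recognition, which is why later TASS versions moved to embedding-based approaches.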


TASS officially debuted at the Intel® booth at Codemotion Amsterdam in 2017. More recently, TASS was demonstrated at Web Summit alongside A.I. E-Commerce, debuting the current version running on the latest Intel® NUC.

The next solution was developed on the foundations of OpenFace, and the A.I. server was re-homed onto an Intel® NUC. The structure of the network also changed: the program that handles facial detection and identification could now connect to multiple IP cameras. Previously, the camera devices sent their frames to the broker via MQTT; with this move, the identification process became significantly more efficient, as the camera devices now only needed to stream and no longer had to connect to the communication broker. This also meant that third-party IP cameras were now supported. In addition to managing multiple IP cameras, the hub could now process frames from a RealSense camera and classify the images.
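The hub pattern described above, one process pulling frames from several independent camera streams, can be sketched with a reader thread per camera feeding a shared queue. The stream URLs and the frame grab are placeholders (a real reader would use something like OpenCV's `VideoCapture(url).read()`); this is a sketch of the architecture, not TASS's actual implementation:

```python
import queue
import threading

# Hypothetical stream URLs; the real hub connects to RTSP/MJPEG IP cameras.
CAMERAS = {
    "door": "rtsp://192.168.1.10/stream",
    "hall": "rtsp://192.168.1.11/stream",
}

frames = queue.Queue(maxsize=100)

def read_stream(cam_id, url, n_frames=3):
    # Placeholder grab loop standing in for a real video-capture read.
    for i in range(n_frames):
        frames.put((cam_id, f"frame-{i}-from-{url}"))

def run_hub():
    threads = [
        threading.Thread(target=read_stream, args=(cam, url))
        for cam, url in CAMERAS.items()
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # The hub drains the queue; each frame would go to the classifier here.
    results = []
    while not frames.empty():
        cam_id, frame = frames.get()
        results.append(cam_id)
    return results

print(sorted(set(run_hub())))
```

Because only the hub runs the classifier, adding a camera is just adding a stream URL; the cameras themselves carry no A.I. logic, which is what made third-party IP cameras possible.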


  • For the 6th solution, we moved to a more powerful Intel® NUC. This move enabled us to take advantage of the Intel® Computer Vision SDK, which drastically improved the efficiency of the project. Previous issues with lighting, speed and accuracy were no longer a problem, and TASS now requires only one training image of a person to identify them accurately.

  • The 7th & 8th solutions were created with the intention of open sourcing the code to allow other developers to create their own IoT connected, Artificially Intelligent assistant. These versions are both Windows console applications and allow you to create your own version of TASS using a webcam or a RealSense camera. You can find full source code and tutorials for both on our GitHub.
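One-image enrolment, as in the 6th solution above, works because a face network such as OpenFace maps each face to an embedding vector, so identification reduces to a distance comparison against a single stored embedding per person. A hedged numpy sketch of that matching step (the embeddings below are random stand-ins for real network outputs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the 128-d embeddings a face network would produce
# from one enrolment photo per person.
known = {
    "adam": rng.normal(size=128),
    "alice": rng.normal(size=128),
}

def identify(embedding, threshold=1.0):
    """Match a probe embedding to the nearest enrolled person,
    or report unknown if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in known.items():
        dist = np.linalg.norm(embedding - ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else "unknown"

probe = known["adam"] + rng.normal(scale=0.05, size=128)  # same face, new photo
print(identify(probe))                 # close to the enrolled embedding → "adam"
print(identify(rng.normal(size=128)))  # stranger, far from everyone → "unknown"
```

The distance threshold also addresses the open-set problem from the earlier Inception V3 version: strangers fall far from every enrolled embedding instead of being forced onto a trained class.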






About The Developer

Adam Milton-Barker has over 5 years' experience in the Internet of Things & Artificial Intelligence/Machine Learning, and over 13 years in hybrid web applications & content management systems, mobile & desktop applications, business administration systems & social media marketing. Adam uses these skills to provide free information and services for those wanting to learn how to program.

Proud member of the Intel® Software Innovators Program, Intel® IoT Alliance, Microsoft BizSpark and Microsoft Partners.











Related Videos

Check out related project videos, including demo videos of TASS and the A.I. E-Commerce Store.

View All Videos



Global IoT DevFest
Web Summit Lisbon 2017
Codemotion Amsterdam
Collision Conf New Orleans 2017