- Made for Intel® Core™ i7 Devices.
- Built using the Intel® Computer Vision SDK Beta.
TASS is a state-of-the-art, IoT-connected computer vision server and API for advanced home and business security and automation. The hub can connect to multiple IP cameras and RealSense cameras, and uses the Intel® Computer Vision SDK Beta to bring industry-standard computer vision to the project.
TASS debuted at the Intel booth at CodeMotion in Amsterdam in 2017. More recently, TASS was demonstrated at Web Summit alongside A.I. E-Commerce, debuting the current version running on the latest Intel NUC.
During the ongoing development of TASS, 8 A.I. solutions have been built and tested before settling on the current one.
- The first solution used OpenCV Haar cascades for face detection and an Eigenfaces model for identification; users could upload their training data, which was sent to the device via MQTT for training. This solution worked well as a proof of concept, but identification was not accurate enough. It has since been open-sourced as an example for the IoT JumpWay Developer Program.
- The second solution was developed at the IoT Solutions World Congress Hackathon in Barcelona, where it won our team the Intel Experts Award for building a deep learning neural network on the Intel Joule. This solution used OpenCV to detect faces and Caffe to identify them. Although we managed to build the network on the Joule, we were unable to complete the full functionality, but we had a great time working on the project and were honoured to win the award.
- The third solution used OpenCV to detect faces and passed them through a custom-trained Inception V3 model built with TensorFlow. We added the ability to carry out transfer learning directly on the device (a Raspberry Pi); users could upload their training data, which was sent to the device via MQTT for training. This solution was a massive improvement, and accuracy for trained people was almost 100%. Unfortunately, I identified what I now know to be a common issue: the network would identify anyone unknown as one of the trained people. I am currently writing a Python wrapper for the TensorFlow/Inception/IoT JumpWay method, and the project will soon be released as an IoT JumpWay example.
- For the 4th solution, a system was developed on the foundations of OpenFace. We moved the A.I. to a local Ubuntu server rather than doing identification on board, as on-board identification on a Raspberry Pi performed poorly. This change means training is only required on the server rather than on every device. As with the TensorFlow implementation, we came across the issue of unknown people being identified as known people. We have so far resolved this through the use of an unknown class; although that solution may not work across the board, we are working with the OpenFace GitHub community on additional solutions that incorporate multiple models to verify each identification.
- For the 5th solution, the A.I. server was re-homed onto an Intel NUC, and the structure of the network changed. The program that handles facial recognition and identification now connects directly to multiple IP cameras; previously, camera devices sent their frames to the broker via MQTT. With this move the identification process became significantly more efficient, camera devices only need to stream (they no longer connect to the communication broker), and third-party devices are now supported. In addition to managing multiple IP cameras, the hub can now process and classify frames from a RealSense camera. This version has been open-sourced; click here to view the tutorial.
- For the 6th solution, we moved to a more powerful Intel NUC. This enabled us to take advantage of the Intel Computer Vision SDK, which drastically improved the efficiency of the project. Previous issues with lighting, speed and accuracy were no longer a problem, and TASS now only requires one training image of a person to identify them accurately.
- The 7th & 8th solutions were created with the intention of open-sourcing the code to allow other developers to create their own IoT-connected, artificially intelligent assistant. These versions are both Windows console applications and allow you to create your own version of TASS using a webcam or a RealSense camera. You can find the full source code and both tutorials on our GitHub; click here to view the tutorials.
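The upload-and-train flow used in the early solutions (training images sent to the device over MQTT) can be sketched roughly as below. The topic name and payload fields here are hypothetical illustrations, not the actual IoT JumpWay API:

```python
import base64
import json

# Hypothetical topic layout; the real IoT JumpWay topics may differ.
TRAINING_TOPIC = "device/{device_id}/training"

def pack_training_image(user_id, image_bytes):
    """Pack a labelled training image into a JSON-safe MQTT payload.

    Raw image bytes are base64-encoded so the payload survives
    JSON serialisation and text-mode transports.
    """
    return json.dumps({
        "user": user_id,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    })

def unpack_training_image(payload):
    """Reverse of pack_training_image: recover the label and raw bytes."""
    message = json.loads(payload)
    return message["user"], base64.b64decode(message["image"])

# Round trip: the device would subscribe to the training topic and feed
# the decoded images into the trainer (Eigenfaces or Inception V3).
payload = pack_training_image("user-1", b"\x89PNG...")
user, image = unpack_training_image(payload)
```

An MQTT client such as Eclipse Paho would publish `payload` to the device's training topic; only the packing logic is modelled here.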
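One common way to implement the unknown-class safeguard mentioned for the 3rd and 4th solutions is a confidence threshold on the classifier's output. This is a generic sketch of the idea, not the exact OpenFace logic; the threshold value is an assumption:

```python
def identify(probabilities, threshold=0.75):
    """Return the best-matching known identity, or "unknown".

    probabilities maps identity -> classifier confidence. If no known
    identity clears the threshold, the face is treated as unknown
    rather than being forced onto the nearest trained person.
    """
    best_id = max(probabilities, key=probabilities.get)
    if probabilities[best_id] < threshold:
        return "unknown"
    return best_id

# A confident match is accepted; a weak one falls back to "unknown".
match = identify({"alice": 0.92, "bob": 0.05})
fallback = identify({"alice": 0.40, "bob": 0.35})
```

A fixed threshold alone may still misfire on lookalike faces, which is why the text above also mentions verifying identifications with multiple models.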
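The 5th solution's move to pulling frames directly from multiple IP cameras, rather than having each device publish frames over MQTT, can be sketched as a camera registry that the recognition loop polls. The stream URLs below are placeholders:

```python
class CameraHub:
    """Minimal sketch of a hub that owns the connections to its cameras.

    Cameras only need to expose a stream; they never touch the MQTT
    broker, so third-party IP cameras can be added by URL alone.
    """

    def __init__(self):
        self.cameras = {}

    def register(self, camera_id, stream_url):
        self.cameras[camera_id] = stream_url

    def sources(self):
        """Return (camera_id, stream_url) pairs for the recognition loop.

        With OpenCV, each URL would be opened via cv2.VideoCapture(url);
        here only the registry itself is modelled.
        """
        return list(self.cameras.items())

hub = CameraHub()
hub.register("front-door", "rtsp://192.168.1.10/stream")  # placeholder URL
hub.register("office", "rtsp://192.168.1.11/stream")      # placeholder URL
feeds = hub.sources()
```

Centralising the connections this way is what lets training and identification live on the server while the cameras stay passive.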
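Identifying a person from a single training image, as the 6th solution does, typically relies on comparing face embeddings rather than training a per-person classifier. The text does not detail the mechanism, so this is purely an illustrative sketch of nearest-embedding matching with made-up vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_identity(known, probe, threshold=0.8):
    """Match a probe embedding against one stored embedding per person.

    One stored vector per identity stands in for the single training
    image; matches below the threshold are rejected as unknown.
    """
    best_id, best_score = None, -1.0
    for identity, embedding in known.items():
        score = cosine_similarity(embedding, probe)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else "unknown"

# Toy embeddings; a real system would get these from a face network.
known = {"alice": [1.0, 0.0, 0.2], "bob": [0.0, 1.0, 0.1]}
result = nearest_identity(known, [0.9, 0.1, 0.2])
```

Because matching is by distance rather than by a trained classifier, adding a new person only requires storing one embedding, which is consistent with the one-image claim above.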
TechBubble Technologies has over 4 years of experience in the Internet of Things and Artificial Intelligence/Machine Learning, and over 13 years in hybrid web applications, content management systems, mobile and desktop applications, and business administration systems. Our team has a vast range of experience in disruptive and non-disruptive technologies. We use these skills to create innovative systems that allow businesses to automate their world.
Proud members of the Intel® Software Innovators Program, the Intel® IoT Alliance, Microsoft BizSpark and Microsoft Partners.