
The concept of ‘home automation’ has become somewhat fraught these days: perfectly functional appliances are compulsively replaced with Internet-connected equivalents that remain prone to hacks and to the manufacturer’s whims. Repairing damaged sensors or moving parts is a grinding task in itself.

And every time you invite a group of connected-device makers into your home, you run the risk of exposing sensitive personal data, which is sucked into the cloud for profit-seeking purposes.

Researchers at CMU’s Future Interfaces Group take a different approach to sensing the home environment. They claim to have figured out a quicker, cheaper and less intrusive way of creating a ‘smarter’ interior, with some privacy benefits as well.

Their innovation does not offer as many remote-control options as a mature IoT-enabled appliance setup, but the approach still appears commendable. The team is presenting the research at this week’s ACM CHI Conference in Denver and has uploaded a video showing its test system in action.

The system consists of a single custom plug-in sensor board stacked with multiple individual sensors, but no cameras, out of privacy concerns. It relies on machine learning algorithms to recognise different types of domestic activity, such as an appliance being switched on. It can identify cupboard or microwave doors opening and closing, tell whether a stove burner is lit, or report the flush status of your toilet.
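To make that pipeline concrete, here is a minimal, hypothetical sketch of how raw readings from such a multi-sensor board could be turned into activity labels. The feature choices, window length, synthetic signals and classifier below are assumptions for illustration, not the CMU implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def spectral_features(window, n_bins=16):
    """Summarise one window of raw samples as coarse magnitude-spectrum bins."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([chunk.mean() for chunk in np.array_split(spectrum, n_bins)])

def synthetic_window(freq, n=1024):
    """Stand-in for a real sensor window: a tone at `freq` cycles plus noise."""
    t = np.arange(n)
    return np.sin(2 * np.pi * freq * t / n) + 0.3 * rng.standard_normal(n)

# Pretend each appliance produces a characteristic vibration/sound signature.
activities = {"microwave": 60, "coffee_grinder": 180, "tap_running": 20}
X = np.array([spectral_features(synthetic_window(f))
              for label, f in activities.items() for _ in range(50)])
y = np.array([label for label, f in activities.items() for _ in range(50)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new window from the board is featurized the same way and classified.
print(clf.predict([spectral_features(synthetic_window(60))]))  # likely ['microwave']
```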

In effect, one device hosts multiple ‘synthetic sensors’ capable of tracking all kinds of in-room activity, sparing us from installing a sensor on every appliance to track its status and avoiding the problems that arise when those physical sensors get damaged.
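As a toy illustration of that ‘synthetic sensor’ idea, here is how low-level detections might be lifted into the virtual sensors an application would actually query; the sensor names and groupings are made up for the example.

```python
# Hypothetical "synthetic sensor" layer: low-level activity detections from the
# single board are mapped onto virtual per-appliance and per-room sensors.
SYNTHETIC_SENSORS = {
    "microwave_running": {"microwave"},
    "kitchen_in_use":    {"microwave", "coffee_grinder", "tap_running"},
}

def synthetic_states(detected_activities):
    """Return each virtual sensor's state given the activities detected now."""
    return {name: bool(triggers & detected_activities)
            for name, triggers in SYNTHETIC_SENSORS.items()}

print(synthetic_states({"coffee_grinder"}))
# -> {'microwave_running': False, 'kitchen_in_use': True}
```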

Google’s 2015 research proposal, of which the CMU ‘super sensor’ project forms a part, has the following goals and priorities:

The mission of this program is to enable effective use and broad adoption of the Internet of Things by making it as easy to discover and interact with connected devices as it is to find and use information on the open web. The resulting open ecosystem should facilitate usability, ensure privacy and security, and above all guarantee interoperability.

Chris Harrison, a researcher at CMU, says he cannot discuss Google’s plans for commercialising the super-sensor project, though it could plausibly be incorporated later into one of Google’s home automation products, such as the Google Home voice-driven AI speaker.

One limitation of the system is that it does not give you remote control over your appliances, since it is not itself connected to the Internet. To some extent, though, this is also a benefit, as it reduces the risk of hacking and data leaks.

Another potential problem is domestic chaos: when too many appliances are running at once, the detection system may fail to distinguish between the different activities. On this issue, Harrison concedes, “It can degrade if there are lots of noisy things going on,” although different appliances often trigger different sensing channels, which helps keep them apart.

Harrison says:

 If you are running your dishwasher, and coffee grinder and toaster and blender all at the same time, it is likely to only recognize a few of those at the same time (though it’ll recognize the high level state that the kitchen is in use)

The system does have one notable limitation: it needs to be trained. Initially, the user has to introduce it to the various appliances so that the algorithms can learn what each one looks and sounds like in operation. However, a library of known appliances could be maintained in the cloud, which would remove much of that burden.

Harrison notes:

Once the machine learning knows what a blender sounds like, it can rain that classifier down to everyone (so users don’t have to train anything themselves).
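A minimal sketch of how such a shared library might work, assuming a simple scheme in which a centrally trained classifier is serialised and then fetched by every sensor board; the file name, dummy data and API here are invented for illustration.

```python
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Train a classifier once, centrally (dummy feature vectors standing in for
# real "blender" vs "other" sensor features).
X = np.vstack([np.random.rand(50, 16) + 1.0, np.random.rand(50, 16)])
y = ["blender"] * 50 + ["other"] * 50
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Serialise the trained model so it can be published from the cloud...
joblib.dump(clf, "blender_classifier.joblib")

# ...and each sensor board simply downloads and loads it, with no local training.
shared_clf = joblib.load("blender_classifier.joblib")
print(shared_clf.predict(np.random.rand(1, 16) + 1.0))  # likely ['blender']
```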

He said the team will continue working on the project, with financial backing from Mountain View, though he could not say much about the company’s next steps.

He says:

“What we are focusing on now is moving to whole-building deployments, where a sparse sensor network (a la one board per room) can sense everything going on. We’re also using deep learning to automatically identify appliances/devices, so users never have to configure anything. Truly plug and play.”
