
Juuso Haavisto

Research Assistant, Entrepreneur

vacuum cleaners and me

sucked into researching vacuum cleaners

[this post, according to the author's self-reflection, contains traits of surrealism. the train of thought might not lead anywhere, nor is the author sure whether the train even exists]

consider you own a smart house with an intelligent central system, which is voice controlled. consider this voice-based assistant as omnipresent. how do you wish to interface with this computer? what i am asking is: do you wish to live in the house, or do you wish to live with the house? is there a point in bonding with a computer system?

movies like Her and Blade Runner 2049 are recent scifi examples in which the protagonist has an intimate relationship with a computer system, and in which (spoilers): the relationship ends, possibly reminding us of the insufficiency and threats of such systems, or maybe of the general line of thinking that actual relationships and belonging to human communities are irreplaceable

i personally vouch for living with the house. while i do not own a cat or a dog, i find it an interesting exercise to consider a vacuum cleaner acting as one. that is, have the vacuum cleaner greet me when i come home, spinning there in excitement, and talking to me like dogs do [the dogs in the movie Up]

now bear with me on this star wars sidetrack, but i find it interesting that star wars continuously portrays droids as either (1) beeping ones like r2d2 or (2) talking ones like c3po, but i cannot recall a case where a droid does both

if we go beyond star wars canon we can find hk-47, whose personality is well captured in the following youtube video:

considering vacuum cleaners, hk-47 first and foremost reminds us not to stick a knife into one, but also of something which has been overlooked in current voice assistants, which is opening the voice feedback with a frank assessment of what the computer concluded and how it came to conclude whatever it is trying to communicate. thus, despite the witty personality, the human interacting with the droid is continuously reminded -- by the blunt opening lines and by the droid itself -- of its limits and its utmost purpose of serving the meatbag

another way, to distance the relationship with the computer i guess, is to simply have it communicate what seems like gibberish, as per star wars. however, it's worth pointing out that even in star wars several people seem to understand the boops and beeps without a translator. i do not really care, nor do i think there is a deep meaning to why some droids boop and some speak, but i certainly consider the translator part an interesting scifi aspect.

for the purpose of adding colors to my personal artistic palette, i recently bought a synthesizer. i mean, i've already played a keyboard for my whole life, albeit one which starts with QWERTY. anyhow, the synth can produce sounds, some of which i think resemble droids in star wars. as the synth has 24 keys, it is almost equal to the number of letters in the alphabet (or exactly equal, if greek). and so, the logical thing to do here is to map different synth keys to different letters, to produce a beeping voice for our supposed vacuum cleaner. you can try out the demo below.
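the actual demo is an interactive page, but a minimal sketch of the idea might look like the following: read text, map each letter to a note in equal temperament, and render a short sine tone for it. the base note, tone length, and raw-PCM output are my own arbitrary choices here, not the values the demo uses.

```c
/* a minimal sketch of the letter-to-beep mapping, not the actual demo:
 * reads text from stdin, maps each letter a-z to one "synth key", and writes
 * raw 16-bit mono PCM to stdout. try for example:
 *   echo "hello meatbag" | ./beeper | aplay -f S16_LE -r 44100 -c 1
 * compile with: gcc -o beeper beeper.c -lm */
#include <ctype.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define RATE 44100            /* samples per second */
#define TONE_SECONDS 0.15     /* length of one beep, an arbitrary choice */

static void tone(double freq, double seconds) {
    const double pi = acos(-1.0);
    long samples = (long)(seconds * RATE);
    for (long i = 0; i < samples; i++) {
        double amp = (freq > 0.0) ? sin(2.0 * pi * freq * i / RATE) : 0.0;
        int16_t s = (int16_t)(amp * 20000);
        fwrite(&s, sizeof s, 1, stdout);
    }
}

int main(void) {
    int c;
    while ((c = getchar()) != EOF) {
        if (isalpha(c)) {
            int key = tolower(c) - 'a';                  /* 0..25, one key per letter */
            double freq = 220.0 * pow(2.0, key / 12.0);  /* equal temperament up from A3 */
            tone(freq, TONE_SECONDS);
        } else {
            tone(0.0, TONE_SECONDS);                     /* anything else becomes a rest */
        }
    }
    return 0;
}
```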

i like this demo, as it tends to be the first thing from which people validate that they are talking to an insane vacuum cleaner person. anyhow, the idea is that with time, it could be possible to map different synth sounds back to their corresponding letters, thus generating a proof of concept of the star wars droid-to-human translation utopia. that is, either the person carries an active microphone, or the room has one, and the translated text is then emitted to earbuds or whatever. carrying such a device is a bit impractical unless it can be embedded into a smartphone, thus what you would need is likely an additional translation device or something like postmarketOS which lets you hack the phone. i tried to make the phone application work, but i managed to brick my nokia n900.


monotone beeps aside, what counts as a smart home anyway? i think it's essentially an environment where computer devices act together as a coherent system. that is, the vacuum cleaner should be able to control devices within the house. unfortunately, the google homes and apple homekits do not really give a good experience unless you are exclusively using approved hardware, which the vacuum cleaner would likely not be. luckily, projects like home assistant exist.

to ease our work, we can now limit the brands of vacuum cleaners to ones which work with home assistant (a definitely larger set than what Google and Apple support). after some googling, we find that the xiaomi one is interesting thanks to the prior art of dgiese.

thanks to him, the xiaomi robot can be rooted. this is exactly what we want. it also enables further philosophical questioning, that is, rabbit hole digging.
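as a side note, wiring the vacuum into home assistant was, at the time, a matter of a small yaml snippet for the xiaomi_miio integration. the sketch below assumes you have already extracted the device token (dgiese's tooling is one way to get it); the host, token, and name are placeholders.

```yaml
# a sketch of the yaml-era home assistant configuration; all values are placeholders
vacuum:
  - platform: xiaomi_miio
    host: 192.168.1.42
    token: YOUR_32_CHARACTER_DEVICE_TOKEN
    name: hk47
```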

effectively, we can now apply the previous beep project to our vacuum cleaner. we can emit messages over the network and have the vacuum speak to us like it would in star wars.
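a minimal sketch of what that could look like on the rooted vacuum: a small listener that takes whatever text arrives over UDP and pipes it into the beeper sketched earlier. the port and the complete lack of authentication are assumptions made for brevity, not how the actual setup works.

```c
/* a sketch of a UDP listener for the vacuum: prints every received datagram
 * to stdout, so it can be piped into the beeper, e.g. on the robot:
 *   ./listen 7777 | ./beeper | aplay -f S16_LE -r 44100 -c 1
 * and from any machine in the same network:
 *   echo "i have arrived, meatbag" | nc -u <vacuum-ip> 7777 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

int main(int argc, char **argv) {
    int port = (argc > 1) ? atoi(argv[1]) : 7777;
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }

    char buf[512];
    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof buf - 1, 0, NULL, NULL);
        if (n <= 0) continue;
        buf[n] = '\0';
        fputs(buf, stdout);   /* each datagram is one sentence to be beeped */
        fputc('\n', stdout);
        fflush(stdout);       /* keep the pipe to the beeper flowing */
    }
}
```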

however, the robot cannot really recognize human beings, which is essential for an interesting interaction. and so, the logical thing here is to hack the vacuum cleaner's sensors to do some work for us.

to get a little bit technical, the vacuum cleaner is running a Player robot server. this is good, because the project is open-source. we find the trunk of the project, from 2015, and use ubuntu 16.04 to compile the project from source to enable the C API for fetching LIDAR sensor readings. LIDARs are the same devices which guide many autonomous cars, which among many other things have to recognize humans, or at least objects blocking their way. and so, if someone has gotten a car to recognize obstacles, and possibly humans, it's theoretically possible for me as well. what follows is that should the vacuum cleaner recognize humans from its surroundings, it can interact better through the previously mentioned wiggles, or simply follow people around, making it a more relatable object. the idea is that if done properly, maybe even people other than me can have feelings for a vacuum cleaner.
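for the curious, reading the laser through the Player C client library looks roughly like the sketch below. the host, port, and device index are the library defaults, not necessarily what the vacuum's player configuration actually exposes, and the header path can differ between player versions.

```c
/* a rough sketch of pulling one LIDAR revolution through libplayerc.
 * compile with something like: gcc lidar.c $(pkg-config --cflags --libs playerc) */
#include <stdio.h>
#include <libplayerc/playerc.h>

int main(void) {
    /* connect to the player server; localhost:6665 is the default */
    playerc_client_t *client = playerc_client_create(NULL, "localhost", 6665);
    if (playerc_client_connect(client) != 0) {
        fprintf(stderr, "could not connect to the player server\n");
        return 1;
    }

    /* subscribe to laser device 0 */
    playerc_laser_t *laser = playerc_laser_create(client, 0);
    if (playerc_laser_subscribe(laser, PLAYER_OPEN_MODE) != 0) {
        fprintf(stderr, "could not subscribe to the laser\n");
        return 1;
    }

    /* one read gives us the latest scan: range (m), bearing (rad), intensity */
    playerc_client_read(client);
    for (int i = 0; i < laser->scan_count; i++) {
        printf("bearing %7.3f rad  range %6.3f m  intensity %d\n",
               laser->scan[i][1], laser->scan[i][0], laser->intensity[i]);
    }

    playerc_laser_unsubscribe(laser);
    playerc_laser_destroy(laser);
    playerc_client_disconnect(client);
    playerc_client_destroy(client);
    return 0;
}
```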

considering the technicalities, it is luxurious to operate in an environment which is known to us beforehand, which essentially enables a rather simple approach to object and basic human detection within the four walls we consider home.


so the lidar outputs the following readings: the angle of a "beam", the distance a "beam" travels, and also some sort of "intensity" of the object, supposedly determined by its color. the lidar on the vacuum gives us readings at the precision of a single degree, thus we have 360 points being output multiple times per second. a beam travels a maximum of circa 7 meters, which should be enough to hit each wall in small apartments.
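to work with these readings it helps to move them from polar to cartesian coordinates, relative to the vacuum. a small sketch, assuming angles in whole degrees with 0 pointing straight ahead and distances in meters (the raw wire format may well differ):

```c
/* one (angle, distance) reading turned into an (x, y) point around the vacuum.
 * compile with: gcc beam.c -lm */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } point_t;

static point_t beam_to_point(double angle_deg, double distance_m) {
    double rad = angle_deg * acos(-1.0) / 180.0;
    point_t p = { distance_m * cos(rad), distance_m * sin(rad) };
    return p;
}

int main(void) {
    /* a beam at 90 degrees hitting something 2.5 meters away */
    point_t p = beam_to_point(90.0, 2.5);
    printf("x = %.3f m, y = %.3f m\n", p.x, p.y);
    return 0;
}
```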

one rather simple and computationally efficient way to produce object classification in our use-case is to first identify the walls and other objects which do not usually move, such as tables and whatnot. this allows us to create an initial floorplan of the room, thus anything new within the floorplan can be considered an object of interest to the vacuum. after all, in our use-case we hope to identify the distance to a human in relation to the vacuum, not so much the distance to the walls.

scanning the room from a stationary position will allow us to predict the position of a continuous stationary object, and thus piece walls and other objects together. that is, we can create a bounding box for each beam, and because the object is stationary, we can assume the next beam to hit it within a certain distance, dependent on the previous distance. should the bounding box be hit, then we can take the two points and draw a line between them, thus effectively creating an object from two beams. the hypothesis is that should these lines then be close to each other, we could further piece together nearby objects and thus create a visualization of the space for our vacuum. this also makes the program efficient, by reducing the amount of points held in memory by some 99%
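a minimal sketch of that idea: walk the 360 readings in order and, whenever the next hit lands close enough to the previous one, stretch the current segment instead of storing a new point. the actual project uses bounding boxes and somewhat different math; the threshold rule here (a fixed multiple of what two adjacent one-degree beams would span on a flat surface) is my own simplification.

```c
/* collapse one lidar revolution into line segments: two endpoints per object
 * instead of one point per beam. compile with: gcc segments.c -lm */
#include <math.h>
#include <stdio.h>

#define N_BEAMS 360

typedef struct { double x, y; } point_t;
typedef struct { point_t a, b; } segment_t;

static point_t beam_to_point(int angle_deg, double range_m) {
    double rad = angle_deg * acos(-1.0) / 180.0;
    point_t p = { range_m * cos(rad), range_m * sin(rad) };
    return p;
}

/* returns the number of segments written into `out` (at most N_BEAMS) */
static int scan_to_segments(const double ranges[N_BEAMS], segment_t out[N_BEAMS]) {
    int count = 0;
    point_t prev = beam_to_point(0, ranges[0]);
    out[count].a = prev;
    out[count].b = prev;
    for (int deg = 1; deg < N_BEAMS; deg++) {
        point_t cur = beam_to_point(deg, ranges[deg]);
        double gap = hypot(cur.x - prev.x, cur.y - prev.y);
        /* at range r, two adjacent one-degree beams hitting the same flat
         * surface land roughly r * tan(1 deg) apart; allow some slack */
        double allowed = 3.0 * ranges[deg] * tan(acos(-1.0) / 180.0);
        if (gap <= allowed) {
            out[count].b = cur;   /* same object: stretch the current segment */
        } else {
            count++;              /* something new: start a fresh segment */
            out[count].a = cur;
            out[count].b = cur;
        }
        prev = cur;
    }
    return count + 1;
}

int main(void) {
    double ranges[N_BEAMS];
    for (int i = 0; i < N_BEAMS; i++) ranges[i] = 3.0;  /* a fake circular room */
    segment_t segments[N_BEAMS];
    int n = scan_to_segments(ranges, segments);
    printf("%d beams collapsed into %d segment(s)\n", N_BEAMS, n);
    return 0;
}
```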

and so, what is achieved is that when the vacuum cleaner is scanning the room for humans, it can check whether the beam is close to any previously known object (a line), and if so, discard the beam as hitting the wall, for example. should the beam not hit any previously known object, we could consider the beam to be hitting something new -- a point of interest -- possibly a moving object, which we could then check against our understanding of what human legs look like.
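the classification step itself reduces to a distance check between a fresh point and the previously stored segments; the sketch below illustrates it with a point-to-segment distance function, where the 0.15 m tolerance is a made-up number rather than anything the project uses.

```c
/* keep only the beams that land far away from every known wall segment.
 * compile with: gcc classify.c -lm */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { double x, y; } point_t;
typedef struct { point_t a, b; } segment_t;

/* shortest distance from point p to the segment ab */
static double point_segment_distance(point_t p, segment_t s) {
    double dx = s.b.x - s.a.x, dy = s.b.y - s.a.y;
    double len2 = dx * dx + dy * dy;
    double t = 0.0;
    if (len2 > 0.0) {
        t = ((p.x - s.a.x) * dx + (p.y - s.a.y) * dy) / len2;
        if (t < 0.0) t = 0.0;
        if (t > 1.0) t = 1.0;
    }
    return hypot(p.x - (s.a.x + t * dx), p.y - (s.a.y + t * dy));
}

static bool is_point_of_interest(point_t p, const segment_t *walls, int n_walls,
                                 double tolerance_m) {
    for (int i = 0; i < n_walls; i++)
        if (point_segment_distance(p, walls[i]) <= tolerance_m)
            return false;  /* close to a known object: probably just the wall */
    return true;           /* lands in previously empty space: worth a look */
}

int main(void) {
    segment_t wall = { { 0.0, 3.0 }, { 4.0, 3.0 } };  /* one known wall */
    point_t graze = { 2.0, 3.05 };                    /* beam grazing the wall */
    point_t fresh = { 2.0, 1.5 };                     /* beam hitting open floor */
    printf("grazing beam is interesting: %d\n", is_point_of_interest(graze, &wall, 1, 0.15));
    printf("fresh beam is interesting:   %d\n", is_point_of_interest(fresh, &wall, 1, 0.15));
    return 0;
}
```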

the code which achieved this more or less is found on my github: https://github.com/jhvst/roborock-svg-lasermap

the math is partly incorrect, but serves the purpose of demoing what's possible. moreover, it sets a practical tone for our further thinking on how vacuum cleaners can operate within constrained environments, such as that of a home.

a demo can be viewed here:


while the vacuum cleaner is now well primed to act as a stationary surveillance device (i mean, besides acting as one for its manufacturer: https://dgiese.scripts.mit.edu/talks/), we can return to orbiting our thoughts regarding a smart home and repurposing the vacuum cleaner.

however, now we are faced with a dilemma, which is about sharing the resources of different devices within the home network. that is, the vacuum cleaner now requires a pre-calculated floorplan to act, which it has to either calculate on each run, or fetch from somewhere else. it is also worth mentioning that the processing power required to calculate the maps in realtime was offloaded onto a local desktop computer. and so, the smart home project is truly starting to look like the vision mentioned before, in which devices, possibly repurposed from their original task, collaborate to achieve something together. this is also mentioned in another version of the previously shown vacuum cleaner demo, narrated by my ever-so-soothing voice:

to further drive home the point, i created an augmented reality demo which used the floorplans created by the robot. that is, because the floorplan is vector graphics, it is easy to mark the position of some IoT devices, such as an Apple TV or light switches, on the floorplan. in the following example, a translucent LCD screen is hooked up to a Raspberry Pi, which sends out 9DoF readings to the local area network, where the readings are parsed in relation to the floorplan, thus achieving a two-dimensional operation environment for platform independent augmented reality applications. the demo consists of two video clips, and has subtitles to translate my Finnish into English:

in this demo the augmented reality tablet is calibrated to start from a specific point in the floorplan, but locating the tablet in relation to the floorplan could just as well be done with the vacuum cleaner, given it can find humans in the room.
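for concreteness, the tablet side of that demo boils down to pushing sensor samples onto the local network. below is a minimal sketch of broadcasting one 9DoF reading as a line of text over UDP; the port, the plain-text format, and the broadcast address are assumptions for illustration, not the demo's actual wire format.

```c
/* broadcast one 9DoF sample (accelerometer, gyroscope, magnetometer) over UDP
 * so whatever renders the floorplan can pick it up */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof yes);

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(7778);
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);

    /* in the real setup these values would come from the IMU driver on the Pi */
    double ax = 0.01, ay = -0.02, az = 9.81;  /* accelerometer, m/s^2 */
    double gx = 0.0,  gy = 0.0,  gz = 0.10;   /* gyroscope, rad/s */
    double mx = 22.0, my = -5.0, mz = 40.0;   /* magnetometer, uT */

    char line[256];
    int n = snprintf(line, sizeof line,
                     "%.3f %.3f %.3f %.3f %.3f %.3f %.3f %.3f %.3f\n",
                     ax, ay, az, gx, gy, gz, mx, my, mz);
    sendto(sock, line, (size_t)n, 0, (struct sockaddr *)&dst, sizeof dst);
    close(sock);
    return 0;
}
```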


and so, such applications also raise the question: in the future, should there be augmented reality devices, who in fact provides the pointcloud data? these pointclouds have to be both updated and governed, e.g., suppose your augmented reality glasses continuously transmit depth information somewhere, then some sort of actor is responsible for stripping that information from specific parts, while also using it to train current models to match real-life changes, such as construction sites or whatever. alternatively, it could be that some other devices do the actual depth-scanning, and the glasses themselves merely use the precalculated data to determine their position in relation to the already generated maps, thus achieving a lower operational overhead.

moreover, because these devices require data which is most likely requested from a remote server, the wireless connectivity in these devices is an essential part of providing a working product. we also observe that these devices might partly use remote servers to achieve better computational capability, and possibly energy efficiency, because of the rather complex math required not only to identify points of interest, but also to parse the required information into a useful format.

obviously, as one does, they do not leave these open questions hanging, but instead start to wonder how the tablet demoed previously could be shrunk into a smaller form factor, to that of smart glasses. my investigations are still in progress, as i was required to delve into the depths of academia for this one. luckily, i had both the support and the possibility to start researching an angle on this by operating in the LTE spectrum, running my own mobile network operator. i hope that someday, i can make another demo in which the computation offloading done by the vacuum cleaner is not done over WiFi, but over a cellular connection whose radio access network i control. some initial thoughts and constraints can be read in my bachelor's thesis, which laid down the foundation for developing the technological stack towards my goal of bringing vacuum cleaners into a collaborative clan that sweeps the floors of the university while stopping here and there to talk to people.