I've read a few articles lately on the acquisition of WifiSLAM by Apple. For me it's a bittersweet event that I've been waiting for, as I led my previous startup, Qubulus, to the gates of the same company, presenting what is still the most beautiful API for indoor positioning based on Wifi radiomapping. As a bunch of people have pointed out: "it could have been you…"
Well, no. I came to realize shortly after the encounter with the Apple indoor tech team that a) radio wasn't their preference when it comes to indoor positioning accuracy, and b) they were not very good at what they were doing, only somewhat better than the competitors at Google and Sony (Ericsson).
A Swedish company, SenionLabs, was mentioned at that point in time to have conducted a demo at the exact same place we picked, and supposedly there wasn't much of a difference in the user interface (i.e. the map) between the technologies. Which proves my point b): if all you look at is a map on a screen, at demo scale and detail accuracy, of a best-case beta version of the tech, and never under the hood... then you are just like someone buying a car based on the color.
The SenionLabs and WifiSLAM technologies are rather close in comparison, but I'd say the WifiSLAM architecture is way better at moving from macro positioning (GSM/UMTS etc.) to local positioning (Wifi, BT) down to micro positioning by sensors (gyro, accelerometer, magnetometer, compass, light etc.).
Which proves my a) statement: it was the use of sensors that they wanted, and bought.
Then I started to work on something new, 14 months ago. I didn't believe anymore that Wifi would be the solution to ubiquitous smartphone positioning indoors; basically all players were making it extremely difficult to create a common solution. Apple turned off RSSI readings for Wifi on iOS. Microsoft didn't even implement them on Windows Phone 7 or 8. The wild-west behavior of Android phone manufacturers created a mess: Samsung created so many new models so fast that they didn't implement Android and RSSI reading in the same way throughout. No time for it. So you could say that the manufacturers killed Wifi as a good basis for indoor positioning, willingly or not.
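To see why inconsistent RSSI reporting is so deadly, here's a toy sketch of nearest-neighbour Wifi fingerprinting (the radio map, locations and dB numbers are all invented for illustration, nothing from our stack) and what happens when a handset reports RSSI a chunk of dB lower than the survey device did:

```python
# Toy radio map: location -> RSSI (dBm) from three access points,
# recorded with one survey device. All values are invented.
RADIO_MAP = {
    "entrance":   (-45, -70, -80),
    "food_court": (-70, -50, -75),
    "parking":    (-88, -70, -90),   # a genuinely weak-signal spot
}

def locate(scan):
    """Nearest-neighbour matching: closest fingerprint (squared L2) wins."""
    return min(
        RADIO_MAP,
        key=lambda loc: sum((m - s) ** 2 for m, s in zip(RADIO_MAP[loc], scan)),
    )

scan_at_food_court = (-71, -51, -74)
print(locate(scan_at_food_court))      # the survey device matches correctly

# Same spot, but a handset whose chipset/driver reports ~18 dB lower
# across the board lands on the weak-signal parking fingerprint instead:
shifted = tuple(r - 18 for r in scan_at_food_court)
print(locate(shifted))
```

One vendor's "-71 dBm" is another vendor's "-89 dBm", and the whole radio map silently stops matching.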
What I started out with was knowledge about the sensors in combination with all the other components in a smartphone: how they "feel" the presence of the RF modules, the battery charger, the screen and so on. I realized that the structure all smartphones are built in (cramped, packed, overheating, with antenna problems and huge screens pulling 50%+ of all power) is killing sensor performance.
So even if you have almost "learning" algorithms (I can talk for days about the misconception of self-teaching algorithms...) like WifiSLAM and SenionLabs do, you can't get them to perform over time.
Simply put, the sensors are there in a smartphone to react to real-time events; they are not in any way set up to perform over time and distance.
Now, 14 months later, we are running a full-scale startup with product development, Ulocs, with a brilliant team that combines research on sensor performance with user interaction analysis. I won't tell you How we do It and What we Use in terms of Solutions, but I will tell you some of our findings.
Finding no 1. As a sensor technology environment, smartphones suck. And iPhones suck the most, because of their design freakshow. What do you think will happen if you put the antenna as a frame around the whole package, with maybe the most advanced screen technology on the market in the middle, pulling battery power faster than anything else? Correct: sensor disaster.
Finding no 2. Everyone is using different internal architectures, but sensors are on the lowest priority when it comes to internal resources, space and isolation. I bet the design process of a new platform goes like this:
- "Dudes, we need a Bigger screen!"
- "Yeah, let's get a bigger screen! Anyone got a science fiction battery to go with it??"
- "No, but we will have one by the time we go into production; let's roll! Throw in 4 CPUs, 5 radios, lots of memory and a killer design based on a metal frame around the whole thing!"
- "Dude, that's aaawsome!!"
- "Wait, I think we need sensors too...?"
- "Uuuh, that's right. Well, put them somewhere, I don't care..."
Finding no 3. Everyone is using filters to read the sensors, so that the output to services and functionalities based on the real-time sensor input is stabilized. First we thought these filters were adaptive and "smart". Turns out they are not. The filters are not adapting to what the other parts of the phone platform are doing; meaning: shit in, shit out.
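A minimal sketch of the problem, with invented numbers: a fixed exponential moving average smooths jitter just fine, but a bias injected by another component (say, the charger messing with the magnetometer, a made-up but typical scenario) passes straight through, because the filter never adapts:

```python
import random

random.seed(1)

ALPHA = 0.1  # fixed smoothing factor, never re-tuned at runtime

def ema(samples, alpha=ALPHA):
    """Plain non-adaptive exponential moving average."""
    out, y = [], samples[0]
    for s in samples:
        y = alpha * s + (1 - alpha) * y
        out.append(y)
    return out

true_heading = 90.0
# Normal operation: 2-degree jitter around the true heading.
clean = [true_heading + random.gauss(0, 2) for _ in range(200)]
# Charger plugged in: same jitter plus a +25 degree magnetic offset.
charger = [true_heading + 25 + random.gauss(0, 2) for _ in range(200)]

print("filtered, clean  :", round(ema(clean)[-1], 1))
print("filtered, charger:", round(ema(charger)[-1], 1))
```

The filtered output looks beautifully stable in both cases; in the second it is stably, confidently 25 degrees wrong.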
Kudos to the WifiSLAM team on this too, who figured it out 2.5+ years ago. They started to read the raw data output from the phone and created their filters and analytical algorithms based on that. Still, they had to pull the old binary map trick to get any kind of micro accuracy going, which proves the problem with basing smart stuff on mistreated sensors sending raw data. And now it's Apple's problem.
Basically, as long as you design a smartphone platform the way Apple does, you can't get any good accuracy based on sensors without binary maps. To get binary maps you need the indoor map of every facility where you want to provide your users with indoor positioning. So Apple has what we can see as pretty bad sensors, a lousy architecture for sensors, and a need for binary maps to pull it together.
Bad boy, you are going to get punished...
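For the curious, the "binary map trick" can be sketched like this (toy map and motion model, invented for illustration, not anyone's actual implementation): a walkable/not-walkable grid lets a particle filter kill position hypotheses that walk through walls, which reins in drifting sensor-driven motion:

```python
import random

random.seed(7)

# 1 = walkable corridor, 0 = wall. The binary map: a corridor along row 1.
GRID = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0, 0, 0, 0],
]

def walkable(x, y):
    return 0 <= y < len(GRID) and 0 <= x < len(GRID[0]) and GRID[int(y)][int(x)] == 1

particles = [(0.5, 1.5)] * 200  # all hypotheses start at the corridor entrance

for _ in range(6):  # six noisy sensor-derived "steps east"
    moved = []
    for x, y in particles:
        nx = x + 1.0 + random.gauss(0, 0.3)   # noisy forward step
        ny = y + random.gauss(0, 0.4)          # noisy heading drift
        if walkable(nx, ny):                   # the binary map is the judge
            moved.append((nx, ny))
    # Resample: hypotheses that hit a wall are replaced by survivors.
    particles = [random.choice(moved) for _ in range(200)]

avg_y = sum(y for _, y in particles) / len(particles)
print(f"mean y after 6 steps: {avg_y:.2f} (corridor centre is 1.5)")
```

The map does the work the mistreated sensors can't: however badly the heading drifts, no hypothesis is allowed to end up inside a wall. Which is exactly why you need that map for every single building.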
It still means that you have a huge advantage over radio mapping with Wifi, which takes a lot more work on site even if you get better accuracy. And anyone who suggests installing beacons is focused on office spaces, not public areas, malls, airports or other commercial spaces, because of the huge maintenance burden those bring.
So you really want great micro positioning based on sensors? Then I suggest you work out your filters and architecture. Who you gonna call? Ulocs.
Btw, based on what I know about teams that get acquired by Apple when their code is superior to Apple's, I'm 90% sure WifiSLAM will end up in the dungeons under Infinite Loop...