As always, there’s buzz about something. Among the many topics, one that touches our domain is SmartCities. Long story short, SmartCities is built around the concept of everything and everybody in a city communicating. So, the challenge here is getting this communication to take place.
But what precisely is being communicated?
The idea is that buildings can communicate their status, whether it’s their internal temperature, available services, you name it. Transport services should also be capable of broadcasting their status, like the next train/bus arrival or any delays in the service. The same goes for city services like water or electricity supplies. And so on and so forth.
M2M
Machine to machine (M2M), as its name implies, is the concept of machines communicating only among each other for better efficiency. One example of this is the SmartGrid, in which the electric meter of each subscriber (house/office) communicates directly with the electrical network. By doing so, the generation and transmission network knows in near real time how much electricity the city requires. This improves efficiency and reduces the waste of generating electricity that isn’t needed, as was done before the SmartGrid. On top of that, more efficient electricity generation means less pollution. Everybody wins!
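To make the idea concrete, here is a minimal sketch of what a meter-to-grid report could look like. The field names, the `house-42` identifier, and the JSON encoding are all assumptions for illustration, not an actual SmartGrid protocol.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical M2M smart-meter report; field names are invented for
# illustration, not taken from any real SmartGrid standard.
@dataclass
class MeterReading:
    meter_id: str    # unique id of the subscriber's meter
    kwh: float       # consumption since the last report
    timestamp: float # Unix epoch seconds

def encode_reading(reading: MeterReading) -> str:
    """Serialize a reading as JSON, ready to send to the grid operator."""
    return json.dumps(asdict(reading))

reading = MeterReading(meter_id="house-42", kwh=1.25, timestamp=time.time())
payload = encode_reading(reading)
```

The point is only the shape of the exchange: a machine periodically pushes a small structured message, and the grid aggregates thousands of them to estimate demand.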
Communication
So, how does all this take place in sync? It doesn’t! Hence the challenge! Each of those services has its own communication protocol, which naturally makes them incompatible with each other, so communication among services is impossible. That’s exactly where we are right now with SmartCities. Transport, electricity, buildings, etc. don’t share their information the same way, let alone the same type of information. Ironically, there are already protocols in place to overcome this situation; the thing is, these protocols need to be either agreed upon by the population of the city or enforced as a standard by the local authorities. Otherwise it won’t work. LonWorks and BACnet are two examples of such protocols, which few use. Don’t get me wrong, those protocols are alive and well; however, critical mass hasn’t been reached yet.
Currently, some of this data is being exposed as a JSON API (technical jargon); however, it is not practical to expect that each building will have such a thing, as implementing a JSON API per building is costly and maybe impractical.
So, before any city even dares to jump into the SmartCity buzz, a communication protocol between all the parties should at the very least be considered.
Our experience
We -naturally- think augmented reality can help the digestion of all this overwhelming information. That’s how and why Terra Icons was built. The app integrates mainly four services in cities:
- Transport (subway/metro location)
- CarSharing (parking stations location)
- Bicycle sharing (parking stations location)
- Landmarks (City Icons)
Needless to say, none of those services speak the same language. So before we integrate those services into Terra Icons, we need to normalize them so they can talk to our app and display similar information, regardless of where in the world the user is.
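The normalization step above can be sketched like this. The two provider payloads below are invented examples of how differently city services format the same information; the unified station schema is our own assumption, not Terra Icons’ actual data model.

```python
# Hypothetical raw feeds: same information, different shapes.

def normalize_bike_share(record: dict) -> dict:
    """Invented bicycle-sharing feed: coordinates as separate keys."""
    return {
        "kind": "bike-share",
        "name": record["stationName"],
        "lat": record["latitude"],
        "lon": record["longitude"],
    }

def normalize_metro(record: dict) -> dict:
    """Invented metro feed: coordinates packed in a 'geo' pair."""
    lat, lon = record["geo"]
    return {
        "kind": "metro",
        "name": record["stop"],
        "lat": lat,
        "lon": lon,
    }

# After normalization, every station looks the same to the app,
# regardless of which city or provider it came from.
stations = [
    normalize_bike_share(
        {"stationName": "Plaza Norte", "latitude": 40.42, "longitude": -3.70}
    ),
    normalize_metro({"stop": "Sol", "geo": (40.417, -3.703)}),
]
```

One adapter per provider is the cost of this approach; the benefit is that everything downstream (the map, the AR overlay) only ever sees one schema.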
Image recognition
One frequent dream people have is to point the phone at a building or sign and get it recognized, in order to trigger some action or extra information (enabling the SmartCity). That’s exactly what we do; however, we do it via geolocation, not image recognition. Why? Because:
- Image recognition requires a previous capture of the image to be analyzed. So, if you want a building or a sign to be recognized, it has to be captured and stored for later recognition by mobile devices. This requires handling a huge amount of information on our servers. Only companies the size of Google can do such a thing, and in their case it is a byproduct of their primary search service
- Even if you do all of the above, there’s still the challenge of changing lighting conditions. You see, a computer learns to identify one photo at a time; it can’t (yet) identify an object whose lighting has changed, because to a computer that’s a different object. So, in order to enable image recognition city-wide, several photos of the same building/sign have to be taken to cope with the changes in lighting conditions
As you can imagine, add lighting conditions on top of an image database for each building and you end up with a gigantic database that, even with big data techniques, is not easy to handle, let alone low cost. Last but not least, all this integration requires effort and economic resources that are normally scarce in municipalities.
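By contrast, the geolocation alternative needs nothing but coordinates: instead of recognizing a building from pixels, look up which known point of interest is closest to the user’s GPS position. Here is a minimal sketch; the point-of-interest list is invented for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 coordinates."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_poi(lat, lon, pois):
    """Return the point of interest closest to the user's position."""
    return min(pois, key=lambda p: haversine_m(lat, lon, p["lat"], p["lon"]))

# Tiny invented POI database; a real one is just a list of
# names and coordinates, no image captures required.
pois = [
    {"name": "Eiffel Tower", "lat": 48.8584, "lon": 2.2945},
    {"name": "Louvre", "lat": 48.8606, "lon": 2.3376},
]

# A user standing on the Champ de Mars points the phone around:
match = nearest_poi(48.8556, 2.2986, pois)
```

A database of names and coordinates is a few kilobytes per city, and it works the same at noon and at midnight, which is exactly what the lighting problem above rules out for image recognition.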
So there you go: now that we are aware of the challenges on the way to the SmartCity dream, we need to overcome them! 🙂
Image source: advancedphotoshop.co.uk