“We are creating brand new data, part of the smart city transformation”

Updated: Dec 19, 2021

How can AI be harnessed to repair a broken curbstone, or to recognize violence in the public sphere – and what are the surrounding ethical implications? Orry Kaz, one of the founding members of Urban AI, speaks with Cybertech News.


All photos and images courtesy Urban AI


In urban, crowded Israel, one would be hard-pressed to find a person who has never stumbled over a cracked curbstone, or never muttered a bit of foul language while searching late at night for a dropped key, at the very corner where the street lamp had gone dark. It is often the city’s residents who call the municipal hotline to report these hazards, hoping for a quick fix – which does not always come.


Urban AI was created to prevent precisely such frustrating situations. Founded some two years ago by Orry Kaz, Erez Damoch and Shimon Sheves, the Tel Aviv-based company develops algorithms that analyze the abundance of visual information municipalities collect at any given moment and extract actionable insights from it. The company was recently featured at Israel’s Muni Expo, the country’s largest municipal innovation fair, as well as at the Smart City Expo World Congress in Barcelona.



Orry Kaz

“Local municipalities have a major problem: they’ve got lots of data as well as sufficient budgets – but ultimately, they struggle with innovation,” says co-founder Orry Kaz. A data scientist and researcher, Kaz says it was several years ago that he identified municipalities’ need to gain command of the endless amounts of information generated by the various cameras covering the public sphere – static cameras, video cameras, mobile ones (such as cameras installed on the roofs of municipal vehicles), drone footage, and more.


“Some municipalities operate hundreds of cameras across their public sphere. The information generated might be analyzed, to a certain degree, by some sort of operations center, but there is no AI. Meaning, a great deal of data goes unanalyzed. Employing AI can extract insights, relations, behaviors and so on from this data – so basically, it creates brand new data that can be used differently, one element of the intricate complex that helps transform a regular city into a smart city,” says Kaz.


What does this actually mean for day-to-day life?

“There are many uses, many questions a given municipality might pose to the algorithm. For instance: where are all the broken sidewalks, the flickering traffic lights, or the knocked-over traffic signs? We can map the area and deliver quick answers, so that the hazards can be repaired quickly,” Kaz explains.


“Furthermore, we get constant, dynamic updates regarding the issue at hand. For instance, if there was a crack in the sidewalk, the next time a camera-equipped municipal vehicle passes by, we will get that data and understand whether the problem has been fixed. Artificial intelligence can improve daily quality of life, and quite easily.”
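
Urban AI has not published how this works under the hood, but the detect, report, re-check loop Kaz describes can be sketched in a few lines. The sketch below is a minimal illustration, assuming an upstream vision model has already turned each camera frame into a set of hazard labels; every name, class, and coordinate in it is hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Ticket:
    hazard: str                      # e.g. "cracked_sidewalk" (illustrative)
    cell: tuple                      # (lat, lon) snapped to a grid
    opened: datetime
    resolved: datetime | None = None

def grid_cell(lat: float, lon: float, precision: int = 4) -> tuple:
    """Snap GPS coordinates to a coarse grid (~10 m at precision=4) so
    repeat passes over the same spot map to the same key."""
    return (round(lat, precision), round(lon, precision))

class HazardTracker:
    """One open ticket per (hazard type, grid cell). A later pass that
    no longer detects the hazard closes the ticket."""

    def __init__(self, hazard_types: set):
        self.hazard_types = hazard_types
        self.open = {}               # (hazard, cell) -> Ticket
        self.closed = []

    def observe(self, lat: float, lon: float, detected: set):
        """`detected` is the set of hazard labels an upstream vision
        model produced for one frame; the model itself is out of scope."""
        cell = grid_cell(lat, lon)
        now = datetime.now(timezone.utc)
        for hazard in detected:                       # new or persisting hazard
            self.open.setdefault((hazard, cell), Ticket(hazard, cell, now))
        for hazard in self.hazard_types - detected:   # seen before, now gone
            ticket = self.open.pop((hazard, cell), None)
            if ticket:                                # was open: mark repaired
                ticket.resolved = now
                self.closed.append(ticket)

# Usage: a first pass spots a crack; a later pass finds it repaired.
tracker = HazardTracker({"cracked_sidewalk", "dark_streetlamp", "fallen_sign"})
tracker.observe(32.0853, 34.7818, {"cracked_sidewalk"})   # ticket opened
tracker.observe(32.0853, 34.7818, set())                  # ticket closed
print([t.hazard for t in tracker.closed])                 # ['cracked_sidewalk']
```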


Kaz shares details of another intriguing project: a large municipality has teamed up with Urban AI to identify and mitigate violence in the public sphere, in real time, using the stationary cameras already deployed throughout the city. (“I think they would be very interested in announcing this initiative themselves, once it matures,” Kaz says, explaining his request not to disclose which municipality it is.)


But at the end of the day, it is only a machine, and the company is well aware of its potential pitfalls. “Think about two people hugging in the public sphere. In many ways, judging by appearance alone, this act might resemble an act of violence,” Kaz explains. “This means one must use highly advanced and accurate algorithms in order to refrain from tagging non-violent events as something they are not, while at the same time not missing or overlooking actual violent events. The challenge is to be as accurate as possible.”
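
In classifier terms, the hug-versus-fight problem is a trade-off between false positives and false negatives. The toy calculation below (made-up scores and labels, not Urban AI’s model) shows how the alert threshold moves that trade-off: set it too low and the hug gets flagged, hurting precision; set it too high and real violence risks slipping through.

```python
# Illustrative scores a violence classifier might assign to events,
# with ground-truth labels (1 = actual violence). The hug at 0.62,
# visually similar to a fight, is exactly what a naive threshold flags.
events = [
    (0.95, 1),  # street fight
    (0.80, 1),  # shoving match
    (0.62, 0),  # two people hugging
    (0.40, 0),  # crowded bus stop
    (0.15, 0),  # pedestrians passing by
]

def precision_recall(events, threshold):
    flagged = [(s, y) for s, y in events if s >= threshold]
    hits = sum(y for _, y in flagged)               # true positives
    missed = sum(y for s, y in events if s < threshold)
    precision = hits / len(flagged) if flagged else 1.0
    recall = hits / (hits + missed) if (hits + missed) else 1.0
    return precision, recall

for t in (0.5, 0.7):
    p, r = precision_recall(events, t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
# threshold=0.5: precision=0.67, recall=1.00  (the hug is flagged)
# threshold=0.7: precision=1.00, recall=1.00  (the hug is ignored)
```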


Let’s discuss privacy protection. Residents are constantly being recorded on cameras, most of the time without even being aware of it. And then the footage goes on to be analyzed in systems such as yours…

“Our purpose as a company is not to hold on to this data or save it,” explains Kaz. “You can liken it to a tube, or a prism. Data constantly flows through, and we’re looking through the peephole. Just looking, without saving. If the algorithm identifies something, it alerts us.”


Kaz emphasizes that the company’s algorithms employ pre-existing identification systems rather than creating new ones, and constitute solely a monitoring mechanism. “Let’s consider even the simplest municipal service center. It is likely that it keeps a lot of information on residents in the form of video footage taken from street cameras, for example. We don’t do that; nothing is saved to our servers. When the algorithm identifies something, it can alert the local authorities, or the police – which then have something concrete as a reference. Basically, we insert AI into something pre-existing; we don’t add data that might increase privacy concerns.”
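
A minimal sketch of the “tube” design Kaz describes, under the assumption that frames arrive as an in-memory stream: each frame is scored and immediately discarded, and only compact alert metadata, never imagery, leaves the pipeline. The `score_frame` function is a hypothetical stand-in for real model inference.

```python
from collections.abc import Iterable, Iterator

def score_frame(frame: dict) -> float:
    """Hypothetical stand-in for model inference on one frame."""
    return frame.get("score", 0.0)

def alert_stream(frames: Iterable, threshold: float = 0.9) -> Iterator:
    """Frames flow through; nothing is persisted. Only alert metadata
    (camera id and timestamp, no imagery) is ever yielded."""
    for frame in frames:
        if score_frame(frame) >= threshold:
            yield {"camera_id": frame["camera_id"], "ts": frame["ts"]}
        # No disk write and no buffer of past frames: once the loop
        # advances, the frame is dropped and garbage-collected.

# Usage: two frames pass through; only the high-scoring one alerts.
frames = [
    {"camera_id": "cam-12", "ts": 1, "score": 0.20},
    {"camera_id": "cam-12", "ts": 2, "score": 0.97},
]
for alert in alert_stream(frames):
    print(alert)    # {'camera_id': 'cam-12', 'ts': 2}
```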


How aware are you of algorithmic bias in your work?

“Very aware. This is a major topic that has been studied extensively over the past few years, especially in the US, where awareness of this form of bias is constantly growing,” says Kaz. “One needs to understand that the basis of machine learning is no different than teaching a small child. If that child only sees certain people do certain things, he or she will not know that other possibilities exist. The actual choice of what information we feed the algorithm is what will determine future bias, or lack thereof.”


“In order to tackle this issue properly, data scientists must provide diversified data, or weigh it differently. There are many ways to make this better; however, many scientists don’t really think about these things – which, of course, leads to bias. At the end of the day, with all this fancy AI, a simple intervention by the person behind the algorithm can completely change the results.
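
One standard way to “weigh it differently,” in the sense Kaz alludes to, is inverse-frequency sample weighting, which keeps an overrepresented group from dominating a model’s training loss. A minimal sketch, with purely illustrative group labels:

```python
from collections import Counter

def inverse_frequency_weights(groups: list) -> list:
    """Weight each example inversely to its group's frequency, so every
    group contributes equally in aggregate: weight = n / (k * count),
    where n is the dataset size and k the number of groups. The weights
    average to 1.0."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Usage: a skewed dataset where group "a" has 4x the examples of "b".
groups = ["a", "a", "a", "a", "b"]
print(inverse_frequency_weights(groups))
# -> [0.625, 0.625, 0.625, 0.625, 2.5]  (each "b" example counts 4x more)
```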


“At our company, we make sure that the data is built properly. If a false alert goes off every time someone who fits a certain profile passes in front of the camera, we’ll get nowhere. We have to be sensitive: at the end of the day, we are working with real information about real people.”


