Friday, March 29, 2024

AI for CRE: Is Cybersecurity a Friend or Foe?

Cybersecurity has become more important than ever in the commercial real estate industry, especially as buildings become “smarter” and are equipped with more connected devices. Unfortunately, some of the artificial intelligence (AI) used to automate security is making the very technology designed to protect buildings more vulnerable. The conundrum has left VentureBeat wondering, in its recent AI and Security series, whether AI is cybersecurity’s salvation or its greatest threat.

Industry experts are both excited and concerned about AI’s growing involvement with cybersecurity. AI’s ability to automate security can be advantageous in the short term, but any technology built to replace humans, especially in a security capacity, has raised some eyebrows. The concern comes from wondering what would happen if something goes wrong or hackers learn how to use the technology to their advantage.

“Everything you invent to defend yourself can also eventually be used against you,” Geert van der Linden, an executive vice president of cybersecurity for Capgemini, told VentureBeat. “This time does feel different, because more and more, we are losing control as human beings.”

Why using AI in cybersecurity is worth the risk

The global cybersecurity market was valued at $3.5 billion in 2014 and jumped to $120 billion in 2017, according to research firm Cybersecurity Ventures. The firm also forecast that cybersecurity spending would expand to $200 billion annually over the next five years. Microsoft alone spends $1 billion annually on cybersecurity.

Meanwhile, the cybersecurity workforce is expected to fall short by 1.8 million people by 2022, according to VentureBeat. The increased spending is partially due to recruiting costs. Those in favor of AI believe it will lower costs by requiring fewer humans to keep building systems safe.

“When we’re running security operation centers, we’re pushing as hard as we can to use AI and automation,” Dave Burg, EY Americas’ cybersecurity leader, told VentureBeat. “The goal is to take a practice that would normally maybe take an hour and cut it down to two minutes, just by having the machine do a lot of the work and decision-making.”

Companies are also confident that AI can help protect them from cyber threats. Capgemini reported that 69 percent of enterprise executives it surveyed believed AI would be critical for responding to cyber threats. Meanwhile, 80 percent of telecom executives expressed confidence in AI strengthening their defenses.

The risk in using AI in cybersecurity

While AI could help lower cybersecurity spending in terms of money and manpower, it could also cost companies money. Last year, Juniper Research predicted that the cost of data breaches would increase from $3 trillion in 2019 to $5 trillion in 2024. A number of factors, such as lost business, recovery costs and fines, will play into those costs, but so will AI.

“Cybercrime is increasingly sophisticated; the report anticipates that cybercriminals will use AI, which will learn the behavior of security systems in a similar way to how cybersecurity firms currently employ the technology to detect abnormal behavior,” Juniper’s report said. “The research also highlights that the evolution of deep fakes and other AI-based techniques is also likely to play a part in social media cybercrime in the future.”

Security experts have also pointed to this year as when hackers will begin launching attacks that leverage AI and machine learning.

“The bad [actors] are really, really smart,” Burg of EY Americas told VentureBeat. “And there are a lot of powerful AI algorithms that happen to be open source. And they can be used for good, and they can also be used for bad. And this is one of the reasons why I think this space is going to get increasingly dangerous. Incredibly powerful tools are being used to basically do the inverse of what the defenders [are] trying to do on the offensive side.”

An example of this occurred in 2016, when cybersecurity company ZeroFox developed an AI algorithm that could post 6.75 phishing tweets a minute, reaching 800 people. Among the recipients, 275 clicked on the bad link in the tweet. In contrast, a human could only create 1.075 tweets a minute, reaching 125 people, 49 of whom clicked the link.
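The numbers above are worth unpacking: the human operator actually achieved a slightly higher click-through rate, but the AI’s sheer volume and reach produced more than five times as many victims. A quick back-of-the-envelope calculation (using only the figures quoted above; the variable names are my own) makes the comparison concrete:

```python
# Figures from the 2016 ZeroFox phishing experiment as cited in the article.
ai = {"tweets_per_min": 6.75, "reached": 800, "clicked": 275}
human = {"tweets_per_min": 1.075, "reached": 125, "clicked": 49}

for label, stats in (("AI", ai), ("Human", human)):
    ctr = stats["clicked"] / stats["reached"]  # click-through rate
    print(f"{label}: {stats['tweets_per_min']} tweets/min, "
          f"{stats['clicked']} clicks, click-through rate {ctr:.1%}")
```

The AI’s click-through rate works out to roughly 34 percent versus the human’s 39 percent, which is why the real danger here is automation’s scale, not its persuasiveness.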

According to Malwarebytes Labs, hackers could also put AI into malware. The malware would use the AI to adapt in real time if it senses any detection programs. From there, the AI malware could trick automated detection systems or threaten personal and financial information.

“I should be more excited about AI and security, but then I look at this space and look at how malware is being built,” Malwarebytes Labs Director Adam Kujawa told VentureBeat. “The cat is out of the bag. Pandora’s box has been opened. I think this technology is going to become the norm for attacks. It’s so easy to get your hands on and so easy to play with.”

Cybersecurity and CRE

There’s no question that CRE owners need to embrace cybersecurity to protect not only their own data, but their tenants’ as well. The question is how much AI should be trusted to work alongside any cybersecurity measures CRE owners take to keep their and their tenants’ information safe.

Some companies are trying to stay a step ahead of hackers, like BlackBerry, which acquired Cylance for $1.4 billion in 2018. Cylance created a platform that used AI to detect networks’ weaknesses and shut them down when necessary. The company also created BlackBerry Intelligent Security, which adapts security protocols for employees’ smartphones and laptops based on usage patterns and location. The system also works with Internet of Things (IoT) devices.

While these preventative measures are great, it will ultimately be up to CRE owners and enterprises to decide how much of their security they want to put in an algorithm’s hands.

“I think we have to make sure that as we use the technology to do a variety of different things, we also are mindful that we need to govern the use of the technology and realize that there will likely be unforeseen consequences,” Burg told VentureBeat. “You really need to think through the impact and the consequences, and not just be a naive believer that the technology alone is the answer.”
