On Friday, July 2, 2021, one of the “largest criminal ransomware sprees in history” took place. Kaseya, a global IT infrastructure provider, allegedly suffered an attack that used its Virtual System Administrator (VSA) software to deliver REvil (also known as Sodinokibi) ransomware via a malicious auto-update.
Over the course of the July 4th holiday weekend, numerous disclosures, blog posts, and articles were published about the incident, but Kaseya still had not released details by the end of the holiday. As details emerge, several points of contention have surfaced, such as which vulnerability was responsible, whether it was a zero-day, and the scope of organizations affected.
However, as we investigate this situation, Flashpoint has observed that the Kaseya attack might not be the supply chain attack it was initially made out to be.
This article will present the latest details involving Kaseya as well as our own findings. Later on, we will also examine the bigger picture of what supply chain attacks actually are, so we can better understand what that term means and its origins.
The Kaseya attack: What happened
Early reporting about the Kaseya attack made an effort to disclose as much information as possible, but the lack of details, coupled with the July 4 holiday weekend, muddled the situation.
Here are the main points you need to know to understand the impact and scope of the Kaseya ransomware attack:
- Kaseya’s Virtual Systems Administrator (VSA) software is designed to manage an organization’s complete IT infrastructure
- Some Kaseya customers are Managed Service Providers (MSP) who are contracted to handle IT tasks for thousands of other organizations
- Threat actors using REvil (aka Sodinokibi) ransomware exploited vulnerabilities to hack VSA software
- Using VSA, the threat actors spread REvil ransomware via MSPs to their customers
- Thousands of organizations that were either directly or indirectly involved with Kaseya have been infected with REvil ransomware
- UPDATED: Nine days after the incident, Kaseya released patch 9.5.7a to address the vulnerabilities used in the REvil attack.
Managed service providers (MSPs)
Before we can dive into the specifics of the Kaseya attack, it is essential to know what role an MSP serves. Essentially, MSPs are contracted by organizations that are unwilling or unable to manage their own IT department.
Lawfare has a fantastic article that explains what an MSP is and how it plays into understanding the scope of the Kaseya attack. Similar to how an organization might outsource its customer support, an organization can choose to rely on an MSP if the costs of maintaining its own IT department are too high, or if doing so is deemed too risky. As such, small to medium-sized businesses often choose to use MSPs.
How Kaseya VSA factors in
Since MSPs are responsible for handling thousands of their customers’ IT tasks, they need software that allows them to automate key tasks. Software like Virtual Systems Administrator (VSA) allows an MSP to perform tasks that usually require a high level of privileges, such as updating systems and removing or adding programs. In fact, in order to perform these processes without interruption, Kaseya even advises that users turn off their antivirus and firewall.
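To make the blast radius concrete, here is a minimal, entirely hypothetical sketch of an RMM-style agent loop. This is not Kaseya’s actual protocol or code; the names (`fetch_tasks`, `run_agent`) are made up. The point it illustrates: the agent executes whatever “tasks” the management server hands it, typically with administrative privileges, so whoever controls the server effectively controls every managed endpoint.

```python
# Hypothetical sketch of an RMM-style agent loop -- NOT Kaseya's
# actual protocol. The agent blindly runs whatever "tasks" the
# management server queues for it.

def fetch_tasks(server_queue):
    """Stand-in for polling the management server over the network."""
    tasks = list(server_queue)
    server_queue.clear()
    return tasks

def run_agent(server_queue, executed):
    # In a real deployment this loop runs with admin/SYSTEM privileges.
    for task in fetch_tasks(server_queue):
        executed.append(task["command"])  # e.g. install patch, run script

# A benign queue pushes updates; a hijacked queue pushes ransomware.
queue = [{"command": "install security-patch-9.5.6"}]
done = []
run_agent(queue, done)
print(done)  # ['install security-patch-9.5.6']
```

Nothing in this loop distinguishes a legitimate patch from a malicious payload, which is exactly why an RMM compromise scales so devastatingly.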
Zero day vulnerability or just an exploited vulnerability?
Now that we know what MSPs are and what VSA does, we can review what actually happened. Early speculation was that the attackers had hacked Kaseya itself and then pushed malicious updates from the vendor to VSA devices. Days later, research suggests that the malicious actors directly hacked VSA servers.
Kaseya has stated that the threat actors were able to do this by exploiting zero-day vulnerabilities. CVE-2021-30116, a credentials leak and business logic flaw, has since been assigned to the issue. This CVE ID was put in RESERVED status on July 5, despite the incident occurring on July 2. As of July 8, both the CVE and NVD entries for this vulnerability were empty, even though vendor and other advisories exist.
But was CVE-2021-30116 actually a zero-day vulnerability? Perhaps technically it was, but not in the usual sense:
Behind the scenes, DIVD researcher Wietse Boonstra had found the zero-day and notified Kaseya who then started to put the processes in place to mitigate the issue. According to DIVD, Kaseya had followed all the right steps:
“Once Kaseya was aware of our reported vulnerabilities, we have been in constant contact and cooperation with them. When items in our report were unclear, they asked the right questions. Also, partial patches were shared with us to validate their effectiveness. During the entire process, Kaseya has shown that they were willing to put in the maximum effort and initiative into this case both to get the issue fixed and their customers patched. They showed a genuine commitment to do the right thing. Unfortunately, we were beaten by REvil in the final sprint, as they could exploit the vulnerabilities before customers could even patch.”
– Victor Gevers, Chairman and Head of Research at DIVD
Arguing over semantics isn’t constructive, but this information does raise some interesting questions. Since DIVD and Kaseya were following standard vulnerability disclosure practice, how close was Kaseya to patching those vulnerabilities? How did malicious actors stumble upon them? Did they intercept communications between the researchers and the vendor, or was it a coincidental case of mutual discovery? And who was ultimately affected by this attack?
Who’s affected by the Kaseya attack
The scope of the REvil ransomware attack
Potentially thousands of organizations and over a million individual systems are affected by the Kaseya ransomware attack. However, this was slightly downplayed by Kaseya in their initial customer notifications when they suggested that only “a very small percentage of our customers were affected – currently estimated at fewer than 40 worldwide”.
Regardless of the actual number, the fact that the affected customers are MSPs means that their customers have also been compromised or are at-risk. According to ThreatPost, it is estimated that 60 customers using the on-premises version of VSA experienced an attack, with many of them being MSPs who manage the networks of other organizations. This has caused Kaseya to backtrack their initial claims, with Fred Voccola, Kaseya’s CEO, admitting that the actual number of compromised businesses may exceed 1,500:
If you instead look at individual systems, the scope of the attack is much more alarming. Over a million individual systems are potentially locked up at the time of publishing, and Kaspersky has observed that there have been over 5,000 attack attempts in 22 countries.
The situation in Sweden provides a window into the global reliance on MSPs for IT services. As a result of this REvil ransomware attack, it is believed that 20% of Sweden’s food retail, pharmacy, and train ticket sales have been shut down:
The ripples of the Kaseya attack are still spreading and at this point, we can only speculate just how far the MSPs’ compromise will trickle down. Many of the compromised Swedish businesses were not even direct customers. This means that there is still a possibility that additional countries or organizations that rely on Kaseya (or even those loosely affiliated) can still add to the tally of casualties. We may not know exactly how many systems become compromised, but we do know that those that are will be staring at a ransom payment demand.
How did the Kaseya attack happen
If Kaseya and DIVD were following a standard coordinated vulnerability disclosure, how far along were they in patching the vulnerabilities? DIVD’s limited vulnerability disclosure provides new insights into those questions.
Including the previously mentioned CVE-2021-30116, a total of seven vulnerabilities were part of DIVD’s disclosure, and at least one was mutually discovered by the attackers:
- CVE-2021-30116 – A credentials leak and business logic flaw, resolution in progress
- CVE-2021-30117 – An SQL injection vulnerability, resolved in May 8th patch
- CVE-2021-30118 – A Remote Code Execution vulnerability, resolved in April 10th patch (v9.5.5)
- CVE-2021-30119 – A Cross Site Scripting vulnerability, resolution in progress
- CVE-2021-30120 – 2FA bypass, resolution in progress
- CVE-2021-30121 – A Local File Inclusion vulnerability, resolved in May 8th patch
- CVE-2021-30201 – An XML External Entity vulnerability, resolved in May 8th patch
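The classes listed above (SQL injection, XSS, local file inclusion, XXE) are generic web vulnerability categories, not Kaseya-specific bugs. As a purely illustrative example of the SQL injection class, here is a sketch using Python’s built-in `sqlite3`; the table and input are invented, and this has nothing to do with VSA’s actual code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"

# Vulnerable: string concatenation lets attacker-controlled input
# rewrite the query; the injected OR clause matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] -- injection matched the row
print(safe)        # [] -- no user is literally named "alice' OR '1'='1"
```

The same pattern, attacker-controlled input interpreted as code or structure rather than data, underlies most of the vulnerability classes in DIVD’s list.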
The following was the coordinated vulnerability disclosure timeline between DIVD and Kaseya:
- April 1, 2021 – Research start
- April 2, 2021 – DIVD starts scanning internet-facing implementations
- April 4, 2021 – Start of the identification of possible victims (with internet-facing systems)
- April 6, 2021 – Kaseya informed
- April 10, 2021 – Vendor starts issuing patches v9.5.5. Resolves CVE-2021-30118.
- May 8, 2021 – Vendor issues patch v9.5.6. Resolves CVE-2021-30117, CVE-2021-30121, and CVE-2021-30201.
- June 4, 2021 – DIVD CSIRT hands over a list of identified Kaseya hosts to Kaseya
- June 26, 2021 – Patch 9.5.7 is released on VSA SaaS and resolves CVE-2021-30116 and CVE-2021-30119
- July 2, 2021 – DIVD responds to the ransomware by scanning for Kaseya VSA instances reachable via the internet and sends out notifications to network owners
- July 7, 2021 – Limited publication (after 3 months) is released
Even with DIVD’s limited disclosure, putting the pieces together may reveal quite a lot. However, it does introduce a new set of questions for both DIVD and Kaseya. Why did DIVD start scanning the internet to count vulnerable instances days before informing Kaseya of the vulnerabilities? Why not give them those extra days to begin triage and developing a patch first? Why did Kaseya choose to patch the XXE vulnerability (CVE-2021-30201) before the credentials leak, unless the XXE disclosed information that could be used for privilege escalation?
Given the number of vulnerabilities, and Kaseya’s proactive patching of those issues, the blank space in the timeline above stands out. The timing of the attack, in light of what came before it, makes it stand out even more.
Remember Kaseya’s first customer notification?
“While our investigation is ongoing, to date we believe that our SaaS customers were never at-risk. We expect to restore service to those customers once we have confirmed that they are not at risk, which we expect will be within the next 24-48 hours.”
– Fred Voccola, CEO at Kaseya
With the new timeline in mind, this section does make sense, since nearly a week before the attack, the same exploited vulnerability (CVE-2021-30116) was patched on VSA SaaS. This prompts us to ask: were threat actors pressured to attack on the July 4 holiday weekend? If so, why? Did attackers manage to intercept communications or gain access to DIVD’s vulnerability information?
Kaseya attack: Stumbled discovery or masterfully planned?
Choosing to prioritize VSA SaaS first was a smart move on Kaseya’s part: if the SaaS had also been compromised, we might have been looking at an attack on a much bigger scale. Kaseya was also working at a consistent pace, so did attackers think their window of opportunity was closing fast? If they somehow knew that the vulnerability now identified as CVE-2021-30116 existed, it is logical to assume that threat actors would hack VSA SaaS in order to spread the ransomware as widely as possible.
This line of thinking also suggests that attackers found the zero-day vulnerability through mutual discovery. This most likely occurred a little before or after June 26. This theory is plausible if you factor in the “sophistication” of the attack and the ransomware payment demand.
DIVD found the vulnerabilities, performed a scan of the internet for VSA hosts, identified the possible victims, and then notified Kaseya. All of that occurred at the start of April. Before the limited disclosure, one of our possible theories was that attackers may have intercepted DIVD’s email communication or compromised VSA’s bug tracker, a scenario that some others shared as well:
DIVD’s email warning of exploitable vulnerabilities was sent on April 6. Also, according to DIVD themselves, the PoC was easy to exploit:
If attackers had intercepted that email near the time it was sent, we most likely would have seen this attack occur much faster and before VSA SaaS was patched. The goal of ransomware is to infect as many systems as possible. If threat actors had the knowledge that the SaaS was unpatched, it wouldn’t make sense for them to wait until the holiday weekend just as their attack surface was neutered.
We speculate that attackers found the vulnerability and vulnerable systems a little before or after June 26, saw that the SaaS was patched, and then rushed to weaponize CVE-2021-30116 before Kaseya could mitigate the effects for their on-premises customers.
The ransomware payment demand also suggests that attackers may have felt pressured to act fast. Catalin Cimpanu, cybersecurity news reporter at ZDNet, Bleeping Computer, and Softpedia asked the following questions:
- Why is REvil asking for payment in Bitcoin, which has already been traced by US authorities, when they have primarily been asking victims for Monero (XMR)?
- Are they anticipating that the ransom will not be paid, and the $70 million demand is just for show?
There may be a reason behind allowing victims to pay in Bitcoin. Bitcoin is more accessible, easier to use, and, frankly, better known. While XMR has been the preferred cryptocurrency of REvil operators in the past, ultimately threat actors just want to be paid. When it comes to Bitcoin, there is better infrastructure that allows companies unfamiliar with cryptocurrency to pay quickly. Given that many of those affected downstream from the Kaseya attack are small to mid-sized businesses, they would be more likely to pay or to pressure Kaseya to adhere to the demands.
The fact that the ransom demand dropped from $70 million to $50 million in less than 24 hours also suggests that this attack may have been a hasty move rather than a calculated one. Considering the timeline, the complexity of the exploit, and the payment demand and method, the Kaseya attack has a hurried feel, suggesting that the attackers stumbled upon an opportunity that they knew was about to close.
Is the Kaseya hack actually a supply chain attack?
What is a supply chain anyway?
Within hours of the Kaseya breach becoming public, some critics called out that it was being incorrectly labeled as a supply chain attack. As Nick Carr pointed out, “precise language is important in security”. These calls only grew louder as we learned that devices weren’t compromised due to a malicious software update injected upstream, despite the Kaseya Virtual System Administrator (VSA) devices impacting customers downstream.
If the term ‘supply chain’ is being used incorrectly, it is important to define what it means and then examine what might constitute such an attack.
Let’s start with the definition in common use, and then unpack it.
Googling for “supply chain definition” gives us “the sequence of processes involved in the production and distribution of a commodity”. While accurate, that is also fairly broad. Looking at Wikipedia, we get the following:
This definition leans more heavily on physical items, since it specifies resources, materials, and components along with the term “finished product”. Since software is often updated, and pushing those updates to customers is more about modifying an existing product, that definition may not be entirely accurate for our purposes. This gets more convoluted if we look at the Wikipedia article on Supply Chain Attack, which includes an example of ATM malware in which the attacker walks up to an infected ATM and removes cash. It does not specify how the malware gets there, and a majority of ATM “black box” attacks require physical access to carry out. Such a case is not a supply chain attack at all.
To add even more confusion, the UK National Cyber Security Centre says that watering-hole attacks count as supply chain attacks, which we simply cannot agree with. For readers who are not vulnerability and threat management professionals, a watering hole attack is when you are lured to a page that hosts an exploit which compromises your machine. The relationship is directly between the attacker and victim, or to put it another way, there is no (supply) chain at all.
If the Kaseya incident doesn’t qualify as a supply chain attack, either from vendor to MSP, or from MSP to customers, then what does? We’ll begin by looking at six different types of attacks that have been called “supply chain”, even if some may be arguable.
- Third-party software, whether you call them “libraries” or “dependencies”:
- A pure vulnerability in a third-party software package.
Equifax is perhaps the highest profile example of this, as they were compromised via a vulnerable Apache Struts that was not patched in a timely manner. Some consider this a supply-chain attack.
- Malicious code implanted in third-party software.
This may come in the form of repo-jacking, dependency confusion / repo name squatting, or a malicious commit. These are arguably three different methods of obtaining the same result. The attack designed to steal the cryptocurrency Agama is a good example of this.
- Third-party software related service, rather than the software itself.
The recent Codecov attack was a significant example of this.
- A physical supply chain attack that involves implanting a malicious chip / module in a commercial product before it ships from the vendor.
These have long been theorized and talked about, but the highest profile example, as covered by Bloomberg, has been disputed by professionals including vendors, journalists, government, and security experts. New York Times journalist Nicole Perlroth briefly talked about a case where the CIA “infiltrated factory floors”, but curiously offered no citations for the story.
- An upstream vendor is compromised to push a malicious update.
This type of attack relies on hijacking a software update service process so that when the software or device “phones home” to check for updates, it finds one and installs it, even if malicious. The recent Solarwinds compromise was performed this way as well as Russian hackers that launched an attack against Ukraine by compromising the update process of the M.E. Doc software.
- Compromising a Managed Service Provider (MSP), either service or software.
Both theories of how the attack happened would qualify here, one way or another. If Kaseya was compromised and the attackers pushed out a malicious update, it would count as #3 above. But then as the devices pushed out updates or additional malicious code to the computers managed by the Kaseya device, that would count as #4 here.
Please note the distinction between MSP, Managed Security Service Provider (MSSP), and Remote Monitoring and Management (RMM); all provide similar or overlapping functionality but can ultimately be very different things. Kaseya VSA devices are considered to be an RMM product, while some of Kaseya’s customers are MSPs, and Kaseya is a software vendor.
- Injecting malicious content via a Content Delivery Network (CDN).
- Compromising a contractor’s access to privileged resources.
An attacker that can gain access to a company that provides operations support services may have legitimate access to dozens or hundreds of customers via passwords, certificates, or tokens. Using that access the hackers can, in turn, compromise the target and potentially more. Some consider this a supply chain attack, as was the case with the Target incident.
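Several of the categories above (the upstream-vendor push in particular) hinge on clients installing whatever the update channel serves. The sketch below shows the verification step that a hijacked channel bypasses. It is deliberately simplified: real update systems use asymmetric code signing (Authenticode, GPG, Sigstore), and the HMAC shared key here is just an illustrative stand-in. Note that signing only helps when the signing keys and build pipeline are themselves intact, which is exactly what failed in the SolarWinds and CCleaner cases.

```python
import hashlib
import hmac

# Simplified stand-in for a publisher's signing key. Real systems use
# asymmetric code signing; a shared-key HMAC keeps the example short.
PUBLISHER_KEY = b"demo-publisher-key"  # hypothetical

def sign(payload: bytes) -> str:
    return hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()

def install_update(payload: bytes, signature: str) -> str:
    # Constant-time comparison against the expected signature.
    if not hmac.compare_digest(sign(payload), signature):
        return "rejected"   # tampered or unsigned update
    return "installed"      # signature matches the pinned key

genuine = b"patch v9.5.7"
print(install_update(genuine, sign(genuine)))  # installed

# An attacker controlling the update server can swap the payload,
# but cannot forge a valid signature without the publisher's key.
malicious = b"ransomware-dropper"
print(install_update(malicious, sign(genuine)))  # rejected
```

A client that skips this check, or a publisher whose key is stolen, turns the update mechanism itself into the delivery vehicle.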
With these six categories in mind, if we revisit the definitions in common use, you can begin to see a flaw in the logic. If a vulnerability in a third-party dependency represents a supply chain attack, then tens of thousands of attacks have been carried out. We cited Equifax and the Apache “Struts-Shock” vulnerability, but think back further to Heartbleed, which may have been used against thousands of hosts. Looking at the last item in the list, if any stolen credential represents a supply chain attack, then thousands more have occurred.

To better qualify what constitutes a supply chain attack, Ax Sharma of CSO Online wrote an article on the topic and defined the term, using a very specific detail that we feel better limits what counts:
“The umbrella term “supply chain attack” covers any instance where an attacker interferes with or hijacks the software manufacturing process (software development lifecycle) such that multiple consumers of the finished product or service are impacted detrimentally.”
When we modify the definition to require malicious intent, it means that #1a (a pure vulnerability in a third-party dependency) and #6 (compromised contractor access) from the list above are not supply chain attacks. Qualifying this matters as such attacks gain more and more attention, or we run the risk of falling into the “threat intelligence” trap: that term means many different things, and security companies that try to do all of it often fall short on most.
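Of the malicious-code variants in category one, dependency confusion is the easiest to demonstrate. The toy resolver below simulates the behavior the attack abuses: when a package name exists on both an internal and a public index, a naive resolver simply picks the highest version. The package name and versions are invented; this models resolver logic generically rather than any specific package manager.

```python
# Toy simulation of dependency confusion: a naive resolver that
# merges internal and public indexes and always picks the highest
# version, regardless of where it came from.

def resolve(package, *indexes):
    candidates = [idx[package] for idx in indexes if package in idx]
    # Highest version wins -- exactly the rule the attack abuses.
    return max(candidates, key=lambda c: c["version"])

# A company's private index hosts an internal-only package.
internal = {"acme-billing-utils": {"version": (1, 2, 0), "source": "internal"}}

# Attacker publishes the same private name publicly with a huge version.
public = {"acme-billing-utils": {"version": (99, 0, 0), "source": "public"}}

chosen = resolve("acme-billing-utils", internal, public)
print(chosen["source"])  # public -- the attacker's package gets installed
```

Real-world mitigations pin each package to a specific index (or a hash), so version numbers alone can never redirect a build to an attacker-controlled copy.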
Supply chain attack origins
When it comes to supply chain attacks, Solarwinds dominated news cycles, but now the discussion will shift to Kaseya. Setting aside current digital examples, where did these types of attacks originate? Was there a time before computer-based attacks when, say, a military or an economy suffered a “supply chain attack”? It seems probable, but we haven’t found many documented examples in the real world.
Interestingly enough, the best non-computer example we could find was the man who bought 18,000 bottles of hand sanitizer last year with the intent to turn a profit (and thereby perhaps to attack the supply chain in his area as an intended physical disruption). But let’s get back to the digital realm.
The first computer-based supply chain attack
Aaron Bray at Phylum points out a great early example of a software supply-chain hack in Ken Thompson’s paper, “Reflections on Trusting Trust“. This seminal work set the tone for many aspects of security and risk modeling by making us question something that we should conceivably be able to trust. What if your own compiler was introducing malicious code at compile-time to produce backdoored software every time? While Thompson’s paper is heavily cited for this proposal, the same idea was put forth in 1974 by Karger and Schell in their paper “Multics security evaluation: Vulnerability analysis“:
“In Multics, most of the ring 0 supervisor is written in PL/1. A penetrator could insert a trap door in the PL/1 compiler to note when it is compiling a ring 0 module. Then the compiler would insert an object code trap door in the ring 0 module without listing the code in the listing. Since the Pl/1 compiler is itself written in Pl/1, the trap door can maintain itself, even when the compiler is recompiled.”
– Page 55, “Multics security evaluation: Vulnerability analysis”
During their evaluation of a Multics system they demonstrated this type of trapdoor by placing it in the Multics system at MIT. In this 1974 paper the relationship between the MIT system and the vendor is not clear, but in the 1979 paper Schell clarified what happened:
“Trap door installed. The tiger team penetrated Multics and modified the manufacturer’s master copy of the Multics operating system itself by installing a trap door: computer instructions to deliberately bypass the normal security checks and thus ensure penetration even after the initial flaw was fixed. This trap door was small (fewer than 10 instructions out of 100,000) and required a password for use. The manufacturer could not find it, even when he knew it existed and how it worked. Furthermore, since the trap door was inserted in the master copy of the operating system programs, the manufacturer automatically distributed this trap door to all Multics installations.”
– Lieutenant Colonel Roger R. Schell
In summary, a 1974 red team formed by the United States Air Force penetrated the Multics operating system and inserted a backdoor that made it into Honeywell’s master copy. The backdoor was distributed to “all Multics installations” but fortunately, the backdoor was neutered and was only a proof of concept. It was not one that could be exploited. 47 years ago we saw the first computer-based supply chain attack as a proof-of-concept. That incident would be indirectly cited for decades to come before being essentially forgotten.
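The self-perpetuating compiler trap door that Karger, Schell, and later Thompson described can be simulated schematically in a few lines. This toy “compiler” is just a function from source text to source text; the two pattern-matching rules (backdoor the login program, and re-insert the trap door when compiling the compiler itself) are the whole idea. Everything here is invented for illustration.

```python
# Toy simulation of the Thompson/Karger-Schell compiler trap door.
# A real attack emits machine code; here "compilation" is just
# appending text, which is enough to show the two rules.

BACKDOOR = 'if password == "magic": grant_access()'

def compile_source(source: str) -> str:
    compiled = source
    # Rule 1: when compiling the login program, insert a backdoor
    # that appears in the output but never in the source listing.
    if "def login" in source:
        compiled += "\n    " + BACKDOOR
    # Rule 2: when compiling the compiler itself, mark that the trap
    # door logic would be copied into the new compiler -- so the
    # backdoor survives even if every line of source is clean.
    if "def compile_source" in source:
        compiled += "\n# (trap door re-inserted into new compiler)"
    return compiled

clean_login = "def login(password):\n    check(password)"
print(BACKDOOR in compile_source(clean_login))  # True
```

Auditing the source of either the login program or the compiler reveals nothing, which is precisely why Thompson concluded you cannot trust code you did not totally create yourself.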
What 20 years of supply chain attacks looks like
In a recent article in Wired, Andy Greenberg goes into detail about the 2011 RSA breach. Based on interviews with several RSA employees who are no longer bound by NDA, the perspective on the breach and the resulting fallout is interesting. Since the cryptographic seeds of RSA’s two-factor authentication (2FA) key fobs were stolen, it mimicked a supply chain attack of sorts when the criminals used that information to allegedly compromise some of RSA’s customers. The article quotes Mikko Hypponen of F-Secure who was part of a third-party analysis of the breach saying “It opened my eyes to supply chain attacks.“
Given what we know of Karger and Schell, Ken Thompson, and how hackers in the late 80s and early 90s operated, this quote is a surprise. In those early days of compromising machines, the technology and infrastructure were immature compared to today. While a supply chain attack was possible, it would primarily have taken the form of a malicious update made available on a vendor’s FTP site.
This type of attack largely predates automatic updates, when patches were downloaded and installed by an administrator. Such attacks can be seen in 1994, when WU-FTPD and ircII were backdoored on their official distribution sites, again in 1997 for the AtlantiS IRC Script, in 1999 for TCP Wrappers and util-linux, and so on. Even bigger, more prominent software like OpenSSH and Sendmail were backdoored and distributed via their official sites in 2002. In fact, these incidents happened so many times that they were all lumped into a single CVE ID.
Here are some significant supply chain attacks within the last 20 years:
- 2001 – The Thruport CDN was compromised to deliver a custom GIF image that displayed on SecurityFocus. While the content was not malicious and only served to ‘deface’ their web page, it certainly could have been.
- 2011 – RSA was hacked in 2011 and the cryptographic seeds for their 2FA tokens stolen. They were allegedly used to attack some of RSA’s high-profile customers afterwards. To this day, RSA denies they were used for that purpose. However, some of the compromised organizations are adamant that was the source of compromise.
- 2012 – Flame is a self-propagating piece of complex malware that contained a cryptographic attack that allowed it to generate fake Microsoft security certificates. This was used to hijack the Windows Update service so that it could spread.
- 2013 – The Dragonfly campaign subverted legitimate software on vendor websites that were used by ICS equipment in the energy sector. This represented a serious targeted attack by criminals on a specific sector.
- 2013 – In South Korea, bad actors compromised the auto-update mechanism of SimDisk, a file-sharing service with end-user software that impacted both government and news websites.
- 2015 – The Syrian Electronic Army hacked Instart Logic to inject malicious content into the Instart CDN, allowing them to cause pop-up alerts on the Washington Post website informing visitors the site had been hacked.
- 2017 – Security company Avast suffered an attack that led to 2.27 million downloads of a trojaned version of CCleaner. The auto-update mechanism pulled in the new code and of those, approximately 1.65 million installations phoned home to the attackers who subsequently launched follow-up attacks. Even though only 40 were targeted, all of them were “technology and IT enterprise targets.“
- 2018 – A suspected Chinese-backed group stole two legitimate ASUS certificates which were used to push updates to almost a million ASUS systems in an operation dubbed ShadowHammer.
- 2019 – Researchers discovered the Magecart credit card skimmer on some Amazon CloudFront CDN servers. Instead of this being served up to hundreds or thousands of web sites, it only affected the customers hosting their own content there. Still, it has the potential to cause serious issues and highlights the risks of cloud security.
- 2019 – Attackers used a ConnectWise Control remote access tool to compromise the MSP, TSM Consulting. 22 town and county sites in Texas were then infected with ransomware.
- 2020 – Researchers discovered and analyzed the Octopus Scanner malware, which searched for and backdoored NetBeans projects hosted on GitHub and used the build process as a method for propagating itself.
- 2020 – In one of the most high-profile supply chain attacks, software company Solarwinds was compromised by what is believed to be APT29 (Cozy Bear/Russian SVR). Once inside, the attackers used a valid certificate to sign and then distribute malware via a software update.
- 2020 – Using default passwords found at software vendors of ICS products, the Kwampirs malware campaign injected and distributed a remote access trojan (RAT) through the supply chain, leading to backdoor access.
- 2021 – The heavily used Codecov Bash Uploader was modified by a bad actor who, for over two months, used that access to harvest information from users’ continuous integration (CI) environments.
This timeline shows a significant gap between 2001 and 2011. Despite the gap, many attacks during that time may have been carried out and gone unnoticed, or were simply not labelled as “supply chain” attacks. Post-2011, however, we see a steady series of incidents that highlights both growing frequency and growing severity. The message is clear: while such attacks may require more time and technical skill, the potential payoff makes it worthwhile for malicious actors.
The Kaseya attack: What we know now
The Kaseya attack: why did this happen?
New findings from SophosLabs reveal that the attacks occurred on July 2, 2021 at 8:47am.
It took under two seconds to wreak havoc on potentially over 1,500 global businesses and millions of individual systems. This revelation seems to strengthen our initial theory of how the threat actors discovered CVE-2021-30116, the vulnerability exploited in this recent hack. It also makes us reflect on Kaseya’s earlier statements, in which they cited the incident as “sophisticated”. The new information provides context that it was actually quite simplistic. It reminds us of a time not so long ago when companies seemed to consistently blame data breaches on APTs; since everything was so Advanced, so Persistent, and such a Threat, what else could have been expected? In this case, those words have been replaced with “sophisticated”.
Deflecting responsibility with sophistication
According to DIVD, the proof of concept for CVE-2021-30116 is simple to understand. This contradicts the Kaseya advisory, which described the incident as a “sophisticated cyberattack”; that description appears ten times in their advisory. But if you consider DIVD’s professional opinion and the new details on the attack itself, “sophisticated” may not be the best description. Even if it were, it doesn’t excuse what happened. When this attack, like so many others we hear about, is described as “sophisticated”, we have to ask: is it really?
When organizations are hacked, there is a tendency to position the incident as unavoidable, as Kaseya has done in this case. Hundreds of high-profile compromises including the previously mentioned RSA Breach have been called sophisticated, often with colorful modifiers such as “extremely sophisticated” or “sophisticated state hackers”. Yet after the RSA hack, Timo Hirvonen said it “wasn’t particularly sophisticated” and he was the one who performed an external analysis of the incident!
Defaulting to such terminology has become a common way for companies to deflect responsibility by essentially suggesting that “no one could have stopped it”. If these claims are correct, then even the largest companies stand no chance in the face of seasoned attackers. But if these attacks are actually simple to execute, then unsophisticated attackers are pulling off incredibly damaging attacks. So we must ask whether Kaseya truly thought the attack warranted the description, or whether this is a classic example of deflection.
Alleged track record of irresponsibility
Newly surfaced reporting about Kaseya’s own security suggests that its advisories and press releases are another case of passing the blame. DIVD’s description includes glowing praise for Kaseya:
“During the entire process, Kaseya has shown that they were willing to put in the maximum effort and initiative into this case both to get the issue fixed and their customers patched. They showed a genuine commitment to do the right thing.”
– Victor Gevers, Chairman and Head of Research at DIVD
On the other hand, Ryan Gallagher and Andrew Martin’s article for Bloomberg stands in stark contrast. According to Bloomberg’s investigation, many Kaseya employees had raised their concerns with upper management over the past four years and were let go as a result. One employee even produced his own version of the “95 Theses”, writing a 40-page memo highlighting his security concerns. He was fired two weeks later.
Many of those concerns involved the very issues exploited in the recent Kaseya attack. One of the fired employees even went so far as to say that VSA “was so antiquated and riddled with problems that it should be replaced”. At the time of writing, our VulnDB® offering has tracked at least 76 known vulnerabilities in Kaseya’s VSA product. Of those vulnerabilities, only 10 have CVE IDs assigned.
Why the Kaseya attack will likely happen again
At this moment, patch 9.5.7a has brought all VSA SaaS customers back online. But even with VSA service returning to normal, it is not clear if or when Kaseya will put this situation behind them. One thing is certain: this will not be the last attack of this type or scale, whether it hits Kaseya or a similar organization. How can we say that? Well…
1. This is not the first time Kaseya has been hit by ransomware
The REvil attack was not an isolated instance of VSA software being used to push ransomware. Back in 2019, Kaseya VSA was used to execute GandCrab ransomware.
In that attack, an affected MSP customer had nearly 2,000 systems shut down. The cause of the compromise was traced to a vulnerable plugin for Kaseya’s VSA software. GandCrab has since disbanded due to increased attention and shut down its websites.
Several sources state there is a high possibility that the GandCrab team regrouped behind REvil. However, even if the developers are the same people, it’s important to note that the “affiliates” who selected the targets, launched the attacks, and negotiated the ransoms in the GandCrab incident are not the same as those behind this recent REvil attack.
2. Breaches can be profitable on both sides
Obviously, breaches are profitable to attackers. Kaseya isn’t likely to pay for a questionable decryptor on behalf of its customers’ customers, and it is folly to think the 1,000+ downstream impacted parties would collectively come together to raise the $50 million demand. Regardless, if even a small percentage of the downstream victims choose to pay a nominal amount for a decryption key, this will most likely be a profitable attack.
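The back-of-the-envelope math here is straightforward. As a sketch only, using purely illustrative numbers (the victim count, payment rate, and per-victim demand below are assumptions, not confirmed figures from this incident):

```python
# Hypothetical math showing why even a low payment rate can make an
# attack profitable. None of these inputs are confirmed figures.
def attacker_revenue(victims: int, pay_rate: float, avg_ransom: float) -> float:
    """Estimated revenue if `pay_rate` of `victims` each pay `avg_ransom`."""
    return victims * pay_rate * avg_ransom

# Assume ~1,500 downstream victims, 5% paying, and a $45,000 average
# per-victim demand (all three values are illustrative assumptions).
estimate = attacker_revenue(1500, 0.05, 45_000)
print(f"${estimate:,.0f}")
```

Even under these modest assumptions the attackers clear several million dollars, which is why collective refusal to pay is so hard to achieve in practice.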
Instead, a more interesting question to consider is: are breaches profitable to the breached company? Hacker News user “genmud” offered an interesting view on the topic:
It may be easy to dismiss this as a joke but this kind of math can influence decision-making. If security teams are unable to demonstrate how the investment in new or improved security initiatives will benefit the company, it is unlikely resources will be allocated to the project.
However, one aspect is demonstrably false. Cyber insurers can lessen the financial blow of a breach by covering unexpected costs like outside response assistance, legal fees, and yes, even payment of a ransom demand. But no insurer will pay “for your technology/security program” or repair the damage to the company’s reputation.
Any silver linings?
Aside from the doom and gloom of these cyber-extortion-ransom-threat-attacks, are there any silver linings?
We generally don’t associate crippling computer attacks against large organizations with positive things, but Halvar Flake brings up an interesting point. He points out that ransomware creates a direct correlation between a vulnerability and a “concrete economic cost”, which may “be good for improving security in the long run”. This is an interesting point and speaks to the economics of vulnerability, rather than the vulnerability market.
Given the rapid adoption of cyber insurance over the past 10 years, these ransomware incidents have already re-shaped some of the assumptions used by actuaries at insurance companies. As ransomware events continue to occur and the average cost of ransomware incidents jumps considerably, the associated insurance premiums will rise as well – a trend that is already playing out in the industry. Will that in turn lead to higher underwriting requirements, or companies forgoing such policies due to the cost? We have been talking about cyber insurance and issues with policies for years, in particular the pricing!
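The actuarial logic behind those rising premiums can be sketched simply: an indicated premium must at least cover expected loss (incident frequency times average cost) plus a loading for expenses and margin. The figures below are purely illustrative assumptions, not real market data:

```python
# A toy actuarial view, not a real pricing model: premium covers
# expected loss (frequency * severity) plus a loading factor.
def pure_premium(annual_incident_prob: float, expected_cost: float,
                 loading: float = 0.3) -> float:
    """Expected annual loss grossed up by a loading for expenses/margin."""
    return annual_incident_prob * expected_cost * (1 + loading)

# Illustrative only: if incident frequency doubles (2% -> 4%) and the
# average incident cost rises from $750k to $1.85M, the indicated
# premium jumps roughly fivefold.
before = pure_premium(0.02, 750_000)    # 0.02 * 750,000 * 1.3
after = pure_premium(0.04, 1_850_000)   # 0.04 * 1,850,000 * 1.3
print(before, after)
```

The point is not the specific numbers but the multiplicative effect: frequency and severity rising together compound, which is exactly what insurers are repricing for.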
The final silver lining, one we have hoped for after every single big incident and especially after a systemic cyber event like this, is the possibility that this will be the wake-up call. What will it take for organizations to examine their security budgets and protocols and determine whether they need enhancement? If a $70 million (later $50 million) MSP ransomware incident isn’t it, we’re not sure what is.
Possible targets for the next big attack
After the high-profile compromises of SolarWinds and Kaseya, along with smaller MSPs like TSM Consulting, threat actor groups are certainly going to keep targeting these types of companies. Injecting themselves into the supply chain clearly yields great results, both in the far-reaching impact of compromised systems and in the ransom potential.
That prompts us to wonder which companies might have targets on their backs. Instead of speculating and listing specific companies publicly (which is very tempting), consider the following types of organizations that produce:
- Managed Service Provider (MSP) software products
- Remote Monitoring and Management (RMM) software
- Unified Endpoint Management (UEM) solutions for desktop and mobile
- Automation and orchestration products (given the privileges and functionality these require, they could be a devastating vector)
- On-prem software that offers remote computer management for desktop and mobile
- Remote IT infrastructure management software
- Software in the above categories that has previously been compromised
- Software advertised with statements such as “keep your business secure”, “high-quality remote security”, or “unbreakable”
The list could continue, but what’s likely next after that? How about mobile device management (MDM) software or services, since every phone is a viable target for ransomware? What will an organization do when both its desktops and its mobile devices are locked? With some products offering RMM for desktop and mobile in a single solution, it’s a matter of “when” this happens, not “if”.
Avoiding the threat intelligence trap
Trust is an integral part of security. We live in a time where supply chain attacks delivered via automatic updates are treacherous, yet history shows organizations are not the best at patching. As a colleague framed it:
“If auto-updates are an unacceptable risk because patches are always applied, and manual updates are an unacceptable risk because patches are never applied, well what then?”
The Kaseya attack has revealed uncomfortable truths about the software and processes we often take for granted. It often takes an eye-opening event to expose how intertwined we all are with one another, regardless of what country we are in, what industry we are in, or the size of our organization. Will the Kaseya hack finally be the straw that breaks the camel’s back?
With Biden’s new cybersecurity executive order, and a “red line” stance on cyberattacks from state-sponsored actors, it appears that it might be. But as new details emerge and security buzzwords are thrown around, regardless of whether they are accurately applied, we need to be careful not to fall into the threat intelligence trap. If we can’t correctly identify what happened and how, who was affected, and what a supply chain attack actually is, we as an industry cannot move forward to fix the problems.
Instead, we may try to fix the wrong things, or convince ourselves that nothing could have been done. However, something can always be done. Let’s start by making sure that we and the people we do business with take security seriously.