Category - Denial of Service


Hosters: Is Your Platform Being Used to Launch DDoS Attacks?

May 15, 2012 11:12 AM PDT

As anyone who’s been in the DDoS attack trenches knows, large multi-gigabit attacks have become more prevalent over the last few years. For many organizations, it’s become economically infeasible to provision enough bandwidth to combat this threat.

How are attackers themselves sourcing so much bandwidth? It’s actually easier than you might think. While botnets composed of malware-infected computers can be used to launch attacks, you don’t actually need thousands of devices. In some cases, attackers are infiltrating hosting company resources (shared hosting, virtual private servers, dedicated hosting, etc.), availing themselves of bandwidth by using hacked, stolen and fraudulent accounts.

Let’s say that an attacker manages to get his/her hands on 5 hosting accounts with 5 different hosting companies. It’s not unusual for these hosting companies to have 1 Gbps+ of connectivity to the Internet. A lot of hosters don’t look at their outbound traffic all that closely or have difficulty policing what their customers do. All an attacker needs to do is install a script on each account and he/she has easy access to gigabits of connectivity.

For hosters, finding the trouble spot can be like looking for a needle in a haystack (especially if thousands of accounts share resources). While the offender might be found eventually and the account shut down, the damage has already been done.

What can hosters do to help prevent this or detect this better?

Restrict outbound traffic from your customers by using ACLs (Access Control Lists). For example, there are few reasons your customers will ever need to make port 80 UDP connections to other hosts on the Internet. Put policies in place to block all outbound traffic except to specific, acceptable, understood destinations or ports. If customers have legitimate reasons to make an outbound connection from your infrastructure, they should be able to notify you and justify it (this will affect only a tiny percentage of your base) so you can make the appropriate arrangements. Some hosters do not even accommodate these requests.
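The shape of such a default-deny policy can be sketched in a few lines. In the Python sketch below, the function and allowlist names (`outbound_permitted`, `ALLOWED_OUTBOUND`) and the specific protocol/port pairs are hypothetical illustrations, not a recommendation; in practice the equivalent rules would live in your edge routers’ ACLs or a host firewall:

```python
# Default-deny outbound policy sketch: everything is blocked unless a
# (protocol, destination port) pair has been explicitly justified.
# The allowlist contents here are illustrative, not a recommendation.
ALLOWED_OUTBOUND = {
    ("tcp", 80),   # plain HTTP to upstream services
    ("tcp", 443),  # HTTPS
    ("udp", 53),   # DNS lookups
}

def outbound_permitted(protocol: str, dest_port: int) -> bool:
    """Return True only for explicitly approved outbound flows."""
    return (protocol.lower(), dest_port) in ALLOWED_OUTBOUND
```

The point is the default-deny posture: a customer’s script sending UDP floods to arbitrary ports simply has nowhere to go.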

Throttle outbound traffic from your customers. Even legitimate outbound connections are unlikely to need 500 Mbps of outbound bandwidth. Simply set a lower limit.
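A common way to express such a cap is a token bucket, sketched below. This is an illustration of the mechanism only, with hypothetical names and parameters; in production the limit would be enforced by router QoS features or a traffic shaper such as Linux `tc`, not application code:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: permits bursts up to `capacity`
    bytes and refills at `rate` bytes per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start with a full bucket
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over the cap: drop, queue, or delay the traffic
```

A customer capped this way can still burst briefly, but cannot sustain hundreds of megabits of attack traffic.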

Put alarms in place for when outbound traffic utilization spikes. If, for example, the amount of data leaving your network suddenly increases by 40%, there’s probably an issue somewhere and your tech folks should be investigating.
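The 40% rule above reduces to a trivial check. In this minimal sketch (not a monitoring product), `baseline_bps` is assumed to come from a rolling average of interface counters, e.g. SNMP `ifOutOctets` deltas; the function name and threshold default are illustrative:

```python
def spike_alarm(baseline_bps: float, current_bps: float,
                threshold: float = 0.40) -> bool:
    """Fire when outbound throughput exceeds the baseline by more than
    `threshold` (40% by default, matching the example above)."""
    if baseline_bps <= 0:
        # No baseline yet: any outbound traffic is worth a look.
        return current_bps > 0
    return (current_bps - baseline_bps) / baseline_bps > threshold
```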

Restricting and monitoring your outbound traffic will probably save you money on bandwidth costs and decrease the number of abuse reports you receive. Best of all, attackers will realize they’re not getting what they want out of your platform. The less you have to worry about, the better, right?


Securing the cloud

The future of the Internet could look like this: the bulk of the world’s computing is outsourced to “the cloud”, to massive data centers that house tens or even hundreds of thousands of computers. Rather than doing most of the heavy lifting themselves, our PCs, laptops, tablets and smartphones act like terminals, remotely accessing data centers through the Internet while conserving their processing juice for tasks like rendering HD video and generating concert-quality sound.

Three big things need to be figured out for this cloud-based future to emerge. One is how the computers within these data centers should talk to each other. Another is how the data centers should talk to each other within a super-secure cloud core. The third is how the cloud should talk to everyone else, including the big Internet service providers, the local ISPs and the end-of-the-line users (i.e. us).

This last channel, in particular, interests Michael Walfish, an assistant professor of computer science and one of the principal investigators of the NEBULA Project, which was awarded $7.5 million by the National Science Foundation to develop an architecture for making the Internet more cloud-friendly. If we’re going to be trusting so much of our computing lives to the cloud, he believes, we need to develop a more secure model for how information travels.

“A sender should be able to determine the path that information packets should take,” says Walfish. “A receiver should not have to accept traffic that she does not want. An intermediate provider should be able to know where the packet’s been and should be able to exercise its policies about the downstream provider that’s going to handle the flow next.”

Walfish’s system for providing such capacities, which he’s developing with colleagues at Stanford, the Stevens Institute of Technology, and the University of California, Berkeley, is called ICING. It’s a set of protocols that allow every packet of information not only to plot out a path from beginning to end, choosing every provider along the way, but also to establish a chain of provenance as it goes that proves, to both the intermediaries and the final recipients, that it came from where it said it was coming from.

“What we do is take a packet, a unit of data, and we add some fields to the head of the packet,” says Walfish, who in 2009 won an Air Force Young Investigator Award for work related to ICING.

“These fields contain enough cryptographic information to be able to communicate to every realm along the way, and back to the sender, where the packet’s been. So when a packet shows up, I know where it’s been. I know whether it obeys the policies of everyone along the path. That property does not exist today.”
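ICING’s actual packet format and cryptography are defined in the project’s publications. Purely as a loose illustration of the general idea of a chained proof of provenance (not ICING’s real construction), each realm along the path could fold a keyed MAC over the packet and the proof accumulated so far, so that a verifier sharing the keys can recompute, and thus check, the entire path; the function names and keys below are hypothetical:

```python
import hashlib
import hmac

def extend_provenance(proof: bytes, packet_digest: bytes,
                      realm_key: bytes) -> bytes:
    """One realm's step: MAC the packet digest together with the proof so
    far, binding this hop to every hop before it."""
    return hmac.new(realm_key, proof + packet_digest, hashlib.sha256).digest()

def path_proof(packet: bytes, realm_keys: list) -> bytes:
    """Accumulate the proof across every realm on the path, in order."""
    digest = hashlib.sha256(packet).digest()
    proof = b""
    for key in realm_keys:
        proof = extend_provenance(proof, digest, key)
    return proof
```

Because each step keys in both the packet and the prior proof, changing the packet, skipping a realm, or reordering the path all yield a different final value.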

The advantages of such knowledge, says Walfish, should be considerable. Senders, for instance, could contract with intermediate providers for a kind of expressway through the Internet. Recipients would have an easier time sorting their incoming traffic into different levels of priority depending on the routes the packets took.

Michael Walfish, assistant professor of computer science, is working to secure the future of cloud computing.

Perhaps the greatest advantage of adopting a system like ICING, says Walfish, would come in the area of security. Targets of various kinds of Internet attacks, like denial-of-service attacks, would be able to sever traffic from their attackers faster and with much greater precision. Governments would be able to set up channels of communication that pass through only well-vetted and highly-trusted service providers. Internet security companies could, from anywhere in the world, inspect your traffic for viruses.

“Right now,” says Walfish, “there are ways to deal with attackers, but they’re crude, and they’re reactive. Once the traffic enters the victim’s network link, you’re hosed. All you can do is shut it all down. It would be like if you had a huge line of people coming into your office, not letting you get work done. You could kick them all out, but you still wouldn’t get any work done because you’d spend all your time kicking them out. What you really need is for them to not show up in the first place.”

ICING, says Walfish, would also prevent “IP hijacking,” a kind of attack in which a network provider redirects net traffic by falsely “advertising” to hold a given IP address or by claiming to offer a more direct route to that address. Such IP hijackings can be globally disruptive. In 2008, for instance, the Pakistani government sought to block videos containing the controversial Danish cartoons that depicted Mohammed. The result was a global shutdown of YouTube for more than an hour. Last year, it’s believed, China Telecom was able to capture 15% of the world’s Internet traffic, for 18 minutes, by falsely claiming to be the source of more than 30,000 IP addresses.

“There are multiple reasons why this wouldn’t happen in ICING,” says Walfish. “First, in ICING, the contents of the advertisement and the name of the advertised destination are tightly bound; lie about one, and the other looks invalid. Second, because packets must respect policy, a packet taking an aberrant path will be detected as such.”
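The “lie about one and the other looks invalid” property can be illustrated with a toy digest that binds the advertised destination name to the advertisement’s contents (ICING’s real mechanism is more involved; the prefix and payload below are made-up examples):

```python
import hashlib
import hmac

def bind(name: bytes, contents: bytes) -> bytes:
    """Bind an advertisement's contents to the destination it claims."""
    return hashlib.sha256(name + b"|" + contents).digest()

def advertisement_valid(name: bytes, contents: bytes,
                        binding: bytes) -> bool:
    """Recompute the binding; a lie about either input fails the check."""
    return hmac.compare_digest(bind(name, contents), binding)
```

A hijacker who advertises someone else’s destination, or who tampers with the advertisement’s contents, produces a binding that no longer verifies.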

ICING, and its parent project NEBULA, are one of four multi-institutional projects being funded by the National Science Foundation’s Future Internet Architecture (FIA) program. The point of the FIA program, and of the efforts of Walfish and his colleagues, is to step back from the day-to-day challenges of managing the flow of information on the ‘net, and think more fundamentally about what kind of architecture the Internet should have going forward.

“Where ICING was born, I think,” says Walfish, “was in the realization my teammates and I had that while there was a consensus about what kinds of things needed to change, and there were a lot of proposals to make those changes, all the proposals seemed to be mutually exclusive. They all required the same space in packets. It would be like if your bike was out-of-date and someone said, oh, you can get this really cool feature if you just replace your front wheel with this wheel, and then someone else came along and said, oh, you can get this other really cool feature, but you have to replace your front wheel with this wheel. Well, you can only have one front wheel. So what we set out to do was to design a much more general-purpose mechanism where you could get all these properties without their conflicting with each other, and that’s what I think we’ve done.”


2011 Likely to Have Far More Malevolent Threats, Security Experts Warn

According to a warning by IT security experts, 2011 could be more challenging with respect to malware threats than the current year (2010). The Hindu Business Line published this on December 27, 2010. The experts also anticipate a major shift in the threat landscape as new kinds of organizers emerge with increasingly focused objectives for their Internet attacks.

Additionally, they state that during 2011, viruses will look more and more like those we see in science fiction films. At the same time, the realm of cyber-crime will consolidate, just as corporations merge. Viruses won’t simply attack individuals, but will increasingly target corporations and installations. Anti-virus companies will not be spared either.

According to Shantanu Ghosh, Vice-President of Symantec’s India operations, 2011 will bring increasing attacks against industrial organizations and critical infrastructure, and while ISPs will respond, governments will take counter-measures only slowly. The Economic Times published this on December 28, 2010.

Furthermore, mid-sized businesses will be targeted with cyber-espionage. Both critical infrastructure and highly reputed brands will keep being hit with increasingly localized and targeted attacks. Most attacks will take place through Web browsers, while Distributed Denial-of-Service (DDoS) attacks will continue to afflict the Internet on a massive scale.

According to The Hindu Business Line dated December 27, 2010, security specialists expect a completely new group of more dangerous malware authors, and malware attacks that seek private data and monetary gain. They also expect the emergence of “Spyware 2.0”, a new breed of malicious program for capturing users’ private information.

Additionally, cyber-criminals will increasingly attack users inside big companies, while direct attacks against everyday end-users will slowly decline.

Hence it’s important to understand that the technique used to execute an online attack will depend not on the entity organizing it or the objectives it has, but rather on the services available on the Internet, the technical capabilities of modern operating systems, and of course the devices the general public uses at work and in day-to-day life.

SPAMfighter News – 05-01-2011


Floating point DoS attack

A bug in the way the PHP scripting language converts certain numbers may cause it to tie up all system resources. For example, on 32-bit systems, converting the string “2.2250738585072011e-308” into a floating point number using the function zend_strtod results in an infinite loop and consequent full utilisation of CPU resources.

PHP 5.2 and 5.3 are affected, but apparently only on Intel CPUs which use x87 instructions to process floating point numbers. The x87 design has long been known to contain a bug which triggers just this problem (PDF) when computing approximations to 64-bit floating point numbers. By default, 64-bit systems instead use the SSE instruction set extension, under which the error does not occur. Processing the numbers 0.22250738585072011e-307, 22.250738585072011e-309 and 22250738585072011e-324 also triggers an infinite loop.
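The strings themselves are perfectly well-formed doubles; a correct conversion routine (Python’s `float()` is used here as a stand-in) handles them instantly. This also shows why the four spellings listed above all trigger the same bug, since they denote the same real number, just below the smallest normal double:

```python
# All four "poison" spellings from the advisory denote the same value.
# A non-vulnerable parser converts them without looping.
bad_strings = [
    "2.2250738585072011e-308",
    "0.22250738585072011e-307",
    "22.250738585072011e-309",
    "22250738585072011e-324",
]

values = [float(s) for s in bad_strings]
assert len(set(values)) == 1   # one and the same double
assert values[0] > 0.0
```

A front end written in a non-vulnerable language could use a check like this to normalize numeric request parameters before they reach an unpatched PHP 5.2/5.3 on x87 hardware, though upgrading or patching PHP is the real fix.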

It may also be possible to remotely disable some server systems merely by sending this value as a parameter in a GET request. The PHP development team has fixed this in the forthcoming version 5.3.5. A patch for version 5.2.16 is available from the repository.


The Wikileaks Affair & the CyberWorld

ESET Ireland’s Urban Schrott Examines How Global Communities Defend the Right of Free Information Circulation

2010 bows out on a note of controversy and turmoil, not only in the areas of diplomacy, politics, international relations and law, but also, probably for the first time in history, with the involvement (willingly or otherwise) of the whole global online community in an initiative aimed at defending the right to free information circulation through various means.

Leaving aside all the aforementioned global implications to focus purely on IT security issues, this is a multilayered phenomenon, where each layer could be expanded into a security analysis all on its own. For the sake of a comprehensive overview, let’s focus on a few of its most prominent manifestations here, and on how the Wikileaks affair might prove to be a game-changer in several respects.

The first consideration, the original sin you might say, is of course data protection itself. More specifically, the question of how potentially compromising data was being gathered, how it was transported and how it was stored. And where in all these processes people with various levels of clearance were able to get their hands on it and misuse it.

Inside Stories

Various IT security analysts have been pointing out for years now that insider data abuse is by far the most common source of data leakage. According to a 2009 Ponemon study, 59% of corporate workers surveyed stated they would leave with sensitive corporate data upon layoff or departure; 79% of these respondents admitted that their company did not permit them to leave with company data, and 68% were planning to use information such as email lists, customer contact lists and employee records that they stole from their employer.

Even though these data have been available for nearly two years, there seems to have been no significant global trend towards major policy changes regarding in-house data protection, nor has there been a reported widespread increase in the use of specialised protection hardware and software. Nowadays most data, including data formally classified as sensitive, are no longer collected as neatly organised papers in filing cabinets but digitally, and are therefore very easy to copy and distribute for anyone who can gain access to them, so it was inevitable that a major incident would take place sooner or later. And while such incidents in the corporate environment can usually be accommodated within the bounds of economic sustainability, in this case, since the breaches concern classified government documents mainly related to US international involvement in sensitive areas, the damage done has greatly affected already brittle international relations.

The After Effects

Now to the next part of the story, the after-effects. The first and most immediate development was a series of futile attempts to shut the stable door, firstly through shutting down Wikileaks servers, then by the exertion of coordinated corporate pressure from some of the major online players to disable funding and hamper further distribution of the compromised data.

The varied national legislations regarding webhosting made it impossible to block the distribution of data globally, while the funding issue and the involvement of (presumably) independent companies such as PayPal and Amazon sparked an unprecedented backlash from netizens worldwide, which resulted in yet another previously unheard-of situation. This was the much publicised Operation Payback, a concerted global hacking offensive, which in December was directed against the supposed offenders against the freedom of information.

This quick and well organised response surprised many, even if the “relative ease” and success of the attacks chosen didn’t. Jan-Keno Janssen, Jürgen Kuri and Jürgen Schmidt wrote about it in a thoughtful article for Heise (The H), while ESET’s Jeff Debrosse wrote in more detail about the DDoS (Distributed Denial of Service) attacks in his article “Web Weaponization and WikiLeaks”, where yet another twist is disclosed: cybercriminals were quick to attach their own interests to all the buzz created around the topic, spreading infected links supposedly leading to more info or resources, and SEO-ing (using Search Engine Optimization techniques) around the Wikileaks buzzwords.

Opinion has been divided on the concept of “ethical hacking”, especially in the context of the viability and morality of using measures that may cause inconvenience (and worse) to users of targeted services who may or may not be sympathetic to the Wikileaks stance. Consider, for example, this post by Neil Schwartzman which describes an attack on Spamhaus launched on the assumption that the blacklisting of the site was a further example of harassment of Wikileaks. Spamhaus, however, claims that the site in question is a malicious site intended to take advantage of all the fuss to pursue its own unethical purposes. While we can’t say authoritatively who is “in the right” in this particular case, it seems all too likely that criminals will continue to use this controversy to their own advantage. We can only hope that the defenders of information’s right to “want to be free” do not see the efforts of malware distributors, bot-herders and phishers as “free speech.”

Operation Leakspin

A different approach, aimed at a greater dissemination of controversial data rather than disrupting anyone else’s work, is now also in effect through the means of Operation Leakspin, but that’s already going beyond the field of IT security.

Overall, it’s still unclear whether the whole evolution of the Wikileaks affair is best described as a domino effect, a butterfly effect, or a combination of both, given all the repercussions and sub-plots developing all over the web. However, we are very likely to see change in some of the established protocols regarding data handling and distribution as a direct or indirect result of this incident, or perhaps even the introduction of new ones.


Copyright © 2014. DoS Protection UK. All Rights Reserved.