Category - DDoS

1. More Sophisticated DDoS Attack a New Threat to Apache Servers
2. London Internet Exchange hit by suspected DDoS attack
3. WHMCS under renewed DDoS blitz after patching systems
4. GM Food Research Site Hit by Cyber Attack
5. ‘SOCA’s weak response to a recent DDoS attack sends the wrong message’
6. UK’s largest hosting biz titsup in DDoS outrage
7. Anonymous Leaks 1.7 GB Justice Department Database
8. Hosters: Is Your Platform Being Used to Launch DDoS Attacks?
9. ESET Lists the Dominant E-Threats of 2010
10. Securing the cloud

More Sophisticated DDoS Attack a New Threat to Apache Servers

A once flawed DDoS attack targeting the world’s most widely used Web servers has improved its cryptography and attack capabilities to become a more serious threat.

MP-DDoser, also known as “IP-Killer,” uses a relatively new low-bandwidth, “asymmetrical” HTTP attack to inflict a denial-of-service attack against Apache Web servers by sending a very long HTTP header. This forces the web servers to do a great deal of server-side work for a relatively small request. Additionally, the malware now incorporates multiple layers of encryption.

Such sophistication is a far cry from the first version that appeared as a proof-of-concept Perl script in August 2011 and again months later in the Armageddon DDoS bot, according to a new report by Arbor Networks.

“These early versions had a number of serious flaws, such as a completely broken Slowloris attack implementation, and really awful crypto key management,” writes Arbor Networks research analyst Jeff Edwards. “But the latest samples (now up to ‘Version 1.6’) are much improved; the key management is quite good, and the buggy DDoS attacks are not only fixed, but now include at least one technique (‘Apache Killer’) that may be considered reasonably cutting edge.”

Using data collected anonymously from more than 200 service providers participating in Arbor’s ATLAS sensor network, Edwards was able to analyze the newest iteration of the DDoS bot and offer instructions for decrypting its transmissions.

“The malware actually uses a pretty straightforward algorithm for encrypting and decrypting the transmissions sent between bot and C&C server. It modulates the plaintext message with a key string using the XOR operator, but it applies this XOR operation only to the least significant four bits of each message byte,” he said in the report.
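
The scheme Edwards describes, XOR applied only to the low nibble of each byte, is easy to reproduce. Below is a minimal Python sketch of that idea; the key and message are made-up sample values, not data from an actual bot.

```python
def xor_low_nibble(data: bytes, key: bytes) -> bytes:
    """XOR the least significant 4 bits of each byte with a repeating key.

    Because XOR is its own inverse, the same routine both encrypts and
    decrypts; the high nibble of every byte passes through untouched.
    """
    out = bytearray()
    for i, b in enumerate(data):
        k = key[i % len(key)]
        out.append((b & 0xF0) | ((b ^ k) & 0x0F))
    return bytes(out)

# Made-up sample values, for illustration only.
key = b"SECRETKEY"
msg = b"PING botnet-id=demo"
enc = xor_low_nibble(msg, key)
dec = xor_low_nibble(enc, key)
assert dec == msg  # applying the XOR a second time restores the plaintext
```

Note that because only four bits of each byte are modulated, the high nibble of the ciphertext leaks the general character class of the plaintext, one reason such schemes are weak.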

The key string in earlier versions was simply hard-coded into the bot executable in plain text. It has since been improved: the key is now encrypted and stored in an RCDATA resource named MP, along with other sensitive information such as the hostname and port of the C&C and the botnet ID.

“To decrypt the MP resource string, the bot uses a lookup table (‘LUT’) that maps ASCII characters to integers for the initial phase of the decryption loop. But even this lookup table is itself encrypted! Fortunately, it is encrypted using the same algorithm used for crypting the network comms, and thus the decrypt_mpddos_comms() Python function will handle it,” according to the report. “And mercifully, the key string needed to decrypt the LUT happens to be stored in plain text in the bot executable. In all the samples that we’ve encountered to date, that key string is: 00FF00FF00FF, but that could easily change in the future.”

The 50-page report goes into detail on how to break MP-DDoser’s multi-layered encryption and decode its transmissions. In general, Edwards recommends:

1. Decrypting the LUT using decrypt_mpddos_comms()
2. Using the LUT to decrypt the MP resource via decrypt_mpddos_rsrc()
3. Pulling the comms key from the plain-text resource and providing it to decrypt_mpddos_comms() to decrypt the actual network traffic
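
Those three steps can be sketched as a pipeline. The function names below follow the ones quoted from the report, but their bodies are simplified stand-ins (a plain low-nibble XOR), and all of the “captured” bytes are mock values; the report’s actual LUT and resource formats are more involved.

```python
LUT_KEY = b"00FF00FF00FF"  # plain-text LUT key observed in samples, per the report

def decrypt_mpddos_comms(data: bytes, key: bytes) -> bytes:
    """Stand-in for the comms decryptor: XOR of the low nibble only."""
    return bytes((b & 0xF0) | ((b ^ key[i % len(key)]) & 0x0F)
                 for i, b in enumerate(data))

def decrypt_mpddos_rsrc(rsrc: bytes, lut: bytes) -> bytes:
    """Stand-in resource decryptor: here the LUT is simply used as an XOR key."""
    return decrypt_mpddos_comms(rsrc, lut)

# Mock artifacts. In a real sample these bytes come out of the bot executable
# and off the wire; here they are produced by "encrypting" known plaintexts
# (the XOR stand-in is its own inverse, so encrypting == decrypting).
lut_plain = b"mock lookup table"
encrypted_lut = decrypt_mpddos_comms(lut_plain, LUT_KEY)
rsrc_plain = b"commskey=XYZ;cc=host:port;botnet=demo"
encrypted_rsrc = decrypt_mpddos_rsrc(rsrc_plain, lut_plain)
wire_msg = decrypt_mpddos_comms(b"PING", b"XYZ")

# The three-step recovery:
lut = decrypt_mpddos_comms(encrypted_lut, LUT_KEY)       # 1. decrypt the LUT
mp_resource = decrypt_mpddos_rsrc(encrypted_rsrc, lut)   # 2. decrypt the MP resource
comms_key = mp_resource.split(b";")[0].split(b"=")[1]    # 3. pull the comms key...
plaintext = decrypt_mpddos_comms(wire_msg, comms_key)    #    ...and decrypt traffic
assert plaintext == b"PING"
```

The layering is the point: each recovered artifact supplies the key material for the next stage, which is why peeling the layers one by one works.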

“All in all, MP-DDoser uses some of the better key management we have seen,” Edwards wrote in a blog post on his research.

“But of course, at the end of the day, every bot has to contain, or be able to generate, its own key string in order to communicate with its C&C, so no matter how many layers of encryption our adversary piles on, they can always be peeled off one by one.”

Source: http://threatpost.com/en_us/blogs/more-sophisticated-ddos-attack-new-threat-apache-servers-060712

London Internet Exchange hit by suspected DDoS attack

The London Internet Exchange (LINX) has been hit by a large scale outage that many observers are blaming on a possible distributed denial of service (DDoS) attack.

The non-profit exchange provides the majority of UK ISPs with a peering platform for their connections and the outage hit both the companies and their customers all in one go.

The LINX Network Community confirmed the outage on Twitter, although the organisation’s press office was unable to provide Computer Weekly with a statement.

The tweet said LINX was “aware of issues on its network” and had “engineers currently working to rectify this,” but fell short of giving an explanation for the problem.

However, customers operating over LINX also took to the social network to explain their own experiences, with a number suggesting a DDoS attack was responsible.

Worthers Creative Media Solutions released a statement to its customers saying: “We are told [the outage] was due to a 200GB denial of service attack but are unsure of exact details at this point. The result of this was that 60% of traffic for about 40 minutes got lost to some of our servers and therefore may have affected some people accessing sites.

“Just to clarify, this wasn’t an issue with the servers themselves or the datacentre but was more widespread and outside of our control.”

Voice over IP provider Orbtalk, internet telephony firm Voxhub, and telecoms company VoiceHost also reported being taken down by the outage.

Others point to Juniper Networks’ PTX packet switches, on which the LINX network is based and which only went live earlier today. However, with no formal statement from the organisation, the exact cause remains open to speculation.

At the time of publishing this article, the network community said the LINX local area network was now stable, but the huge number of services hit will take time to resume after the failure.

Source: http://www.computerweekly.com/news/2240151068/London-Internet-Exchange-hit-by-DDoS-attack

WHMCS under renewed DDoS blitz after patching systems

WHMCS, the UK-based billing and customer support tech supplier, has once again come under denial of service attacks, on this occasion following an upgrade of its systems to defend against a SQL injection vulnerability.

The security patch was applied on Tuesday following reports by KrebsOnSecurity that a hacker was auctioning rights to abuse the vulnerability through an underground hacking forum. The then zero-day blind SQL injection supposedly created a mechanism for miscreants to break into web hosting firms that rely on WHMCS’s technology. The exploit was on offer at $6,000 for sale to a maximum of three buyers.

In a notice accompanying the patch release, WHMCS stated that it was notified about the problem with its systems by an “ethical programmer”.

Within the past few hours, an ethical programmer disclosed to us details of an SQL Injection Vulnerability present in current WHMCS releases.

The potential of this is lessened if you have followed the further security steps, but not entirely avoided.

And so we are releasing an immediate patch before the details become widely known.

Installing the patch is simply a case of uploading a single file to your root WHMCS directory. This one file works for all WHMCS versions V4.0 or Later.

The events of last week have obviously put a lot of focus on WHMCS in recent days from undesirable people. But please rest assured that we take security very seriously in the software we produce, and will never knowingly leave our users at risk. And on that note if any further issues come to light, we will not hesitate to release patches for them – as we hope our past history demonstrates.

The advisory references an incident last week when hackers tricked WHMCS’s own hosting firm into handing over admin credentials to its servers. The crew that pulled off the hack, UGNazi, subsequently extracted the billing company’s database before deleting files, essentially trashing its server and leaving services unavailable for several hours. The compromised server hosted WHMCS’s main website and supported customers’ installations of the technology.

UGNazi also seized access to WHMCS’s Twitter profile, which it used to publicise locations from which the compromised customer records might be downloaded. A total of 500,000 records, including customer credit card details, were exposed as a result of the breach. Hacktivists justified the attack with unsubstantiated accusations that WHMCS offered services to internet scammers.

Last week’s breach involved social engineering trickery and wouldn’t appear to be related to the SQL Injection vulnerability patched by WHMCS on Tuesday. Since applying the patch WHMCS has come under attack from a fresh run of denial of service assaults, confirmed via the latest available update to WHMCS’s Twitter feed on Tuesday afternoon.

We’re currently experiencing another heavy DDOS attack – seems somebody doesn’t like us protecting our users with a patch … Back online asap

WHMCS’s website remains difficult to reach, at least from Spain, but its official blog can be found here.

The firm was unreachable for comment at the time of publication.

Source: http://www.theregister.co.uk/2012/06/01/whmcs_ddos_follows_patching/

GM Food Research Site Hit by Cyber Attack

Rothamsted Research says its Web site appears to have been taken down by a DDoS attack.

The Web site for the UK agricultural institute Rothamsted Research was taken down by a cyber attack on Sunday night.

“The Twitter handle @AnonCrash1 was the first to mention the attack, at 5:18pm on Sunday, tweeting ‘Tango Down www.rothamsted.ac.uk,'” Information Age reports. “Five hours later, @AnonOpsLegion tweeted: ‘TANGO DOWN these guys are like the MONSANTO of the UK www.rothamsted.ac.uk.'”

“The cyber-strike came after hundreds of protestors went to the agricultural research station in Hertfordshire to try to attack the facility’s trial of genetically modified wheat,” writes The Register’s Brid-Aine Parnell. “A large force of mounted police and foot patrols stopped the activists from ripping up the crop, one of the stated aims posted on the protest’s website.”

In a press release, Rothamsted Research stated, “We believe this was a distributed denial-of-service (DDoS) attack but it is unclear who was responsible. The timing of the attack and the information we have seen on Twitter would suggest this attack relates to an experiment being conducted at Rothamsted Research to test wheat which has been genetically modified to repel greenfly and blackfly pests as a sustainable alternative to spraying pesticides.”

“Rothamsted’s wheat contains genes that have been synthesised in the laboratory; a gene will produce a pheromone called E-beta-farnesene that is normally emitted by aphids when they are threatened by something,” BBC News reports. “When aphids smell it, they fly away. Prof John Pickett, a principal investigator at Rothamsted Research, told BBC News there was ‘a very, very remote chance that anything should get out.'”

Source: http://www.esecurityplanet.com/hackers/gm-food-research-site-hit-by-cyber-attack.html

‘SOCA’s weak response to a recent DDoS attack sends the wrong message’

André Stewart, president international at Corero Network Security, argues that the Serious Organised Crime Agency should have taken a recent DDoS attack more seriously…

The response by the Serious Organised Crime Agency (SOCA) to the distributed denial of service (DDoS) attack directed at its public website is somewhat disappointing for the nation’s leading anti-crime organisation. The agency’s statement that it does not consider investing in DDoS defence protection “a good use of taxpayers’ money” fails to take into account potentially serious security consequences. Further, it sends the wrong message to cyber criminals at a time when businesses and organisations in the United Kingdom and around the world operate under continuous threat of attack.

The attack against the SOCA website used a network-layer DDoS attack which is a very publicly visible form of cyber crime. The attackers’ intent is to slow or bring down a website for the entire world to see. The victim organisation has to own up to what has happened and, in the case of government entities, explain why it will not or cannot respond effectively.

However, hacktivist groups and criminals frequently use DDoS attacks as a smokescreen to hide more surreptitious intrusions aimed at stealing data. For example, the theft of 77 million customer records from the Sony PlayStation Network was preceded by a severe DDoS attack. In discussing its 2012 Data Breach Investigations Report, Verizon’s Bryan Sartin said that diversionary DDoS attacks are common practice to mask data theft, including many of the breaches by hacktivists which totalled some 100 million stolen records.

This raises questions about SOCA’s approach to securing its networks and protecting critical information from more sinister, stealthy cyber attacks. Criminals want to create diversions and remain unnoticed while they infiltrate deeper into a network and steal data. Most data breaches go undetected for weeks, months, even years in some cases. Can we be confident, based on SOCA’s response to its public website being hit for the second time in less than a year, that it is addressing more critical security risks? The response to the latest incident could undermine confidence in the quality of the agency’s security program. How deep does its high regard for taxpayers’ money go?

Just last June, the LulzSec group claimed credit for taking SOCA offline with a DDoS attack. One has to wonder if SOCA is truly dismissive of these attacks or simply has been slow to address the issue. Whilst the agency is dismissive of the latest DDoS attack its inability to protect itself nearly a year after the first public attack plants a seed of doubt about the calibre of its security program.

Perhaps most concerning is that SOCA is conceding the initiative to criminals who are attacking the agency directly. Would the police stand by, for example, while some hooligan scrawled graffiti on a local station with the explanation that they had more important things on which to spend time and money? Would the public tolerate that response?

Whilst putting its foot down on spending public funds is commendable, failing to respond to a direct criminal attack on law enforcement’s public face seems an odd place for SOCA to draw a line in the sand.

Source: http://www.publicservice.co.uk/feature_story.asp?id=19768

UK’s largest hosting biz titsup in DDoS outrage

MASSIVE Chinese web cannons blast 123-reg offline

By Anna Leach

Posted in CIO, 23rd May 2012 12:36 GMT

A “massive” distributed-denial-of-service attack emanating from China has taken down 123-reg, the UK net biz that hosts 1.4 million websites.

In a statement on its service status page just after midday today, 123-reg blamed attackers in China:

From 11:30 to 22:50 our network was undergoing a massive distributed denial of service attack from China. Due to the nature and size of this attack the firewall systems in place needed to be reconfigured to block the bad traffic and allow the good traffic through.

The attack, which appears to be ongoing, caused patchy service from the sites hosted by the company, which also has more than 4 million domains on its books. 123-reg promised that no emails would be lost, and messages would be queued up by the mail servers and sent shortly.

123-reg’s own site was down too in the aftermath of the traffic blast, which proved to be frustrating for users trying to find out what was going on. A 123-reg tweet at 12.30pm said that they were working through final issues and that services should be returning to normal.

123-reg is a brand name of Webfusion Ltd, part of the Host Europe group. WebFusion isn’t picking up the phone so we can’t get more detail on the hacks at this time. ®
Updated to add

A spokeswoman for 123-reg got in touch this afternoon to say:

We had contained the primary attack within 15 minutes of it happening. As the largest domain provider in the UK, and coupled with the increase of these types of attacks across Europe in particular, we know we are a prime target. We are still in the process of resolving this.

Source: http://www.theregister.co.uk/2012/05/23/123reg_ddos_attack/

Anonymous Leaks 1.7 GB Justice Department Database

Attackers were assisted by Anonymous affiliate AntiS3curityOPS, which launched its own anti-NATO attack against the Chicago Police Department website.

By Mathew J. Schwartz

In what was billed as “Monday Mail Mayhem,” the hacktivist group Anonymous released a 1.7-GB archive that it’s characterizing as “data that used to belong to the United States Bureau of Justice, until now.”

“Within the booty you may find lots of shiny things such as internal emails, and the entire database dump,” according to a statement released by the group. “We Lulzed as they took the website down after being owned, clearly showing they were scared of what inevitably happened.”

That statement was included with a BitTorrent file (named 1.7GB_leaked_from_the_Bureau_of_Justice) uploaded Monday to the Pirate Bay by “AnonymousLeaks,” although multiple downloaders Tuesday complained that the Torrent download was stuck at the 94%-completion point.

Why “dox” (release purloined data from) the Bureau of Justice Statistics? “We are releasing data to spread information, to allow the people to be heard, and to know the corruption in their government,” according to the Anonymous statement. “We are releasing it to end the corruption that exists, and truly make those who are being oppressed free.”

The Bureau of Justice Statistics compiles statistics related to hacking crimes. Except for that fact, the agency would make for an odd attack choice, since it’s devoted to number-crunching “information on crime, criminal offenders, victims of crime, and the operation of justice systems at all levels of government,” according to its website.

The Department of Justice said that it’s investigating the alleged attack. “The department is looking into the unauthorized access of a website server operated by the Bureau of Justice Statistics that contained data from their public website,” said a Department of Justice spokesman via email. “The Bureau of Justice Statistics website has remained operational throughout this time. The department’s main website, justice.gov, was not affected.”

“The department is continuing protection and defensive measures to safeguard information and will refer any activity that is determined to be criminal in nature to law enforcement for investigation,” he said.

In other hacktivism news, Anonymous affiliate AntiS3curityOPS said that it had launched a distributed denial-of-service (DDoS) attack against government websites in Chicago, to support anti-NATO protest marches in the city that saw police officers clash with protestors, resulting in several injuries and 45 arrests. All told, 51 world leaders attended the two-day NATO summit, including President Barack Obama.

On Sunday, prior to the protest marches, the Chicago Police Department and city council websites were knocked offline, and AntiS3curityOPS took credit. “We are actively engaged in actions against the Chicago Police Department and encourage anyone to take up the cause and use the AntiS3curityOPS Anonymous banner,” according to a YouTube video released by the group. “We are in your harbor Chicago, and you will not forget us.”

Interestingly, AntiS3curityOPS said that it had also assisted with the Bureau of Justice Statistics attack. “We were not behind http://justice.gov DB attack. However, we can confirm we ‘helped’ attacked site, and another faction has email spools,” the group said Tuesday via Twitter.

When it comes to DDoS attacks of late, however, hacktivists haven’t been the only actors. Notably, the Pirate Bay–where a Torrent file for downloading the purloined Bureau of Justice Statistics information was uploaded–was itself recently knocked offline for 24 hours by a DDoS attack.

The attack came after the Pirate Bay had criticized an Anonymous-led DDoS campaign against Virgin Media in the United Kingdom, which had begun blocking U.K. access to the Pirate Bay, in compliance with a court order. “We do NOT encourage these actions. We believe in the open and free internets, where anyone can express their views. Even if we strongly disagree with them and even if they hate us,” the Pirate Bay said in its anti-DDoS statement, which was posted to Facebook. “So don’t fight them using their ugly methods. DDOS and blocks are both forms of censorship.”

Interestingly, the Pirate Bay statement included a practical call to arms that stands in sharp contrast to the use of DDoS attacks by Anonymous as a form of online protest. “If you want to help; start a tracker, arrange a manifestation, join or start a pirate party, teach your friends the art of bittorrent, set up a proxy, write your political representatives, develop a new p2p protocol, print some pro piracy posters and decorate your town with, support our promo bay artists, or just be a nice person and give your mom a call to tell her you love her,” recommended the Pirate Bay.

Was Anonymous behind the DDoS attack against the Pirate Bay? While that rumor was circulating online, the Pirate Bay dismissed it. “Just to clarify, we know that it is not Anonymous who is behind the DDoS attack. Stop spreading rumors like that,” it said. “We may not agree with Anonymous in everything, but we both want the internet to be open and free.”

Likewise, Corero Network Security president Andre Stewart emphasized that non-Anonymous actors (a foreign government, record labels, or even a lone hacker) were likely to have been behind the attack. “There are a lot of motives out there to bring down a site like The Pirate Bay,” he told PC Pro. “It doesn’t make any sense to be Anonymous … it’s one of the main areas it defends.”

Source: http://www.informationweek.com/news/security/attacks/240000778

Hosters: Is Your Platform Being Used to Launch DDoS Attacks?

May 15, 2012 11:12 AM PDT

As anyone who’s been in the DDoS attack trenches knows, large multi-gigabit attacks have become more prevalent over the last few years. For many organizations, it’s become economically unfeasible to provision enough bandwidth to combat this threat.

How are attackers themselves sourcing so much bandwidth? It’s actually easier than you might think. While botnets comprised of malware-infected computers can be used to launch attacks, you don’t actually need thousands of devices. In some cases, attackers are infiltrating hosting company resources (shared hosting, virtual private servers, dedicated hosting, etc.), availing themselves of bandwidth by using hacked, stolen and fraudulent accounts.

Let’s say that an attacker manages to get his/her hands on 5 hosting accounts with 5 different hosting companies. It’s not unusual for these hosting companies to have 1 Gbps+ of connectivity to the Internet. A lot of hosters don’t look at their outbound traffic all that closely or have difficulty policing what their customers do. All an attacker needs to do is install a script on each account and he/she has easy access to gigabits of connectivity.

For hosters, finding the trouble spot can be like looking for a needle in a haystack (especially if thousands of accounts share resources). While the offender might be found eventually and the account shut down, the damage has already been done.

What can hosters do to help prevent this or detect this better?

Restrict outbound traffic from your customers by using ACLs (Access Control Lists). For example, there are few reasons your customers will ever need to make port 80 UDP connections to other hosts on the Internet. Put policies in place to block all outbound traffic except to specific, acceptable, understood destinations or ports. If customers have legitimate reasons to make an outbound connection from your infrastructure, they should be able to notify you and justify it (this will affect only a tiny percentage of your base) so you can make the appropriate arrangements. Some hosters do not even accommodate these requests.

Throttle outbound traffic from your customers. Even for legitimate outbound connections, most likely they don’t need to take up 500 Mbps of outbound bandwidth. Simply set a lower limit.

Put alarms in place when outbound traffic utilization spikes. If, for example, all of a sudden the amount of data leaving your network increases by 40%, there’s probably an issue somewhere and your tech folks should be investigating.
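
A spike alarm like this is easy to prototype. The sketch below assumes you can poll an outbound traffic counter periodically; the 40% threshold and the sample readings are illustrative.

```python
def spike_alarm(samples, threshold=0.40):
    """Return the indices at which outbound traffic jumps by more than
    `threshold` (0.40 = 40%) relative to the previous sample."""
    alerts = []
    for i in range(1, len(samples)):
        prev, cur = samples[i - 1], samples[i]
        if prev > 0 and (cur - prev) / prev > threshold:
            alerts.append(i)
    return alerts

# Outbound Mbps readings polled once a minute (illustrative numbers):
readings = [100, 104, 99, 102, 180, 460, 455]
print(spike_alarm(readings))  # → [4, 5]: the jumps to 180 and 460 both exceed +40%
```

In production you would likely compare against a smoothed baseline (a moving average, say) rather than the single previous sample, so that normal jitter does not trip the alarm.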

Restricting and monitoring your outbound traffic will probably save you money on bandwidth costs and decrease the number of abuse reports. Best of all, attackers will realize they’re not getting what they want out of your platform. The less you have to worry about, the better, right?

Source: http://www.circleid.com/posts/20120514_hosters_is_your_platform_being_used_to_launch_ddos_attacks/

ESET Lists the Dominant E-Threats of 2010

According to its “End of 2010” report, released recently by Slovakian security company ESET, the firm detected Conficker, INF/Autorun and Win32/PSLOnlineGames as the three most prevalent malicious e-threats of 2010, accounting for 8.45%, 6.76% and 3.59% of total malware respectively.

Moreover, ESET discloses that the malicious program Bflient.k has remained in the company’s monthly Top Ten Threats list for three consecutive months.

The security researchers elaborate that Bflient, which is traded among cyber-criminals, is a toolkit for building and maintaining botnets. The toolkit is customized for each client so that one customer’s botnet remains distinct from another’s.

The report notes that after a purchase takes place, the client can instruct his botnet to carry out typical operations such as executing a DDoS (distributed denial-of-service) attack, infecting other PCs, and downloading and installing malicious programs at will, Infosecurity-magazine.com reported on February 1, 2011.

Furthermore, Facebook poses a special risk to its users, who may encounter malware and other social-engineering attacks on the site. Facebook, in its attempt to treat the symptom rather than the disease, is likely to keep offering the privacy-eroding features typically associated with social media, since that is what users want; the onus thus falls on users themselves to make sure their data is not shared in ways they find disagreeable. A few sites, such as Bebo, have in fact switched from a “deny nothing” default to a “deny some things” option, even though sharing as much user data as possible is fundamental to such sites’ commercial model.

Additionally, the ESET report discusses the Wikileaks story, which dominated the period between July and December 2010. Several unsuccessful attempts were made to close the stable door, first by taking Wikileaks’ servers offline and subsequently through a coordinated corporate effort by prominent online players to cut off funding and obstruct further dissemination of the leaked material. Indeed, in the wake of the Wikileaks episode, many DDoS and spam attacks took place worldwide.

Source: http://www.spamfighter.com/ESET-Lists-the-Dominant-E-Threats-of-2010-15768-News.htm

Securing the cloud

The future of the Internet could look like this: The bulk of the world’s computing is outsourced to “the cloud”, massive data centers that house tens or even hundreds of thousands of computers. Rather than doing most of the heavy lifting themselves, our PCs, laptops, tablets and smart phones act like terminals, remotely accessing data centers through the Internet while conserving their processing juice for tasks like rendering HD video and generating concert-quality sound.

Three big things need to be figured out for this cloud-based future to emerge. One is how the computers within these data centers should talk to each other. Another is how the data centers should talk to each other within a super-secure cloud core. The third is how the cloud should talk to everyone else, including the big Internet service providers, the local ISPs and the end-of-the-line users (i.e. us).

This last channel, in particular, interests Michael Walfish, an assistant professor of computer science and one of the principal investigators of the NEBULA Project, which was awarded $7.5 million by the National Science Foundation to develop an architecture for making the Internet more cloud-friendly. If we’re going to be trusting so much of our computing lives to the cloud, he believes, we need to develop a more secure model for how information travels.

“A sender should be able to determine the path that information packets should take,” says Walfish. “A receiver should not have to accept traffic that she does not want. An intermediate provider should be able to know where the packet’s been and should be able to exercise its policies about the downstream provider that’s going to handle the flow next.”

Walfish’s system for providing such capacities, which he’s developing with colleagues at Stanford, the Stevens Institute of Technology, and University of California-Berkeley, is called ICING. It’s a set of protocols that allow every packet of information not only to plot out a path from beginning to end, choosing every provider along the way, but also to establish a chain of provenance as it goes that proves, to both the intermediaries and the final recipients, that it came from where it said it was coming from.

“What we do is take a packet, a unit of data, and we add some fields to the head of the packet,” says Walfish, who in 2009 won an Air Force Young Investigator Award for work related to ICING.

“These fields contain enough cryptographic information to be able to communicate to every realm along the way, and back to the sender, where the packet’s been. So when a packet shows up, I know where it’s been. I know whether it obeys the policies of everyone along the path. That property does not exist today.”
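
The per-hop provenance idea can be illustrated with a toy HMAC chain: each realm folds its keyed MAC over a running tag, and the verifier recomputes the chain for the claimed path. This is a deliberate simplification for intuition only, not ICING’s actual protocol; the realm names and keys are invented, and it assumes the verifier shares a symmetric key with every realm.

```python
import hashlib
import hmac

# Toy model: the verifier shares a symmetric key with each realm (provider).
# Real ICING establishes per-realm cryptographic state through its own
# consent and provenance mechanisms.
REALM_KEYS = {"sender-isp": b"k1", "transit-net": b"k2", "receiver-isp": b"k3"}

def add_provenance(tag: bytes, realm: str, payload: bytes) -> bytes:
    """Fold this realm's MAC over the running provenance tag."""
    return hmac.new(REALM_KEYS[realm], tag + payload, hashlib.sha256).digest()

def verify_path(path, payload: bytes, claimed_tag: bytes) -> bool:
    """Recompute the chain for the claimed path and compare tags."""
    tag = b""
    for realm in path:
        tag = add_provenance(tag, realm, payload)
    return hmac.compare_digest(tag, claimed_tag)

# The packet accumulates a tag as it traverses each realm in order.
payload = b"hello"
tag = b""
for realm in ["sender-isp", "transit-net", "receiver-isp"]:
    tag = add_provenance(tag, realm, payload)

assert verify_path(["sender-isp", "transit-net", "receiver-isp"], payload, tag)
assert not verify_path(["sender-isp", "receiver-isp"], payload, tag)  # skipped hop detected
```

A packet that skips a realm, or visits realms out of order, produces a different tag, which is the flavor of guarantee Walfish describes: the receiver can check where the packet has actually been.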

The advantages of such knowledge, says Walfish, should be considerable. Senders, for instance, could contract with intermediate providers for a kind of expressway through the Internet. Recipients would have an easier time sorting their incoming traffic into different levels of priority depending on the routes the packets took.


Perhaps the greatest advantage of adopting a system like ICING, says Walfish, would come in the area of security. Targets of various kinds of Internet attacks, like denial-of-service attacks, would be able to sever traffic from their attackers faster and with much greater precision. Governments would be able to set up channels of communication that pass through only well-vetted and highly-trusted service providers. Internet security companies could, from anywhere in the world, inspect your traffic for viruses.

“Right now,” says Walfish, “there are ways to deal with attackers, but they’re crude, and they’re reactive. Once the traffic enters the victim’s network link, you’re hosed. All you can do is shut it all down. It would be like if you had a huge line of people coming into your office, not letting you get work done. You could kick them all out, but you still wouldn’t get any work done because you’d spend all your time kicking them out. What you really need is for them to not show up in the first place.”

ICING, says Walfish, would also prevent “IP hijacking,” a kind of attack in which a network provider redirects net traffic by falsely “advertising” that it holds a given IP address or by claiming to offer a more direct route to that address. Such IP hijackings can be globally disruptive. In 2008, for instance, the Pakistani government sought to block videos containing the controversial Danish cartoons that depicted Mohammed. The result was a global shutdown of YouTube for more than an hour. Last year, it’s believed, China Telecom was able to capture 15% of the world’s Internet traffic for 18 minutes by falsely claiming to be the source of more than 30,000 IP addresses.

“There are multiple reasons why this wouldn’t happen in ICING,” says Walfish. “First, in ICING, the contents of the advertisement and the name of the advertised destination are tightly bound; lie about one, and the other looks invalid. Second, because packets must respect policy, a packet taking an aberrant path will be detected as such.”

ICING, and its parent project NEBULA, are one of four multi-institutional projects being funded by the National Science Foundation’s Future Internet Architecture (FIA) program. The point of the FIA program, and of the efforts of Walfish and his colleagues, is to step back from the day-to-day challenges of managing the flow of information on the ‘net, and think more fundamentally about what kind of architecture the Internet should have going forward.

“Where ICING was born, I think,” says Walfish, “was in the realization my teammates and I had that while there was a consensus about what kinds of things needed to change, and there were a lot of proposals to make those changes, all the proposals seemed to be mutually exclusive. They all required the same space in packets. It would be like if your bike was out-of-date and someone said, oh, you can get this really cool feature if you just replace your front wheel with this wheel, and then someone else came along and said, oh, you can get this other really cool feature, but you have to replace your front wheel with this wheel. Well, you can only have one front wheel. So what we set out to do was to design a much more general-purpose mechanism where you could get all these properties without their conflicting with each other, and that’s what I think we’ve done.”

Source: http://web5.cns.utexas.edu/news/2011/01/securing-the-cloud/

Copyright © 2014. DoS Protection UK. All Rights Reserved.