Thursday, August 26, 2010

Some Opposing Goals for the Web - Security vs. Reliability, Performance, Usability

There are many opposing goals in software creation, but none is more important than security...

The most important among these is security versus reliability. Reliability often requires developers to write more code (for example, error handlers), and more code means more opportunity to write bugs. Most of the time these error handlers are under-exercised in testing, so the chance that they contain security bugs is greater.
Error code needs to be carefully checked for security flaws....

Another important opposing factor is performance. The more code that is pushed to the client, the faster the server will run. But more code on the client means more opportunity for security breaches, because the user has access to the code running on the client...

Usability may be the next goal that opposes security. Usability means giving information to users to make the system as easy to use as possible. Easy to use often means easy to hack :-) ... specifically, when error messages reveal information that is helpful to an attacker.
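
To make this concrete, here is a minimal sketch (not from any particular framework; the login and user_db names are only illustrative) of how an application can stay usable without feeding the attacker: details go to the server log, while the client always gets the same generic message.

import logging

log = logging.getLogger("auth")

def login(username, password, user_db):
    user = user_db.get(username)
    if user is None:
        log.warning("login failed: unknown user %r", username)   # detail stays in the server log
        return "Invalid username or password."                   # generic message for the client
    if user["password"] != password:                              # illustration only; real code would compare salted hashes
        log.warning("login failed: bad password for %r", username)
        return "Invalid username or password."                   # same message either way
    return "Welcome!"

Because both failure paths return identical text, an attacker cannot learn from the response whether a given username even exists.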

Wednesday, August 25, 2010

WEB Versus Client-Server Systems

The World Wide Web is a special case of the client-server paradigm. Client-server means one or more centralized server computers that serve data, resources, and programs to a number of connected client computers. Traditionally, this involves a powerful central server connected to remote client computers that are often "dumb" in that they do no actual computation and simply provide an interface to the server. You can think of a dumb terminal as a keyboard and monitor into the remote server.

Many UNIX servers are connected to thin clients, which means that most applications run on the server, but the clients are capable of local data storage and other small computational tasks. The server does most of the heavy computation. Windows networks are typically just the other way round, with the "fat client" possessing basic Office applications and browsing, and separate servers used for major services requiring either the network (Web server, DNS, and so on) or massive storage (database and file servers).

The Web is a special case of the client-server model that uses fat clients and operates on protocols and formats like HTTP, HTML, XML, and the Simple Object Access Protocol (SOAP)... Moreover, it adds the interesting problem of "untrusted" users, whereas traditional networks exist within the firewalled protection of a company's private network. In traditional client-server networks it is fairly clear what processing should take place on the client and what on the server. Also, both the client and the server normally exist within the walls of a corporation...

But this is not the case with the World Wide Web... The Web is different because the clients exist outside the control of the central server and the network. Unlike a LAN, the Web has no boundary to protect. All the clients have to be treated as untrusted, which puts additional requirements on how computation is distributed across the client and the server. LANs can be designed to maximize performance: the more computation that can be "pushed" to the client, the faster the central server can execute. Perhaps this is one reason why the fat-client paradigm has won out over thin clients. The computational burden can be distributed, speeding up the network for everyone.

But the Web is a different thing altogether... It is essentially a network of untrusted clients, and any of those can be hostile. This means that every input that originates at a client must be carefully checked, and all security operations must be performed on the server.
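
As a small, hedged illustration of that rule (the field names and limits are assumptions, not from the original post), a server-side handler re-checks every value a client sends, regardless of what any client-side script already did:

def handle_transfer_request(form):
    # Re-validate on the server: client-side checks may have been bypassed or removed.
    try:
        amount = int(form["amount"])
    except (KeyError, ValueError):
        raise ValueError("amount must be an integer")
    if not 0 < amount <= 10000:
        raise ValueError("amount out of the allowed range")
    account = form.get("account", "")
    if not account.isalnum():
        raise ValueError("account id contains unexpected characters")
    return amount, account

The same checks may also run on the client for usability, but only the server-side copy counts for security.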

Monday, August 16, 2010

World Wide Web

Networked computers are not new to us. We were connecting computers in LANs and WANs before the Web. Even the Web is a specialized version of what is called a client-server network. Client-server networks conserve computing resources by delegating complex and time-consuming computation to powerful, expensive computers called servers. Server machines tend to have large storage and memory capacity and multiple fast processors. Their speed allows them to complete computationally intense processing faster than a typical computer and then serve the results to smaller and less powerful machines, which are known as clients.

In client-server networks, there are really three things of importance:

1. The server computer
2. One or more client computers
3. A connection between the client and the server, which is called the network

At the client, software must be developed to connect to the network and to send and receive requests and data. It's the same for the server. At the network layer, we need protocols to allow the computers to communicate.
We also need to handle bandwidth issues, lossy transmission of data, collisions, errors, and one or the other computer not being available. But all of this has been figured out for various situations. Protocols like the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), as well as supporting protocols like the Internet Protocol (IP), the Address Resolution Protocol (ARP), and the Domain Name System (DNS), have been implemented and made easy for developers to use on both the client and the server side.
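
Just to illustrate how much of that plumbing is hidden from us today, a plain TCP connection takes only a few lines in, say, Python; DNS lookup, IP routing, ARP, and retransmission all happen underneath (example.com is just a placeholder host):

import socket

# Open a TCP connection and send a minimal HTTP request; everything below TCP is handled for us.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(sock.recv(1024).decode(errors="replace"))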

The World Wide Web needed new network-layer protocols, new server software to handle the connections and serve the variety of content demanded by the clients, and new client software to browse remote servers and search through the entire universe of servers for the one that had the required information. The World Wide Web arrived as a network of computers that spans the whole world and speaks the same languages and protocols:

HyperText Transfer Protocol (HTTP),
Hypertext Markup Language (HTML),
eXtensible Markup Language (XML) etc.

The Web began largely as a replacement for the major functionality of the Internet at the time: e-mail and the File Transfer Protocol (FTP), the established ways of communicating and sharing files. Initially, the method for sharing files among many users was a system called Gopher. It was much like the Web we know today. Gopher allowed users to search for documents using Veronica (the Google of its time), and documents could be linked together and navigated to. Gopher faded away during the 90s. HTML, as the language of the Web, was much more powerful and expressive than what Gopher used.

The magic behind this was a server-side program called a Web server that allowed remote clients to access certain parts of the server computer's hard drive. The Web changed everything about the way we shared files and communicated information. The Web browser was the ultimate tool for a client computer to connect to the growing number of Web pages that were sprouting up on servers everywhere.

After that, many more things came along to make the Web more dynamic and more flexible to use.
These days many of us can't imagine life without these applications of the World Wide Web...

Tuesday, August 3, 2010

What are the main threats to Web Services???

Web services come with a loosely coupled architecture for connecting systems, data, and various business organizations. Well-designed, loosely coupled web services are accessible as separate pieces of business logic, which can be used independently or combined with others to build a complex application.
This also gives hackers an opportunity to easily exploit these facts.

Here we are going to touch upon a few basic threats to web services:

1. WSDL SCANNING ATTACK:

WSDL is mainly used for advertising the interfaces and addresses of web services. These files are often created using some utility and are intentionally designed to expose the information available for a particular method.
So a hacker can get very useful information s/he needs through simple Google queries :-)

Queries like :

filetype:WSDL company_name
index of /wsdl OR inurl:wsdl company_name

At first glance this seems OK, because it's important to publicize any web service so that it can be used in appropriate places. But this is not the right way to expose services; it should happen through UDDI. Many times developers are not very careful about the tools used to generate WSDLs, and sometimes debugging information which is never supposed to be accessed can be exploited in various ways.


Any information in a WSDL file may be a very helpful hint for a hacker, exposing other functionality.
Consider a simple example where the WSDL describes an operation like GET_STOCK_PRICE, even though there is also an unpublished operation like ACTUAL_STOCK. Unless authorization checks are applied, an attacker can guess at functionality s/he is not supposed to know about or understand.
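
A rough sketch of that mitigation (the operation names and roles here are hypothetical): route every call, published or not, through the same authorization table, so guessing an unadvertised operation name gains the attacker nothing.

# Hypothetical dispatcher: authorization is enforced per operation,
# not by keeping the operation name out of the WSDL.
ALLOWED_OPERATIONS = {
    "GET_STOCK_PRICE": "public",
    "ACTUAL_STOCK": "admin",   # internal operation, never advertised in the WSDL
}

def dispatch(operation, caller_role, handler):
    required = ALLOWED_OPERATIONS.get(operation)
    if required is None:
        raise PermissionError("unknown operation")
    if required != "public" and caller_role != required:
        raise PermissionError("operation not allowed for this caller")
    return handler()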

2. PARAMETER TAMPERING

SQL Injections can be equally useful for attacking a web service.

Most often, web services are just another mechanism for accessing legacy code for some specific purpose. Out-of-range parameters, command injection, and directory traversal are not mitigated just because the data is transferred in XML. It's all about the way the code validates its inputs.

Web services should validate the input data in the XML before using it. Strong typing of the XML does help, but the application must still be very careful when using the data, even after proper validation.
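
As a minimal sketch (assuming a request that carries an account id and a quantity as XML text; the element names are made up), the handler re-checks format and range before passing anything to the legacy code behind the service:

import re
import xml.etree.ElementTree as ET

def read_order(xml_text):
    root = ET.fromstring(xml_text)
    account = root.findtext("account", default="")
    quantity = root.findtext("quantity", default="")

    # Schema typing helps, but the handler still enforces its own format and range rules.
    if not re.fullmatch(r"[A-Za-z0-9]{1,20}", account):
        raise ValueError("account id has an unexpected format")
    qty = int(quantity)                 # raises ValueError if it is not a number
    if not 1 <= qty <= 1000:
        raise ValueError("quantity out of range")
    return account, qty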


3. XPATH Injections:

XPATH is a language used for querying XML, like SQL for databases. It uses expressions to select particular nodes and node-sets in an XML document.

To give some sense of XPATH, let's have a look at some of the expressions:



/ : Selects the root node

// : Selects matching nodes anywhere in the document, no matter where they are

//Photographer : Select all Photographer elements anywhere in the document

Photographer//Name : Select all Name elements that are descendants of the Photographer element

/Photographer/Name[1] : Select the first Name element that is a child of the Photographer element


An XPATH injection attack allows an attacker to inject malicious expressions as part of a valid SOAP request.
This can lead to unauthorized access or denial-of-service problems.
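
To see what the injection looks like, here is a hedged sketch using lxml (the document layout and element names are invented for illustration): the first query pastes user input straight into the expression, the second passes it as an XPath variable.

from lxml import etree

doc = etree.fromstring(
    "<users>"
    "<user><name>alice</name><role>admin</role></user>"
    "<user><name>bob</name><role>user</role></user>"
    "</users>"
)

name = "x' or '1'='1"   # attacker-controlled input

# Vulnerable: the condition becomes //user[name='x' or '1'='1'], which is always true.
leaked = doc.xpath("//user[name='%s']" % name)

# Safer: the value is bound as an XPath variable, so it is compared as plain text.
matched = doc.xpath("//user[name=$n]", n=name)

print(len(leaked), len(matched))   # 2 vs 0 in this example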



4. Recursive Payload attacks:


There is a concept of nesting in XML for supporting complex relationships among elements. Nesting is a simple mechanism where one element lies under another. An element which lies under another is called a child element or nested element.

Sometimes attackers create documents with 10,000 or 100,000 nested elements or attributes in an attempt to break a web service. This is called a recursive payload attack...

Most of the time, XML-based systems attempt to load the whole document before processing it. Most of these parsers work on push-down automaton models: a map of the XML document is created that tells the parser what action to take when it discovers a particular element. If the XML schema allows nesting, the parser could find itself deep in recursion when facing a recursive payload attack... so the parser should have some mechanism to know which element was encountered at what point. These kinds of recursive attacks can consume a lot of memory or even crash the machine on which all the web services are hosted.
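
One possible defence, sketched below (the depth limit is an arbitrary assumption, not a standard value), is to stream the document and refuse anything nested too deeply instead of building the whole tree first:

import xml.etree.ElementTree as ET
from io import BytesIO

MAX_DEPTH = 100   # assumed limit; pick whatever the real schema actually needs

def parse_with_depth_limit(xml_bytes):
    depth = 0
    # iterparse streams start/end events instead of loading the full tree up front.
    for event, elem in ET.iterparse(BytesIO(xml_bytes), events=("start", "end")):
        if event == "start":
            depth += 1
            if depth > MAX_DEPTH:
                raise ValueError("document nested too deeply; possible recursive payload")
        else:
            depth -= 1
            elem.clear()   # release processed elements to keep memory flat
    return True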


5. Oversize Payload attack

As we know, XML is verbose by design because it was created for humans to read and understand. But it is important for the XML parser to check the size of a file before processing it. Otherwise attackers can exploit a web service by sending huge XML files, possibly gigabytes in size. Applications may sometimes be able to handle this, but it is especially critical for applications where files are loaded entirely into memory before processing.
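
A hedged sketch of that check (the 5 MB ceiling and the header handling are assumptions; real limits depend on the service): reject the body by size before the parser ever sees it.

MAX_BYTES = 5 * 1024 * 1024   # assumed ceiling for a single request body

def read_request_body(headers, stream):
    declared = int(headers.get("Content-Length", "0"))
    if declared > MAX_BYTES:
        raise ValueError("request body too large")
    body = stream.read(MAX_BYTES + 1)   # do not trust the header alone
    if len(body) > MAX_BYTES:
        raise ValueError("request body too large")
    return body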


6. External Entity attacks:

XML provides external entity references that allow data outside the main document to be imported. It does this by declaring an external reference as:

<!ENTITY name SYSTEM "URI">

so that an XML document can reuse existing data without having to make its own copy.

This particular attack refers to the condition where the external reference is not trusted. An attacker could provide malicious data which can initiate some unwanted action.
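
A minimal defensive sketch (assuming lxml is available; the defusedxml package is another common option) is to configure the parser so it never resolves external entities or touches the network:

from lxml import etree

# Parser configured to ignore DTDs, external entities, and network fetches.
SAFE_PARSER = etree.XMLParser(
    resolve_entities=False,
    no_network=True,
    load_dtd=False,
)

def parse_untrusted(xml_bytes):
    return etree.fromstring(xml_bytes, parser=SAFE_PARSER)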