Thursday, August 26, 2010

Some Opposing Goals for the Web: Security vs. Reliability, Performance, Usability

There are many opposing goals in software creation, but none is more important than security...

The most important of these tensions is security versus reliability. Reliability often requires developers to write more code (for example, error handlers), and more code means more opportunity to write bugs. Most of the time these error handlers are under-exercised in testing, so the chance that they contain security bugs is greater.
Error-handling code therefore needs to be carefully checked for security flaws....

Another important opposing factor is performance. The more code that is pushed to the client, the faster the server will run. But more code on the client means more opportunity for security breaches, because the user has access to the code running on the client...

Usability is another goal that can oppose security. Usability means giving users enough information to make the system as easy to use as possible. Easy to use often means easy to hack :-) ..specifically, when error messages reveal information that is helpful to an attacker.

Wednesday, August 25, 2010

WEB Versus Client-Server Systems

The World Wide Web is a special case of the client-server paradigm. Client-server means one or more centralized server computers that serve data, resources, and programs to a number of connected client computers. Traditionally, this involves a powerful central server connected to remote client computers that are often "dumb" in that they do no actual computation and simply provide an interface to the server. You can think of a dumb terminal as a keyboard and monitor into the remote server.

Many UNIX servers are connected to thin clients, which means that most applications run on the server, but the clients are capable of local data storage and other small computational tasks. The server does most of the heavy computation. Windows networks are typically the other way round, with the "fat client" possessing basic Office applications and browsing, and separate servers used for major services that require either the network (Web server, DNS, and so on) or massive storage (database and file servers).

The Web is a special case of the client-server model, using fat clients and operating on protocols and formats like HTTP, HTML, XML, and the Simple Object Access Protocol (SOAP)... Moreover, it adds the interesting problem of "untrusted" users, whereas traditional networks exist within the firewalled protection of a company's private network. In traditional client-server networks it is fairly clear what processing should take place on the client and what on the server. Also, both the client and the server normally exist within the walls of a corporation...

But this is not the case with the World Wide Web.... The Web is different because the clients exist outside the control of the central server and the network. Unlike a LAN, the Web has no boundary to protect. All clients have to be treated as untrusted, which puts additional requirements on how computation is distributed across the client and the server. LANs can be designed to maximize performance: the more computation that can be "pushed" to the client, the faster the central server can execute. Perhaps this is one reason why the fat-client paradigm has won out over thin clients: the computational burden can be distributed, speeding up the network for everyone.

But the Web is a different thing altogether... It is essentially a network of untrusted clients, any of which can be hostile. This means that every input that originates at a client must be carefully checked, and all security operations must be performed on the server.
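As a concrete illustration, here is a minimal Perl sketch (assuming a CGI-style application and a hypothetical user_id parameter) of checking client input on the server before trusting it:

#!/usr/bin/perl -w
use strict;
use CGI;

my $q = CGI->new;
my $user_id = $q->param('user_id');   # client-supplied, therefore untrusted

# Allow only what we expect (here, 1-10 digits); reject everything else.
if (defined $user_id && $user_id =~ /^\d{1,10}$/) {
    print $q->header('text/plain'), "Looking up user $user_id\n";
} else {
    print $q->header(-status => '400 Bad Request'), "Invalid input\n";
}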

Monday, August 16, 2010

World Wide Web

Networked computers are not new to us. We were connecting computers in LANs and WANs before the Web. Even the Web is a specialized version of what is called a client-server network. Client-server networks conserve computing resources by delegating complex and time-consuming computation to powerful, expensive computers called servers. Server machines tend to have large storage and memory capacity and multiple fast processors. Their speed allows them to complete computationally intense processing faster than a typical computer and then serve the results to smaller and less powerful machines, which are known as clients.

In client-server networks, there are really three things of importance:

1. The server computer
2. One or more client computers
3. A connection between the client and server, which is called the network

At the client, software must be developed to connect to the network and to send and receive requests and data. It's the same for the server. At the network layer, we need protocols that allow the computers to communicate.
We also need to handle bandwidth issues, lossy transmission of data, collisions, errors, and one or the other computer not being available. But all of this has been figured out for various situations. Protocols like the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), as well as supporting protocols like the Internet Protocol (IP), the Address Resolution Protocol (ARP), and the Domain Name System (DNS), have been implemented and made easy for developers to use on both the client and server side.

The World Wide Web brought new network-layer protocols, new server software to handle the connections and serve the variety of content demanded by the clients, and new client software to browse remote servers and search through the entire universe of servers for the one that had the required information. The World Wide Web arrived as a network of computers that spans the whole world and speaks the same languages and protocols:

HyperText Transfer Protocol (HTTP),
Hypertext Markup Language (HTML),
eXtensible Markup Language (XML) etc.

The Web began largely as a replacement for the major functionality of the Internet, e-mail and the File Transfer Protocol (FTP): ways of communicating and sharing files. Initially, a popular method for sharing files among many users was a system called Gopher. It was much like the Web we know today: Gopher allowed users to search for documents using Veronica (the Google of its time), and documents could be linked together and navigated. Gopher faded away in the 1990s. HTML, the language of the Web, was much more powerful and expressive than what Gopher used.

The magic behind this was a server-side program called a Web server that allowed remote clients to access certain parts of the server computer's hard drive. The Web changed everything about the way we shared files and communicated information. The Web browser was the ultimate tool for a client computer to connect to the growing number of Web pages that were sprouting up on servers everywhere.

After that, many more things came along to make the Web more dynamic in nature and flexible in use.
These days many of us can't imagine life without these applications of the World Wide Web...

Tuesday, August 3, 2010

What are the main threats for Web Services???

Web Services come with a loosely coupled architecture for connecting systems, data, and various business organizations. Well-designed, loosely coupled web services can be accessed as separate parts of the business logic, which can be used independently or combined with others to build a complex application.
This also gives hackers the opportunity to exploit these facts easily.

Here we are going to touch upon a few basic threats to Web Services:

1. WSDL SCANNING ATTACK:

WSDL is mainly used for advertising a web service's interfaces and addresses. These files are often created using utilities and are intentionally designed to expose the information available through a particular method.
So a hacker can get very useful information through simple Google queries :-)

Queries like :

filetype:WSDL company_name
index of /wsdl OR inurl:wsdl company_name

At first glance this seems OK, because it is important to publicize a web service so that it can be used in appropriate places. But this is not the right way to expose services; that should happen through UDDI. Many times developers are not very careful about the tools used to generate WSDLs, and sometimes debugging information, which is never supposed to be accessible, can be exploited in various ways.


Any information in a WSDL file may be a very helpful hint for a hacker, exposing other functionality.
Consider a simple example where a WSDL describes an operation like GET_STOCK_PRICE while there is also an unpublished operation like ACTUAL_STOCK. Unless authorization checks are applied, an attacker can guess at functionality he or she is not supposed to know about.
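As a sketch of the idea (operation names taken from the example above; the service and message names are hypothetical), a tool-generated WSDL might advertise both operations in its portType:

<portType name="StockPortType">
  <!-- the published, documented operation -->
  <operation name="GET_STOCK_PRICE">
    <input message="tns:GetStockPriceRequest"/>
    <output message="tns:GetStockPriceResponse"/>
  </operation>
  <!-- an internal operation the generator exposed anyway: a hint for attackers -->
  <operation name="ACTUAL_STOCK">
    <input message="tns:ActualStockRequest"/>
    <output message="tns:ActualStockResponse"/>
  </operation>
</portType>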

2. PARAMETER TAMPERING

SQL injection can be equally useful for attacking a web service.

Most often, web services are just another mechanism for accessing legacy code for some specific purpose. Out-of-range parameters, command injection, and directory traversal are not mitigated just because the data is transferred as XML. It is all about the way the code validates data inputs.

Web services should validate the input data in the XML before using it. Strong typing of the XML does help, but the application must still be careful when using the data, even after proper validation.
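For instance, here is a minimal Perl sketch (the table, column, and parameter names are hypothetical, and it assumes the DBI module with an SQLite driver): validate the value extracted from the XML, then use a bound placeholder instead of pasting it into the SQL text:

use strict;
use DBI;

my $symbol = shift;   # assume this came out of the incoming XML request

# Whitelist validation before the value goes anywhere near the database.
die "invalid symbol\n" unless defined $symbol && $symbol =~ /^[A-Z]{1,5}$/;

my $dbh = DBI->connect('dbi:SQLite:dbname=stocks.db', '', '',
                       { RaiseError => 1 });

# The bound placeholder (?) keeps the value out of the SQL statement itself.
my $sth = $dbh->prepare('SELECT price FROM stock_prices WHERE symbol = ?');
$sth->execute($symbol);

while (my ($price) = $sth->fetchrow_array) {
    print "$symbol: $price\n";
}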


3. XPATH Injections:

XPath is a language used for querying XML, much as SQL is used for databases. It uses expressions to select particular nodes and node-sets in an XML document.

To give some sense of XPath, let's have a look at a few expressions:



/ : selects the root node

// : selects matching nodes anywhere in the document

//Photographer : selects all Photographer elements

Photographer//Name : selects all Name elements that are under a Photographer element

/Photographer/Name[1] : selects the first Name element that is a child of the Photographer element


An XPath injection attack allows an attacker to inject malicious expressions into an otherwise valid SOAP request.
This can lead to unauthorized access or denial-of-service problems.
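To sketch how this works (the element names and login query are hypothetical), suppose a service authenticates users by building an XPath expression from raw request values:

//user[name/text()='USERNAME' and password/text()='PASSWORD']

If an attacker submits the password  ' or '1'='1  the expression becomes:

//user[name/text()='admin' and password/text()='' or '1'='1']

Since "and" binds tighter than "or", the predicate is true for every user node, and the login check is bypassed. The defense is the same as for SQL injection: validate inputs and never build queries by string concatenation.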



4. Recursive Payload attacks:


XML supports nesting in order to express complex relationships among elements. Nesting is a simple mechanism where one element lies under another; the element lying under another is called a child element or nested element.

Sometimes attackers create documents with 10,000 or 100,000 nested elements or attributes in an attempt to break a web service. This is called a recursive payload attack...

Most of the time, XML-based systems attempt to load the whole document before processing it. Many of these parsers work on push-down automaton models: a map of the XML document tells the parser what action to take when it discovers a particular element. If the XML schema allows nesting, the parser can find itself in a loop when facing a recursive payload, so the parser should have some mechanism to know which element was encountered at what point. These kinds of recursive attacks can consume a lot of memory or even crash the machine hosting all the web services.
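A recursive payload is nothing more exotic than pathological nesting. A sketch (truncated here; a real attack would repeat the pattern tens of thousands of levels deep):

<?xml version="1.0"?>
<node>
  <node>
    <node>
      <node>
        ...repeated 10,000+ levels deep...
      </node>
    </node>
  </node>
</node>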


5. Oversize Payload attack

As we know, XML is verbose by design, because it was created for humans to read and understand. But it is important for the XML parser (or the service in front of it) to check the size of a document before processing it. Otherwise attackers can exploit a web service by sending very large XML files, possibly gigabytes in size. Applications may sometimes be able to handle this, but it is critical for applications where files are loaded entirely into memory before processing.
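As a first line of defense, the service can refuse to even read an oversize request. A minimal Perl sketch for a CGI-style endpoint (the one-megabyte limit is an arbitrary assumption):

#!/usr/bin/perl -w
use strict;

# Reject oversize payloads before reading or parsing the request body.
my $MAX_BYTES = 1_000_000;              # assumed limit; tune per service
my $len = $ENV{CONTENT_LENGTH} || 0;    # set by the web server for POSTs

if ($len > $MAX_BYTES) {
    print "Status: 413 Request Entity Too Large\r\n";
    print "Content-Type: text/plain\r\n\r\n";
    print "Payload too large\n";
    exit;
}

# ...only now is it safe to read STDIN and hand the XML to the parser...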


6. External Entity attacks:

XML provides external entity references that allow data outside the main document to be imported. It does this by declaring an external reference such as:

<!ENTITY name SYSTEM "URI">

so that an XML document can reuse existing data without having to make its own copy.

This particular attack refers to the condition where the external reference is not trusted. An attacker could provide malicious data that initiates some unwanted action.
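The classic illustration (a well-known example; the file path is just illustrative) defines an external entity that points at a local file, which a naive parser will expand and may echo back to the attacker:

<?xml version="1.0"?>
<!DOCTYPE lookup [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<lookup>&xxe;</lookup>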

Monday, July 19, 2010

Software Testing : What is Cost of Quality

Cost of Quality is a term used to quantify the total cost of prevention and appraisal, plus the costs associated with the production of software. While calculating the total costs associated with the development of a new application or system, a few specific components must be considered.

The Cost of Quality includes the additional costs associated with assuring that the product delivered meets the quality goals established for the product. This cost component is called the Cost of Quality and includes all costs associated with the prevention, identification, and correction of product defects.

The THREE main categories of costs associated with producing quality products are:

1. Prevention Costs

The cost of preventing errors and of doing the job right the first time. These costs are normally paid up front, for benefits that will be derived months or even years later. This is mostly money spent on establishing methods and procedures, training employees, acquiring tools, and planning for quality. Prevention money is all spent before the product is actually built.

2. Appraisal Costs

Money spent to review completed products against requirements. Appraisal includes the cost of inspections, testing, and reviews. This money is spent after the product is built but before it is shipped to the user or moved into production.

3. Failure Costs


All costs associated with defective products that have been delivered to the user or moved into production.
Some failure costs involve repairing products to make them meet requirements. Others are costs generated by failures such as the cost of operating faulty products, damage incurred by using them, and the costs associated with operating a Help Desk.

- The Cost of Quality will vary from one organization to the next.
- The majority of cost associated with the Cost of Quality are associated with the identification and correction of defects. To minimize production costs, the project team must focus on defect prevention.
- The goal is to optimize the production process to the extent that rework is eliminated and inspection is built into the production process.
- The IT quality assurance group must identify the costs within these three categories, quantify them, and then develop programs to minimize the totality of these three costs.
- Applying the concepts of continuous testing to the systems development process can reduce the cost of quality.

Sunday, July 18, 2010

Quality Control Vs Quality Assurance

There is often confusion in the IT industry regarding the difference between quality control and
quality assurance.

Quality methods can be seen in two categories:

1. Preventive methods
2. Detective methods

This distinction can be used to distinguish quality assurance activities from quality control.

This discussion explains the critical difference between control and assurance and how
to recognize a control practice versus an assurance practice.

Quality has two working definitions:
1. Developer’s View – The quality of the product meets the requirements.
2. Customer’s View – The quality of the product is fit for use or meets the customer’s needs.

"Testing is a Quality Control Activity."

Quality Assurance

Quality assurance is a planned and systematic set of activities necessary to provide adequate confidence that products or services will conform to specified requirements and meet user needs.

Quality assurance is a group which is responsible for implementing the quality policy defined through the development and continuous improvement of software development processes.

Quality assurance is an activity that establishes and evaluates the processes that produce products. If there is no need for a process, then there is no role for quality assurance.

Quality Assurance takes care of:

1. System development methodologies
2. Estimation processes
3. System maintenance processes
4. Requirements definition processes
5. Testing processes and standards

Once established, quality assurance measures these processes to identify weaknesses and then
corrects those weaknesses to continually improve the process.

Quality Control

Quality control is the process by which product quality is compared with applicable standards and
the action taken when non-conformance is detected. Quality control ensures that product conforms to standards and requirements.

Quality control activities focus on identifying defects in the actual products produced. These
activities begin at the start of the software development process with reviews of requirements and
continue until all application testing is complete.

It is possible to have quality control without quality assurance. For example, a test team may be in
place to conduct system testing at the end of development regardless of whether that system is
produced using a software development methodology.

The following statements help differentiate quality control from quality assurance:

- Quality control relates to a particular product or service.
- Quality control verifies whether specific attributes are included in a specific product or service.
- Quality control identifies defects for the primary purpose of correcting defects.
- Quality control is the responsibility of the team/worker.
- Quality control is concerned with a specific product.
- Quality assurance helps establish processes.
- Quality assurance sets up measurement programs to evaluate processes.
- Quality assurance identifies weaknesses in processes and improves them.
- Quality assurance is a management responsibility.
- Quality assurance is concerned with all of the products that will ever be produced by a process.
- Quality assurance is sometimes called quality control over quality control because it evaluates whether quality control is working.
- Quality assurance personnel should not ever perform quality control unless it is to validate quality control.

Wednesday, July 14, 2010

What is Universal Description, Discovery, and Integration (UDDI)

A common problem for many businesses is identifying the best way to reach their customers and partners with appropriate information about their services and products.
UDDI empowers them by providing a standardized approach that allows companies to advertise both the business and the technical aspects of their services.

This is achieved by having an informational framework that describes and classifies organizations, their services, and the technical details of the various web service interfaces.
The framework also provides discovery of interfaces and web services of a particular type, classification, or function. UDDI can be considered the Yellow Pages of web services. Most registered UDDI entries could be found at

http://uddi.ibm.com
http://uddi.microsoft.com

Tuesday, July 13, 2010

What is Web Services Description Language (WSDL)

WSDL is a document written in XML that describes four critical pieces of information:

1. Interface information describing all publicly available functions
2. Data-type information for all message requests and responses
3. Binding information about the transport protocol to be used
4. Address information for locating the appropriate service


WSDL is the mechanism by which others know how to interact with a particular service.
It gives information about where the service resides, what the service can do, and how to invoke it.
WSDL is often used in combination with SOAP and XML to provide web services over the Internet.
WSDL represents a cornerstone of the web service architecture because it provides a common language for describing services and a platform for automatically integrating them.


An example of a WSDL document looks like this:

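Below is a minimal hand-written WSDL 1.1 sketch for a hypothetical stock-price service (all names and namespaces are made up); note how the four pieces listed above map to message, portType, binding, and service:

<definitions name="StockService"
    targetNamespace="http://example.com/stock.wsdl"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:tns="http://example.com/stock.wsdl"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <!-- 2. Data types: the request and response messages -->
  <message name="GetStockPriceRequest">
    <part name="symbol" type="xsd:string"/>
  </message>
  <message name="GetStockPriceResponse">
    <part name="price" type="xsd:float"/>
  </message>

  <!-- 1. Interface: the publicly available operation -->
  <portType name="StockPortType">
    <operation name="GetStockPrice">
      <input message="tns:GetStockPriceRequest"/>
      <output message="tns:GetStockPriceResponse"/>
    </operation>
  </portType>

  <!-- 3. Binding: SOAP over HTTP -->
  <binding name="StockBinding" type="tns:StockPortType">
    <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="GetStockPrice">
      <soap:operation soapAction="urn:GetStockPrice"/>
      <input><soap:body use="literal"/></input>
      <output><soap:body use="literal"/></output>
    </operation>
  </binding>

  <!-- 4. Address: where the service lives -->
  <service name="StockService">
    <port name="StockPort" binding="tns:StockBinding">
      <soap:address location="http://example.com/stock"/>
    </port>
  </service>
</definitions>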

Monday, July 12, 2010

What is SOAP (Simple Object Access Protocol)

SOAP is a way to transport XML from one end point to another.

It supports a number of standard transport protocols, including TCP, HTTP, and SMTP. HTTP is the most popular of these.


The basic idea of SOAP is to provide a mechanism by which XML information can be wrapped in an envelope, which can then be carried by a variety of transport mechanisms.

In a SOAP message, there are two main components:
- Header
- Body

As the names signify, the Header contains information about the SOAP message itself, and the Body contains the actual message payload.
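A minimal sketch of such an envelope (SOAP 1.1; the GetStockPrice operation and namespace are hypothetical):

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- optional metadata about the message, e.g. authentication tokens -->
  </soap:Header>
  <soap:Body>
    <GetStockPrice xmlns="http://example.com/stock">
      <symbol>ACME</symbol>
    </GetStockPrice>
  </soap:Body>
</soap:Envelope>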


Saturday, July 10, 2010

eXtensible Markup Language (XML)

XML is a language used for data description, and it does so independently of the applications, protocols, operating systems, and programming languages used. It is similar to HTML in that we use tag structures for data definition.

Inside XML tags we define what the data elements are. Unlike HTML, XML has no standard tags; the developer can define her own tags.
With this common way of providing data, XML has become the common format for electronic data transfer and for the Web Services that support business-to-business transactions.


Here is an example of an XML document:
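(A small made-up sketch, using the same Photographer elements as the XPath examples above:)

<?xml version="1.0" encoding="UTF-8"?>
<Photographers>
  <Photographer>
    <Name>Jane Doe</Name>
    <Speciality>Wildlife</Speciality>
  </Photographer>
  <Photographer>
    <Name>John Roe</Name>
    <Speciality>Portraits</Speciality>
  </Photographer>
</Photographers>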

Friday, July 9, 2010

What are Web Services ???

Web Services are the next level of Web applications. Web Services expose internal data and interface with other programs. They are similar to Application Programming Interfaces (APIs); a Web application can use multiple web services, and these can be shared between multiple companies.

Web Services are self-contained pieces of functionality that can be published, located, and invoked over the Internet. They can expose business functionality, services, and data over the web through automated interfaces. These interfaces allow different organizations to find the functionality they require at runtime.


The four basic pillars of Web Services are:


1) eXtensible Markup Language (XML)

2) Simple Object Access Protocol (SOAP)

3) Web Services Description Language (WSDL)

4) Universal Description, Discovery, and Integration (UDDI)

Monday, May 10, 2010

SQL Vs PL-SQL : Actual difference

SQL is a data-oriented language for selecting and manipulating sets of data. PL/SQL is a procedural language for creating applications.

Normally, you don't have a "SQL application". You normally have an application that uses SQL and a relational database on the back-end. PL/SQL can be the application language just like Java or PHP can. SQL may be the source of data for your screens, web pages and reports. PL/SQL might be the language you use to build, format and display those screens, web pages and reports.

Think of it like this: the code that makes your program function is PL/SQL. The code that manipulates the data is SQL DML. The code that creates stored database objects is SQL DDL; it is DDL statements that compile the code written in PL/SQL into the database. PL/SQL may call SQL to perform data manipulation. The commands that format the output of a tool are not related to the SQL standard or to PL/SQL.

-- SQL is a data oriented language for selecting and manipulating sets of data.
-- PL/SQL is a procedural language to create applications.

-- PL/SQL can be the application language just like Java or PHP can. PL/SQL might be the language we use to build, format and display those screens, web pages and reports.
-- SQL may be the source of data for our screens, web pages and reports.

-- SQL is executed one statement at a time.
-- PL/SQL is executed as a block of code.

-- SQL tells the database what to do (declarative), not how to do it.
-- In contrast, PL/SQL tells the database how to do things (procedural).

-- SQL is used to code queries, DML and DDL statements.
-- PL/SQL is used to code program blocks, triggers, functions, procedures and packages.

-- We can embed SQL in a PL/SQL program (see the sketch after this list),
-- but we cannot embed PL/SQL within a SQL statement.

-- SQL is a language used by relational database technologies such as Oracle, Microsoft Access, Sybase, etc.
-- PL/SQL is commonly used to write data-centric programs to manipulate data in an Oracle database. PL/SQL language includes object oriented programming techniques such as encapsulation, function overloading, and information hiding.
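As a small illustration of the embedding rule above, here is a minimal anonymous PL/SQL block (a sketch only; it assumes a hypothetical employees table) with a plain SQL query embedded in procedural code:

-- Anonymous PL/SQL block; the SELECT inside it is ordinary SQL.
DECLARE
   v_count NUMBER;
BEGIN
   SELECT COUNT(*) INTO v_count
     FROM employees;                            -- embedded SQL query

   DBMS_OUTPUT.PUT_LINE('Rows: ' || v_count);   -- procedural PL/SQL logic
END;
/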

Saturday, May 8, 2010

Basics of arrays in PERL


Defining an array:

@myarray = qw(1st-Element 2nd-Element 3rd-Element);

OR

$myarray[0] = "1st-Element";

$myarray[1] = "2nd-Element";

$myarray[2] = "3rd-Element";

$#myarray returns the index of the last element; the number of elements is $#myarray + 1, or scalar(@myarray)

# Print the elements in sorted order:
for (sort @myarray)

{ print "$_\n" }
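Pulling these pieces together, a short runnable sketch (note the last-index versus element-count distinction):

#!/usr/bin/perl -w
use strict;

my @myarray = qw(1st-Element 2nd-Element 3rd-Element);

print "Last index: $#myarray\n";                  # prints 2
print "Element count: ", scalar(@myarray), "\n";  # prints 3

for (sort @myarray) {
    print "$_\n";
}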

Friday, May 7, 2010

WHAT IS CPAN: Comprehensive Perl Archive Network

There are many separate modules, usually consisting of a .pm and/or a .pll file, available from CPAN (the Comprehensive Perl Archive Network).


CPAN, the Comprehensive Perl Archive Network, is an archive of over 18,000 modules of software written in Perl. It has a presence on the World Wide Web at www.cpan.org and is mirrored worldwide at more than 220 locations. CPAN can denote either the archive network itself or the Perl program that acts as an interface to the network and as an automated software installer (somewhat like a package manager). Most software on CPAN is free and open source software.


To start the interactive CPAN shell from the command line:

perl -MCPAN -e shell
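From the CPAN shell, installing a module is then a single command (the module name here is just an example):

cpan> install XML::Simple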

Wednesday, May 5, 2010

Some Basic definitions when we talk about "Software Testing"

These are also called ‘Software Quality Factors’:

1.      Correctness: Extent to which a program satisfies its specifications and fulfills the user’s main objectives.

2.      Reliability: Extent to which a program can be expected to perform its intended function with required precision.

3.      Efficiency: The amount of computing resources and code required by a program to perform a function.

4.      Integrity: Extent to which access to software or data by unauthorized persons can be controlled.

5.      Usability: Extent of ease in using the software: the effort required to learn, operate, prepare input for, and interpret the output of a program.

6.      Maintainability: How easy or difficult it is to maintain the software under test: the effort required to locate and fix an error in an operational program.

7.      Testability: Effort required to test a program to ensure that it performs its intended function.

8.      Flexibility: How flexible the software is for new additions or changes: the effort required to modify an operational program.

9.      Portability: How much effort is required if a particular setup needs to be moved to another environment: the effort required to transfer software from one configuration to another.

10.     Reusability: Extent to which a program can be used in other applications – related to the packaging and scope of the functions that programs perform.

11.     Interoperability: Effort required to couple one system with another.

Thursday, April 22, 2010

What is OpenCL???

OpenCL (Open Computing Language) is a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, and other processors. OpenCL includes a language for writing kernels (functions that execute on OpenCL devices), plus APIs that are used to define and then control the platforms. OpenCL provides parallel computing using task-based and data-based parallelism. Its architecture shares a range of computational interfaces with two competitors, NVidia's Compute Unified Device Architecture and Microsoft's DirectCompute.

OpenCL gives any application access to the Graphics Processing Unit for non-graphical computing. The GPU had previously been available for graphical applications only. The GPU memory would be available to the operating system and/or applications essentially as faster system memory than the main system memory. Thus, OpenCL extends the power of the Graphics Processing Unit beyond graphics (General-purpose computing on graphics processing units). OpenCL is analogous to the open industry standards OpenGL and OpenAL, for 3D graphics and computer audio, respectively.

OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Intel, and Nvidia. Apple submitted this initial proposal to the Khronos Group. On June 16, 2008 the Khronos Compute Working Group was formed with representatives from CPU, GPU, embedded-processor, and software companies. This group worked for five months to finish the technical details of the specification for OpenCL 1.0.

Tuesday, April 20, 2010

Microsoft DirectCompute

Microsoft DirectCompute is an application programming interface (API) that supports General-purpose computing on graphics processing units on Microsoft Windows Vista and Windows 7. DirectCompute is part of the Microsoft DirectX collection of APIs and was initially released with the DirectX 11 API but runs on both DirectX 10 and DirectX 11 graphics processing units. The DirectCompute architecture shares a range of computational interfaces with its competitors - the Khronos Group's Open Computing Language and NVIDIA's Compute Unified Device Architecture.

Monday, April 19, 2010

What is CUDA: Compute Unified Device Architecture

CUDA ( Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA. CUDA is the computing engine in NVIDIA graphics processing units or GPUs that is accessible to software developers through industry standard programming languages. CUDA architecture shares a range of computational interfaces with two competitors -the Khronos Group's Open Computing Language and Microsoft's DirectCompute. Third party wrappers are also available for Python, Fortran, Java and MATLAB.

The latest drivers all contain the necessary CUDA components. CUDA works with all NVIDIA GPUs from the G8X series onwards, including GeForce, Quadro and the Tesla line. NVIDIA states that programs developed for the GeForce 8 series will also work without modification on all future Nvidia video cards, due to binary compatibility. CUDA gives developers access to the native instruction set and memory of the parallel computational elements in CUDA GPUs.

WHY CUDA?

CUDA has several advantages over traditional general purpose computation on GPUs (GPGPU) using graphics APIs.

1. Scattered reads – code can read from arbitrary addresses in memory.
2. Shared memory – CUDA exposes a fast shared memory region (16KB in size) that can be shared amongst threads. This can be used as a user-managed cache, enabling higher bandwidth than is possible using texture lookups.
3. Faster downloads and readbacks to and from the GPU.
4. Full support for integer and bitwise operations, including integer texture lookups.




Sunday, April 18, 2010

Some facts about Graphics Processing Units...

A graphics processing unit or GPU  is a specialized processor that offloads 3D or 2D graphics rendering. It is used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics, and their highly parallel structure makes them more effective than general-purpose CPUs for a range of complex algorithms. In a personal computer, a GPU can be present on a video card, or it can be on the motherboard. More than 90% of new desktop and notebook computers have integrated GPUs, which are usually far less powerful than those on a dedicated video card.

The IBM Professional Graphics Controller was one of the very first 2D/3D graphics accelerators available for the IBM PC. Released in 1984, 10 years before hardware 3D acceleration became a standard, its high price (~$4500 USD @ 1984 currency), slow processor, and lack of compatibility with then-current commercial programs made it unable to succeed in the mass-market.

OpenGL appeared in the early 90s as a professional graphics API, but became a dominant force on the PC, and a driving force for hardware development. Software implementations of OpenGL were common during this time although the influence of OpenGL eventually led to widespread hardware support. Over time a parity emerged between features offered in hardware and those offered in OpenGL. DirectX became popular among Windows game developers during the late 90s. Unlike OpenGL, Microsoft insisted on providing strict one-to-one support of hardware. The approach made DirectX less popular as a stand alone graphics API initially since many GPUs provided their own specific features, which existing OpenGL applications were already able to benefit from, leaving DirectX often one generation behind.

NVIDIA was the first to produce a chip capable of programmable shading.

In 2008, Intel, NVIDIA and AMD/ATI were the market share leaders, with 49.4%, 27.8% and 20.6% market share respectively.

Monday, April 5, 2010

What is JQuery???

Recently I met one of my friends, who runs a software company in Chandigarh. I talked to some of the employees of Cybrain Solutions and saw demos of the cool websites they have created in jQuery. They recently started working in jQuery. As of now very few of the websites are live, but they are currently working on approximately 15. Many clients come up with their requirements and say, "Can I have a cool web portal in jQuery?" They are very excited about the coolness of jQuery combined with good performance.

http://www.ppadprint.com/ : This is the first such project by Cybrain Solutions, and the client wants another website for a sister concern.
jQuery is a lightweight cross-browser JavaScript library that emphasizes interaction between JavaScript and HTML. jQuery is the most popular JavaScript library in use today.

jQuery is free and open source software. jQuery's syntax is designed to make it easier to navigate a document, select DOM elements, create animations, handle events, and develop Ajax applications. jQuery also provides capabilities for developers to create plugins on top of the JavaScript library. With this, developers are able to create abstractions for low-level interaction and animation, advanced effects, and high-level, theme-able widgets. This contributes to the creation of powerful and dynamic web pages.
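A tiny sketch of that syntax (selector, event handling, and an animation in a few lines; the element IDs are made up):

// Runs once the DOM is ready.
$(document).ready(function () {
    // Select an element by ID, react to clicks, animate another element.
    $("#show-panel").click(function () {
        $("#panel").slideDown("slow");
    });
});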

Microsoft and Nokia have announced plans to bundle jQuery on their platforms: Microsoft has adopted it initially within Visual Studio for use within Microsoft's ASP.NET AJAX framework and ASP.NET MVC Framework, whilst Nokia has integrated it into their Web Run-Time widget development platform.

The Seaside framework provides full integration of jQuery, allowing developers to write web applications entirely in Smalltalk.

Saturday, March 20, 2010

File Exist script in Perl

#!/usr/bin/perl -w

$filename = 'C:\myfile.txt';

if (-e $filename) {

    print "File Exists!";

}
else {

    print "File does not Exist!";

}

Thursday, January 21, 2010

What is Netlimiter?

NetLimiter is an internet traffic control and monitoring tool designed for Windows.

We can use NetLimiter to set download/upload transfer rate limits for applications or even single connection and monitor their internet traffic.

NetLimiter also offers a comprehensive set of internet statistical tools. It includes real-time traffic measurement and long-term per-application internet traffic statistics.

HIGHLIGHTS:

1. Network Monitor

NetLimiter shows a list of all applications communicating over the network. It will show connections, transfer rates, and more.

2. Bandwidth-Limiter/Bandwidth Shaper

You can use NetLimiter to set download or upload transfer rate limits for applications, connections, or groups of them.
With limits we can easily manage an internet connection's bandwidth (bandwidth shaper or bandwidth controller)

3. Statistical tool

This feature lets you track your internet traffic history since you installed NetLimiter.

4. Additional network information

NetLimiter 2 provides you with additional information like WHOIS, traceroute, etc.

5. Rule scheduler (and more...)

Remote administration, Personal firewall, Running as WinNT service, User rights, Chart, Advanced Rule editor and scheduler, Zone based traffic management...

MAIN FEATURES:

- Limits

We can use NetLimiter to set download/upload transfer rate limits for applications/connections. With limits we can easily manage our internet connection's bandwidth (bandwidth shaper) and share it among all applications running on the computer.

- Grants

When we set a grant for an application/connection, it means that we guarantee the specified bandwidth for it. If other applications/connections take too much bandwidth, it is taken from them and given to the application/connection with the granted bandwidth.

- Network monitor

NetLimiter shows a list of all applications communicating over the network, with their connections and transfer rates.

- Personal firewall functionality

We can allow/deny certain applications to connect to/from any network.

- Zones

NetLimiter monitors and controls traffic separately in three predefined zones - My Computer, Local Network, and Internet. For example, we can limit the rate at which our browser downloads from the Internet and let it download from the intranet at full speed.

- Filters

With filters we can define groups of connections/applications and then apply rules to them.

- Rule editor and scheduler

Rule editor helps us to create advanced rules. For example, we can create limit/firewall rule for a group of applications which is valid only in a given time interval.

- Network manager

Using the Network manager we can assign networks present on your computer to NetLimiter zones. It's also possible to add your own networks.

- Statistics


The NetLimiter stats module is intended for long-term measurement of internet traffic. This feature lets us track our internet traffic history since we installed NetLimiter. NetLimiter is able to automatically export statistical data to disk.

- Traffic chart

Traffic chart shows application's or connection's real time activity.

- Remote administration

We can control and monitor other computers remotely from one place.


Monday, January 11, 2010

IDs and Classes in CSS...


Apart from styles for HTML tags, CSS also allows us to specify our own selectors called "id" and "class".

ID SELECTOR:

The id selector is used to specify a style for a single, unique element. The id selector uses the id attribute of the HTML element, and it's defined with a "#".

#ID1
{
text-align:right;
color:blue;
}

CLASS SELECTOR:

The class selector is used to specify a style for a group of HTML elements. The class selector is most often used on several HTML elements. This allows you to set a particular style for all HTML elements with the same class.

The class selector uses the HTML class attribute, and is defined with a "."

A.left
{
text-align:left;
}

According to the rule above, all A elements with class="left" will be left-aligned.

NOTE: IDs/classes do not work in Mozilla/Firefox if their names start with a number.

Basics of CSS and how it's useful for your website...


Cascading Style Sheets

- Styles define  how to display HTML elements
- Styles were added to HTML 4.0 to solve a problem
- External Style Sheets can save a lot of work
- External Style Sheets are stored in CSS files

Development of large web sites, where font and color information was added to every single page, became a long and expensive process because of the repetitive insertion of the required tags.

World Wide Web Consortium (W3C) created CSS to solve this problem...

CSS SYNTAX:

A CSS rule has two main parts: a selector, and one or more declarations. Each declaration consists of a property and a value. The property is the style attribute you want to change; each property has a value.

/* Comments */
H1
{
color:blue;
text-align:left;
}

Saturday, January 9, 2010

What are Checkpoints in QTP?


A checkpoint is a verification point that compares a current value for a specified property with the expected value for that property.

Types of Checkpoints in QTP:

- Standard checkpoint: Checks property values of an object in an application or Web page. It checks buttons, radio buttons, combo boxes, etc.

- Bitmap checkpoint: Checks the value of an image in the application.

- Text checkpoint: Checks whether a text string is displayed in the appropriate place in your application or on a Web page

- Text Area checkpoint
- Database checkpoint: Checks the contents of a database accessed by the application

- XML Checkpoint: Checks the data content of XML documents in the application.

Note:
- Before creating checkpoints on web objects we have to select the Web option in the Add-in Manager.
- If the objects are developed in HTML we can use a standard checkpoint.
- If the objects are developed in XML we can use an XML checkpoint.


How do Checkpoints work?

With checkpoints, we define what the state of an object should be at a particular time. For example, if I have an application where a particular button gets enabled after some specific operation, I will automate that operation and apply a checkpoint that the button should be enabled and its label should be 'OK'.... So we apply checkpoints on the properties of different types of objects.
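In the Expert View, a recorded checkpoint like that appears as a single statement. A sketch (the window and checkpoint names are made up, in the style of QTP's Flight Reservation sample application):

' Verify the recorded properties of the OK button (e.g. enabled = True).
Window("Flight Reservation").WinButton("OK").Check CheckPoint("OK")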

Sample objects that QuickTest can verify:

Windows:
- Window
- Edit-Field
- Drop-Down List
- Menu command
- Radio Button
- Checkbox
- Windows Object
- Status Bar
- Text Area

WEB:
- Browser
- Text Area
- Text Link
- Images
- Image Link
- Edit Field
- Drop Down List
- checkbox
- Radio Button
- Tables/Grids
- web Elements

Friday, January 8, 2010

What is Synchronization in QTP?


Synchronization refers to adding a step to the script that instructs QuickTest to wait for a particular object before proceeding to the next step during playback.
   
When do we need SYNCHRONIZATION ?

       When you observe that the application takes a longer time to process information or to respond to a client request, add a synchronization step while recording. For example, waiting for:
   
    - A progress bar to reach 100%.
    - A button to become enabled...
   - A window or pop-up message to open.
   
How to Add Synchronization?

    - Synchronization can be added only during recording.
    - Identify the object to be synchronized.
    - Navigate to the window where the object is located.
    - Locate the step in the test that corresponds to the object.
    - Start recording and add the synchronization point.

Two Ways of Setting Synchronization Point?

    1. Global synchronization value for all Objects:

        Instructs Quick Test to wait for all the objects for a specific amount of time.

        MENU > TEST > SETTINGS > RUN > "Object Synchronization Timeout": for every object in the test, QuickTest waits at most the number of milliseconds specified in the settings

     2. Synchronization of a specific Object:

         Instructs Quick Test to wait for a specific object only.

        Menu > Insert > Step > Synchronization Point

QuickTest will pause the test until the object property achieves the specified value (or until the specified timeout is exceeded)...
       
QuickTest uses one of the object's properties as the waiting criterion, e.g. the 'Text' property for a window, the 'Label' property for buttons, etc...
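In the Expert View, a synchronization point can also be written directly with WaitProperty. A sketch (object names assumed, again in the style of the Flight Reservation sample application):

' Wait up to 10 seconds (10000 ms) for the button to become enabled.
Window("Flight Reservation").WinButton("Insert Order").WaitProperty "enabled", 1, 10000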

Thursday, January 7, 2010

What are different recording modes in QTP???

1) Normal recording

It is used for recording the operations performed in different contexts on standard GUI objects.

2) Analog Recording

It is used for recording continuous operations, such as mouse movements.

3) Low-level Recording

It is a special recording mode provided by QTP, used for recording minimal operations even in environments that are otherwise not supported.

1. It will generate the corresponding test script statement for every user action.
2. It will also store the required related information in the object repository.

Wednesday, January 6, 2010

What is Smart Identification in QTP?


When QuickTest uses the learned description to identify an object, it searches for an object that matches all of the property values in the description. In most cases, this description is the simplest way to identify the object, and unless the main properties of the object change, this method will work.

If QuickTest is unable to find any object that matches the learned object description or if it finds more than one object that fits the description, then QuickTest ignores the learned description and uses the Smart Identification mechanism to try to identify the object.

The Smart Identification mechanism is more complex and very flexible. If configured logically, a Smart Identification definition can probably help QuickTest identify an object, if it is present, even when the learned description fails.

The Smart Identification mechanism uses two types of properties:

1) Base Filter Properties:

The most fundamental properties of a particular test object class, whose values cannot be changed without changing the essence of the original object. For example, if a Web link's tag were changed from <A> to any other value, you could no longer call it the same object.

2) Optional Filter Properties:

Other properties that can help identify objects of a particular class. These properties are unlikely to change on a regular basis but can be ignored if they are no longer applicable.

Tuesday, January 5, 2010

What are Ordinal Identifiers in QTP?


If QTP is unable to uniquely identify an object on the basis of mandatory and assistive properties, it looks for ordinal identifiers.

        # Generally we should not encourage ordinal identifiers, but when the application is stable we may use them.

        # Only once the application is stable do we go for automation; until then we do only manual testing.

There are three types of ordinal identifiers that QuickTest can use to identify an object:

1.    Index
2.    Location
3.    Creation Time

1. "Index" Indicates the order in which the object appears in the application code relative to other objects with an otherwise identical description.

2. "Location" Indicates the order in which the object appears within the parent window frame/dialog box relative to other objects with an otherwise identical description.

3. "CreationTime" (Browser object only) Indicates the order in which the browser was opened relative to other open browsers with an otherwise identical description

Monday, January 4, 2010

What is an Object Repository in QTP?


It's defined as a storage place where we can store object information; it also acts as an interface between the test script and the AUT in order to identify objects during execution.

Types of Object Repository:

    1. Per-Action Repository
    2. Shared Repository

Per-Action Repository:

For every action, a separate individual repository is created automatically and managed by QTP.

•    A per-action repository can't be re-used
•    Less storage space is required
•    Execution speed is fast

Shared Repository:

A shared repository needs to be associated with the corresponding test manually.

•    For the long run we go for a shared repository, even though we need to create it manually.
•    A shared repository can be re-used
•    More storage space is required
•    Execution speed is slower
•    Maintenance is easier