Wednesday, December 23, 2009

What are different Recording Modes in QTP?

There are three basic Recording modes in QTP:

1) Normal recording in QTP:

It is used for recording operations performed in different contexts on standard GUI objects. During recording, QTP generates a corresponding test script statement for every user action and stores the required object details in the Object Repository.


QTP records in Normal mode by default and takes full advantage of the QTP Object Model, recognizing application objects regardless of their location on screen.



2) Analog Recording in QTP

           It is used for recording continuous operations. This is useful for cases where we need to record exact mouse or keyboard operations in relation to the screen or application, and when Normal recording does not work for you. For example, it is helpful when we want to record a mouse drag operation.

3) Low-level Recording in QTP

           It is a special recording mode provided by QTP, used for recording minimal (coordinate-level) operations, even in environments that QTP does not otherwise support.





This mode records at the object level and records all run-time objects as Window or WinObject test objects. You can also use this recording mode when exact screen coordinates are important for your testing.

What is the basic difference between Structural and Functional Testing?

Different people use different software terminologies as per their convenience, and here are two more which are used in the software industry, specifically among quality professionals.

Structural testing is about comparing software behavior against the apparent intention of the source code written to execute it. Many testers call it White-Box Testing or Glass-Box Testing :) Structural testing is also called path testing, since you choose test cases that cause paths to be taken through the structure of the program.


On the other side, functional testing is about comparing the application's workflow behavior against the user requirements specification. This is also called Black-Box Testing.

Structural testing verifies how the program works by considering possible pitfalls in the structure and logic. In functional testing, nobody is bothered about the internals; it simply focuses on the end results and the way they are delivered to users.
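To make the distinction concrete, here is a minimal sketch in Python (discount is a hypothetical function written just for this illustration): the functional test only checks the specified end result, while the structural tests are chosen by reading the code so that every path is taken.

# Hypothetical function under test (illustration only)
def discount(price, is_member):
    if is_member:
        return price * 0.9    # members get 10% off
    return price

# Functional (black-box) test: checks the end result against the spec,
# without caring how discount() is implemented.
assert discount(100, is_member=True) == 90

# Structural (white-box) tests: chosen by looking at the source so that
# every path through the if/else is executed at least once.
assert discount(100, is_member=True) == 90    # 'then' path
assert discount(100, is_member=False) == 100  # 'else' path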

Tuesday, December 22, 2009

What is Code Coverage and how does it impact Software Quality?

I would like to take the second question first: code coverage is an extended step towards ensuring good software quality by indirect means. Why did I explain it this way?

In my opinion, code coverage is more about assuring the quality of test cases and finding redundant code, not about the quality of the actual product we are developing. Realistically, it is very difficult to do code coverage exercises alongside actual testing. So normally a team's quality engineers try to find the cases they need to add, and the test cases which do not help in improving coverage.

You may not agree, as it's my personal opinion.

Now let's come to the first question: what is it?

Code Coverage is a methodology to:

1. Find the program areas not exercised by testers through the pre-identified set of test cases.
2. Add new test cases to increase coverage in terms of function/statement/decision/condition coverage.
3. Identify the overall gap between actual coverage and targeted coverage. It is largely about statistics that help a quality engineering team understand the other quality measures of a software product.
4. Identify redundant test cases which make 0% contribution to increasing coverage.

Most of the time the last point is not considered, but we should not forget that running redundant test cases which add no value is also a waste of resources and time. The sketch below illustrates points 1 and 4.
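As a minimal Python illustration (grade and its tests are hypothetical): the "F" branch is never exercised, so a test for it needs to be added, while one of the "B" tests is redundant. A coverage tool such as coverage.py (coverage run tests.py, then coverage report -m) would report the uncovered line automatically.

def grade(score):
    if score >= 90:
        return "A"
    if score >= 60:
        return "B"
    return "F"    # never exercised by the tests below: a coverage gap

# Pre-identified test cases
assert grade(95) == "A"
assert grade(70) == "B"
assert grade(75) == "B"   # redundant: takes the same path as grade(70),
                          # so it adds 0% to statement/branch coverage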

Saturday, September 5, 2009

What is WinDbg???

WinDbg is a multipurpose debugger for Microsoft Windows, distributed on the web by Microsoft. It can be used to debug user mode applications, drivers, and the operating system itself in kernel mode. It is a GUI application, but has little in common with the more well-known, but less powerful, Visual Studio Debugger.

WinDbg can be used for debugging kernel-mode memory dumps, created after what is commonly called the Blue Screen of Death which occurs when a bug check is issued. It can also be used to debug user-mode crash dumps. This is known as Post-mortem debugging.

WinDbg also has the ability to automatically load debugging symbol files (e.g., PDB files) from a server by matching various criteria (e.g., timestamp, CRC, single or multiprocessor version). This is a very helpful and time saving alternative to creating a symbol tree for a debugging target environment. If a private symbol server is configured, the symbols can be correlated with the source code for the binary. This eases the burden of debugging problems that have various versions of binaries installed on the debugging target by eliminating the need for finding and installing specific symbols version on the debug host. Microsoft has a public symbol server that has most of the public symbols for Windows 2000 and later versions of Windows (including service packs).
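As an illustration, a typical post-mortem session might look like the following (the dump file path is just an example; the commands themselves are standard WinDbg commands):

windbg -z C:\Windows\Minidump\Mini122309-01.dmp

Then, in the debugger's command window:

.sympath srv*c:\symbols*http://msdl.microsoft.com/download/symbols
.reload
!analyze -v

Here .sympath points WinDbg at Microsoft's public symbol server with a local cache in c:\symbols, and !analyze -v prints an automated analysis of the bug check.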

Recent versions of WinDbg have been distributed as part of the free Debugging Tools for Windows suite, which shares a common debugging engine between WinDbg and the command-line debuggers (KD, CDB, NTSD). This means that most commands will work in all of these front-ends without modification, allowing users to use the style of interface with which they are most comfortable.

Thursday, August 27, 2009

What is BoundsChecker : Memory Leak Testing Tool

BoundsChecker is a memory checking tool used for C++ software development with Microsoft Visual C++. It is part of the DevPartner for Visual C++ BoundsChecker Suite. Comparable tools are Purify, Insure++ and Valgrind.

BoundsChecker can be run in two modes: ActiveCheck, which does not instrument the application, and FinalCheck, which does.

ActiveCheck performs a less intrusive analysis and monitors all calls by the application to the C Runtime Library, Windows API and calls to COM objects. By monitoring memory allocations and releases, it can detect memory leaks and overruns. Monitoring API and COM calls enables ActiveCheck to check parameters, returns and exceptions and report exceptions when they occur. Thread deadlocks can also be detected by monitoring of the synchronization objects and calls giving actual and potential deadlock detection.

FinalCheck requires an instrumented build and gives a much deeper but more intrusive analysis. It provides all of the detection features of ActiveCheck plus the ability to detect buffer overflows (read and write) and uninitialized memory accesses. It monitors every scope change, pointer and memory usage.
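BoundsChecker itself targets native C++ code, but the underlying idea of tracking allocations to find leaks can be sketched with Python's standard tracemalloc module (an analogy only, not how BoundsChecker works internally):

import tracemalloc

tracemalloc.start()

leaked = []

def do_work():
    # Simulate a leak: allocate a buffer that is never released.
    leaked.append(bytearray(1024 * 1024))

for _ in range(5):
    do_work()

snapshot = tracemalloc.take_snapshot()
# The top entries point at the file and line that allocated the most memory.
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)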

Filtering Mechanisms in Fiddler: Can we see filtered http calls made through a particular Application?

Yes, Fiddler provides very efficient ways to filter HTTP requests generated by a particular application. Here are the steps to achieve this:

- Launch Fiddler
- Start your application
- Go to the Filters tab on the right-hand side
- Check the "Use Filters" option; now all filter parameters will be enabled
- Go to the "Client Process" section
- Check the option "Show traffic only from"
- A dropdown will now be enabled, listing all the processes running on your machine
- Select the exe you want to monitor

Apart from this, there are different types of filters:

1. Filtering on the basis of hosts. E.g., you can opt to see only those calls which are made to www.google.com
2. You can also opt for seeing only failing calls by selecting the option for hiding successful calls :)
3. We can filter on the basis of content: images, text, HTTP, scripts...

Explore the Filters menu in detail....

Tuesday, August 25, 2009

What is Filemon?

FileMon monitors and displays file system activity on a system in real time. Its advanced capabilities make it a powerful tool for exploring the way Windows works, seeing how applications use files and DLLs, or tracking down problems in system or application file configurations. FileMon's timestamping feature shows you precisely when every open, read, write, or delete happens, and its status column tells you the outcome. FileMon is so easy to use that you'll be an expert within minutes. It begins monitoring when you start it, and its output window can be saved to a file for off-line viewing. It has full search capability, and if you find that you're getting information overload, simply set up one or more filters.

Filtering

Use the Filter dialog, which is accessed with a toolbar button or the Edit|Filter/Highlight menu selection, to select what data will be shown in the list view. The '*' wildcard matches arbitrary strings, and the filters are case-insensitive. Only matches shown in the include filter, but that are not excluded with the exclude filter, are displayed. Use ';' to separate multiple strings in a filter (e.g. "filemon;temp"). Windows NT/2000 note: because of the asynchronous nature of file I/O, it is not possible to filter on the result field.

For example, if the include filter is "c:\temp", and the exclude filter is "c:\temp\subdir", all references to files and directories under c:\temp, except to those under c:\temp\subdir will be monitored.

Wildcards allow for complex pattern matching, making it possible to match specific file accesses by specific applications, for example. The include filter "Winword*Windows" would have FileMon only show accesses by Microsoft Word to files and directories that include the word "Windows".

Use the highlight filter to specify output that you want to have highlighted in the listview output. Select highlighting colors with Edit|Highlight Colors.

Additional filter options select or deselect read, write or open operations. In many troubleshooting scenarios only open operations are of interest, for example.

Selecting Volumes (Windows NT/2K/XP/2K3)

The Volumes menu can be used to select and deselect monitored volumes. Select the Network menu item to monitor accesses to any network resources, including remote shares and UNC path name accesses to remote volumes.

Limiting Output


The History Depth dialog, accessed via toolbar button or the Edit|History menu item, allows you to specify the maximum number of lines that will be remembered in the output window. A depth of 0 is used to signify no limit.

Searching the Output


You can search the output window for strings using the Find menu item (or the find toolbar button). You can repeat the search in the forward direction with the F3 key and in reverse with Shift+F3. To start a search at a particular line in the output, select the desired line by clicking on the far left column (the index number). If no line is selected a new search starts at the first entry in searching down, and at the last entry for searching up.

Options


FileMon can either timestamp events or show their duration. The Options menu and the clock toolbar button let you toggle between the two modes. The button on the toolbar shows the current mode with a clock or a stopwatch. When showing duration the Time field in the output shows the number of seconds it took for the underlying file system to service particular requests. The Options|Show Milliseconds menu entry lets you add millisecond resolution to times presented when FileMon shows clock times.

You can toggle FileMon to always remain a top window with the Options|Always On Top menu item. In addition, you can toggle FileMon not to scroll the listview via the Options|Auto Scroll menu item or corresponding toolbar button.

Named Pipes and Mail Slots

Starting in version 4.1 FileMon is able to monitor named pipe and mail slot file system activity on Windows NT/2K. Named pipes are commonly used as a communications mechanism in NT/Win2K by core subsystems like the Local Security Authority Subsystem (LSASS), and are used by DCOM. They are also used by network components such as the Browser service. To see named pipe activity with FileMon select Named Pipes in the Drives menu and perform an operation on a shared network resource, or open an application such as Regedt32 that interacts with the security subsystem.

How FileMon Works


For the Windows 9x driver, the heart of FileMon is in the virtual device driver, Filevxd.vxd. It is dynamically loaded, and in its initialization it installs a file system filter via the VxD service, IFSMGR_InstallFileSystemApiHook, to insert itself onto the call chain of all file system requests. On Windows NT the heart of FileMon is a file system driver that creates and attaches filter device objects to target file system device objects so that FileMon will see all IRPs and FastIO requests directed at drives. When FileMon sees an open, create or close call, it updates an internal hash table that serves as the mapping between internal file handles and file path names. Whenever it sees calls that are handle based, it looks up the handle in the hash table to obtain the full name for display. If a handle-based access references a file opened before FileMon started, FileMon will fail to find the mapping in its hash table and will simply present the handle's value instead.

What is DBGView?

It's a very useful application developed by Sysinternals, which has since been acquired by Microsoft.

DebugView is an application that lets you monitor debug output on your local system, or any computer on the network that you can reach via TCP/IP. It is capable of displaying both kernel-mode and Win32 debug output, so you don't need a debugger to catch the debug output your applications or device drivers generate, nor do you need to modify your applications or drivers to use non-standard debug output APIs.


DebugView Captures:


Under Windows 2000, XP, Server 2003 and Vista DebugView will capture:

* Win32 OutputDebugString
* Kernel-mode DbgPrint
* All kernel-mode variants of DbgPrint implemented in Windows XP and Server 2003

DebugView also extracts kernel-mode debug output generated before a crash from Windows 2000/XP crash dump files, if DebugView was capturing at the time of the crash.


Simply execute the DebugView program file (dbgview.exe) and DebugView will immediately start capturing debug output. Note that if you run DebugView on Windows 2000/XP you must have administrative privilege to view kernel-mode debug output. Menus, hot-keys, or toolbar buttons can be used to clear the window, save the monitored data to a file, search output, change the window font, and more. The on-line help describes all of DebugView's features.
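If you want to see something appear in DebugView quickly, any application can emit debug output through the Win32 API. Here is a minimal Python sketch (Windows-only, using the standard ctypes module):

import ctypes

# Windows-only: emit a message through the Win32 OutputDebugString API.
# If DebugView (dbgview.exe) is running, this line shows up in its window.
ctypes.windll.kernel32.OutputDebugStringW("Hello from my application")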

Thursday, August 13, 2009

Basics about QTP : Mercury Interactive Functional Testing Tool (Now HP)

1. QTP : Quick Test Professional, a Mercury Interactive Functional Testing Tool

(Now an HP tool after the takeover)

2. Scripting language used by QTP: QTP uses VBScript.

3. QTP is based on two concepts: Recording & Playback

4. How many types of recording facility are available in QTP ?

QTP provides three types of recording methods-
- Context Recording (Normal)
- Analog Recording
- Low Level Recording

5. How many types of Parameters are available in QTP ?

QTP provides three types of Parameter-
- Method Argument
- Data Driven
- Dynamic

6. What is QTP testing process ?

QTP testing process consists of seven steps-
- Preparing to record
- Recording
- Enhancing your script
- Debugging
- Run
- Analyze
- Report Defects

7. What is Active Screen ?

It provides snapshots of your application as it appeared when you performed certain steps during the recording session.

8. What is Test Pane ?

The Test Pane contains the Tree View and Expert View tabs.

9. What is Data Table ?

It assists you in parameterizing the test.

10. What is the Test Tree ?

It provides a graphical representation of the operations you have performed on your application.

11. Which environments does QTP support ?

ERP/ CRM
Java/ J2EE
VB, .NET
Multimedia, XML
Web Objects, ActiveX controls
SAP, Oracle, Siebel, PeopleSoft
Web Services, Terminal Emulator
IE, NN, AOL

12. How can you view the Test Tree ?

The Test Tree is displayed through the Tree View tab.

13. What’s the Expert View ?

Expert View displays the test script.

14. Which shortcut key is used for Normal Recording ?

F3

15. Which shortcut key is used to run the test script ?

F5

16. Which shortcut key is used to stop the recording ?

F4

17. Which shortcut key is used for Analog Recording ?

Ctrl+Shift+F4

18. Which shortcut key is used for Low Level Recording ?

Ctrl+Shift+F3

19. Which shortcut key is used to switch between Tree View and Expert View ?

Ctrl+Tab

20. What is a Transaction ?

You can measure how long it takes to run a section of your test by defining transactions.

21. Where you can view the results of the checkpoint ?

You can view the results of the checkpoints in the Test Result Window.

22. What is Standard Checkpoint ?

Standard Checkpoint checks the property value of an object in your application or web page.

23. Which environments are supported by Standard Checkpoint ?

Standard Checkpoints are supported in all add-in environments.

24. What is Image Checkpoint ?

Image Checkpoint checks the value of an image in your application or web page.

25. Which environments are supported by Image Checkpoint ?

Image Checkpoints are supported only in the Web environment.

26. What is Bitmap Checkpoint ?

Bitmap Checkpoint checks the bitmap images in your web page or application.

27. Which environments are supported by Bitmap Checkpoints ?

Bitmap Checkpoints are supported in all add-in environments.

28. What is a Table Checkpoint ?

Table Checkpoint checks the information within a table.

29. Which environments are supported by Table Checkpoint ?

Table Checkpoints are supported only in the ActiveX environment.

30. What is Text Checkpoint ?

Text Checkpoint checks that a text string is displayed in the appropriate place in your application or on a web page.

31. Which environments are supported by Text Checkpoint ?

Text Checkpoints are supported in all add-in environments.


MORE INFO:

* QTP records each step you perform and generates a test tree and test script.

* QTP records in Normal recording mode by default.

* If you are creating a test on web objects, you can record your test on one browser and run it on another browser.

* Analog Recording and Low Level Recording require more disk space than Normal recording mode.

Sunday, July 19, 2009

What is UTF-8 ???

UTF-8 (8-bit Unicode Transformation Format) is a variable-length character encoding for Unicode. It is able to represent any character in the Unicode standard, yet is backwards compatible with ASCII. For these reasons, it is steadily becoming the preferred encoding for e-mail, web pages, and other places where characters are stored or streamed.

UTF-8 encodes each character in 1 to 4 octets (8-bit bytes), with the single octet encoding used only for the 128 US-ASCII characters.
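You can verify the 1-to-4-octet claim quickly in Python: an ASCII letter takes one byte, an accented Latin letter two, the euro sign three, and a character outside the Basic Multilingual Plane four.

for ch in ("A", "é", "€", "𝄞"):
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), encoded)  # 1, 2, 3 and 4 bytes respectively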

The Internet Engineering Task Force (IETF) requires all Internet protocols to identify the encoding used for character data, and the supported character encodings must include UTF-8. The Internet Mail Consortium (IMC) recommends that all email programs be able to display and create mail using UTF-8.

ADVANTAGES:
1. The ASCII characters are represented by themselves as single bytes that do not appear anywhere else, which makes UTF-8 work with the majority of existing APIs that take byte strings but only treat a small number of ASCII codes specially. This removes the need to write a new Unicode version of every API, and makes it much easier to convert existing systems to UTF-8 than to any other Unicode encoding.
2. UTF-8 and UTF-16 are the standard encodings for XML documents. All other encodings must be specified explicitly either externally or through a text declaration.
3. UTF-8 and UTF-16 are the standard encodings for having Unicode in HTML documents, with UTF-8 as the preferred and most used encoding.
4. UTF-8 strings can be fairly reliably recognized as such by a simple algorithm (see the sketch after this list).
5. Sorting of UTF-8 strings as arrays of unsigned bytes will produce the same results as sorting them based on Unicode code points.
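A minimal Python sketch of the detection idea behind point 4: decoding either succeeds, or fails fast on an invalid byte sequence.

def looks_like_utf8(data: bytes) -> bool:
    # A byte string that decodes cleanly is, with high probability, UTF-8.
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

print(looks_like_utf8("naïve".encode("utf-8")))  # True
print(looks_like_utf8(b"\xff\xfe\x00A"))         # False: 0xFF is never valid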

Saturday, June 27, 2009

How to create low-memory conditions for testing your software application

All this was shared by one of my team-mates for performance testing, where we wanted to test our software's performance at different memory levels. Here are the exact details:

*********************
Using the bootcfg command, you can restrict the amount of physical memory available to Windows. Typing the following command will reduce the memory available to Windows by 768 MB. After this, on a machine with 1 GB RAM only 256 MB will be available to Windows; 768 MB will not be available:

bootcfg /raw "/burnmemory=768" /A /ID 1

After this, reboot the system and Windows will come back up with only 256 MB of memory available to it.

*********************

To remove this entry, the following steps should be performed:

1. Go to Control Panel
2. Open System dialog
3. Open Advanced Tab on System Dialog
4. In Startup and Recovery Section, click Settings button
5. Click the Edit button in the System startup section
6. Boot.ini opens in Notepad; delete the "/burnmemory=768" entry
7. Reboot the system

*********************

Thursday, June 18, 2009

Do you want to learn SQL : Try a different Learning approach on Web

Today when I came back from the office, my friend Vikas was revising his SQL concepts on SQLZOO.

This site is really good when you have some basic SQL knowledge and want to learn more by practically executing queries. It is divided into sections where different queries are asked against sample tables. My description may not be that effective, but I would recommend trying out this site to improve your SQL skills..

By the way, I am done with the SELECT part today.

Saturday, June 13, 2009

Setting up your automation to produce wiretraces

One of the requirements that came up when we were setting up automation for our hybrid application was to get wiretraces for the activity, to help in debugging along with the other logs that our application produced. The request would genuinely help our team and would increase the validity of our automation results. So I started investigating.

The only knowledge I had was the name of the tool that I had to use and it was "Wireshark".

Here is some excerpt from Wikipedia about Wireshark:
"Wireshark is a free packet sniffer computer application. It is used for network troubleshooting, analysis, software and communications protocol development, and education. Originally named Ethereal, in May 2006 the project was renamed Wireshark due to trademark issues...."
For more details, go ahead and read: http://en.wikipedia.org/wiki/Wireshark

The workflow of our automation was that every hour the automation would be triggered. It would map a required network location and start running 3 test cases using HP WinRunner. It would note the time taken to complete several actions and save the logs produced by the application. After that, it would force-reboot the system and wait for the next run.

We had to fit in Wireshark within this workflow.

First Step
The first step in this direction was to find out the interface for which we will be monitoring the traffic. For this we used the following command:

wireshark.exe -D >> c:\interface.txt

This listed the Network interfaces along with their IDs. We identified the interface that we had to monitor and proceeded.

Second Step
The plan was to run Wireshark through the command-line interface and instruct it to save logs to a specific file. This was where we faced the first challenge: though the Wireshark command line has an option to start capturing logs, it does not provide a 'good-enough' option to stop capturing them. The only command-line options available in this regard are:

-c NUM : stop after NUM packets (default: infinite)
-a duration:NUM : stop after NUM seconds
-a filesize:NUM : stop this file after NUM KB
-a files:NUM : stop after NUM files

None of these suited our needs. We needed to be able to tell Wireshark when to stop, namely when our automation was complete. Packet count, duration, file size, and number of files were all variable in our case, so none of the options listed above was reliable for us.

The method that we adopted to get around this limitation was a bit crude but worked perfectly. We set up Wireshark to capture for 3600 seconds (1 hour). After our automation had completed, we would simply kill Wireshark from the command line using the following command:

taskkill /im wireshark.exe /f

and copy the traces to a network resource.

Third Step
One 'nice-to-have' option was to embed a timestamp within the name of the wiretrace log file, so that whenever we wanted to analyze the log for a particular run, we would only need to look at the timestamp to know which one to open. Since we had the automation running 24 times a day, it would also mean a unique log file for each run.

For this, I tricked Wireshark into believing that I was capturing output into a ring buffer*. In Wireshark terms, this means that after saving n bytes in a file, it moves to the next file and stores n bytes in that, and so on. In this case, Wireshark embeds a timestamp in the name of each ring buffer file.

We used the following command line option to set this up:

-b duration:3600 -w c:\wiretraces\wiretrace

where -b duration:3600 sets how long one buffer file will be used, and -w sets the common base name for all buffer files in one capture.

As our automation would complete within an hour, it never went beyond one buffer file and at the end we had a buffer file with the following name:

wiretrace_00001_20090613083111

which was exactly what I wanted.

Fourth Step
The fourth and last challenge was the huge size of the trace logs. Typically, Wireshark would capture all ports and protocols, which resulted in a huge amount of data that we did not need. We only needed to capture the HTTP traffic. For this we had to set up a capture filter from the command line. Wireshark accepts the filter in 'libpcap filter syntax'. To get a suitable filter string, you can launch Wireshark and go to Capture->Capture Filters.

For HTTP, the value of the capture filter is HTTP TCP Port (80), for which the corresponding string is "tcp port http" in libpcap filter syntax. Wireshark command line option -f can be used to specify this.

The final command line option that we arrived at is:

wireshark.exe -i 2 -f "tcp port http" -k -b duration:3600 -w c:\wiretraces\wiretrace

where -i is the ID of the interface to monitor
-f is the capture filter in the libpcap filter syntax
-k instructs Wireshark to start capturing immediately
-b duration:3600 instructs Wireshark to capture in a ring buffer and to move to the next file in the buffer every hour
-w is used to specify the base name of the output file
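Putting the whole hourly workflow together, here is a minimal Python sketch of how the capture could be driven programmatically (the paths and the sleep are placeholders for your environment and your real test run):

import subprocess
import time

# Hypothetical paths -- adjust for your environment.
WIRESHARK = r"C:\Program Files\Wireshark\wireshark.exe"
TRACE_BASE = r"c:\wiretraces\wiretrace"

# 1. Start capturing HTTP traffic on interface 2, as in the command above.
capture = subprocess.Popen([
    WIRESHARK, "-i", "2", "-f", "tcp port http",
    "-k", "-b", "duration:3600", "-w", TRACE_BASE,
])

# 2. Run the automated test cases here (WinRunner in our setup).
time.sleep(60)  # placeholder for the real test run

# 3. Kill Wireshark once automation is done (same effect as taskkill /f).
capture.kill()

# 4. The timestamped ring-buffer file(s) under c:\wiretraces can now be
#    copied to a network share for analysis.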

Happy Wiretracing!!!


*A circular buffer or ring buffer is a data structure that uses a single, fixed-size buffer as if it were connected end-to-end. This structure lends itself easily to buffering data streams.

Nice presentation about Cloud Computing... Do you have doubts about the concept of Cloud Computing???

This is really a good video about Cloud Computing. The presenter has tried to explain Cloud Computing in a really easy way.. Watch it

Thursday, June 4, 2009

Hyper Terminal

What does communication theory say? Transmitting information from one person to another? Or: who says what, to whom, in what channel, with what effect? It depends on the context being used. In terms of computers, it could be as simple as connecting two devices.

HyperTerminal is an application you can use to connect your computer to other remote systems. It comes preinstalled with Windows; search your programs list to find it (hint: Communications). It allows you to connect using a modem, Ethernet, or a serial port. I can only really talk about connecting a GSM phone to HyperTerminal, because I have never connected a modem.

When we connect a GSM phone over a COM port, say COM1, Windows automatically detects the phone and asks for the COM port to use (if using IrDA, then use COM4, since IrDA is mapped to serial on COM4). Then, after configuring the port settings like baud rate, data bits, parity, and flow control, we are connected to the GSM phone. Use AT commands to test whether it is really connected: at the most basic level, type AT on the HyperTerminal screen and hit the return key; an OK means a successful response. When I connected my SE W550, the first thing I did was transfer a game (from getjar.com) to it, and I succeeded.
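The same AT handshake can also be scripted instead of typed. Here is a sketch using the third-party pyserial module (pip install pyserial); the port name and settings are assumptions that must match whatever your phone enumerates as:

import serial

# Hypothetical port and settings -- adjust for your device.
ser = serial.Serial("COM1", baudrate=9600, timeout=2)

ser.write(b"AT\r")       # the basic attention command
response = ser.read(64)  # b'\r\nOK\r\n' means the phone answered
print(response)

ser.close()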

You could also connect to a remote PC, or your friend's PC, and try chatting with him/her. It's like sending messages using net send commands.

Wednesday, June 3, 2009

HTTP Status Code : 1XX Informational

1xx Informational

This class of status code indicates a provisional response, consisting only of the Status-Line and optional headers, and is terminated by an empty line. Since HTTP/1.0 did not define any 1xx status codes, servers must not send a 1xx response to an HTTP/1.0 client except under experimental conditions.

100 Continue

This means that the server has received the request headers, and that the client should proceed to send the request body (in the case of a request for which a body needs to be sent; for example, a POST request). If the request body is large, sending it to a server when a request has already been rejected based upon inappropriate headers is inefficient. To have a server check if the request could be accepted based on the request's headers alone, a client must send Expect: 100-continue as a header in its initial request.
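Here is a minimal Python sketch of that handshake over a raw socket (example.com and /upload are placeholders, and real servers differ in their 100-continue support):

import socket

headers = (
    "POST /upload HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Content-Length: 11\r\n"
    "Expect: 100-continue\r\n"
    "\r\n"
)

s = socket.create_connection(("example.com", 80))
s.sendall(headers.encode("ascii"))

interim = s.recv(4096)  # "HTTP/1.1 100 Continue" if the server will accept
print(interim.decode("ascii", "replace"))

if b" 100 " in interim:
    s.sendall(b"hello world")  # only now send the (possibly large) body
    print(s.recv(4096).decode("ascii", "replace"))

s.close()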

101 Switching Protocols

This means the requestor has asked the server to switch protocols and the server is acknowledging that it will do so.

102 Processing (WebDAV)

The server has received and is processing the request, but no response is available yet.

Tuesday, June 2, 2009

HTTP Status Codes : 2XX for Success

2xx Success

This class of status code indicates that the client's request was successfully received, understood, and accepted.

200 OK

Standard response for successful HTTP requests. The actual response will depend on the request method used. In a GET request, the response will contain an entity corresponding to the requested resource. In a POST request the response will contain an entity describing or containing the result of the action.

201 Created

The request has been fulfilled and resulted in a new resource being created.

202 Accepted

The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.

203 Non-Authoritative Information

The server successfully processed the request, but is returning information that may be from another source.

204 No Content

The server successfully processed the request, but is not returning any content.

205 Reset Content

The server successfully processed the request, but is not returning any content. Unlike a 204 response, this response requires that the requestor reset the document view.

206 Partial Content

The server is serving only part of the resource due to a range header sent by the client. This is used by tools like wget to enable resuming of interrupted downloads, or split a download into multiple simultaneous streams.
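A quick way to see a 206 in action from Python's standard library (the URL is a placeholder; a server that ignores ranges will answer 200 with the full body instead):

import urllib.request

# Ask for only the first 100 bytes of a resource.
req = urllib.request.Request(
    "http://example.com/bigfile.zip",
    headers={"Range": "bytes=0-99"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)                       # 206 if the server honours ranges
    print(resp.headers.get("Content-Range")) # e.g. "bytes 0-99/1048576"
    data = resp.read()                       # at most 100 bytes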

207 Multi-Status (WebDAV)

The message body that follows is an XML message and can contain a number of separate response codes, depending on how many sub-requests were made.

Saturday, May 30, 2009

What is AJAX

Ajax, sometimes written as AJAX (shorthand for asynchronous JavaScript and XML), is a group of interrelated web development techniques used on the client-side to create interactive web applications or rich Internet applications.

With Ajax, web applications can retrieve data from the server asynchronously in the background without interfering with the display and behavior of the existing page. The use of Ajax has led to an increase in interactive animation on web pages and better quality of Web services thanks to the asynchronous mode. Data is retrieved using the XMLHttpRequest object. Despite the name, the use of JavaScript and XML is not actually required, nor do the requests need to be asynchronous.

Advantages

* In many cases, related pages on a website consist of much content that is common between them. Using traditional methods, that content would have to be reloaded on every request. However, using Ajax, a web application can request only the content that needs to be updated, thus drastically reducing bandwidth usage and load time.

* The use of asynchronous requests allows the client's Web browser UI to be more interactive and to respond quickly to inputs, and sections of pages can also be reloaded individually. Users may perceive the application to be faster or more responsive, even if the application has not changed on the server side.

* The use of Ajax can reduce connections to the server, since scripts and style sheets only have to be requested once.

* State can be maintained throughout a Web site. JavaScript variables will persist because the main container page need not be reloaded.

Disadvantages

* Pages dynamically created using successive Ajax requests do not automatically register themselves with the browser's history engine, so clicking the browser's "back" button may not return the user to an earlier state of the Ajax-enabled page, but may instead return them to the last full page visited before it. Workarounds include the use of invisible IFrames to trigger changes in the browser's history and changing the anchor portion of the URL (following a #) when AJAX is run and monitoring it for changes.

* Dynamic web page updates also make it difficult for a user to bookmark a particular state of the application. Solutions to this problem exist, many of which use the URL fragment identifier (the portion of a URL after the '#') to keep track of, and allow users to return to, the application in a given state.

* Because most web crawlers do not execute JavaScript code, web applications should provide an alternative means of accessing the content that would normally be retrieved with Ajax, to allow search engines to index it.

* Any user whose browser does not support Ajax or JavaScript, or simply has JavaScript disabled, will not be able to use its functionality. Similarly, devices such as mobile phones, PDAs, and screen readers may not have support for JavaScript or the XMLHttpRequest object. Also, screen readers that are able to use Ajax may still not be able to properly read the dynamically generated content. The only way to let the user carry out functionality is to fall back to non-JavaScript methods. This can be achieved by making sure links and forms can be resolved properly and do not rely solely on Ajax. In JavaScript, form submission could then be halted with "return false".

* The same origin policy prevents some Ajax techniques from being used across domains, although the W3C has a draft that would enable this functionality.

* Ajax opens up another attack vector for malicious code that web developers might not fully test for.

HTTP Status Codes: 3xx Redirection

The client must take additional action to complete the request.

This class of status code indicates that further action needs to be taken by the user agent in order to fulfil the request. The action required may be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD. A user agent should not automatically redirect a request more than five times, since such redirections usually indicate an infinite loop.

300 Multiple Choices

Indicates multiple options for the resource that the client may follow. It, for instance, could be used to present different format options for video, list files with different extensions, or word sense disambiguation.

301 Moved Permanently

This and all future requests should be directed to the given URI.

302 Found

This is the most popular redirect code, but also an example of industrial practice contradicting the standard. The HTTP/1.0 specification (RFC 1945) required the client to perform a temporary redirect (the original describing phrase was "Moved Temporarily"), but popular browsers implemented it as a 303 See Other. Therefore, HTTP/1.1 added status codes 303 and 307 to disambiguate between the two behaviors. However, the majority of Web applications and frameworks still use the 302 status code as if it were the 303.
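You can observe this behavior from Python with the third-party requests library (the URL is a placeholder; any resource that answers with a 3xx will do):

import requests  # third-party: pip install requests

r = requests.get("http://example.com/old-page", allow_redirects=False)
print(r.status_code)              # e.g. 301, 302, 303 or 307
print(r.headers.get("Location"))  # where the server wants us to go

# By default requests follows redirects; the chain is kept for inspection.
r = requests.get("http://example.com/old-page")
print([hop.status_code for hop in r.history], r.status_code)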

303 See Other (since HTTP/1.1)

The response to the request can be found under another URI using a GET method. When received in response to a PUT, it should be assumed that the server has received the data and the redirect should be issued with a separate GET message.

304 Not Modified

Indicates the resource has not been modified since last requested. Typically, the HTTP client provides a header like the If-Modified-Since header to provide a time against which to compare. Utilizing this saves bandwidth and reprocessing on both the server and client.

305 Use Proxy (since HTTP/1.1)

The requested resource must be accessed through the proxy given in the response. Many HTTP clients do not correctly handle responses with this status code, primarily for security reasons.

307 Temporary Redirect (since HTTP/1.1)

In this case, the request should be repeated with another URI, but future requests can still use the original URI. In contrast to 303, the request method should not be changed when reissuing the original request. For instance, a POST request must be repeated using another POST request.

Friday, May 29, 2009

HTTP Status Codes: 4xx Client Error

The request contains bad syntax or cannot be fulfilled.

The 4xx class of status code is intended for cases in which the client seems to have erred. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents should display any included entity to the user. These are typically the most common error codes encountered while online.

400 Bad Request: The request contains bad syntax or cannot be fulfilled.

401 Unauthorized: Similar to 403 Forbidden, but specifically for use when authentication is possible but has failed or not yet been provided. See Basic access authentication and Digest access authentication.

402 Payment Required: The original intention was that this code might be used as part of some form of digital cash or micropayment scheme, but that has not happened, and this code has never been used.

403 Forbidden: The request was a legal request, but the server is refusing to respond to it. Unlike a 401 unauthorized response, authenticating will make no difference.

404 Not Found: The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.

405 Method Not Allowed: A request was made of a resource using a request method not supported by that resource; for example, using GET on a form which requires data to be presented via POST, or using PUT on a read-only resource.

406 Not Acceptable: The requested resource is only capable of generating content not acceptable according to the Accept headers sent in the request.

407 Proxy Authentication Required

408 Request Timeout: The server timed out waiting for the request.

409 Conflict: Indicates that the request could not be processed because of conflict in the request, such as an edit conflict.

410 Gone: Indicates that the resource requested is no longer available and will not be available again. This should be used when a resource has been intentionally removed; however, it is not necessary to return this code and a 404 Not Found can be issued instead. Upon receiving a 410 status code, the client should not request the resource again in the future. Clients such as search engines should remove the resource from their indexes.

411 Length Required: The request did not specify the length of its content, which is required by the requested resource.

412 Precondition Failed: The server does not meet one of the preconditions that the requestor put on the request.

413 Request Entity Too Large: The resource that was requested is too large to transmit using the current protocol.

414 Request-URI Too Long: The URI provided was too long for the server to process.

415 Unsupported Media Type: The request did not specify any media types that the server or resource supports. For example the client specified that an image resource should be served as image/svg+xml, but the server cannot find a matching version of the image.

416 Requested Range Not Satisfiable: The client has asked for a portion of the file, but the server cannot supply that portion (for example, if the client asked for a part of the file that lies beyond the end of the file).

417 Expectation Failed: The server cannot meet the requirements of the Expect request-header field.

422 Unprocessable Entity: The request was well-formed but was unable to be followed due to semantic errors.

423 Locked: The resource that is being accessed is locked

424 Failed Dependency: The request failed due to failure of a previous request

425 Unordered Collection

426 Upgrade Required: The client should switch to TLS/1.0.

449 Retry With

Thursday, May 28, 2009

HTTP Status Codes : 5xx Server Error

As you must be aware, HTTP calls over the Internet get different responses. The most common is a 404, when you are not connected to the Internet or your proxy has blocked a particular host.

The first digit of the status code specifies one of five classes of response; the bare minimum for an HTTP client is that it recognizes these five classes (see the sketch after the list below).

1xx: Informational
2xx: Success
3xx: Redirection
4xx: Client Errors
5xx: Server Errors
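In code, this bare-minimum classification is just integer division on the status code, as this small Python sketch shows:

STATUS_CLASSES = {
    1: "Informational",
    2: "Success",
    3: "Redirection",
    4: "Client Error",
    5: "Server Error",
}

def classify(status_code: int) -> str:
    # Integer division by 100 yields the class digit: 503 // 100 == 5.
    return STATUS_CLASSES.get(status_code // 100, "Unknown")

print(classify(503))  # Server Error
print(classify(404))  # Client Error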

Here we are going to talk about 5xx (Server Errors).


The server failed to fulfill an apparently valid request.

Response status codes beginning with the digit "5" indicate cases in which the server is aware that it has encountered an error or is otherwise incapable of performing the request. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and indicate whether it is a temporary or permanent condition. Likewise, user agents should display any included entity to the user. These response codes are applicable to any request method.


So if you are a tester and get a 5xx HTTP code, you need to assign the bug to the server team :)

500 Internal Server Error

A generic error message, given when no more specific message is suitable.

501 Not Implemented

The server either does not recognize the request method, or it lacks the ability to fulfil the request.

502 Bad Gateway

The server was acting as a gateway or proxy and received an invalid response from the upstream server.

503 Service Unavailable

The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.

504 Gateway Timeout

The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.

505 HTTP Version Not Supported

The server does not support the HTTP protocol version used in the request.

506 Variant Also Negotiates

Transparent content negotiation for the request would result in a circular reference.

507 Insufficient Storage (WebDAV)

The server is unable to store the representation needed to complete the request.

509 Bandwidth Limit Exceeded

The server's bandwidth limit has been exceeded; this code is not specified in any RFC but is used by several servers.


510 Not Extended

Further extensions to the request are required for the server to fulfill it.

Wednesday, May 20, 2009

Tips for an Ergonomic Computer Workstation:


For the last few weeks I have been going through severe back pain, and it's creating a lot of problems in my work and personal lifestyle. So I was going through some tips about sitting posture while working. Here are a few tips I found, and I will also share my personal experience...


Tips for an Ergonomic Computer Workstation:

  1. Use a good chair with a dynamic chair back and sit back in it
  2. Top of monitor casing 2-3" (5-8 cm) above eye level
  3. No glare on screen, use an optical glass anti-glare filter where needed
  4. Sit at arm's length from the monitor
  5. Feet on floor or stable footrest
  6. Use a document holder, preferably in-line with the computer screen
  7. Wrists flat and straight in relation to forearms to use keyboard/mouse/input device
  8. Arms and elbows relaxed close to body
  9. Center monitor and keyboard in front of you
  10. Use a negative tilt keyboard tray with an upper mouse platform or downward tiltable platform adjacent to keyboard
  11. Use a stable work surface and stable (no bounce) keyboard tray
  12. Take frequent short breaks (microbreaks)


Apart from above I also want to share following things:

- While working don't sit for a very long time. Take breaks after every 15-20 minutes.
- Walk around your workspace for some time but make sure you are not disturbing others.
- Make a routine of doing some body exercises daily.
- Good rest is also important.

Friday, April 24, 2009

DoSHTTP for doing Denial of Service Testing for a website

DoSHTTP is an easy to use and powerful HTTP Flood Denial of Service (DoS) Testing Tool for Windows. DoSHTTP includes URL Verification, HTTP Redirection and performance monitoring. DoSHTTP uses multiple asynchronous sockets to perform an effective HTTP Flood. DoSHTTP can be used simultaneously on multiple clients to emulate a Distributed Denial of Service (DDoS) attack. DoSHTTP can help IT Professionals test web server performance and evaluate protection software. DoS-HTTP was developed by certified IT Security and Software Development professionals.


I was looking for a tool which could help me in testing denial-of-service cases for an exe which interacts with different web servers, but this tool targets only a particular URL, which does not solve my purpose...




I did a basic thing using this tool. I provided www.google.com as the URL and ran the test using Mozilla4. One thing I liked about the tool is that you need not install different versions of browsers to test this; the tool has the implicit capability of simulating different browser behavior.

The second image shows the results, where it says that the server responded to 34% of requests per second when 4950 requests were made using Mozilla4.

I don't have much knowledge about this tool, so explore it further yourself....

Here are a few features of DoSHTTP, though I don't have many details about them:

Features:

* Easy to use and powerful HTTP Flood Denial of Service (DoS) Testing Tool
* Uses multiple asynchronous sockets to perform an effective HTTP Flood
* Allows multiple clients to emulate a Distributed Denial of Service (DDoS) Attack
* Allows target port designation within the URL [http://host:port/]
* Supports HTTP Redirection for automatic page redirection (optional)
* Includes URL Verification that displays the response header and document
* Includes Performance Monitoring and Enhanced Reporting
* Allows customized User Agent header fields
* Allows user defined Socket and Request settings
* Supports numeric addressing for Target URLs
* Includes a comprehensive User Guide
* Clear Target URLs and Reset All options

Thursday, April 9, 2009

What is Cloud Computing

Cloud computing is a style of computing in which dynamically scalable and often virtualised resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them.

The concept incorporates infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) as well as Web 2.0 and other recent technology trends that have the common theme of reliance on the Internet for satisfying the computing needs of the users. Examples of SaaS vendors include Salesforce.com and Google Apps which provide common business applications online that are accessed from a web browser, while the software and data are stored on the servers.

Cloud computing is a computing paradigm in which tasks are assigned to a combination of connections, software and services accessed over a network. This network of servers and connections is collectively known as "the cloud." Computing at the scale of the cloud allows users to access supercomputer-level power. Users can access resources as they need them.

Monday, April 6, 2009

How to Monitor online traffic on remote machines on a Network?

Hi All,

For the last 2-3 weeks I have been working on configuring a reverse proxy using Tomcat. I spent a lot of time and got stuck at a point where my application stops working when it encounters an HTTPS request for logging in. HTTP traffic works fine.

When I checked, I found that my proxy is not even getting the request.

Then I started exploring other methods through which I could monitor online traffic on different machines in my network, and I came across this solution. I have been using Fiddler for debugging things on my local machine.

Here are few basic steps:

NOTE : The master machine is the one where I want to monitor the traffic.

1. Install Fiddler 2.x on your machine. (Lower versions do not support this.)
2. Launch Fiddler on the master machine
3. Go to Tools > Fiddler Options
4. Under the General tab, select the option "Allow Remote Computers to Connect"
5. Go to Start > Run
6. Type cmd to launch a command prompt
7. Type ipconfig and note the IP address of your master machine

Now go to the machine which you want to monitor for web traffic.

1. Launch Internet Explorer
2. Go to Tools>Internet Options
3. Go to Connections Tab
4. Click "Lan Settings" button
5. Uncheck "Automatically Detect Settings"
6. Check the option "Use a Proxy Server...."
7. Click Advanced button
8. Add the IP address of your master machine under 'Proxy address to use' and 8888 as the port
9. Add this for both HTTP & Secure
10. Click OK on all three dialogs to apply the settings


Now restart Fiddler on Master machine.

On the other machine, enter www.google.com in Internet Explorer, then check what happens in Fiddler on the master machine.

YOU WILL SEE AN ENTRY FOR WWW.GOOGLE.COM

Similarly, you can see the other calls actually being made on the other machine, and you can monitor traffic from more than one machine: just enter the master's IP with port 8888 in IE on each machine and you are done.

But I am still facing the same problem with the HTTPS log-in call. I will share more details once I find the solution.

+ VJ

Monday, March 23, 2009

Apache Tomcat

Apache Tomcat is a servlet container developed by the Apache Software Foundation (ASF). Tomcat implements the Java Servlet and the JavaServer Pages (JSP) specifications from Sun Microsystems, and provides a "pure Java" HTTP web server environment for Java code to run.

Tomcat should not be confused with the Apache web server, which is a C implementation of an HTTP web server; these two web servers are not bundled together. Apache Tomcat includes tools for configuration and management, but can also be configured by editing XML configuration files.

Components

Catalina

Catalina is Tomcat's servlet container. Catalina implements Sun Microsystems' specifications for servlet and JavaServer Pages (JSP).

Coyote

Coyote is Tomcat's HTTP Connector component that supports the HTTP 1.1 protocol for the web server or application container. Coyote listens for incoming connections on a specific TCP port on the server and forwards the request to the Tomcat Engine to process the request and send back a response to the requesting client.

Jasper

Jasper is Tomcat's JSP Engine. Tomcat 5.x uses Jasper 2, which is an implementation of the Sun Microsystems's JavaServer Pages 2.0 specification. Jasper parses JSP files to compile them into Java code as servlets (that can be handled by Catalina). At runtime, Jasper is able to automatically detect JSP file changes and recompile them.

Jasper 2

From Jasper to Jasper 2, important features were added:

* JSP Tag library Pooling - Each tag markup in JSP file is handled by a tag handler class. Tag handler class objects can be pooled and reused in the whole JSP servlet.
* Background JSP compilation - While recompiling modified JSP Java code, the older version is still available for server requests. The older JSP servlet is deleted once the new JSP servlet has been recompiled.
* Recompile JSP when included page changes - Pages can be inserted and included into a JSP at compile time. The JSP will not only be automatically recompiled with JSP file changes but also with included page changes.
* JDT Java compiler - Jasper 2 can use the Eclipse JDT Java compiler instead of Ant and javac.

Wednesday, March 18, 2009

How to install Apache on Win-Vista

http://www.thesitewizard.com/apache/install-apache-on-vista.shtml

Tuesday, March 10, 2009

What is the difference between Forward Proxy and Reverse Proxy?

First let's review what a forward proxy, or simply proxy, is and how it works. A forward proxy acts as a gateway for a client's browser, sending HTTP requests on the client's behalf to the Internet. The proxy protects your inside network by hiding the actual client's IP address and using its own instead. When an outside HTTP server receives the request, it sees the requestor's address as originating from the proxy server, not from the actual client.

A reverse proxy proxies on behalf of the backend HTTP server, not on behalf of the outside client's request; hence the term reverse. It is an application proxy for servers using the HTTP protocol. It acts as a gateway to an HTTP server or HTTP server farm by acting as the final IP address for requests from the outside. The firewall works tightly with the reverse proxy to help ensure that only the reverse proxy can access the HTTP servers hidden behind it. From the outside client's point of view, the reverse proxy is the actual HTTP server.
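From the client's side the difference is easy to see in code. Here is a sketch with the third-party requests library (proxy.example.com and the URLs are placeholders): with a forward proxy the client opts in explicitly, while a reverse proxy is invisible to the client.

import requests  # third-party: pip install requests

# Forward proxy: the client opts in explicitly; the proxy's address is
# configured on the client side and hides the client from the server.
r = requests.get(
    "http://example.com/",
    proxies={"http": "http://proxy.example.com:3128"},
)
print(r.status_code)

# Reverse proxy: the client needs no configuration at all. It simply
# requests the public address, and the proxy forwards the request to a
# hidden backend server on the client's behalf.
r = requests.get("http://www.example.com/")
print(r.status_code)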

What is Reverse Proxy???

A reverse proxy or surrogate is a proxy server that is installed within the neighborhood of one or more servers. Typically, reverse proxies are used in front of Web servers. All connections coming from the Internet addressed to one of the Web servers are routed through the proxy server, which may either deal with the request itself or pass the request wholly or partially to the main web servers.

There are several reasons for installing reverse proxy servers:

1. Security: the proxy server may provide an additional layer of defense by separating or masquerading the type of server that is behind the reverse proxy. This configuration may protect the servers further up the chain.

2. Encryption / SSL acceleration: when secure websites are created, the SSL encryption is sometimes not done by the Web server itself, but by a reverse proxy that is equipped with SSL acceleration hardware.

3. Load distribution: the reverse proxy can distribute the load to several servers, each server serving its own application area.

4. Caching: a reverse proxy can offload the Web servers by caching static content, such as images, as well as dynamic content, such as an HTML page rendered by a content management system.

5. Compression: the proxy server can optimize and compress the content to speed up the load time.

TERMS USED:

PROXY SERVER: a proxy server is a server (a computer system or an application program) that acts as a go-between for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource, available from a different server. The proxy server evaluates the request according to its filtering rules. For example, it may filter traffic by IP address or protocol. If the request is validated by the filter, the proxy provides the resource by connecting to the relevant server and requesting the service on behalf of the client. A proxy server may optionally alter the client's request or the server's response, and sometimes it may serve the request without contacting the specified server. In this case, it 'caches' responses from the remote server, and returns subsequent requests for the same content directly.

SSL ACCELERATION: SSL acceleration is a method of offloading the processor-intensive public key encryption algorithms involved in SSL transactions to a hardware accelerator. Typically, this is a separate card that plugs into a PCI slot in a computer and contains one or more co-processors able to handle much of the SSL processing.

Read more :

http://www.sans.org/reading_room/whitepapers/webservers/a_reverse_proxy_is_a_proxy_by_any_other_name_302?show=302.php&cat=webservers


Tuesday, February 17, 2009

What is Splunk

Splunk is an IT information search solution that indexes data and enables users to analyze, alert and report on all their IT data from every application, server and device, all in one place. It enables you to find and fix problems, investigate security incidents before attackers cover their tracks, and generate compliance reports quickly and easily.

Splunk continuously indexes all your IT data by time so you can see change in action. And it dynamically interprets the data when you perform a search, eliminating the need to keep up with ever-changing data formats. It doesn't require special agents, adapters or parsers for specific data formats, and you get the correlation you need without writing lots of elaborate rules.

Splunk can integrate with your existing enterprise management, security and compliance tools right out of the box. The Splunk toolbar makes it simple to launch searches from any web-based application, and Splunk alerts can be sent to any of your existing consoles. It can even index the data already collected by your existing management tools to extend the life of your investments.

In my words : "A smart tool to know the health of your servers"

Thursday, January 8, 2009

What is the Difference between Desktop, Web and Client-Server Application Testing

Each one differs in the environment in which it is tested, and you progressively lose control over that environment as you move from desktop to web applications.


A desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You test the complete application broadly in categories like GUI, functionality, load, and backend (i.e., the DB).

In a client-server application you have two different components to test. The application is loaded on the server machine, while an application exe is installed on every client machine. You test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and backend. This environment is mostly used on intranet networks, and you know the number of clients and servers and their locations in the test scenario.

A web application is a bit different and more complex to test, as the tester doesn't have that much control over the application. The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine, so you have to test it on different web browsers. Web applications are supposed to be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility and operating-system compatibility, error handling, static pages, backend testing, and load testing.

Keep in mind that even though differences exist between these three environments, the basic quality assurance and testing principles remain the same and apply to all.

Monday, January 5, 2009

Charles Web debugging Proxy

Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP traffic between their machine and the Internet. This includes requests, responses and the HTTP headers (which contain the cookies and caching information).
Charles can act as a man-in-the-middle for HTTP/SSL communication, enabling you to debug the content of your HTTPS sessions.
Charles simulates modem speeds by effectively throttling your bandwidth and introducing latency, so that you can experience an entire website as a modem user might (bandwidth simulator).

...