100,000+ hits on TechnoBeans

Thanks, guys, for your overwhelming response to my blogs over the last couple of years.

Technobeans has managed to get 100,000+ hits, and the credit goes to all of you who viewed the content and referred it to your friends and colleagues. While this is a great milestone, I hope you will continue to look forward to more blogs, articles, books and open source projects on Python, Java, NodeJS and many more interesting topics.

Occasions like these keep me motivated to say the least.

Thanks again!


Understanding Test Driven Development with Python

Author: Chetan Giridhar and Vishal Kanaujia

Published at: agilerecord, February 2012 Edition

Objective

Test Driven Development is still a nascent concept for many. Developers often have erroneous assumptions and preconceived notions about it, and may fail to understand its potential. This article explains TDD with an intuitive Python example.

Let’s start with what TDD is.

 TDD stands for Test Driven Development. According to Wikipedia, “Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines
a desired improvement or new function, then produces code to pass that test and finally refactors the new code to acceptable standards.”

A typical TDD cycle comprises:

  • Write a test
  • Run it and watch it fail
  • Develop the functionality (complete, or partial in chunks)
  • Make the test pass
  • Refactor the code (clean up)
  • Repeat the iteration

Ok! Ok! Enough of theory! Let’s get to the code!

Example: Developing code for validation of prime numbers with TDD

As a first step, let’s write a test. As there is no functionality developed, the test would fail. In the example below, the prime.py module contains a class and has a method isPrime() that aims to validate prime numbers but is not yet implemented.

You can download the complete magazine and this article from agilerecord

Daily Scrum meetings: What it is and what not, and what can be improved!

Author: Chetan Giridhar and Vishal Kanaujia

Published at: agilerecord, October 2011 Edition

Courtesy: http://agilerecord.com

According to Wikipedia, “Scrum is an iterative, incremental framework for project management often seen in agile software
development, a type of software engineering. Although the Scrum approach was originally suggested for managing product development projects, its use has focused on the management of software development projects, and it can be used to run software maintenance teams or as a general project/program management approach.”

This article discusses what Scrum meetings are, along with their good and not-so-good points. It also suggests practices that product development teams should follow to benefit from these meetings.

Before discussing daily scrum meetings in detail, let’s get an insight into the scrum process.

What happens in Scrum?

In Scrum, three roles are defined:

■ Project Manager: represents both the development and quality assurance teams, which are responsible for requirement analysis, design & development, and testing.
■ Product Owner: represents the stakeholders and the business development professionals (also a Product Manager in product-based companies).
■ Scrum Master: is responsible for running the complete program (typically a Program Manager); the one who resolves impediments faced by Project Management.

A prioritized set of product requirements provided by business analysts or the product development team is referred to as ‘product backlog’. Based on the priority of these requirements, the ‘sprint backlog’ is formed. Decision on what requirements shall feature in sprint backlog items is taken in the ‘sprint planning’ meeting which involves the Scrum Master, representatives from the project management and Product Owner. In a sprint (with a typical duration of two to four weeks), the team delivers the sprint backlog items, which are essentially the product requirements.
Subsequent sprints focus on delivering new requirements, while the existing feature set is incrementally improved. Once a sprint is completed, a ‘Sprint review’ meeting is conducted to understand what was and was not delivered in the sprint. This is followed by a ‘Sprint retrospective’ meeting that discusses possible improvements the team might consider so that they can have a better sprint (in terms of more product requirements, product quality, better development/QA interactions and many other such factors).

Each day during the sprint, a standup meeting, the so-called ‘Scrum meeting’, takes place. This article discusses the good and not so good elements of Scrum meetings. We also suggest useful practices that teams could follow for effective Scrum meetings.

What are Scrum meetings?

A stand-up meeting

Every member in the team has to stand up to answer three crucial questions:
■ What did I do yesterday, OR what did I do since the last time we met?
■ What am I going to do today?
■ Am I blocked?

The standup meeting typically lasts 15-20 minutes; longer discussions among engineers are generally avoided. Status is tracked with the help of burn down charts, in which the outstanding work (or backlog) is plotted versus time.
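As a small illustration of the burn down idea (the sprint length and daily numbers here are invented), the data behind such a chart can be computed from the work completed each day:

```python
# Burn down data for a hypothetical 10-day sprint (illustrative numbers only)
total_points = 40                                # backlog at sprint start
completed_per_day = [0, 5, 3, 6, 4, 7, 5, 4, 3, 3]

remaining = []                                   # outstanding work, per day
left = total_points
for done in completed_per_day:
    left -= done
    remaining.append(left)
# Plotting `remaining` against day number 1..10 yields the burn down chart
```

The chart simply plots this `remaining` series against time; a healthy sprint trends towards zero by the last day.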

Easy flow of information across teams

Product Management, Project Management (Development and QA) and Program Management (Scrum Master) all take part in the Scrum meetings; hence requests for information needed from teams, or program-level communication, are easily achieved. For instance, a development engineer may be facing issues implementing a backlog item because he is blocked on certain hardware, and the Scrum meeting is an easy channel for raising such requests. However, team members sometimes slip into architectural or implementation discussions about certain backlog items, or Development and QA unknowingly utilize this opportunity to synchronize with each other on a backlog item. This may at times be helpful to other team members, as they may benefit from these discussions in terms of product knowledge, but Scrum is not the right forum for them. Scrum teams should avoid such situations; instead, a separate knowledge-sharing session for these discussions might be appropriate.

You don’t report to the Scrum Master

This is a serious problem that Scrum meetings are plagued with. Scrum meetings are held for the team, not for reporting status to the Scrum Master. Sometimes Development and QA fall into the trap of reporting status in the way the Scrum Master wants it, which defeats the purpose of Scrum.

It’s a standup; it’s time-bound

The term standup meeting itself suggests that the meeting would
be held for a time frame that the team can stand for. Scrum meetings shouldn’t take more than 15 to 20 minutes and shouldn’t be
used for solving problems.

Not a bug scrub session

Scrum meetings also have a tendency to turn into bug scrub sessions. A bug scrub is a process where, every week, Development and QA get together to discuss the bugs logged by the QA team that week. After listening to the viewpoints of both teams, bugs may be deferred, marked as duplicates, or have their severity modified. The outcome of this process is to ensure that Development and QA are on the same page for all the defects concerned. In Scrum, bugs from the previous sprint tend to become backlog items for the current sprint, and status reporting on these bugs by Development and QA can take a bug scrub form.

Similarly, QA may file a bug during a sprint on the backlog item assigned to him/her. The development engineer may not agree, and may start asking questions or requesting execution logs pertaining to the bug during the Scrum meeting. These situations should be avoided; an explicit weekly bug scrub session should help.

Do not discourage engineers

The purpose of Scrum meetings is lost when engineers start feeling discouraged or unmotivated. Sometimes engineers may not be able to make any progress on certain backlog items and report a ‘No Progress’ status during Scrum meetings. Questions from fellow engineers who are dependent on these backlog items, or from Product Management or the Scrum Master, may make these engineers feel discouraged, as they have not been able to make decisive progress on the backlog items. There could, of course, be genuine reasons for ‘No Progress’ being made, and constant questioning may frustrate the engineer for the wrong reasons.

A suggestive tone and genuine technical help in such situations will help a lot!

Improper use of communication channel

In a global setup, where teams are placed across different geographic locations, it’s often observed that the status reported from one location to another is not properly conveyed because of intermittent line breaks and voices in the background (if someone has dialled into the meeting from an outside office). Also, as teams are not meeting face-to-face, the facial gestures that are often used for effective communication are lost.

At times, crucial pieces of information get lost because of these reasons, which should be avoided. A video conference from office conference rooms, where teams can see and hear each other, would be a definite alternative.

Incorrect reporting on backlog items

For reporting status, teams fall into the trap of relying on burn down charts alone. These charts typically plot time (spent and left in a sprint) versus backlog items (completed, in progress or not started). Often, however, these charts fail to map time for the set of sub-features in a backlog item. So, essentially, a backlog item is only marked complete by a developer when all the sub-features of that item are developed. This is problematic for QA and for the stakeholders, who are kept worrying about the completion of the backlog item. Even if 90% of the work is done, the status of the backlog item is shown as ‘In Progress’, which doesn’t present the correct picture of the work completed.

The use of Kanban Task Boards, where backlog items are divided into sub-features and the status is tracked with respect to time, will improve reporting significantly.
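A sketch of the idea (the sub-feature names and statuses are invented for illustration): tracking sub-features lets the board report partial progress instead of a flat ‘In Progress’:

```python
# Status of one backlog item, broken into sub-features (hypothetical data)
subfeatures = {
    "parse input":   "done",
    "validate data": "done",
    "persist state": "in progress",
}
done = sum(1 for status in subfeatures.values() if status == "done")
progress = done / len(subfeatures)   # reported as 2/3 complete, not "In Progress"
```

Stakeholders then see that most of the work on the item is finished, even though it is not yet fully complete.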

Conclusion

It’s important to remember that Scrum meetings are a means to achieve something and not the objective itself. Of course, these meetings are helpful, but it’s crucial to understand they are not imperative. If they don’t suit your needs, please don’t fall into the trap of using them.

Don’t jump to conclusions too quickly. It’s often observed that teams start seeing the benefits of Scrum meetings from Sprint 3 or Sprint 4; a start-up period in which teams get used to this change is often required.

As always, you know best what’s good for you!

References

1) Wikipedia.org, the free encyclopedia that anyone can edit
2) 7 Tips for Improving the Daily Scrum, http://agilesoftwaredevelopment.com/blog/artem/7-tips-daily-scrum

Click to download AgileRecord: Daily Scrum Meetings

Light Weight Process – Dissecting Linux Threads

This article on Light Weight Process – Dissecting Linux Threads was printed in the August 2011 issue (Vol. 9, No. 6) of LinuxForYou magazine (ISSN 0974-1054).

Authors: Vishal Kanaujia and Chetan Giridhar

This article, aimed at Linux developers and students of computer science, explores the fundamentals of threads and their implementation in Linux with Light-Weight Processes, aiding understanding with a code implementation.

Threads are the core element of a multi-tasking programming environment. By definition, a thread is an execution context in a process; hence, every process has at least one thread. Multi-threading implies the existence of multiple, concurrent (on multi-processor systems), and often synchronised execution contexts in a process.

Threads have their own identity (thread ID), and can function independently. They share the address space within the process, and reap the benefits of avoiding any IPC (Inter-Process Communication) channel (shared memory, pipes and so on) to communicate. Threads of a process can directly communicate with each other — for example, independent threads can access/update a global variable. This model eliminates the potential IPC overhead that the kernel would have had to incur. As threads are in the same address space, a thread context switch is inexpensive and fast.

A thread can be scheduled independently; hence, multi-threaded applications are well-suited to exploit parallelism in a multi-processor environment. Also, the creation and destruction of threads is quick. Unlike fork(), there is no new copy of the parent process, but it uses the same address space and shares resources, including file descriptors and signal handlers.

A multi-threaded application uses resources optimally, and is highly efficient. In such an application, threads are loaded with different categories of work, in such a manner that the system is optimally used. One thread may be reading a file from the disk, and another writing it to a socket. Both work in tandem, yet are independent. This improves system utilisation, and hence, throughput.

A few concerns

The most prominent concern with threads is synchronisation, especially when there is a shared resource, marked as a critical section. This is a piece of code that accesses a shared resource, and must not be executed concurrently by more than one thread. Since each thread can execute independently, access to the shared resource is not naturally moderated; it must be controlled using synchronisation primitives, including mutexes (mutual exclusion), semaphores, read/write locks and so on.

These primitives allow programmers to control access to a shared resource. In addition, similar to processes, threads too suffer states of deadlock, or starvation, if not designed carefully. Debugging and analysing a threaded application can also be a little cumbersome.
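Although this article’s examples are in C, the effect of a mutex on a critical section can be sketched compactly with Python’s threading module (a sketch added for illustration, not part of the original article):

```python
import threading

counter = 0                      # the shared resource
lock = threading.Lock()          # mutex guarding the critical section

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:               # only one thread may increment at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held around the increment, counter ends at exactly 40000;
# without it, lost updates could leave it lower
```

The `with lock:` block is the critical section: the mutex serialises access, which is exactly the moderation the surrounding text describes.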

How does Linux implement threads?

Linux supports the development and execution of multi-threaded applications. User-level threads in Linux follow the open POSIX (Portable Operating System Interface for uniX) standard, designated as IEEE 1003. The user-level library (on Ubuntu, glibc.so) has an implementation of the POSIX API for threads.

Threads exist in two separate execution spaces in Linux — in user space and in the kernel. User-space threads are created with the pthread library API (POSIX compliant). These user-space threads are mapped to kernel threads. In Linux, kernel threads are regarded as “light-weight processes”. An LWP is the unit of a basic execution context. Unlike other UNIX variants, including HP-UX and SunOS, there is no special treatment for threads: a process or a thread in Linux is treated as a “task”, and both share the same structure representation (a list of struct task_struct).

For a set of user threads created in a user process, there is a set of corresponding LWPs in the kernel. The following example illustrates this point:

#include <stdio.h>
#include <unistd.h>
#include <syscall.h>
#include <pthread.h>

int main()
{
    pthread_t tid = pthread_self();
    /* gettid has no glibc wrapper here; invoke the raw system call */
    int sid = syscall(SYS_gettid);
    printf("LWP id is %d\n", sid);
    printf("POSIX thread id is %lu\n", (unsigned long) tid);
    return 0;
}

Running the ps command, too, lists processes and their LWP/thread information:

kanaujia@ubuntu:~/Desktop$ ps -fL

UID        PID     PPID       LWP   C   NLWP    STIME   TTY          TIME CMD
kanaujia 17281     5191     17281   0      1    Jun11   pts/2    00:00:02 bash
kanaujia 22838     17281    22838   0      1    08:47   pts/2    00:00:00 ps -fL
kanaujia 17647     14111    17647   0      2    00:06   pts/0    00:00:00 vi clone.s

What is a Light-Weight Process?

An LWP is a process created to facilitate a user-space thread. Each user thread has a 1:1 mapping to an LWP. The creation of LWPs differs from that of an ordinary process: for a user process “P”, its set of LWPs share the same group ID. Grouping them allows the kernel to enable resource sharing among them (resources include the address space, physical memory pages (VM), signal handlers and files). This further enables the kernel to avoid full context switches among these processes. Extensive resource sharing is the reason these processes are called light-weight processes.

How does Linux create LWPs?

Linux handles LWPs via the non-standard clone() system call. It is similar to fork(), but more generic; in fact, fork() itself is a manifestation of clone(). clone() allows the programmer to choose which resources to share between processes: it creates a process, but the child can share its execution context with the parent, including the memory, file descriptors and signal handlers. The pthread library, too, uses clone() to implement threads; refer to ./nptl/sysdeps/pthread/createthread.c in the glibc 2.11.2 sources.

Create your own LWP

I will demonstrate a sample use of the clone() call. Have a look at the code in demo.c below:

#define _GNU_SOURCE
#include <malloc.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <sched.h>
#include <stdio.h>
#include <fcntl.h>

// 64kB stack
#define STACK (1024 * 64)

// The child thread will execute this function
int threadFunction(void *argument) {
     printf("child thread entering\n");
     close((int)(intptr_t) argument);   // the argument carries the file descriptor
     printf("child thread exiting\n");
     return 0;
}
int main() {
     void* stack;
     pid_t pid;
     int fd;
     fd = open("/dev/null", O_RDWR);
     if (fd < 0) {
         perror("/dev/null");
         exit(1);
     }
     // Allocate the stack
     stack = malloc(STACK);
     if (stack == 0) {
         perror("malloc: could not allocate stack");
         exit(1);
     }
     printf("Creating child thread\n");
     // Call the clone system call to create the child thread
     pid = clone(&threadFunction,
                 (char *) stack + STACK,
                 SIGCHLD | CLONE_FS | CLONE_FILES |
                     CLONE_SIGHAND | CLONE_VM,
                 (void *)(intptr_t) fd);
     if (pid == -1) {
          perror("clone");
          exit(2);
     }
     // Wait for the child thread to exit
     pid = waitpid(pid, 0, 0);
     if (pid == -1) {
         perror("waitpid");
         exit(3);
     }
     // Attempt to write to file should fail, since our thread has
     // closed the file.
     if (write(fd, "c", 1) < 0) {
         printf("Parent:\t child closed our file descriptor\n");
     }
     // Free the stack
     free(stack);
     return 0;
}
The program in demo.c allows the creation of threads, and is fundamentally similar to what the pthread library does. However, the direct use of clone() is discouraged, because if not used properly it may crash the application. The syntax for calling clone() in a Linux program is as follows:
#include <sched.h>

int clone (int (*fn) (void *), void *child_stack, int flags, void *arg);

The first argument is the thread function; it will be executed once the thread starts. When clone() successfully completes, fn executes concurrently with the calling process.

The next argument is a pointer to stack memory for the child process. In a step backward from fork(), clone() demands that the programmer allocate and set up the stack for the child process, because the parent and child share memory pages, and that includes the stack too. The child may choose to call a different function than the parent, hence it needs a separate stack. In our program, we allocate this memory chunk on the heap with the malloc() routine. The stack size has been set to 64 KB. Since the stack on the x86 architecture grows downwards, we need to simulate that by using the allocated memory from the far end. Hence, we pass the following address to clone():

(char*) stack + STACK

The next field, flags, is the most critical. It allows you to choose the resources you want to share with the newly created process. We have chosen SIGCHLD | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | CLONE_VM, which is explained below:

  • SIGCHLD: The thread sends a SIGCHLD signal to the parent process after completion, which allows the parent to wait() for all its threads to complete.
  • CLONE_FS: Shares the parent’s filesystem information with its thread, including the root of the filesystem, the current working directory, and the umask.
  • CLONE_FILES: The parent and the new process share the same file descriptor table; any change in the table is reflected in the parent process and all its threads.
  • CLONE_SIGHAND: The parent and its threads share the same signal handler table; again, if the parent or any thread modifies a signal action, the change is reflected in both parties.
  • CLONE_VM: The parent and its threads run in the same memory space; any memory writes or mappings performed by one of them are visible to the others.

The last parameter is the argument to the thread function (threadFunction), and is a file descriptor in our case.

Please refer to the sample LWP implementation in demo.c, presented earlier.

The thread closes the file (/dev/null) opened by the parent. As the parent and this thread share the file descriptor table, the file close operation will reflect in the parent context also, and a subsequent file write() operation in the parent will fail. The parent waits till thread execution completes (till it receives a SIGCHLD). Then, it frees the memory and returns.

Compile and run the code as usual; the output should be similar to what is shown below:

$gcc demo.c

$./a.out
Creating child thread
child thread entering
child thread exiting
Parent: child closed our file descriptor
$

Linux provides support for an efficient, simple, and scalable infrastructure for threads. It encourages programmers to experiment and develop thread libraries using clone() as the core component.

Please share your suggestions/feedback in the comments sections below.

References and suggested reading
  1. Wikipedia article on clone()
  2. clone() man page
  3. Using the clone() System Call by Joey Bernard
  4. Implementing a Thread Library on Linux
  5. IEEE Standards Interpretations for IEEE Std 1003.1c-1995
  6. Sources of pthread implementation in glibc.so
  7. The Fibers of Threads by Benjamin Chelf

Better Software: Improved Test Automation

According to Wikipedia, “Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.”

Test automation, contrary to perception, is an eco-system of automated test scripts, automation libraries, frameworks and processes. Test automation is appropriate when:

  • Tasks to be tested are highly redundant.
  • The software is stable and has been tested several times.
  • Minimal changes are expected in the developed code.
  • Tasks are so repetitive that avoiding human error is advantageous.

Establishing the connection: Improved test automation and better software

Even though test automation provides benefits in terms of repeatability, reduced time and costs, reusability, reliability and better quality of the software being developed, there has been very little focus on the improvement and maintenance of test automation itself. One of the main arguments substantiating this attitude is: “The automated test scripts and libraries that I develop are not delivered to the end customer. Customers won’t review my code or find faults with it. Why should I care so much?”

Well, here are some facts. Test scripts are developed and executed to validate the shipped product (the developed application). If the quality of the test automation is not maintained, we can’t be sure of the quality of the production code. If a test case doesn’t catch a product defect, there is no way to tell whether the production code was developed well or the test script was faulty.

From the above arguments, it can be deduced that the quality of test automation is directly related to the overall product quality.

Suggestions to achieve improved test automation

In this section, the author suggests practices that automation teams should follow to improve the quality of test automation.

Test Automation Architecture

An architectural diagram of test automation helps in getting an overall idea of how the test automation maps to the test architecture. While preparing this document, automation teams often realize the need for automation scripts and libraries for user scenarios (from an end-to-end testing perspective) that they might not have thought of. This results in improved test and automation coverage, and thus improved confidence in the Dev/QA teams and other project stakeholders. Hence, it is imperative that automation teams invest sufficient time in this activity.

Automation Planning

The automation plan should detail the type of automation framework that will be used to automate the test scripts. Typically, automation teams work with one of the following four models:

  • Keyword driven automation
  • Data driven automation
  • Modular automation
  • Hybrid automation

Automating a few test scripts with the chosen automation framework during planning is definitely useful, as this validates the applicability of the framework for testing the developed application. It also ensures there is clarity on the structure of the test scripts among automation engineers before the actual automation begins.
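As a minimal, hedged sketch of the data-driven model listed above (the add() function is a stand-in for the application under test; the rows are invented):

```python
# Data-driven automation: one test routine, driven by rows of input/expected data
def add(a, b):                  # stand-in for the functionality under test
    return a + b

test_rows = [
    (1, 2, 3),
    (0, 0, 0),
    (-5, 5, 0),
]
for a, b, expected in test_rows:
    result = add(a, b)
    assert result == expected, f"add({a}, {b}) gave {result}, expected {expected}"
```

Adding a test case then means adding a data row, not writing new test logic; the other three models structure scripts differently but aim at the same separation of concerns.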

Building a Comprehensive & Robust Automation framework

1. Infrastructure to support cross-platform, compatibility, security and performance tests, as well as code coverage runs, should be made an integral part of the automation framework.

  • Cross-platform testing can be achieved with the help of libraries or framework components that can perform power on/off and snapshot operations on virtual machines installed with the different operating systems that the application supports.
  • Standard penetration tests, including SQL injection, cross-site scripting and buffer overflow, can be developed as test tools or libraries and executed on applications (wherever applicable) with the help of the automation framework.
  • Scripts can be written to install the set of software against which the application needs to be tested for compatibility; it is then simple to call the installation scripts and check for successful installation on each build.
  • On similar lines, libraries or automated tools can be used for performance tests, including soak, load and stress tests.
  • Including the capability of executing code coverage runs in the framework boosts overall project productivity, as this activity is jointly owned by the automation, quality assurance and development teams.

By building this infrastructure, automation teams ensure that they have significantly improved test coverage (in turn finding more defects in the product), with the implicit advantages of test automation.

2. The monitoring and reporting components of the automation framework not only give an indication of the tests running at a particular instant, but also help in collecting the results of test runs at a single location. Graphs depicting the results of test runs for product components help in mining information including:

  • Stability of component under test.
  • Coverage of each component (results of code coverage runs).
  • The type of testing that uncovered the most defects.
  • Overall product quality.

3. Logging and log analysis are important aspects of an automation framework. A report of executed test runs may not be enough for triaging bugs; execution logs need to be analyzed to root-cause and fix defects. Automated log analysis (a log analyzer tool) would definitely improve the turnaround time for development to fix defects, and thus improve product quality as a whole.
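A trivial sketch of the kind of log analyzer suggested here (the log format and contents are invented for illustration):

```python
# Scan an execution log for error lines to speed up triage
log = """\
2011-06-08 10:01 INFO  test_login started
2011-06-08 10:02 ERROR connection refused by host db01
2011-06-08 10:02 INFO  test_login finished: FAIL
"""
errors = [line for line in log.splitlines() if " ERROR " in line]
for line in errors:
    print(line)                # candidate root causes for the failed run
```

A real analyzer would classify errors and correlate them with test cases, but even this filtering shortens the path from a failed run to its root cause.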

Design review of test scripts and libraries

Automation frameworks provide the infrastructure to execute tests, monitor and report test executions, analyze test results and improve test coverage. But sufficient care must also be taken while designing the automated scripts and libraries. Here are a few points that should be taken care of:

  1. Methods in the libraries must be designed according to the requirements provided by quality assurance engineers. Limited functionality or modular tasks should be developed in the form of methods, while tasks that fall under a bigger umbrella should be part of an automation library. For example, a method would be written to test one functionality of a component, and all the methods for testing the functionalities of that component would be part of the component library. Test scripts can be divided into three parts: setup, test and teardown, where the setup part handles the prerequisites and configuration required for the test case, the test part handles the actual test scenario, and the teardown section performs the clean-up and reverts the environment to its original state. This gives a structured look to the test script, improving readability and ease of maintenance.
  2. Design patterns are general, reusable solutions to commonly occurring problems in software design. Considering design patterns while designing libraries pays rich dividends; library reviews should point out the need for, and applicability of, relevant design patterns. Aligning with such standard development practices helps in the easy maintenance of code within and across teams.
  3. Embedding the element of re-usability in scripts and libraries is imperative. Code reusability can be achieved with:
  • Modular scripts that can be used to automate a larger number of test cases.
  • Library files for project components.
  • Functions catering to less complex project functionality.

Automation engineers should design test scripts and libraries to exploit plug-and-play flexibility, where each step of a test case is only a matter of calling an already developed function, script or method in the library. Design reviews must take care of this aspect.
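The setup/test/teardown structure described in point 1 can be sketched with Python’s unittest (the file names and the scenario are invented; a real suite would exercise the application under test):

```python
import os
import tempfile
import unittest

class ComponentTest(unittest.TestCase):
    def setUp(self):
        # setup: prerequisites and configuration required by the test case
        self.workdir = tempfile.mkdtemp()
        self.path = os.path.join(self.workdir, "data.txt")

    def test_write_and_read(self):
        # test: the actual test case scenario
        with open(self.path, "w") as f:
            f.write("hello")
        with open(self.path) as f:
            self.assertEqual(f.read(), "hello")

    def tearDown(self):
        # teardown: clean up and revert the environment to its original state
        if os.path.exists(self.path):
            os.remove(self.path)
        os.rmdir(self.workdir)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(ComponentTest))
```

The framework calls setUp() and tearDown() around every test method, so each scenario starts from a clean, known environment.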

Automation code review

  1. Automation code reviews must emphasize the test coverage aspect. For example, if a reviewer is assigned the test scripts for a single component, s/he can identify slippages in test coverage during the reviews.
  2. Strict review of test script and library design also helps in standardizing documentation, which in turn makes the script/library flow easy to understand. Maintenance of documented artifacts is also easier.
  3. Test scripts and libraries can be executed and profiled for memory usage and CPU cycle times. This can be useful for removing bottlenecks in scripts/libraries that hamper faster execution of tests.

Conclusion

In this article, we looked at the needs and advantages of test automation, and established the impact of the quality of test automation on the quality of production code. The author also provided suggestions and tips for achieving improved test automation.

References

  • Wikipedia.org – Wikipedia, the free encyclopedia – Article on ‘Test Automation’, taken on 8 Jun 2011.
  • Microsoft.com – Article on ‘Quality in the Test Automation Review Process and Design Review Template’, taken on 8 Jun 2011.


Applying Automation in Test Driven Development

Author: Chetan Giridhar and Vishal Kanaujia

Published at: agilerecord, July 2011 Edition


According to Wikipedia, “Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines a desired improvement or new function,
then produces code to pass that test and finally refactors the new code to acceptable standards.”

Understanding TDD

TDD has become an integral part of Agile development methodology. A typical test driven development model cycle consists of:

1. Writing a test: A unit test (manual or automated, preferably automated) is first written to exercise the functionality that is targeted for development. Before writing the test, the developer is responsible for understanding the requirements well. The unit test shall also contain assertions to confirm its pass/fail criteria.

2. Run to fail / make it compile: Since the feature is yet to be implemented, the unit test written in Step 1 is bound to fail. This step essentially validates the unit test itself, as the test shouldn’t pass when no code has been written for it. Unit tests are often automated, and there is a chance that they fail because of syntax or compilation errors; sanitizing the tests by removing these errors is also an essential part of this step.

3. Implementing the (complete/partial) functionality: This step involves developing the part of the functionality that the unit test written in Step 1 exercises and validates.

4. Making tests PASS: Once the unit tests for the developed code pass, the developer gains confidence that the code fulfills the requirements.

5. Code refactoring: The unit tests might have passed, but code refactoring may still be required for reasons including handling errors elegantly, reporting the results in the required format, or carving a subroutine out of the written code for re-usability.

6. Repeating the cycle (improving the scope): The unit test(s) are refactored to cater for new functionality or to push towards completion of the functionality (if only part of it was developed in the first cycle). The process of continuous integration ensures that developers can revert to older checkpoints in case the newly developed code doesn’t PASS the new unit tests.
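The cycle above can be illustrated with the prime-number example from the companion article; the isPrime() implementation here is a minimal sketch, not the article's original code:

```python
import unittest

# Step 1: the test is written first; until isPrime() exists, running it fails.
class TestPrime(unittest.TestCase):
    def test_primes_are_accepted(self):
        self.assertTrue(isPrime(7))

    def test_non_primes_are_rejected(self):
        self.assertFalse(isPrime(8))
        self.assertFalse(isPrime(1))

# Steps 3-4: implement just enough functionality to make the tests pass.
def isPrime(n):
    if n < 2:
        return False
    # Trial division up to the square root is sufficient for primality.
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# Step 6: re-run the suite after every refactoring iteration.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPrime)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each pass through the cycle either extends the tests (new cases, edge values) or refactors isPrime() while the suite keeps the behavior pinned down.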

Challenges with acceptance of TDD in Agile

TDD is a great process to comply with, but has challenges in real world development environments.

Element of change in Agile:

In Agile, there is a possibility of features getting removed during customer interactions at the end of every development cycle. For example, a developer might have spent time developing automated unit tests for a feature of a web application, and in the next cycle, the feature may not exist or could be drastically changed.

Time constraints:

With a lot more to achieve in less time, writing unit tests can become an overhead compared to the traditional development model, where unit test development is usually a one-time activity.

Challenges to traditional project planning:

Traditional project planning might not consider effort estimation for iterative unit test development. For instance, if developing a feature is a metric against which developers are measured, the purpose of writing unit tests in TDD might get lost.

Change in perspective for developing unit tests:

Writing unit tests in TDD requires know-how and background in both the development and testing domains. A professional has to be creative in thinking up new tests (motivated by the run-to-fail approach), and should be proficient in developing unit test code, too.

Maintenance of unit tests:

At times, it is cumbersome for developers to develop and maintain unit tests. It requires time and effort, especially if the unit test needs to be re-used for testing an ever-evolving code base.

Environment:

Setting up the correct environment for testing becomes imperative in TDD, as unit tests are validated against the developed code. Consider a case of setting up a web application that requires installing a Database or Web Server. Bringing up such an environment demands intensive effort.

Role of automation in TDD

Interestingly, automation can play a big role in the wider adoption of TDD in Agile teams. We consider automation’s role from two different perspectives:

Effort of automation engineers

Automation engineers essentially perform the role of ‘software development engineer in test’. Not only are they aware of development practices, they also possess a test-to-break attitude. Developing automated unit tests can be shared between development and automation engineers. This reduces the load on the development team, and also adds value for the automation team, which gets a first-hand understanding of the feature set.

Features of automation frameworks

Rich feature sets provided by the automation framework would essentially reduce the effort put in by the development teams in the TDD workflow. Here are some of the aspects of automation frameworks that can pay rich dividends:

• Ease of tracking: Unit tests would be stored in a central repository (part of the automation framework) with all the development team members submitting their unit tests to it. Tests would be stored in hierarchical folder structure based on the product features and its components. With this, viewing and tracking of unit tests within and across teams would be smoother.

• Traceability of unit tests: Automation frameworks can ensure that each product requirement has a unit test associated with it. This ensures that all requirements are developed as part of the TDD process, thus avoiding development slippages.

• Improving the development and review process: Automation infrastructure can facilitate tracking of all requirements by associating them with a developer and reviewer(s). This would ensure that development and review processes are organized.

• Unit test execution: A good automation framework ensures quick running of automated unit tests. The tests could be executed selectively for a component, set of features or the product itself.

• Reporting of test execution results: Results of the automated unit tests for a component/feature are sent to the respective developer; this ensures quick reporting and shortens the developer’s response time when refactoring unit tests.

• Automation infrastructure components: Automation frameworks could facilitate:
◦ Cross-platform testing
◦ Compatibility testing
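The selective execution and reporting points above can be sketched with Python's standard unittest loader; the component test case and names here are hypothetical, and a real framework would use unittest.defaultTestLoader.discover() over per-component folders:

```python
import unittest

# Hypothetical component test case; in a real framework, tests would live
# in per-component folders in the central repository.
class LoginComponentTests(unittest.TestCase):
    def test_login_page_loads(self):
        self.assertTrue(True)  # placeholder assertion

def run_component_suite(test_case_cls):
    """Load and run the tests of one component, returning the result
    object so the framework can report it to the owning developer."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(test_case_cls)
    result = unittest.TestResult()
    suite.run(result)
    return result

outcome = run_component_suite(LoginComponentTests)
print(f"ran={outcome.testsRun} failures={len(outcome.failures)}")
```

The returned result object carries per-test failures and errors, which is exactly the data a framework needs to route reports back to individual developers.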

Suggested Automation Framework

As discussed in the previous section, automation frameworks can play a crucial role in simplifying the development workflow. An automation framework for this purpose can be developed along the lines of the pictorial representation included in the original article.

Conclusion

In this article, the authors introduced the concept of Test Driven Development (TDD) and the steps involved in implementing it in the development workflow. The article also discussed the challenges development teams face while working with TDD. The authors emphasized the role of automation engineers and automation frameworks in easing the adoption of TDD, and shared tips on building automation frameworks with the help of a pictorial representation.

References

– Wikipedia, the free encyclopedia – article on ‘Test Driven Development’.
– Parasoft – article on ‘ALM Best Practices’.

Click to download the article: Applying Automation in Test Driven Development

TechBytes: Extended Validation SSL Certificates

Extended Validation SSL Certificates: a new standard to inspire trust, improve confidence

Author: Manish Gupta

E-commerce and online banking face a crisis of confidence. User trust in site security is declining, and an increasing number of Internet users are scaling back their online transactions.

To regain user trust in the web and in online transactions, certificate authorities (CAs) and the CA/Browser Forum have evolved a new, more user-friendly trust mechanism. Here is a brief on why SSL is losing its identity promise in today’s web model, and how CAs and the CA/Browser Forum are trying to regain user trust.

The Erosion of SSL’s Identity Promise:

Secure Sockets Layer (SSL) is the world standard for web security. SSL technology confronts the potential problems of unauthorized viewing of confidential information, data manipulation, data hijacking, phishing, and other insidious threats.

When SSL was introduced in 1995, a standard SSL Certificate provided adequate protection for consumers. Times have changed: web scams have become more sophisticated, and these traditional certificates may no longer be adequate.

According to a Gartner report, in 2006 41.2% of online adults in the U.S. received phishing emails, 46% changed their purchasing and online behaviour as a direct result of security concerns, and 10% reduced their online spending by at least 50%. As a result, nearly $2 billion in e-commerce sales were lost due to user concern over security.

In the beginning, the promise of a standard SSL Certificate was enough. Today, however, it is not. The reason lies in the architecture of identity authentication. While some CAs do a very good job of authenticating identity, others do very little or employ easily fooled practices. A site can even use a self-signed SSL Certificate with no identity authentication whatsoever.

To combat this problem, the CA/Browser Forum (consisting of over 20 leading web browser manufacturers, SSL Certificate providers, and WebTrust auditors) joined forces to create a new standard for web site identity authentication. After more than a year of effort, the CA/Browser Forum introduced the new Extended Validation (EV) SSL Certificate. This new standard is the most significant advancement for the World Wide Web’s secure backbone since SSL Certificates were first introduced over a decade ago.

Extended Validation SSL

Extended Validation SSL Certificates offer web sites a better method for assuring their visitors of their legitimate identity.

Extended Validation (EV) SSL certificates are the result of an industry-wide effort to help increase identity awareness and provide consumers with a higher level of trust while online. These new certificates require businesses to complete a thorough documentation process and verify current business licensing and incorporation paperwork, in addition to verifying that the entity named in the EV certificate has authorized the issuance of the EV certificate.

An EV SSL Certificate offers e-commerce (online) businesses and consumers a highly endorsed and widely recognized level of protection from increasingly sophisticated Internet spoofing scams.

EV SSL contains a number of user interface enhancements aimed at making the identification of an authenticated site immediately more noticeable to the end user. New high-security browsers display EV SSL Certificates differently than traditional SSL Certificates. Rather than the subtle padlock symbol displayed by traditional SSL Certificates, EV SSL Certificates trigger the browser address bar in high-security browsers to change to an eye-catching green colour.

How Extended Validation Works

The EV architecture has been designed to offer reliable Web site identity information to end consumers so that they can make the best possible decisions about which sites to trust. Achieving this mission has required modification to every component of the Web’s trust architecture. In addition to the new, highly understandable interface conventions, EV certificates owe their dependability to:

1) Modifications in authentication procedures and
2) Real-time certificate checking.

1. The first step is authentication. This procedure ensures that all information in the certificate is accurate and that the certificate requestor has the authority to obtain this certificate for the organization.

The CA/Browser Forum carefully crafted the EV authentication guidelines over the course of more than a year to ensure that the results of authentication were reliable.

2. Real-time certificate checking: Once a certificate is issued, the next step is to ensure that the certificate presented to the customer accurately reflects what the CA discovered, and that certificates purporting to meet the EV authentication standard actually do so. Certificate integrity is assured because every SSL Certificate includes secure hash functions and will not work correctly if tampered with in any way.
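The tamper-detection property of a secure hash can be demonstrated in a few lines of Python; the "certificate contents" below are, of course, illustrative strings rather than real certificate data:

```python
import hashlib

# Hash the (illustrative) certificate contents as issued by the CA.
original = b"serial=01; subject=Example Corp; valid-until=2026-01-01"
digest = hashlib.sha256(original).hexdigest()

# Any tampering, however small, yields a completely different digest,
# so a signed hash lets the browser detect a modified certificate.
tampered = b"serial=01; subject=Evil Corp; valid-until=2026-01-01"
print(digest == hashlib.sha256(tampered).hexdigest())  # False
```

In a real certificate the CA signs this hash with its private key, so an attacker cannot recompute a matching digest for altered contents.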

The EV infrastructure goes on to ensure that the certificate exists in good standing by using real-time certificate validity checking. This checking depends on two parallel infrastructures.

A. The first is OCSP (Online Certificate Status Protocol): OCSP performs a real-time revocation check for each certificate, so that if an EV certificate is compromised or for some other reason requires revocation, that certificate will not appear as valid on EV-compatible browsers.

B.  The second real-time service is the Microsoft® Root Store. A very simple metadata marker indicates each EV certificate’s status as such. To protect against the contingency that an unprincipled or incompetent CA might incorrectly issue certificates marked as EV certificates even though they haven’t undergone correct EV authentication, the IE7 browser performs a real-time check against the Microsoft Root Store to ensure that this SSL root is approved for EV certificates.

Because of this check, if a CA were to issue certificates with the EV marker even though that CA was not approved to issue EV certificates, those certificates still would not activate the green address bar and the other EV interface enhancements. Likewise, if an existing CA were to fail its annual audit or repeatedly issue incorrect certificates under the EV banner, Microsoft would then have the ability to remove that root from the list of approved EV roots in the Microsoft Root Store.
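The identity fields that browsers surface from a certificate can be inspected with Python's standard ssl module. This sketch shows the shape of the data getpeercert() returns; the sample values are illustrative, and fetch_peer_cert() is included only to show how a live (network-dependent) lookup would be done:

```python
import socket
import ssl

def fetch_peer_cert(host, port=443):
    """Connect over TLS and return the server's verified certificate."""
    ctx = ssl.create_default_context()  # validates the chain against trusted roots
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

def name_fields(cert, field="subject"):
    """Flatten the nested RDN tuples from getpeercert() into a dict."""
    return {key: value for rdn in cert.get(field, ()) for key, value in rdn}

# Sample of the structure getpeercert() returns (not a live lookup):
sample = {"subject": ((("countryName", "US"),),
                      (("organizationName", "Microsoft Corporation"),),
                      (("commonName", "www.microsoft.com"),))}
print(name_fields(sample)["organizationName"])  # Microsoft Corporation
```

The subject's organization and country are exactly the fields an EV-aware browser promotes into the green address bar and security status bar.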

Browser Support for EV SSL

Microsoft, the first browser manufacturer to support this new standard, integrated the EV SSL interface enhancement with Microsoft IE7. Although relatively new to the market, IE7 has already garnered 31% of the browser market. Additionally, Firefox 2.0 users can download an extension that enables them to see the green address bar when they encounter a VeriSign EV SSL Certificate.

Figure 1 (in the original article) highlights: 1. the IE7 address bar, 2. the SSL padlock, and 3. the IE7 security status bar, which alternates between displaying the legal entity behind the website and the CA that verified that identity.

In addition to changing the address bar shading, EV certificates display details about the business, such as its place of incorporation and country. Figure 2 provides an example showing Microsoft Corporation and its location, Redmond (US).

References

Microsoft – http://www.microsoft.com

Certificate Authority Websites