Testing Enhancement for Existing Feature, New Requirements & Resolved Issues

I was asked this question a while back by one of my internet friends: he needed to test three different things in the next release, i.e.
·         Enhancement for already existing feature
·         New requirements
·         Resolved issues.
So how should I proceed with testing, and what should be the order for testing everything?
This is a good question, and I think in the current agile world of software development this situation comes up often.
I would suggest that in this situation you do a risk assessment, considering the time frame you have for the next release, and then decide what to test first.
In my view, the ideal order for the above scenario is to test the new requirements first, followed by the enhancements to the already existing feature in the project/module, and finally to run a regression test to verify the resolved issues and the stability of the application.
The order that is taken above will depend on the risks and time frame available.
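One way to make that risk assessment concrete is to score each candidate test area by impact and likelihood of failure, then order the work by score. The areas map to the three items above, but the numeric weights below are invented for illustration:

```python
# Rank test areas by a simple risk score: impact x likelihood of failure.
# The weights are hypothetical; in practice the team assigns them together.
test_areas = [
    {"name": "new requirements",             "impact": 5, "likelihood": 4},
    {"name": "feature enhancements",         "impact": 4, "likelihood": 3},
    {"name": "resolved issues (regression)", "impact": 3, "likelihood": 2},
]

for area in test_areas:
    area["risk"] = area["impact"] * area["likelihood"]

# Test the riskiest areas first, while the release window is still open.
ordered = sorted(test_areas, key=lambda a: a["risk"], reverse=True)
for area in ordered:
    print(f'{area["name"]}: risk={area["risk"]}')
```

With these sample weights the ordering matches the suggestion above: new requirements first, then enhancements, then regression on resolved issues.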
I hope this helps in making the decision in the scenario explained above.
Happy Testing,
Javed Nehal

10 Requirements Traps to Avoid

The path to quality software begins with excellent requirements. Slighting the processes of requirements development and management is a common cause of software project frustration and failure. This article describes ten common traps that software projects can encounter if team members and customers don’t take requirements seriously. I describe several symptoms that might indicate when you’re falling victim to each trap, and I offer several solutions to control the problem.
Trap #1: Confusion over “Requirements”
Symptoms: The simple word “requirements” means different things to different people. To an executive, “requirements” might mean a high-level product/project concept or business vision, while a developer’s “requirements” might look suspiciously like detailed user interface designs. One symptom of potential problems is that project stakeholders refer to “the requirements” with no qualifying adjectives. The project participants, therefore, will likely have different expectations of how much detail to expect in the requirements.
Solutions: The first step is to recognize that there are several types of requirements, all legitimate and all necessary. A second step is to educate all project participants about key requirements engineering concepts, terminology, and practices.
Trap #2: Inadequate Customer Involvement
Symptoms: Despite considerable evidence that it doesn’t work, many projects seem to rely on telepathy as the mechanism for communicating requirements from users to developers. Users sometimes believe that the developers should already know what users need, or that technical stuff like requirements development doesn’t apply to users. Often, users claim to be too busy to spend the time it takes to iteratively gather and refine the requirements. (Isn’t it funny how we never have time to do things right, but somehow we always find the time to do them over?)
One indication of inadequate customer involvement is that user surrogates (such as user managers, marketing staff, or software developers) supply all of the input to requirements. Another clue is that developers have to make many requirements decisions without adequate information and perspective. If you’ve overlooked or neglected to gather input from some of the product’s likely user classes, someone will be unhappy with the delivered product. On one project I heard about, the customers rejected the system as unacceptable the first time they saw it, which was at its initial rollout. This is a strong—but late and painful—indication of inadequate customer involvement in requirements development.
Solutions: Begin by identifying your various user classes. User classes are groups of users who differ in their frequency of using the product, the features they use, their access privilege level, or in other ways. (See “User-Driven Design” by Donald Gause and Brian Lawrence in STQE, January/February 1999, for an excellent discussion of user classes.)
An effective technique is to identify individual “product champions” to represent specific user classes. Product champions collect input from other members of their user class, supply the user requirements, and provide input on quality attributes and requirement priorities.
This approach is particularly valuable when developing systems for internal corporate use; for commercial product development, it might be easier to convene focus groups of representative users. Focus group participants can provide a broad range of input on desired product features and characteristics. The individuals you select as user representatives can also evaluate any prototypes you create, and review the SRS for completeness and accuracy. Strive to build a collaborative relationship between your customer representatives and the development team.
Trap #3: Vague and Ambiguous Requirements
Symptoms: Ambiguity is the great concern of software requirements. You have encountered ambiguity if a requirement statement can have several different meanings and you’re not sure which is correct. Ambiguity is even more harmful when multiple readers interpret a requirement in different ways. Each reader’s interpretation may seem correct, so the ambiguity remains undetected until later—when it’s more expensive to resolve.
Another hint that your requirements are vague or incomplete is that the SRS is missing information the developers need. If a tester can’t think of test cases to verify whether each requirement was properly implemented, your requirements are not sufficiently well defined. Developers might assume that whatever they’ve been given in the form of requirements is a definitive and complete product description, but this is a risky assumption.
The ultimate symptom of vague requirements is that developers have to ask the analyst or customers many questions, or they have to guess about what is really intended. The extent of this guessing game might not be recognized until the project is far along and implementation has diverged from what is really required. At this point, expensive rework may be needed to bring things back into alignment.
Solutions: Avoid using intrinsically subjective and ambiguous words when you write requirements. Terms like minimize, maximize, optimize, rapid, user-friendly, easy, simple, often, normal, usual, large, intuitive, robust, state-of-the-art, improved, efficient, and flexible are particularly dangerous. Avoid “and/or” and “etc.” like the plague. Requirements that include the word “support” are not verifiable; define just what the software must do to “support” something. It’s fine to include “TBD” (to be determined) markers in your SRS to indicate current uncertainties, but make sure you resolve them before proceeding with design and construction.
To ferret out ambiguity, have a team that represents diverse perspectives formally inspect the requirements documents. Suitable inspectors include:
  • the analyst who wrote the requirements
  • the customer or marketing representative who supplied them (particularly for use case reviews)
  • a developer who must implement them
  • a tester who must verify them
Another powerful technique is to begin writing test cases early in requirements development. Writing conceptual test cases against the use cases and functional requirements crystallizes your vision of how the software should behave under certain conditions. This practice helps reveal ambiguities and missing information, and it also leads to a requirements document that supports comprehensive test case generation.
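To illustrate, here is a minimal sketch of conceptual test cases written against a single hypothetical requirement (the requirement, its ID, and the `is_locked` helper are all invented for this example, not taken from the article):

```python
# Hypothetical requirement REQ-7: "Lock the account after three
# consecutive failed login attempts."
# Writing conceptual test cases early exposes questions the requirement
# doesn't answer (does a success reset the counter? how long is the lock?).

def is_locked(attempt_results, max_failures=3):
    """Sketch of the intended behavior: the account locks once
    `max_failures` consecutive failures occur. True = successful login."""
    consecutive = 0
    for ok in attempt_results:
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= max_failures:
            return True
    return False

# Conceptual test cases (condition -> expected outcome):
assert is_locked([False, False, False]) is True        # three straight failures
assert is_locked([False, False, True, False]) is False # a success resets the count
assert is_locked([True, True]) is False                # no failures at all
```

Even these three tiny cases force a decision the one-sentence requirement left open: whether a successful login resets the failure counter.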
Consider developing prototypes; they make the requirements more tangible than does a lifeless textual SRS. Create a partial, preliminary, or possible implementation of a poorly understood portion of the requirements to clarify gaps in your knowledge. Analysis models such as data flow diagrams, entity-relationship diagrams, class and collaboration diagrams, state-transition diagrams, and dialog maps provide alternative and complementary views of requirements that also reveal knowledge gaps.
Trap #4: Unprioritized Requirements
Symptoms: “We don’t need to prioritize requirements,” said the user representative. “They’re all important, or I wouldn’t have given them to you.” Declaring all requirements to be equally critical deprives the project manager of a way to respond to new requirements and to changes in project realities (staff, schedule, quality goals). If it’s not clear which features you could defer during the all-too-common “rapid descoping phase” late in a project, you’re at risk from unprioritized requirements.
Another symptom of this trap is that more than 90% of your requirements are classified as high priority. Various stakeholders might interpret “high” priority differently, leading to mismatched expectations about what functionality will be included in the next release. Sometimes developers balk at prioritizing requirements because they don’t want to admit they can’t do it all in the time available. Often users are also reluctant to prioritize because they fear the developers will automatically restrict the project to the highest priority items and the others will never be implemented. They might be right about that, but the alternatives can include software that is never delivered and having ill-informed people make the priority trade-off decisions.
Solutions: The relative implementation priority is an important attribute of each use case, feature, or individual functional requirement. Align use cases with business requirements, so you know which functionality most strongly supports your key business objectives. Your high-priority use cases might be based on:
  • The anticipated frequency or volume of usage
  • Satisfying your most favored user classes
  • Implementing core business processes
  • Functionality demanded by regulatory compliance
If you derived functional requirements from the use case descriptions, this alignment helps you implement the truly essential functionality first. Allocate each requirement or feature to a specific build or release.
Many organizations use a three-level prioritization scale. If you do, define the priority categories clearly to promote consistent classification and common expectations. A more robust solution is to analytically prioritize discretionary requirements, based on their projected customer value and the estimated cost and technical risk associated with construction. (A spreadsheet to assist with this approach is available online; see this article’s Web Infolink for more information.)
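In the absence of the spreadsheet, the value/cost/risk calculation it performs can be sketched like this (the feature names and all ratings below are invented for illustration):

```python
# Simplified analytical prioritization: rate each feature's benefit
# (if present), penalty (if absent), cost, and risk, then compute
# priority as relative value divided by relative cost plus risk.
# All names and numbers are hypothetical.
features = {
    "export to PDF":   {"benefit": 9, "penalty": 3, "cost": 4, "risk": 2},
    "audit logging":   {"benefit": 5, "penalty": 8, "cost": 6, "risk": 3},
    "animated splash": {"benefit": 2, "penalty": 1, "cost": 3, "risk": 1},
}

total_value = sum(f["benefit"] + f["penalty"] for f in features.values())
total_cost  = sum(f["cost"] for f in features.values())
total_risk  = sum(f["risk"] for f in features.values())

priorities = {}
for name, f in features.items():
    value_pct = (f["benefit"] + f["penalty"]) / total_value * 100
    cost_pct  = f["cost"] / total_cost * 100
    risk_pct  = f["risk"] / total_risk * 100
    # Higher value relative to cost and risk => higher priority.
    priorities[name] = value_pct / (cost_pct + risk_pct)

ranked = sorted(priorities, key=priorities.get, reverse=True)
print(ranked)
```

The point is not the exact formula but that the ranking is argued from numbers the stakeholders supplied, rather than from whoever shouts loudest.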
Trap #5: Building Functionality No One Uses
Symptoms: I’ve experienced the frustration of implementing features that users swore they needed, then not seeing anyone use them. I could have spent that development time much more constructively. Beware of customers who don’t distinguish glitzy user interface “chrome” from the essential “steel” that must be present for the software to be useful. Also beware of developer gold plating, which adds unnecessary functionality that “the users are just going to love.” In short, watch out for proposed functionality that isn’t clearly related to known user tasks or to achieving your business goals.
Solutions: Make sure you can trace every functional requirement back to its origins, such as a specific use case, higher-level system requirement, business rule, industry standard, or government regulation. If you don’t know where a requirement came from, question whether you really need it. Identify the user classes that will benefit from each feature or use case.
Deriving the functional requirements from use cases is an excellent way to avoid orphan functionality that just seems like a cool idea. Analytically prioritizing the requirements, use cases, or features also helps you avoid this trap. Have customers rate the value of each proposed feature, based on the relative customer benefit provided if it is present—and the relative penalty if it is not. Then have developers estimate the relative cost and risk for each feature. Use the spreadsheet mentioned under Trap #4 to calculate a range of priorities, and avoid those requirements that incur a high cost but provide relatively low value.
Trap #6: Analysis Paralysis
Symptoms: If requirements development seems to go on forever, you might be a victim of analysis paralysis. Though less common than skimping on the requirements process, analysis paralysis results when the view prevails that construction cannot begin until the SRS is complete and perfect. New versions of the SRS are released so frequently that version numbers resemble IP addresses, and a requirements baseline is never established. All requirements are modeled six ways from Sunday, the entire system is prototyped, and development is held up until all requirement changes cease.
Solutions: Your goal is not to create a perfect SRS, but to develop a set of clearly expressed requirements that permit development to proceed at
acceptable risk. If some requirements are uncertain, select an appropriate development lifecycle that will let you implement portions of the requirements as they become well understood. (Some lifecycle choices include the spiral model, staged release, evolutionary prototyping, and time-boxing.) Flag any knowledge gaps in your SRS with “TBD” markers, to indicate that proceeding with construction of those parts of the system is a high-risk activity.
Identify your key decision-makers early in the project, so you know who can resolve issues to let you break out of the paralysis and move ahead with development. Those who must use the requirements for subsequent work (design, coding, testing, writing user documentation) should review them to judge when it’s appropriate to proceed with implementation. Model and prototype just the complex or poorly understood parts of the system, not the whole thing. Don’t make prototypes more elaborate than necessary to resolve the uncertainties and clarify user needs.
Trap #7: Scope Creep
Symptoms: Most projects face the threat of scope creep, in which new requirements are continually added during development. The Marketing department demands new features that your competitors just released in their products. Users keep thinking of more functions to include, additional business processes to support, and critical information they overlooked initially. Typically, project deadlines don’t change, no more resources are provided, and nothing is deleted to accommodate the new functionality.
Scope creep is most likely when the product scope was never clearly defined in the first place. If new requirements are proposed, rejected, and resurface later—with ongoing debates about whether they belong in the system—your scope definition is probably inadequate.
Requirement changes that sneak in through the back door, rather than through an established and enforced change control process, lead to the schedule overruns characteristic of scope creep. If Management’s sign-off on the requirements documents is just a game or a meaningless ritual, you can expect a continuous wave of changes to batter your project.
Solutions: All projects should expect some requirements growth, and your plans should include buffers to accommodate such natural evolution. The first question you should ask when a new feature, use case, or functional requirement is proposed is: “Is this in scope?” To help you answer this question, document the product’s vision and scope and use it as the reference for deciding which proposed functionality to include.
Apparent scope creep often indicates that requirements were missed during elicitation, or that some user classes were overlooked. Using effective requirements gathering methods early on will help you control scope creep. Also, establish a meaningful process for baselining your requirements specifications. All participants must agree on what they are saying when they approve the requirements, and they must understand the costs of making changes in the future. Follow your change control process for all changes, recognizing that you might have to renegotiate commitments when you accept new requirements.
Trap #8: Inadequate Change Process
Symptoms: The most glaring symptom of this trap is that your project doesn’t have a defined process for dealing with requirements changes. Consequently, new functionality might become evident only during system or beta testing. Even if you have a change process in place, some people might bypass it by talking to their buddies on the development team to get changes incorporated. Developers might implement changes that were already rejected or work on proposed changes before they’re approved. Other clues that your change process is deficient are that it’s not clear who makes decisions about proposed changes, change decisions aren’t communicated to all those affected, and the status of each change request isn’t known at all times.
Solutions: Define a practical change control process for your project. You can supplement the process with a problem- or issue-tracking tool to collect, track, and communicate changes. However, remember that a tool is not a substitute for a process. Set up a change control board (CCB) to consider proposed changes at regular intervals and make binding decisions to accept or reject them. (See “How to Control Software Changes” by Ronald Starbuck in STQE, November/December 1999, for more about the CCB.) The CCB shouldn’t be any larger or more formal than necessary to ensure that changes are processed effectively and efficiently. Establish and enforce realistic change control policies. Compare the priority of each proposed requirement change against the body of requirements remaining to be implemented.
Trap #9: Insufficient Change Impact Analysis
Symptoms: Sometimes developers or project managers agree to make suggested changes without carefully thinking through the implications. The change might turn out to be more complex than anticipated, take longer than promised, be technically or economically infeasible, or conflict with other requirements. Such hasty decisions reflect an insufficient analysis of the impact of accepting a proposed change. Another indication of inadequate impact analysis is that developers keep finding more affected system components as they implement the change.
Solutions: Before saying “sure, no problem,” systematically analyze the impact of each proposed change. Understand the implications of accepting the change, identify all associated tasks, and estimate the effort and schedule impact. Every change will consume resources, even if it’s not on the project’s critical path. Use requirements traceability information to help you identify all affected system components. Provide estimates of the costs and benefits of each change proposal to the CCB before they make commitments. (A checklist and planning worksheet to assist with requirements change impact analysis is available online; see this article’s Web Infolink for more information.)
Trap #10: Inadequate Version Control
Symptoms: If accepted changes aren’t incorporated into the SRS periodically, project participants won’t be sure exactly what is in the requirements baseline at any given time. If team members can’t distinguish different versions of the requirements documents with confidence, your version control practices are falling short. A developer might implement a canceled feature because she didn’t receive an updated SRS. I know of a project that experienced a spate of spurious defect reports because the system testers were testing against an obsolete version of the SRS.
Using the document’s date to distinguish versions is risky. The dates might be the same even though the documents are different (if you made changes more than once in a day), and identical documents can have different “date printed” labels. If you don’t have a reliable change history for your SRS, and earlier document versions are gone forever, you’re caught in this trap.
Solutions: Periodically merge approved changes into the SRS and communicate the revised SRS to all who are affected. Adopt a versioning scheme for documents that clearly distinguishes drafts from baselined versions. A more robust solution is to store the requirements documents in a version control tool. Restrict read/write access to a few authorized individuals, but make the current versions available in read-only form to all project stakeholders. Even better, store your requirements in the database of a commercial requirements management tool. In addition to many other capabilities, such tools record the complete history of every change made in every requirement.
Keys to Excellent Software Requirements
While these ten traps aren’t the only ones lurking in the requirements minefield, they are among the most common and most severe. To avoid or control them, assemble a robust toolkit of practices for eliciting, analyzing, specifying, verifying, and managing a product’s requirements:
  • Educating developers, managers, and customers about requirements engineering practices and the application domain
  • Establishing a collaborative customer-developer partnership for requirements development and management
  • Understanding the different kinds of requirements and classifying customer input into the appropriate categories
  • Taking an iterative and incremental approach to requirements development
  • Using standard templates for your vision and scope, use case, and SRS documents
  • Holding formal and informal reviews of requirements documents
  • Writing test cases against requirements
  • Prioritizing requirements in some analytical fashion
  • Instilling the team and customer discipline to handle requirements changes consistently and effectively

These approaches will help your next product’s requirements provide a solid foundation for efficient construction and a successful rollout.

Courtesy – ProcessImpact


Javed Nehal

Top 10 Estimation Best Practices in Agile

1. Use more than one person – By engaging the team in the estimation process we gain the benefits of additional insights and consensus building. Additional people bring different perspectives to estimating and spot things individuals may miss. Also, the involvement in the process generates better consensus and commitment for the estimates being produced.

2. Use more than one approach – Just as one person is likely to miss perspectives of estimating so too are single approaches. Use multiple estimation approaches (comparison to similar projects, bottom up, user story points, etc) and look for convergence between multiple approaches to reinforce likely estimate ranges.

3. Agree on what “It” and “Done” mean – Make sure everyone is estimating in the same units (e.g. ideal days), shares the same assumptions, and bases estimates on standard developer ability/effort. When asking for estimates, spell out what you are asking them to estimate. What does “Done” include? Coded, unit tested? How about integrated and system tested? What about refactoring contingencies? User meeting time?

4. Know when to stop – estimating an inherently unpredictable process (custom software development with evolving requirements) will never be an exact science. Balance enough effort against the diminishing returns and false accuracies of over-analysis. Look for broad consensus between team members at a coarse-grained level and then move on. It is better to save estimation time for periodic updates than over analyze.

5. Present estimates as a range – We call them “estimates” not “predictions” because they have a measure of uncertainty associated with them. Manage the expectations of project stakeholders and present estimates as a range of values, e.g. between $90,000 and $120,000.
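One common way to produce such a range is a three-point (PERT-style) estimate per task: expected effort is (optimistic + 4 × most likely + pessimistic) / 6, with a rough standard deviation of (pessimistic − optimistic) / 6. The task names and figures below are invented for illustration:

```python
# PERT-style three-point estimation over a hypothetical task list.
# Each tuple: (task, optimistic, most likely, pessimistic) in ideal days.
tasks = [
    ("login screen", 2, 3, 6),
    ("reporting",    5, 8, 15),
    ("data import",  3, 5, 10),
]

expected = sum((o + 4 * m + p) / 6 for _, o, m, p in tasks)
# Task standard deviations add in quadrature (assuming independent tasks).
std_dev = sum(((p - o) / 6) ** 2 for _, o, m, p in tasks) ** 0.5

# Quote roughly two standard deviations either side as the range.
low, high = expected - 2 * std_dev, expected + 2 * std_dev
print(f"Estimate: {low:.1f} to {high:.1f} ideal days")
```

Presenting the low and high bounds together, rather than the single expected value, is exactly the expectation management this practice calls for.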

6. Defend/explain estimate range probabilities – If stakeholders automatically latch onto the low end of an estimated range explain the low probability of achieving this and steer them to a more likely value. If your organization persistently fails to understand, present a range of likely values (e.g. around the 50% to 97% probability range)

7. Don’t reserve estimating for when you know least about the project – Estimation should not be reserved for the beginning of projects. Instead, it should be done throughout, as we learn more about the emerging true requirements and the team’s ability to build and evaluate software.

8. Be aware of common estimation omissions – Consult lists of common estimating omissions (such as Capers Jones’) and ensure these items are taken into account. Look back at retrospective notes for things that did not go so well, and tasks that were missed or ran late – make sure we include enough time for these.

9. Embrace reality early – As the project progresses, it is tempting to think development will get faster and faster now that all the technical problems have been overcome. However, don’t underestimate the load of maintaining and refactoring a growing code base. Especially if the system is now live, support, maintenance, test harness updates, and refactoring can quickly erode the velocity improvements anticipated, so use the real velocity numbers.

10. Review, Revisit, Remove head from the sand, Repeat – Our first estimates will likely be our worst. Don’t leave it there; review the project velocities to see how fast we are really going. Revisit the estimates armed with the real velocities to determine likely end dates. Embrace the reality you see, “The map is not the territory”, reprioritize and repeat the estimation process often to adapt and iterate to the most accurate estimates.
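Revisiting estimates with real velocities can be as simple as dividing remaining work by observed throughput, quoting best and worst observed sprints as the range. The numbers below are invented for illustration:

```python
import math

# Forecast completion from observed sprint velocities, not from
# hoped-for future speed-ups. All figures are hypothetical.
velocities = [18, 22, 20, 19]   # story points completed per sprint
remaining_points = 120

avg = sum(velocities) / len(velocities)
worst, best = min(velocities), max(velocities)

sprints_likely = math.ceil(remaining_points / avg)
sprints_worst  = math.ceil(remaining_points / worst)
sprints_best   = math.ceil(remaining_points / best)
print(f"Likely {sprints_likely} sprints (range {sprints_best}-{sprints_worst})")
```

Repeating this after every sprint, with the updated velocity history, is the "review, revisit, repeat" loop in practice.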


Javed Nehal

Restrict Removable Storage Devices Using Group Policy in Windows Server 2008

In a Windows Server 2008 domain, there is a set of built-in policies on removable storage access and installation. This makes restricting USB mass storage devices easier.

Open the Group Policy Management console on your Windows Server 2008 machine and follow these steps:

1. Computer Configuration–>Policies–>Administrative Templates–>System–>Removable Storage Access
    User Configuration–>Policies–>Administrative Templates–>System–>Removable Storage Access
These settings specify read and write permissions on all kinds of removable storage devices.
2. Computer Configuration–>Policies–>Administrative Templates–>System–>Device Installation–>Device Installation Restrictions
With device installation restrictions, the installation of removable storage devices is fully under control.


Why does Scrum Work?

1. The basic premise is that if you are committed to the team and the project, and if your boss really trusts you, then you can spend time being productive instead of justifying your work.
2. This reduces the need for meetings, reporting, and authorization.
3. There is control, but it is subtle and mostly indirect.
4. It is exercised by selecting the right people, creating an open work environment, encouraging feedback, establishing an evaluation and reward program based on group performance, managing the tendency to go off in different directions early on, and tolerating mistakes.
5. Every person on the team starts with an understanding of the problem, associates it with a range of solutions experienced and studied, then using skill, intelligence, and experience will narrow the range to one or a few options.
6. Keep in mind that it can be difficult to give up the control that it takes to support the Scrum methodology.
7. The approach is risky; there is no guarantee that the team will not run up against real limits, which could kill the project.
8. The disappointment of the failure could adversely affect the team members because of the high levels of personal commitment involved.
9. Each person on the team is required to understand all of the problem and all of the steps in developing a system to solve it; this may limit the size of the system developed using the methodology.

How does Scrum work?

•  The first thing that happens is the initial leader will become primarily a reporter.
•  The leadership role will bounce around within the team based on the task at hand.
•  Soon QA developers will be learning how requirements are done and will be actively contributing, and requirements people will be seeing things from a QA point of view.
•  As work is done in each of the phases, the whole team learns and contributes; no work is done alone, and the team is behind everything.
•  From the initial meeting, the finished product is being developed.
•  Someone can be writing code, working on functional specifications, and designing during the same day, i.e. “all-at-once”.
•  Don’t be surprised if the team cleans the slate numerous times, many new ways will be picked up and many old ways discarded.
•  The team will become autonomous, and will tend to transcend the initial goals, striving for excellence.
•  The people on the team will become committed to accomplish the goal and some members may experience emotional pain when the project is completed.


•  Scrum is an agile process to manage and control development work.
•  Scrum is a wrapper for existing engineering practices.
•  Scrum is a team-based approach to iteratively and incrementally developing systems and products when requirements are rapidly changing.
•  Scrum is a process that controls the chaos of conflicting interests and needs.
•  Scrum is a way to improve communications and maximize co-operation.
•  Scrum is a way to detect and cause the removal of anything that gets in the way of developing and delivering products.
•  Scrum is a way to maximize productivity.
•  Scrum is scalable from single projects to entire organizations. Scrum has controlled and organized development and implementation for multiple interrelated products and projects with over a thousand developers and implementers.
•  Scrum is a way for everyone to feel good about their job, their contributions, and that they have done the very best they possibly could.


Scrum naturally focuses an entire organization on building successful products. Without major changes, teams are often building useful, demonstrable product functionality within thirty days. Scrum can be implemented at the beginning of a project or in the middle of a project or product development effort that is in trouble.

Scrum is a set of interrelated practices and rules that optimize the development environment, reduce organizational overhead, and closely synchronize market requirements with iterative prototypes. Based on modern process control theory, Scrum causes the best possible software to be constructed given the available resources, acceptable quality and required release dates. Useful product functionality is delivered every thirty days as requirements, architecture, and design emerge, even when using unstable technologies.

Smoke Testing Vs Sanity Testing

Smoke Test:

When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing.

Smoke testing can be done for testing the stability of any interim build.

Smoke testing can be executed for platform qualification tests.

Sanity testing:

Once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the issues and that no further issues have been introduced by the fixes. It is generally a subset of regression testing, and a group of test cases related to the changes made to the app is executed.

Generally, when multiple cycles of testing are executed, sanity testing may be done during the later cycles, after thorough regression cycles.






Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested, without going too deep into any one of them.

A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.


A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.

A sanity test is used to determine whether a small section of the application is still working after a minor change.


Smoke testing is conducted to check whether the most crucial functions of a program work, without bothering with finer details (such as build verification).

Sanity testing is cursory testing; it is performed whenever a cursory check is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
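The shallow-and-wide idea can be sketched as one cursory check per major area of the application; the area names and checks below are hypothetical stand-ins for real entry points.

```python
# Hypothetical "does this area respond at all" checks, one per major
# area of the application; a real smoke test would hit real entry points.
AREA_CHECKS = {
    "login": lambda: True,
    "search": lambda: True,
    "checkout": lambda: True,
    "reports": lambda: True,
}

def smoke_test(checks):
    """Run one cursory check per area; the build is stable only if all pass."""
    failures = [name for name, check in checks.items() if not check()]
    return {"stable": not failures, "failures": failures}

print(smoke_test(AREA_CHECKS))  # {'stable': True, 'failures': []}
```

If any area fails, the build is rejected and deeper testing is not started, which is exactly the gatekeeping role described above.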


Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing.


Sanity testing is done to verify whether the requirements are met or not; smoke testing checks all features breadth-first.



Javed Nehal

Enabling Email Functionality in the Team System Web Access (TSWA)

By default, the email functionality in Team System Web Access (TSWA) is disabled and users will receive the following message when trying to use it:

“Sending email is not enabled. Please contact your administrator.”

Email Functionality

To enable it, you need to change TSWA's web.config file, which can be found at the following path: \Program Files\Microsoft Visual Studio 2005/2008 Team System Web Access\Web.

  • Change the setting “sendingEmailEnabled” to true.
  • Specify your SMTP server name under “host”.

 <emailSettings sendingEmailEnabled="true" enableSsl="false" />

      <smtp deliveryMethod="network" from="[email protected]">
        <network host="" port="25" defaultCredentials="true" />
      </smtp>

Optionally, you can specify which account to use when authenticating with the SMTP server, whether SSL should be enabled, and the default email address for the sender.
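For illustration only, here is a rough Python sketch of what those settings correspond to at the mail-protocol level; the server name, addresses, and message content are placeholders, not values taken from TSWA itself.

```python
import smtplib
from email.message import EmailMessage

def build_alert_mail(work_item_id, sender, recipient):
    """Compose a simple work-item notification message (placeholder content)."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = f"Work Item {work_item_id} updated"
    msg.set_content(f"Work item {work_item_id} has been updated.")
    return msg

def send_alert(msg, host, port=25, use_ssl=False, user=None, password=None):
    """Deliver the message; host/port, use_ssl, and the optional login
    mirror the host, enableSsl, and credential settings above."""
    with smtplib.SMTP(host, port) as smtp:
        if use_ssl:
            smtp.starttls()              # roughly what enableSsl="true" implies
        if user is not None:
            smtp.login(user, password)   # explicit account instead of defaultCredentials
        smtp.send_message(msg)

# Only the message-building step runs here; send_alert needs a live server.
msg = build_alert_mail(123, "tfs@example.com", "dev@example.com")
print(msg["Subject"])  # Work Item 123 updated
```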

Where can you use the email functionality?

 You can send… 
  • single work items (from the work item form)
  • multiple work items, i.e. the result of a work item query

The email function can be found in the “Tools” menu.

Email Functionality 01

The “Send Email” window allows you to specify the sender’s (“From”) and receiver’s (“To”) email address, as well as subject and message.

Email Functionality 02

After hitting Send, a message box confirms that the mail was successfully routed to the email server.

Mail Delivery Failed

This is how the receiver will see the message if it’s about a single work item:
Mail Error
Configure SMTP Server and E-mail Notification Settings in the Services Web.Config File

You can configure Team Foundation Server to use an existing SMTP server to send e-mail alerts. Users can configure alerts for various project, work item, and build event notifications. Although you can specify the SMTP server during Team Foundation Server installation, you might want to change the SMTP server later. Similarly, if you change the application pool service account by using the TFSAdminUtil ChangeAccount command, you must manually change the sender account e-mail address to the new service account’s e-mail address.


The content of Team Foundation Server alert e-mails is not customizable. The content of the e-mails is automatically generated from the TeamFoundation.xsl file. Modifying this file is not recommended. If you do modify the contents of this file, be sure to thoroughly test your modifications. Incorrect modifications of this file can result in the failure of Team Foundation Server e-mail alerts and the inability to view Team Foundation work items, changesets, or files in a Web browser.

Required Permissions

To perform this procedure, you must be a member of the Administrators group on the Team Foundation application-tier server. For more information, see Team Foundation Server Permissions.


Don’t use the ASP.Net tab of the IIS Manager (inetmgr) to edit a configuration file. If you use this tab, an attribute is added to the configuration element of the configuration file. This attribute interferes with normal functioning.

To designate or change the SMTP server for sending e-mail alerts
  1. On the application-tier server for Team Foundation, locate the installation directory for the application tier.
  2. Open the Web Services directory, and then open the Services subdirectory.
  3. In a text or XML editor, open the Web.Config file, and locate the element.
  4. Update the element by typing the fully qualified domain name of the SMTP server. For example, type the following string:
  5. Save and close the Web.Config file.
You must close and restart the Web services application for Team Foundation before your changes will take effect.

To designate or change the sender e-mail address for e-mail alerts

  1. On the application-tier server for Team Foundation, locate the installation directory for the application tier.
  2. Open the Web Services directory, and then open the Services subdirectory.
  3. In a text or XML editor, open the Web.Config file, and locate the element.
  4. Update the element by typing the e-mail address that is associated with the service account (for example, Domain/TFSService) that is used for the application pool identity for Team Foundation. For example, type the following string:
  5. Save and close the file.

You must close and restart the Web services application for Team Foundation before your changes will take effect.