Barringer & Ireland (p. 52) define feasibility analysis as "the process of determining if a business idea is viable... to determine if an idea is worth pursuing." In their model, four considerations make up this analysis, which should be conducted before spending any of the firm's three Ms (money, men, machines) pursuing the idea: (1) Product/Service, (2) Industry/Market, (3) Organizational, and (4) Financial feasibility (p. 54). Before undertaking a feasibility analysis, they suggest preparing a Concept Statement (p. 56). Such a statement "puts a face" on the concept in marketing terms. A Concept Statement is defined as "a preliminary description of a business" and includes the following:
A description of the product or service being offered: ...details the features of the product or service and may include a sketch... A computer-generated simulation of the functionality of the product or service is also helpful."
"The benefits of the product or service:...how the product or service adds value and or solves a problem.
A description of how the product will be positioned relative to similar ones in the market:" Positioning is a perception that forms in the minds of the target market. It is the aggregate perception the market has of a particular company, product, or service in relation to its perceptions of competitors in the same category; positioning is always expressed relative to the position of competitors. The term was coined in 1969 by Al Ries and Jack Trout in the article ""Positioning" is a game people play in today's me-too market place," published in Industrial Marketing, and was later expanded into their ground-breaking first book, "Positioning: The Battle for Your Mind".
A description of how the product or service will be sold and distributed: refers to how the product gets to the customer; for example, point of sale placement or retailing. This fourth P has also sometimes been called Place, referring to the channel by which a product or service is sold (e.g. online vs. retail), which geographic region or industry, to which segment (young adults, families, business people), etc.
Product: Alarm-a-Diaper will market a conventional diaper (much like Huggies or Luvs) that provides an audible and visual alarm when the diaper becomes wet. The diapers will be sold through major retailers such as Meijer, Wal-Mart, and Target. The diapers were developed for Alarm-a-Diaper by world-renowned feces engineer Hank Feeser, using urine-battery technology developed by Dr. Lee in Singapore.
Target Market: parents of not-yet potty-trained children, who benefit from being notified that their baby's diaper requires changing, and caretakers of the elderly who also need diapers.
Why Alarm-a-Diaper? The existing disposable diaper industry has no product on the market with an organic wetness alarm system. Alarm-a-Diaper is a mobile, wireless, self-contained, and easy-to-use system. It offers all the benefits and absorbency of a conventional diaper with the added feature of remotely informing parents when a diaper needs to be changed.
Special Features - No More Guessing: The Alarm-a-Diaper is absolutely "loaded" with helpful and beneficial attributes. Each diaper is powered by an innovative, urine-activated battery that is paper-thin and has a life similar to a typical AA battery. When dampened by urine, this battery triggers an RF notification system that tells the parent or caretaker, through an audible signal, that the diaper is wet. An optional visual alarm is also available, using a small electro-luminescent strip placed under the plastic lining of the diaper. When the battery is activated, both the alarm and this strip provide notification that the diaper needs to be changed.
Management Team: Alarm-a-Diaper is managed by co-founders Hank & Brutti Feeser. Hank has over 40 years' experience in engineering as well as an advanced FE degree from IU. Brutti is an expert at providing urine for testing Dr. Lee's batteries.
Industry/Market Feasibility
attempts to ascertain what is going on in the external environment in which the envisioned product/service will compete. Such analysis is the OT in SWOT analysis, i.e., what opportunities and threats exist in the potential industry and its set of competitors. Analysis in this area is addressed in the Industry and Competitive Analysis for Entrepreneurs section of this Wiki.
Organization Feasibility Analysis
is part of the SW analysis (Strengths & Weaknesses) of the team required to bring the proposed product/service to market. This part of the analysis covers the men and machines portion of the three Ms and is addressed in the Team section of this Wiki.
Financial Feasibility Analysis
is generally a quick-and-dirty assessment of what it will take to do this, where and how the envisioned financial resources can be obtained, and what the expected return on these investments is. Something entrepreneurs seldom ask, perhaps because they are entrepreneurs rather than finance types, is "What alternative investments can I make that will yield the same or higher returns?"
Perhaps the most important and most overlooked parts of feasibility analysis are concept and usability testing, which should be central to the initial product analysis. Boo.com, one of the top ten dot-com failures, was guilty of not conducting usability testing. Had it done so, it would have known that it took one early customer 81 minutes to order a single item over his 56k modem.
Concept testing
From Wikipedia, the free encyclopedia
Concept testing is the process of using quantitative methods and qualitative methods to evaluate consumer response to a product idea prior to the introduction of a product to the market. It can also be used to generate communication designed to alter consumer attitudes toward existing products. These methods involve the evaluation by consumers of product concepts having certain rational benefits, such as "a detergent that removes stains but is gentle on fabrics," or non-rational benefits, such as "a shampoo that lets you be yourself." Such methods are commonly referred to as concept testing and have been performed using field surveys, personal interviews and focus groups, in combination with various quantitative methods, to generate and evaluate product concepts.
The concept generation portions of concept testing have been predominantly qualitative. Advertising professionals have generally created concepts and communications of these concepts for evaluation by consumers, on the basis of consumer surveys and other market research, or on the basis of their own experience as to which concepts they believe represent product ideas that are worthwhile in the consumer market.
The quantitative portions of concept testing procedures have generally been placed in three categories:
(1) concept evaluations, where concepts representing product ideas are presented to consumers in verbal or visual form and then quantitatively evaluated by consumers by indicating degrees of purchase intent, likelihood of trial, etc.,
(2) positioning, which is concept evaluation wherein concepts positioned in the same functional product class are evaluated together, and
(3) product/concept tests, where consumers first evaluate a concept, then the corresponding product, and the results are compared.
Shortcomings of traditional concept testing
Traditionally, concept testing has been inadequate as a means to identify and quantify the criteria upon which consumer preference of one concept over another was based. These methods were insufficient to ascertain the relative importance of the factors governing why consumers, markets, and market segments reacted differently to the concepts presented to them in concept tests. Without such information, market researchers and advertisers could only generalize, on the basis of a concept test and their own expertise, as to how consumers might react to the actual products or to variations of the tested concepts. Communication of the concept, as embodied in a new product, has generally been left to the creativity of the advertising agency. No systematic quantitative method was known, however, that could accurately identify the criteria on which consumer choices were based and the contribution or importance of each criterion to the purchase decision. Therefore, previous concept testing methods have failed to provide market researchers with the complete information necessary to create products specifically tailored to satisfy a consumer group's balance of purchase criteria.
Moreover, traditional concept testing methods have failed to accurately quantify the relationships between consumer response to concepts and consumer choice of existing products which compete in the same consumer market. Thus, they were unable to provide a communication of the benefits of a consumer product, closely representing the tested concept, to a high degree of accuracy.
These problems of concept testing have been identified in business and marketing journals. For example, William L. Moore (1982), in a literature survey and review of concept testing methodology, points out that concept tests have failed to account for changes between the concept tested and the communication describing the benefits of the product that embodies the concept. The Moore article reports that "no amount of improvement in current concept testing practices can remedy these problems." This reflects the fact that none of the traditional methods provided a quantitative means for ascertaining the relative importance of the underlying criteria of concept choices as a means for identifying the visual and verbal expressions of the concepts that best communicate the benefits sought by the consumer. Nor did the traditional methods quantify the relationships between concepts and existing products offered in the same consumer market. A method that could ameliorate or overcome these shortcomings would provide a substantial improvement in communicating the concepts identified in testing and offered to the market as a product.
Usability testing is a means for measuring how well people can use some human-made object (such as a web page, a computer interface, a document, or a device) for its intended purpose, i.e. usability testing measures the usability of the object. Usability testing focuses on a particular object or a small set of objects, whereas general human-computer interaction studies attempt to formulate universal principles.
If usability testing uncovers difficulties, such as people having difficulty understanding instructions, manipulating parts, or interpreting feedback, then developers should improve the design and test it again. During usability testing, the aim is to observe people using the product in as realistic a situation as possible, to discover errors and areas for improvement. Designers commonly focus excessively on creating designs that look "cool", compromising usability and functionality. This is often caused by pressure from the people in charge, forcing designers to develop systems based on management expectations instead of people's needs. A designer's primary function should be more than appearance; it should include making things work for people.
Simply gathering opinions on an object or document is market research, rather than usability testing. Usability testing usually involves a controlled experiment to determine how well people can use the product. 1
Rather than showing users a rough draft and asking, "Do you understand this?", usability testing involves watching people trying to use something for its intended purpose. For example, when testing instructions for assembling a toy, the test subjects should be given the instructions and a box of parts. Instruction phrasing, illustration quality, and the toy's design all affect the assembly process.
Setting up a usability test involves carefully creating a scenario, or realistic situation, wherein the person performs a list of tasks using the product being tested while observers watch and take notes. Several other test instruments such as scripted instructions, paper prototypes, and pre- and post-test questionnaires are also used to gather feedback on the product being tested. For example, to test the attachment function of an e-mail program, a scenario would describe a situation where a person needs to send an e-mail attachment, and ask him or her to undertake this task. The aim is to observe how people function in a realistic manner, so that developers can see problem areas, and what people like. Techniques popularly used to gather data during a usability test include think aloud protocol and eye tracking.
What to measure
Usability testing generally involves measuring how well test subjects respond in four areas: time, accuracy, recall, and emotional response. The results of the first test can be treated as a baseline or control measurement; all subsequent tests can then be compared to the baseline to indicate improvement.
Time on Task -- How long does it take people to complete basic tasks? (For example, find something to buy, create a new account, and order the item.)
Accuracy -- How many mistakes did people make? (And were they fatal or recoverable with the right information?)
Recall -- How much does the person remember afterwards or after periods of non-use?
Emotional Response -- How does the person feel about the tasks completed? (Confident? Stressed? Would the user recommend this system to a friend?)
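The four areas above can be recorded per test subject and averaged to form the baseline described earlier. The sketch below is a minimal illustration; the session data, field names, and values are entirely hypothetical and not drawn from any real study.

```python
# Hypothetical results from one round of usability testing: one dict per
# test subject, covering the four measurement areas (time, accuracy,
# recall, emotional response). All numbers are invented for illustration.
sessions = [
    {"time_s": 310, "errors": 2, "recall": 0.8, "satisfaction": 4},
    {"time_s": 245, "errors": 0, "recall": 0.9, "satisfaction": 5},
    {"time_s": 420, "errors": 5, "recall": 0.6, "satisfaction": 2},
]

def mean(values):
    return sum(values) / len(values)

# Average each metric across subjects; these averages become the baseline
# against which later test rounds are compared to show improvement.
baseline = {metric: mean([s[metric] for s in sessions]) for metric in sessions[0]}
print(baseline)
```

In practice each later test round would produce the same dictionary shape, so comparing a round to the baseline is a metric-by-metric subtraction; lower time and errors, and higher recall and satisfaction, indicate improvement.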
In the early 1990s, Jakob Nielsen, at that time a researcher at Sun Microsystems, popularized the concept of using numerous small usability tests -- typically with only five test subjects each -- at various stages of the development process. His argument is that, once you have found that two or three people are totally confused by the home page, little is gained by watching more people suffer through the same flawed design. "Elaborate usability tests are a waste of resources. The best results come from testing no more than 5 users and running as many small tests as you can afford." 2. Nielsen subsequently published his research and coined the term heuristic evaluation.
The claim of "Five users is enough" was later described by a mathematical model (Virzi, R.A., Refining the Test Phase of Usability Evaluation: How Many Subjects is Enough? Human Factors, 1992. 34(4): p. 457-468.) which states for the proportion of uncovered problems U
U = 1 − (1 − p)^n
where p is the probability of one subject identifying a specific problem and n is the number of subjects (or test sessions). Plotted against n, the model rises asymptotically toward the number of problems that actually exist (see figure below).
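The formula above is easy to evaluate directly. The sketch below uses p = 0.31, the average problem-discovery rate often quoted in connection with Nielsen's work; this is an illustrative assumption, not a universal constant, and real values of p vary widely by product and task.

```python
def uncovered_proportion(p: float, n: int) -> float:
    """Virzi model: expected proportion of usability problems found
    after n test subjects, each finding a given problem with probability p."""
    return 1 - (1 - p) ** n

# p = 0.31 is an illustrative value; with it, five subjects find ~84%
# of the problems, and each additional subject adds less and less.
for n in (1, 3, 5, 10, 15):
    print(f"n={n:2d}: {uncovered_proportion(0.31, n):.0%} of problems found")
```

Running this shows the asymptotic behavior the text describes: the curve climbs steeply for the first few subjects and then flattens, which is the mathematical core of the "five users" argument, and also why a larger sample is needed when p is small or when problems are heterogeneous across users.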
In later research, Nielsen's claim has been vigorously questioned, with both empirical evidence 3 and more advanced mathematical models (Caulton, D. A., "Relaxing the homogeneity assumption in usability testing," Behaviour & Information Technology, 2001, 20(1), pp. 1-7). Two of the key challenges to this assertion are: (1) since usability is related to the specific set of users, such a small sample is unlikely to be representative of the total population, so the data from such a small sample is more likely to reflect the sample group than the population it represents; and (2) many usability problems encountered in testing are likely to mask other usability problems, making it impossible to predict the percentage of problems that can be uncovered without knowing the relationships between existing problems. Most researchers today agree that, although five users can generate a significant amount of data at any given point in the development cycle, many applications require a sample larger than five to detect a satisfactory share of usability problems.
Bruce Tognazzini advocates close-coupled testing: "Run a test subject through the product, figure out what's wrong, change it, and repeat until everything works. Using this technique, I've gone through seven design iterations in three-and-a-half days, testing in the morning, changing the prototype at noon, testing in the afternoon, and making more elaborate changes at night." 4 This testing can be useful in research situations.
The following provides an alternative model or approach to Feasibility Analysis, a three-step process.
Assessing the Feasibility of Business Propositions
Vincent Amanor-Boadu, Ph.D., Department of Agricultural Economics, Kansas State University, vincent@agecon.ksu.edu
Introduction
It is becoming increasingly important that producers are given the appropriate tools to succeed in value-adding initiatives. This document presents an overview of a feasibility assessment (analysis) from the viewpoint of its role in helping you determine the potential viability of your business ideas. We begin with a definition of a feasibility assessment and provide a framework for performing one. We end with a checklist of the characteristics of an effective feasibility report. The objective is to help you assess the value of feasibility reports you have commissioned from consultants, to ensure that the pertinent questions about your project's ability to succeed have been adequately addressed.
What is Feasibility Assessment?
A feasibility assessment is the disciplined and documented process of thinking through an idea from its logical beginning to its logical end. This is to determine its potential to be a viable business given the realities of the economic and social environment in which it will operate. While feasibility studies are conducted for engineering, educational and program initiatives, our discussion in this document is limited to the feasibility of business initiatives. In this vein, feasibility studies help you decide if your business idea can be viable given its domain conditions.
A feasibility study or assessment is conducted at three levels.
The first level involves the operational feasibility of your idea. The question that is asked at this level is “Will it work?”
The second level involves technical feasibility and its associated question is “Can it be built?” Sometimes, the first and second levels are addressed together and simply referred to as technical feasibility.
The third and final level is economic feasibility and it brings the operational and technical levels together into a common unit by asking “Will it make economic sense if it works and is built?” In other words, “Will it generate profits?”
Let's look at the following video about solar panels in light of the above three assessments: