
U.S. managers know that they have to improve the quality of their products because, alas, U.S. consumers have told them so. A survey in 1981 reported that nearly 50% of U.S. consumers believed that the quality of U.S. products had dropped during the previous five years; more recent surveys have found that a quarter of consumers are "not at all" confident that U.S. industry can be depended on to deliver reliable products. Many companies have tried to upgrade their quality, adopting programs that have been staples of the quality movement for a generation: cost of quality calculations, interfunctional teams, reliability engineering, or statistical quality control. Few companies, however, have learned to compete on quality. Why?

Part of the problem, of course, is that until Japanese and European competition intensified, not many companies seriously tried to make quality programs work even as they implemented them. But even if companies had implemented the traditional principles of quality control more rigorously, it is doubtful that U.S. consumers would be satisfied today. In my view, most of those principles were narrow in scope; they were designed as purely defensive measures to preempt failures or eliminate "defects." What managers need now is an ambitious strategy to gain and hold markets, with high quality as a competitive linchpin.

Quality Control

To get a better grasp of the defensive character of traditional quality control, we should understand what the quality movement in the United States has accomplished so far. How much expense on quality was tolerable? How much "quality" was enough? In 1951, Joseph Juran tackled these questions in the first edition of his Quality Control Handbook, a publication that became the quality movement's bible. Juran observed that quality could be understood in terms of avoidable and unavoidable costs: the former resulted from defects and product failures like scrapped materials or labor hours required for rework, repair, and complaint processing; the latter were associated with prevention, i.e., inspection, sampling, sorting, and other quality control initiatives. Juran regarded failure costs as "gold in the mine" because they could be reduced sharply by investing in quality improvement. He estimated that avoidable quality losses typically ranged from $500 to $1,000 per productive operator per year—big money back in the 1950s.

Reading Juran's book, executives inferred roughly how much to invest in quality improvement: expenditures on prevention were justified if they were lower than the costs of product failure. A corollary principle was that decisions made early in the production chain (e.g., when engineers first sketched out a product's design) have implications for the level of quality costs incurred later, both in the factory and the field.
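As a rough illustration of that rule of thumb, the sketch below compares hypothetical prevention spending with the failure costs it would eliminate; the volumes, defect rates, and dollar figures are invented for illustration, not taken from Juran.

```python
# Sketch of Juran's "gold in the mine" logic: prevention spending is
# justified when it is lower than the avoidable failure costs it removes.

def avoidable_failure_cost(units, defect_rate, cost_per_defect):
    """Failure (avoidable) cost: scrap, rework, and complaint processing."""
    return units * defect_rate * cost_per_defect

def prevention_is_justified(prevention_spend, failure_cost_avoided):
    """Juran-style rule of thumb: prevention pays while it costs less
    than the failure costs it eliminates."""
    return prevention_spend < failure_cost_avoided

# Illustrative numbers only.
before = avoidable_failure_cost(units=100_000, defect_rate=0.04, cost_per_defect=25.0)
after = avoidable_failure_cost(units=100_000, defect_rate=0.01, cost_per_defect=25.0)
program_cost = 40_000.0

print(f"Failure cost avoided: ${before - after:,.0f}")
print("Invest in prevention?", prevention_is_justified(program_cost, before - after))
```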

In 1956, Armand Feigenbaum took Juran's ideas a step further by proposing "total quality control" (TQC). Companies would never make high-quality products, he argued, if the manufacturing department were forced to pursue quality in isolation. TQC called for "interfunctional teams" from marketing, engineering, purchasing, and manufacturing. These teams would share responsibility for all phases of design and manufacturing and would disband only when they had placed a product in the hands of a satisfied customer—who remained satisfied.

Feigenbaum noted that all new products moved through three stages of activity: design control, incoming material control, and product or shop-floor control. This was a step in the right direction. But Feigenbaum did not really consider how quality was first of all a strategic question for any business; how, for instance, quality might govern the development of a design and the choice of features or options. Rather, design control meant for Feigenbaum mainly preproduction assessments of a new design's manufacturability, or that projected manufacturing techniques should be debugged through pilot runs. Materials control included vendor evaluations and incoming inspection procedures.

In TQC, quality was a kind of burden to be shared—no single department shouldered all the responsibility. Top management was ultimately accountable for the effectiveness of the system; Feigenbaum, like Juran, proposed careful reporting of the costs of quality to senior executives in order to ensure their commitment. The two also stressed statistical approaches to quality, including process control charts that set limits to acceptable variations in key variables affecting a product's production. They endorsed sampling procedures that allowed managers to draw inferences about the quality of entire batches of products from the condition of items in a small, randomly selected sample.
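The statistical tools mentioned here can be shown with a small sketch: three-sigma control limits for a key process variable, and the chance that a small random sample drawn from a defective batch shows no failures at all. The measurements and rates below are invented purely for illustration.

```python
import statistics

# Invented measurements of a key process variable (e.g., a part dimension).
measurements = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00]

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)
upper_control_limit = mean + 3 * sigma   # conventional three-sigma limits
lower_control_limit = mean - 3 * sigma
print(f"Control limits: {lower_control_limit:.3f} to {upper_control_limit:.3f}")

# Acceptance sampling: probability that a random sample of n items drawn from
# a batch with the given defect rate contains no defectives at all.
def prob_sample_all_good(defect_rate, sample_size):
    return (1 - defect_rate) ** sample_size

print(f"P(no defects in a sample of 20 from a 2% defective batch): "
      f"{prob_sample_all_good(0.02, 20):.1%}")
```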

Despite their attention to these techniques, Juran, Feigenbaum, and other experts like W. Edwards Deming were trying to get managers to see beyond purely statistical controls on quality. Meanwhile, another branch of the quality movement emerged, relying even more heavily on probability theory and statistics. This was "reliability engineering," which originated in the aerospace and electronics industries.

In 1950, only one-third of the U.S. Navy's electronic devices worked properly. A subsequent study by the Rand Corporation estimated that every vacuum tube the military used had to be backed by nine others in warehouses or on order. Reliability engineering addressed these problems by adapting the laws of probability to the challenge of predicting equipment stress.

Reliability engineering measures led to:

Techniques for reducing failure rates while products were still in the design phase.

Failure mode and effect analysis, which systematically reviewed how alternative designs could fail.

Individual component analysis, which computed the failure probability of key components and aimed to eliminate or strengthen the weakest links.

Derating, which required that parts be used below their specified stress levels.

Redundancy, which called for a parallel system to back up an important component or subsystem in case it failed (a brief probability sketch follows this list).
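To show why redundancy pays, here is a minimal probability sketch comparing a single component, a redundant pair, and a chain of components in series, assuming independent failures; the reliability figures are illustrative only.

```python
# Sketch: why redundancy (a parallel backup) improves reliability,
# assuming independent failures. Numbers are illustrative only.

def series_reliability(reliabilities):
    """A chain of components fails if any one of them fails."""
    product = 1.0
    for r in reliabilities:
        product *= r
    return product

def parallel_reliability(reliabilities):
    """A redundant set fails only if every component fails."""
    prob_all_fail = 1.0
    for r in reliabilities:
        prob_all_fail *= (1 - r)
    return 1 - prob_all_fail

single = 0.95                                   # one component, 95% reliable
backed_up = parallel_reliability([0.95, 0.95])  # same component with a parallel spare
chain = series_reliability([0.95] * 10)         # ten such components in series

print(f"Single component: {single:.4f}")
print(f"With redundancy:  {backed_up:.4f}")
print(f"Ten in series:    {chain:.4f}")
```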

Naturally, an effective reliability program required managers to monitor field failures closely to give company engineers the information needed to plan new designs. Effective field failure reporting also demanded the development of systems of data collection, including return of failed parts to the laboratory for testing and analysis.

Now, the proponents of all these approaches to quality control might well have denied that their views of quality were purely defensive. But what else was implied by the solutions they stressed—material controls, outgoing batch inspections, stress tests? Perhaps the best way to see the implications of their logic is in traditional quality control's most extreme form, a program called "Zero Defects." No other program defined quality so stringently as an absence of failures—and no wonder, since it emerged from the defense industries where the product was a missile whose flawless operation was, for obvious reasons, imperative.

In 1961, the Martin Company was building Pershing missiles for the U.S. Army. The design of the missile was sound, but Martin found that it could maintain high quality only through a massive program of inspection. It decided to offer workers incentives to lower the defect rate, and in December 1961, delivered a Pershing missile to Cape Canaveral with "zero discrepancies." Buoyed by this success, Martin's general manager in Orlando, Florida accepted a challenge, issued by the U.S. Army's missile command, to deliver the first field Pershing one month ahead of schedule. But he went even further. He promised that the missile would be perfect, with no hardware problems or document errors, and that all equipment would be fully operational 10 days after delivery (the norm was 90 days or more).

Two months of feverish activity followed; Martin asked all employees to contribute to building the missile exactly right the first time since there would be virtually no time for the usual inspections. Management worked hard to maintain enthusiasm on the plant floor. In February 1962, Martin delivered on time a perfect missile that was fully operational in less than 24 hours.

This experience was eye-opening for both Martin and the rest of the aerospace industry. After careful review, management concluded that, in effect, its own changed attitude had assured the project's success. In the words of one close observer: "The one time management demanded perfection, it happened!"1 Martin management thereafter told employees that the only acceptable quality standard was "zero defects." It instilled this principle in the work force through training, special events, and by posting quality results. It set goals for workers and put great effort into giving each worker positive criticism. Formal techniques for problem solving, however, remained limited. For the most part, the program focused on motivation—on changing the attitudes of employees.

Strategic Quality Management

On the whole, U.S. corporations did not keep pace with quality control innovations the way a number of overseas competitors did. Particularly after World War II, U.S. corporations expanded rapidly and many became complacent. Managers knew that consumers wouldn't drive a VW Beetle, indestructible as it was, if they could afford a fancier car—even if this meant more visits to the repair shop.

But if U.S. car manufacturers had gotten their products to outlast Beetles, U.S. quality managers still would not have been prepared for Toyota Corollas—or Sony televisions. Indeed, there was nothing in the principles of quality control to disabuse them of the idea that quality was only something that could hurt a company if ignored; that added quality was the designer's business—a matter, perhaps, of chrome and push buttons.

The beginnings of strategic quality management cannot be dated precisely because no single book or article marks its inception. But even more than in consumer electronics and cars, the volatile market in semiconductors provides a telling example of change. In March 1980, Richard W. Anderson, general manager of Hewlett-Packard's Data Systems Division, reported that after testing 300,000 16K RAM chips from three U.S. and three Japanese manufacturers, Hewlett-Packard had discovered wide disparities in quality. At incoming inspection, the Japanese chips had a failure rate of zero; the comparable rate for the three U.S. manufacturers was between 11 and 19 failures per 1,000. After 1,000 hours of use, the failure rate of the Japanese chips was between 1 and 2 per 1,000; usable U.S. chips failed up to 27 times per thousand.

Several U.S. semiconductor companies reacted to the news impulsively, complaining that the Japanese were sending only their best components to the all-important U.S. market. Others disputed the basic data. The most perceptive market analysts, however, noted how differences in quality coincided with the rapid ascendancy of Japanese chip manufacturers. In a few years the Japanese had gone from a standing start to significant market shares in both the 16K and 64K chip markets. Their message—intentional or not—was that quality could be a potent strategic weapon.

U.S. semiconductor manufacturers got the message. In 16K chips the quality gap soon closed. And in industries as diverse as machine tools and radial tires, each of which had seen its position erode in the face of Japanese competition, there has been a new seriousness about quality as well. But how to translate seriousness into action? Managers who are now determined to compete on quality have been thrown back on the old questions: How much quality is enough? What does it take to look at quality from the customer's vantage point? These are still hard questions today.

To achieve quality gains, I believe, managers need a new way of thinking, a conceptual bridge to the consumer's vantage point. Obviously, market studies acquire a new importance in this context, as does a careful review of competitors' products. One thing is certain: high quality means pleasing consumers, not merely protecting them from annoyances. Product designers, in turn, should shift their attention from prices at the time of purchase to life cycle costs that include expenditures on service and maintenance—the customer's total costs. Even consumer complaints play a new role because they provide a valuable source of product information.

But managers have to take a more preliminary step—a crucial one, however obvious it may appear. They must first develop a clear vocabulary with which to discuss quality as strategy. They must break down the word quality into manageable parts. Only then can they define the quality niches in which to compete.

I propose eight critical dimensions or categories of quality that can serve as a framework for strategic analysis: performance, features, reliability, conformance, durability, serviceability, aesthetics, and perceived quality.2 Some of these are always mutually reinforcing; some are not. A product or service can rank high on one dimension of quality and low on another—indeed, an improvement in one may be achieved only at the expense of another. It is precisely this interplay that makes strategic quality management possible; the challenge to managers is to compete on selected dimensions.

1 Performance

Of course, performance refers to a product's primary operating characteristics. For an automobile, performance would include traits like acceleration, handling, cruising speed, and comfort; for a television set, performance means sound and picture clarity, color, and the ability to receive distant stations. In service businesses—say, fast food and airlines—performance often means prompt service.

Because this dimension of quality involves measurable attributes, brands can usually be ranked objectively on individual aspects of performance. Overall performance rankings, however, are more difficult to develop, especially when they involve benefits that not every consumer needs. A power shovel with a capacity of 100 cubic yards per hour will "outperform" one with a capacity of 10 cubic yards per hour. Suppose, however, that the two shovels possessed the identical capacity—60 cubic yards per hour—but achieved it differently: one with a 1-cubic-yard bucket operating at 60 cycles per hour, the other with a 2-cubic-yard bucket operating at 30 cycles per hour. The capacities of the shovels would then be the same, but the shovel with the larger bucket could handle massive boulders while the shovel with the smaller bucket could perform precision work. The "superior performer" depends entirely on the task.

Some cosmetics wearers judge quality by a product's resistance to smudging; others, with more sensitive skin, assess it by how well it leaves skin irritation-free. A 100-watt light bulb provides greater candlepower than a 60-watt bulb, yet few customers would regard the difference as a measure of quality. The bulbs simply belong to different performance classes. So the question of whether performance differences are quality differences may depend on circumstantial preferences—but preferences based on functional requirements, not taste.

Some performance standards are based on subjective preferences, but the preferences are so universal that they have the force of an objective standard. The quietness of an automobile's ride is usually viewed as a direct reflection of its quality. Some people like a dimmer room, but who wants a noisy car?

2 Features

Similar thinking can be applied to features, a second dimension of quality that is often a secondary aspect of performance. Features are the "bells and whistles" of products and services, those characteristics that supplement their basic functioning. Examples include free drinks on a plane, permanent-press cycles on a washing machine, and automatic tuners on a color television set. The line separating primary performance characteristics from secondary features is often difficult to draw. What is crucial, again, is that features involve objective and measurable attributes; objective individual needs, not prejudices, affect their translation into quality differences.

To many customers, of course, superior quality is less a reflection of the availability of particular features than of the total number of options available. Often, choice is quality: buyers may wish to customize or personalize their purchases. Fidelity Investments and other mutual fund operators have pursued this more "flexible" approach. By offering their clients a wide range of funds covering such diverse fields as health care, technology, and energy—and by then encouraging clients to shift savings among these—they have virtually tailored investment portfolios.

Employing the latest in flexible manufacturing technology, Allen-Bradley customizes starter motors for its buyers without having to price its products prohibitively. Fine furniture stores offer their customers countless variations in fabric and color. Such strategies impose heavy demands on operating managers; they are an aspect of quality likely to grow in importance with the perfection of flexible manufacturing technology.

3 Reliability

This dimension reflects the probability of a product malfunctioning or failing within a specified time period. Among the most common measures of reliability are the mean time to first failure, the mean time between failures, and the failure rate per unit time. Because these measures require a product to be in use for a specified period, they are more relevant to durable goods than to products and services that are consumed instantly.
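These measures can be computed directly from field records. The sketch below uses invented operating hours and failure counts purely for illustration.

```python
# Sketch of the common reliability measures named above, computed from
# hypothetical field data (operating hours and failure counts are invented).

hours_to_first_failure = [1200, 950, 1430, 1100, 1010]   # one value per unit tracked

total_operating_hours = 54_000
total_failures = 18

mean_time_to_first_failure = sum(hours_to_first_failure) / len(hours_to_first_failure)
mean_time_between_failures = total_operating_hours / total_failures
failure_rate_per_1000_hours = 1000 * total_failures / total_operating_hours

print(f"Mean time to first failure: {mean_time_to_first_failure:.0f} hours")
print(f"Mean time between failures: {mean_time_between_failures:.0f} hours")
print(f"Failure rate: {failure_rate_per_1000_hours:.2f} per 1,000 operating hours")
```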

Reliability usually becomes more important to consumers as downtime and maintenance become more expensive. Farmers, for example, are especially sensitive to downtime during the short harvest season. Reliable equipment can mean the difference between a good year and spoiled crops. But consumers in other markets are more attuned than ever to product reliability too. Computers and copying machines certainly compete on this basis. And recent market research shows that, especially for young women, reliability has become an automobile's most desired attribute. Nor is the government, our biggest single consumer, immune. After seeing its expenditures for major weapons repair jump from $7.4 billion in fiscal year 1980 to $14.9 billion in fiscal year 1985, the Department of Defense has begun cracking down on contractors whose weapons fail frequently in the field.

4 Conformance

A related dimension of quality is conformance, or the degree to which a product's design and operating characteristics meet established standards. This dimension owes the most to the traditional approaches to quality pioneered by experts like Juran.

All products and services involve specifications of some sort. When new designs or models are developed, dimensions are set for parts and purity standards for materials. These specifications are normally expressed as a target or "center"; deviance from the center is permitted within a specified range. Because this approach to conformance equates good quality with operating inside a tolerance band, there is little interest in whether specifications have been met exactly. For the most part, dispersion within specification limits is ignored.

One drawback of this approach is the problem of "tolerance stack-up": when two or more parts are to be fit together, the size of their tolerances often determines how well they will match. Should one part fall at a lower limit of its specification, and a matching part at its upper limit, a tight fit is unlikely. Even if the parts are rated acceptable initially, the link between them is likely to wear more quickly than one made from parts whose dimensions have been centered more exactly.

To address this problem, a more imaginative approach to conformance has emerged. It is closely associated with Japanese manufacturers and the work of Genichi Taguchi, a prizewinning Japanese statistician. Taguchi begins with the idea of "loss function," a measure of losses from the time a product is shipped. (These losses include warranty costs, nonrepeating customers, and other problems resulting from performance failure.) Taguchi then compares such losses to two alternative approaches to quality: on the one hand, simple conformance to specifications, and on the other, a measure of the degree to which parts or products diverge from the ideal target or center.

He demonstrates that "tolerance stack-up" will be worse—more costly—when the dimensions of parts are more distant from the center than when they cluster around it, even if some parts fall outside the tolerance band entirely. According to Taguchi's approach, production process 1 in the Exhibit is better even though some items fall beyond specification limits. Traditional approaches favor production process 2. The challenge for quality managers is obvious.

Exhibit: Two approaches to conformance. Source: L.P. Sullivan, "Reducing Variability: A New Approach to Quality," Quality Progress, July 1984, p. 16.
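A minimal sketch of the contrast Taguchi draws: a pass/fail test against the tolerance band counts only items outside the limits, while a quadratic loss function penalizes any deviation from the target. The target, tolerance, loss constant, and sample measurements below are invented; they are not the data behind the Exhibit.

```python
# Sketch contrasting two views of conformance for a part dimension:
# (1) simple pass/fail against a tolerance band, and
# (2) a Taguchi-style quadratic loss that grows with distance from the target.
# Target, tolerance, loss constant, and measurements are illustrative only.

TARGET = 10.0          # nominal ("center") dimension
TOLERANCE = 0.5        # acceptable deviation on either side
LOSS_CONSTANT = 4.0    # scales squared deviation into dollars of loss

def out_of_spec(x):
    return abs(x - TARGET) > TOLERANCE

def taguchi_loss(x):
    return LOSS_CONSTANT * (x - TARGET) ** 2

# Process A clusters tightly around the target but has one outlier;
# Process B stays inside the band yet drifts toward its edges.
process_a = [10.0, 10.05, 9.95, 10.1, 9.9, 10.6]
process_b = [10.45, 9.55, 10.4, 9.6, 10.45, 9.55]

for name, batch in [("A (centered)", process_a), ("B (edge-hugging)", process_b)]:
    rejects = sum(out_of_spec(x) for x in batch)
    loss = sum(taguchi_loss(x) for x in batch)
    print(f"Process {name}: {rejects} out of spec, total loss = {loss:.2f}")
```

Under the pass/fail view, process B looks better (no rejects); under the loss function, the centered process A incurs far less total loss, which is the point of the comparison.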

Incidentally, the two most common measures of failure in conformance—for Taguchi and everyone else—are defect rates in the factory and, once a product is in the hands of the customer, the incidence of service calls. But these measures neglect other deviations from standard, like misspelled labels or shoddy construction, that do not lead to service or repair. In service businesses, measures of conformance normally focus on accuracy and timeliness and include counts of processing errors, unanticipated delays, and other frequent mistakes.

5 Durability

A measure of product life, durability has both economic and technical dimensions. Technically, durability can be defined as the amount of use one gets from a product before it deteriorates. After so many hours of use, the filament of a light bulb burns up and the bulb must be replaced. Repair is impossible. Economists call such products "one-hoss shays" (after the carriage in the Oliver Wendell Holmes poem that was designed by the deacon to last a hundred years, and whose parts broke down simultaneously at the end of the century).

In other cases, consumers must weigh the expected cost, in both dollars and personal inconvenience, of future repairs against the investment and operating expenses of a newer, more reliable model. Durability, then, may be defined as the amount of use one gets from a product before it breaks down and replacement is preferable to continued repair.
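This economic definition amounts to a simple comparison: keep repairing while the expected cost of repair, including inconvenience, stays below the annualized cost of a replacement. The sketch below makes that comparison with invented figures.

```python
# Sketch of the repair-versus-replace comparison implied by the economic
# definition of durability above. All figures are invented.

def keep_repairing(annual_repair_cost, annual_inconvenience_cost,
                   new_unit_price, expected_new_life_years, new_annual_operating_cost):
    """True while repair is cheaper than the annualized cost of replacing the unit."""
    annualized_replacement = (new_unit_price / expected_new_life_years
                              + new_annual_operating_cost)
    return annual_repair_cost + annual_inconvenience_cost < annualized_replacement

# Illustrative figures: repair still wins, so the product's economic life continues.
print(keep_repairing(annual_repair_cost=80.0,
                     annual_inconvenience_cost=20.0,
                     new_unit_price=900.0,
                     expected_new_life_years=12,
                     new_annual_operating_cost=60.0))   # True
```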

This approach to durability has two important implications. First, it suggests that durability and reliability are closely linked. A product that often fails is likely to be scrapped earlier than one that is more reliable; repair costs will be correspondingly higher and the purchase of a competitive brand will look that much more desirable. Because of this linkage, companies sometimes attempt to reassure customers by offering lifetime guarantees on their products, as 3M has done with its videocassettes. Second, this approach implies that durability figures should be interpreted with care. An increase in product life may not be the result of technical improvements or the use of longer-lived materials. Rather, the underlying economic environment simply may have changed.

For example, the expected life of an automobile rose during the last decade—it now averages 14 years—mainly because rising gasoline prices and a weak economy reduced the average number of miles driven per year. Yet, durability varies widely among brands. In 1981, estimated product lives for major home appliances ranged from 9.9 years (Westinghouse) to 13.2 years (Frigidaire) for refrigerators, 5.8 years (Gibson) to 18 years (Maytag) for clothes washers, 6.6 years (Montgomery Ward) to 13.5 years (Maytag) for dryers, and 6 years (Sears) to 17 years (Kirby) for vacuum cleaners.3 This wide dispersion suggests that durability is a potentially fertile area for further quality differentiation.

6 Serviceability

A sixth dimension of quality is serviceability, or the speed, courtesy, competence, and ease of repair. Consumers are concerned not just about a product breaking down but also about the time before service is restored, the timeliness with which service appointments are kept, the nature of dealings with service personnel, and the frequency with which service calls or repairs fail to correct outstanding problems. In those cases where problems are not immediately resolved and complaints are filed, a company's complaint-handling procedures are also likely to affect customers' ultimate evaluation of product and service quality.

Some of these variables reflect differing personal standards of acceptable service. Others can be measured quite objectively. Responsiveness is typically measured by the mean time to repair, while technical competence is reflected in the incidence of multiple service calls required to correct a particular problem. Because most consumers equate rapid repair and reduced downtime with higher quality, these elements of serviceability are less subject to personal interpretation than are those involving evaluations of courtesy or standards of professional behavior.
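A brief sketch of these two objective measures, computed from hypothetical service records (the repair times and repeat-visit flags below are invented):

```python
# Sketch of the objective serviceability measures mentioned above: mean time
# to repair and the share of problems needing more than one service call.
# The service records are hypothetical.

service_calls = [
    {"hours_to_repair": 4.0, "repeat_visit": False},
    {"hours_to_repair": 9.5, "repeat_visit": True},
    {"hours_to_repair": 2.5, "repeat_visit": False},
    {"hours_to_repair": 6.0, "repeat_visit": False},
    {"hours_to_repair": 12.0, "repeat_visit": True},
]

mean_time_to_repair = sum(c["hours_to_repair"] for c in service_calls) / len(service_calls)
repeat_call_rate = sum(c["repeat_visit"] for c in service_calls) / len(service_calls)

print(f"Mean time to repair: {mean_time_to_repair:.1f} hours")
print(f"Calls needing a second visit: {repeat_call_rate:.0%}")
```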

Even reactions to downtime, however, can be quite complex. In certain environments, rapid response becomes critical only after certain thresholds have been reached. During harvest season, farmers generally accept downtime of one to six hours on harvesting equipment, such as combines, with little resistance. As downtime increases, they become anxious; beyond eight hours of downtime they become frantic and frequently go to great lengths to keep harvesting even if it means purchasing or leasing additional equipment. In markets like this, superior service can be a powerful selling tool. Caterpillar guarantees delivery of repair parts anywhere in the world within 48 hours; a competitor offers the free loan of farm equipment during critical periods should its customers' machines break down.

Customers may remain dissatisfied even after completion of repairs. How these complaints are handled is important to a company's reputation for quality and service. Eventually, profitability is likely to be affected as well. A 1976 consumer survey found that among households that initiated complaints to resolve problems, more than 40% were not satisfied with the results. Understandably, the degree of satisfaction with complaint resolution closely correlated with consumers' willingness to repurchase the offending brands.4

Companies differ widely in their approaches to complaint handling and in the importance they attach to this element of serviceability. Some do their best to resolve complaints; others use legal gimmicks, the silent treatment, and similar ploys to rebuff dissatisfied customers. Recently, General Electric, Pillsbury, Procter & Gamble, Polaroid, Whirlpool, Johnson & Johnson, and other companies have sought to preempt consumer dissatisfaction by installing toll-free telephone hot lines to their customer relations departments.

7 Aesthetics

The final two dimensions of quality are the most subjective. Aesthetics—how a product looks, feels, sounds, tastes, or smells—is clearly a matter of personal judgment and a reflection of individual preference. Yet there appear to be some patterns in consumers' rankings of products on the basis of taste. A recent study of quality in 33 food categories, for example, found that high quality was most often associated with "rich and full flavor, tastes natural, tastes fresh, good aroma, and looks appetizing."5

The aesthetics dimension differs from subjective criteria pertaining to "performance"—the quiet car engine, say—in that aesthetic choices are not nearly universal. Not all people prefer "rich and full" flavor or even agree on what it means. Companies therefore have to search for a niche. On this dimension of quality, it is impossible to please everyone.

8 Perceived Quality

Consumers do not always have complete information about a product's or service's attributes; indirect measures may be their only basis for comparing brands. A product's durability, for example, can seldom be observed directly; it usually must be inferred from various tangible and intangible aspects of the product. In such circumstances, images, advertising, and brand names—inferences about quality rather than the reality itself—can be critical. For this reason, both Honda—which makes cars in Marysville, Ohio—and Sony—which builds color televisions in San Diego—have been reluctant to publicize that their products are "made in America."

Reputation is the primary stuff of perceived quality. Its power comes from an unstated analogy: that the quality of products today is similar to the quality of products yesterday, or the quality of goods in a new product line is similar to the quality of a company's established products. In the early 1980s, Maytag introduced a new line of dishwashers. Needless to say, salespeople immediately emphasized the product's reliability—not yet proven—because of the reputation of Maytag's clothes washers and dryers.

Competing on Quality

This completes the list of the eight dimensions of quality. The most traditional notions—conformance and reliability—remain important, but they are subsumed within a broader strategic framework. A company's first challenge is to use this framework to explore the opportunities it has to distinguish its products from another company's wares.

The quality of an automobile tire may reflect its tread-wear rate, handling, traction in dangerous driving conditions, rolling resistance (i.e., impact on gas mileage), noise levels, resistance to punctures, or appearance. High-quality furniture may be distinguished by its uniform finish, an absence of surface flaws, reinforced frames, comfort, or superior design. Even the quality of a less tangible product like computer software can be evaluated in multiple dimensions. These dimensions include reliability, ease of maintenance, match with users' needs, integrity (the extent to which unauthorized access can be controlled), and portability (the ease with which a program can be transferred from one hardware or software environment to another).

A company need not pursue all eight dimensions simultaneously. In fact, that is seldom possible unless it intends to charge unreasonably high prices. Technological limitations may impose a further constraint. In some cases, a product or service can be improved in one dimension of quality only if it becomes worse in another. Cray Research, a manufacturer of supercomputers, has faced particularly difficult choices of this sort. According to the company's chairman, if a supercomputer doesn't fail every month or so, it probably wasn't built for maximum speed; in pursuit of higher speed, Cray has deliberately sacrificed reliability.

There are other trade-offs. Consider the following:

  • In entering U.S. markets, Japanese manufacturers often emphasize their products' reliability and conformance while downplaying options and features. The superior "fits and finishes" and low repair rates of Japanese cars are well known; less frequently recognized are their poor safety records and low resistance to corrosion.
  • Tandem Computers has based its business on superior reliability. For computer users that find downtime intolerable, like telephone companies and utilities, Tandem has devised a fail-safe system: two processors working in parallel and linked by software that shifts responsibility between the two if an important component or subsystem fails (a toy sketch of this pattern follows this list). The result, in an industry already well-known for quality products, has been spectacular corporate growth. In 1984, after less than 10 years in business, Tandem's annual sales topped $500 million.
  • Not long ago, New York's Chemical Bank upgraded its services for collecting payments for corporations. Managers had first conducted a user survey indicating that what customers wanted most was rapid response to queries about account status. After it installed a computerized system to answer customers' calls, Chemical, which banking consumers had ranked fourth in quality in the industry, jumped to first.
  • In the piano business, Steinway & Sons has long been the quality leader. Its instruments are known for their even voicing (the evenness of character and timbre in each of the 88 notes on the keyboard), the sweetness of their registers, the duration of their tone, their long lives, and even their fine cabinet work. Each piano is built by hand and is distinctive in sound and style. Despite these advantages, Steinway recently has been challenged by Yamaha, a Japanese manufacturer that has built a strong reputation for quality in a relatively short time. Yamaha has done so by emphasizing reliability and conformance, two quality dimensions that are low on Steinway's list.
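As a toy illustration of the fail-safe pattern described in the Tandem example above, the sketch below pairs a primary and a backup processor behind a supervisor that shifts work to the backup when the primary stops responding. It is not Tandem's actual software, only a sketch of the idea.

```python
# Toy failover pattern: two processors in parallel, with responsibility
# shifted to the backup when the primary fails. Illustrative only.

class Processor:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle(self, request):
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} processed {request!r}"

class FailoverPair:
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup

    def handle(self, request):
        try:
            return self.primary.handle(request)
        except RuntimeError:
            # Shift responsibility to the parallel processor.
            return self.backup.handle(request)

pair = FailoverPair(Processor("primary"), Processor("backup"))
print(pair.handle("debit $100"))      # served by the primary
pair.primary.healthy = False          # simulate a component failure
print(pair.handle("credit $40"))      # transparently served by the backup
```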

These examples confirm that companies can pursue a selective quality niche. In fact, they may have no other choice, especially if competitors have established reputations for a certain kind of excellence. Few products rank high on all eight dimensions of quality. Those that do—Cross pens, Rolex watches, Rolls-Royce automobiles—require consumers to pay the cost of skilled workmanship.

Strategic Errors

A final word, not about strategic opportunities, but about the worst strategic mistakes. The first is direct confrontation with an industry's leader. As with Yamaha vs. Steinway, it is far preferable to nullify the leader's advantage in a particular niche while avoiding the risk of retaliation. Moreover, a common error is to introduce dimensions of quality that are unimportant to consumers. When deregulation unlocked the market for residential telephones, a number of manufacturers, including AT&T, assumed that customers equated quality with a wide range of expensive features. They were soon proven wrong. Fancy telephones sold poorly while durable, reliable, and easy-to-operate sets gained large market shares.

Shoddy market research often results in neglect of quality dimensions that are critical to consumers. Using outdated surveys, car companies overlooked how important reliability and conformance were becoming in the 1970s; ironically, these companies failed consumers on the very dimensions that were key targets of traditional approaches to quality control.

It is often a mistake to stick with old quality measures when the external environment has changed. A major telecommunications company had always evaluated its quality by measuring timeliness—the amount of time it took to provide a dial tone, to connect a call, or to be connected to an operator. On these measures it performed well. More sophisticated market surveys, conducted in anticipation of the industry's deregulation, found that consumers were not really concerned about call connection time; consumers assumed that this would be more or less acceptable. They were more concerned with the clarity of transmission and the degree of static on the line. On these measures, the company found it was well behind its competitors.

In an industry like semiconductor manufacturing equipment, Japanese machines generally require less set-up time; they break down less frequently and have few problems meeting their specified performance levels. These are precisely the traits desired by most buyers. Still, U.S. equipment can do more. As one U.S. plant manager put it: "Our equipment is more advanced, but Japanese equipment is more developed."

Quality measures may be inadequate in less obvious ways. Some measures are too limited; they fail to capture aspects of quality that are important for competitive success. Singapore International Airlines, a carrier with a reputation for superior service, saw its market share decline in the early 1980s. The company dismissed quality problems as the cause of its difficulties because data on service complaints showed steady improvement during the period. Only later, after SIA solicited consumer responses, did managers see the weakness of their former measures. Relative declines in service had indeed been responsible for the loss of market share. Complaint counts had failed to register problems because the proportion of passengers who wrote complaint letters was small—they were primarily Europeans and U.S. citizens rather than Asians, the largest percentage of SIA passengers. SIA also had failed to capture data about its competitors' service improvements.

The pervasiveness of these errors is difficult to determine. Anecdotal evidence suggests that many U.S. companies lack hard data and are thus more vulnerable than they need be. One survey found that 65% of executives thought that consumers could readily name—without help—a good quality brand in a big-ticket category like major home appliances. But when the question was actually posed to consumers, only 16% could name a brand for small appliances, and only 23% for large appliances.6 Are U.S. executives that ill-informed about consumers' perceptions? The answer is not likely to be reassuring.

Managers have to stop thinking about quality merely as a narrow effort to gain control of the production process, and start thinking more rigorously about consumers' needs and preferences. Quality is not simply a problem to be solved; it is a competitive opportunity.

1. James F. Halpin, Zero Defects (New York: McGraw-Hill, 1966), p. 15.

2. This framework first appeared, in a preliminary form, in my article "What Does 'Product Quality' Really Mean?" Sloan Management Review, Fall 1984.

3. Roger B. Yepsen, Jr., ed., The Durability Factor (Emmaus, Penn.: Rodale Press, 1982), p. 190.

4. TARP, Consumer Complaint Handling in America: Final Report (Springfield, Va.: National Technical Information Service, U.S. Department of Commerce, 1979).

5. P. Greg Bonner and Richard Nelson, "Product Attributes and Perceived Quality: Foods," in Perceived Quality, ed. Jacob Jacoby and Jerry C. Olson (Lexington, Mass.: Lexington Books, D.C. Heath, 1985), p. 71.

6. Consumer Network, Inc., Brand Quality Perceptions (Philadelphia: Consumer Network, August 1983), pp. 17 and 50–51.

A version of this article appeared in the November 1987 issue of Harvard Business Review.