Decision-Support Systems (DSS)

Advances in technology, represented by more powerful and more user-friendly capabilities for data retrieval, database management, modeling, and graphics, afford nontechnical users an opportunity for relatively effective, ad hoc use of computers to support a variety of management-related functions. In this context, the "conference room of the future" has been the subject of considerable discussion--what role computer hardware and software will play in assisting decision-makers on a "real time" basis. Decision-support systems--like information management systems before them (and electronic data processing systems that preceded IMS)--represent a new stage in the "computer revolution." These emerging systems are supported partly by technological advances and partly by a long-standing conviction that such capabilities are possible.

DSS Defined

According to the proponents of decision-support systems, the ultimate mission of the computer should be to interact effectively with management so as to influence decisions on a day-to-day basis. Software packages currently available do incorporate powerful technical tools that may make it possible to realize the potential suggested by the notion of "computerized decision support." However, as one manager of a DSS service has observed:

In short, there are no easy or magic solutions that will create the kind of man-machine interface necessary for true decision support.

A primary objective of DSS is to provide decision support for problems within an organization that are continually changing--problems that often have more than one "right" answer. Some computer professionals do not feel comfortable with such relatively unstructured problems. More conventional methods of programming seek to "freeze" the specifications of a problem as soon as possible, so that the programmer can "build the solution" (the system) in relative isolation from the problem.

Decision-support systems, however, must be built on an interactive approach which may never have the problem specifications "frozen." As Ralph Sprague has noted:

Management problems often are relatively short-lived. Therefore, more traditional methods of building relatively large management information systems to deal with such problems may result in the delivery of "too much, too late." Substantial resource commitments are required to develop and maintain an elaborate data base which provides inputs into a sophisticated simulation model. If the problems that management must address are "moving targets," the response time of the IMS may be too long to provide useful answers.

Consideration must be given to several important organizational issues in the implementation of a decision-support system. DSS represents a major break with the tradition whereby individuals and groups participate in the decision-making process based on "ownership" of (access to) certain data and information.

Armed with user-friendly technology, managers are laying stronger claim to available data and information--with or without the blessings of those in the organization responsible for the gathering and recording of this information.

The actual organizational impacts of DSS often are not self-evident. The relation between decision support and decision making cannot be considered clear-cut. As Steven Alter has stated:

It is not surprising, in light of such uncertain payoffs, that much of the pioneering work in decision-support research has been dedicated to understanding the intricacies of just how managers go about making decisions. The surface of such understanding has only begun to be scratched. Current research does suggest, however, that DSS can make important contributions in improving the decision-making process. Studies have shown that decision-support systems can lead to better communications among managers. These improvements, in turn, can contribute to a more unified approach to problem solving by providing a broader consensus as to the goals, objectives, and underlying assumptions concerning problems confronting the organization.

The Right Problems/People/Tools/Process

Certain basic conditions must be met if a DSS is to have the desired impact on the decision-making process of an organization: (a) the right problems must be addressed; (b) the right people must participate in the development of the decision-support system; (c) the right tools must be used; and (d) the process must be able to evolve as decision situations and technology change. [5] In attempting to meet these conditions, often-conflicting interests must be balanced in terms of the available technology, the cost of systems development and maintenance, and the ever-present organizational issues of data ownership and inherent prerogatives to participate in the decision-making process.

The Right Problems. Should the DSS be geared to address well-defined, specific problems, or should it be designed as a flexible system with wide-ranging applications? Sprague and Carlson assert that:

On the other hand, since decision support often is difficult to cost-justify, the DSS should, at the very least, address the specific problem situations which the top officials of the organization deem to be most important. The DSS should be demand-driven rather than supply-generated. That is to say, the demand for decision support must come from top-level management rather than being "force-fed" on the basis of available data.

The Right People. Participants in the development of the DSS must have a general understanding of management principles, as well as the technical skills to solve problems as they arise. Lacking an appreciation for technical considerations, management can be "sold" on the purchase of extensive hardware or software systems which may have very little immediate use in the decision-making process of the organization. On the other hand, management can be unduly influenced by cost considerations in equipment purchases which may not serve the real decision needs of the organization.

The Right Tools. An effective way to reduce the "burden" on the technical experts is to bring in technical tools that are as easy to use as possible. Recent developments in artificial intelligence are on the frontier of this "friendliness." Artificial intelligence is distinguished from more conventional computer systems by its ability to "understand" natural English, so that users need not learn special languages or elaborate sets of commands in order to use the system.

A price must be paid for such "friendly" systems, however, beyond the purchase cost of the software. The easier a system is to use, the heavier the load it tends to place on computing hardware in terms of machine cycles. A separate computer often may be required to support the DSS. As an alternative, some organizations are switching from large mainframe systems to micro-systems, supported by PC hardware. While such micro-systems are slower in terms of processing time, the hardware savings can be significant when compared to the cost of the "care and feeding" of a typical mainframe.

This approach is not without its own risks, however. While desk-top, personal computers can provide significant analytical tools, the decentralization of computational capacity heightens the need to manage information carefully. With the proliferation of micros, it is possible that the organization will miss out entirely on one of the primary advantages afforded by DSS--more effective communications and the sharing of assumptions regarding problems confronting the organization. A shared resource on a mainframe system encourages and supports such communications; distributing the resource among micros might not.

The Right Process. A DSS is a dynamic mechanism, capable of continual evolution in new and often unanticipated ways as problems evolve. Needless to say, such an "evolution" can put considerable stress on an organization, which, in turn, can stymie the momentum in support of DSS. This problem often arises when procedures and controls must be instituted to bring the demands placed on the DSS into more manageable bounds. It is important--particularly during the early stages of development--for user support to stay ahead of user demands. Often the tendency is to spread access to the system as broadly as possible--to promote the adoption of the software by providing hardware to as many people in the organization as possible. When the communication links become overloaded, however, and users incur significant delays in response time (or difficulties in "logging on" to the system), users frequently abandon the system and return to their prior approaches to making decisions.
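
The effect of overloading on response time can be illustrated with a stylized single-server queueing model; the model and the two-second service time below are assumptions introduced here for illustration, not figures from the text.

    # A minimal sketch, assuming a single-server (M/M/1) queue: mean response
    # time is service_time / (1 - utilization), so delays grow sharply as the
    # shared system approaches capacity. All figures are hypothetical.
    service_time = 2.0  # seconds per request, hypothetical

    for utilization in (0.50, 0.70, 0.90, 0.95, 0.99):
        response_time = service_time / (1 - utilization)
        print(f"utilization {utilization:.0%}: mean response time {response_time:6.1f} s")

Spreading access faster than capacity grows simply moves users to the right along this curve, which is why user support must stay ahead of user demand.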

DSS is a process. Many proponents argue that it is inconsistent to think of it as a deliverable product. Nevertheless, users do not want to have to learn new commands every few weeks in order to access the system. For these users--who may well be in the majority in any organization--it may be appropriate to provide the DSS as a relatively stable product. Major changes in "language" or in presentation formats may be very disconcerting to those users who view the system as a "tool" and not as a "vocation." At the same time, a select group of users might be treated as "iconoclasts" by being exposed to the latest versions of software as they become available. As more is known about the system, it may be perfectly natural for a DSS to stabilize. As Alter suggests: "What's referred to as a management information system might very well be able to support decisions if it's designed well." [8]

The development process of an IMS often is viewed as unwieldy from the standpoint of getting useful decision-support systems designed and built quickly. The conventional wisdom concerning DSS is to eschew the more formal systems analysis and design procedures, so as to avoid inhibiting the process of managerial learning and systems evolution. As Moore and Chang point out, however:

For all the risks, uncertainties, and conflicts it poses, the prospect of computerized decision support is nevertheless an exciting one. No matter how good the system, however, a DSS will not miraculously transform bad decision-makers into good ones. Good decision-makers are meticulous about facts and are able to develop and apply intuitive insights. They have a feel for the "big picture" and are able to delegate responsibilities so that they do not "micro-manage." They have the capacity to distinguish between genuine merit and unreasonable bias. The fundamental objective of a DSS is to enhance these attributes of good decision-making, not to devalue them by substituting quantity of data for quality of information.

Key Indicator Approaches

Among the more effective applications of the principles and techniques of DSS has been the development of key indicator approaches, such as critical success factors (CSF) and key performance indicators (KPI). Rockart has defined critical success factors as:

Rockart and others have used the CSF approach to design management information and decision-support systems for various businesses. Boynton and Zmud argue that "CSFs provide a focal point for directing a computer-based information system development effort" by pinpointing key areas that require the attention of management. [11] The CSF approach has been advocated by Leidecker and Bruno as a useful technique for strategic planning. [12] Peat Marwick developed a CSF model for higher education in 1988 which lists 67 critical success factors measured on a yearly basis. This model, designed to be used by senior administrators, emphasizes the need to "compress information so that managers can focus their attention on high priorities in making and assessing decisions." [13]

M.G. Dolence has proposed the development of key performance indicators (KPI)--a detailed list of measurements to monitor and evaluate management strategies. [14] Key performance indicators "are numbers that can be used to indicate the effectiveness and efficiency of strategies and tactics." Whereas CSFs are preconditions for the success of a strategy, KPIs "help maintain a sharp focus on what must be measured."

An effort to combine these approaches into a hybrid was initiated by officials at the University of Miami as key success indices (KSI). The university sought to identify indices that could be monitored on a monthly basis rather than annually. While CSFs usually are limited to a relatively small number of factors and include "soft data" as well as external data, the KSI model is built on a much longer list (more than 120 items) but is limited to numeric data from institutional sources. Most applications of CSFs and KPIs rely heavily on computerized decision-support systems or executive decision assessment programs. The senior administrators at the University of Miami, however, were not interested in using a computer system to access the key success indices. Rather, they chose to use monthly oral reports to focus on those key areas or issues that are either crucial at that time of the year or that have been flagged as different from the previous year. The purpose of the monthly report is to monitor the health of the institution rather than to directly support strategic planning efforts.
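
The flagging step in such a monthly review reduces to a simple comparison of each index against its value for the same period a year earlier. The sketch below illustrates this logic; the indicator names, the ten-percent threshold, and the figures are hypothetical and are not drawn from the University of Miami model.

    # A minimal sketch of year-over-year flagging. All names, values, and
    # the 10 percent threshold are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        current: float     # value for this month
        prior_year: float  # value for the same month last year

    def flag_indicators(indicators, threshold=0.10):
        """Return (name, change) pairs whose year-over-year change exceeds the threshold."""
        flagged = []
        for ind in indicators:
            if ind.prior_year == 0:
                continue  # avoid division by zero; handle such items separately
            change = (ind.current - ind.prior_year) / ind.prior_year
            if abs(change) > threshold:
                flagged.append((ind.name, change))
        return flagged

    report = [
        Indicator("Applications received", 4200, 3650),
        Indicator("Endowment market value ($M)", 151.0, 148.5),
        Indicator("Cash receipts ($M)", 12.4, 14.9),
    ]
    for name, change in flag_indicators(report):
        print(f"{name}: {change:+.1%} versus the same month last year")

Only the flagged items need be raised in the oral report, which is what allows a list of more than 120 indices to be reviewed in a single meeting.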

Key to all of these approaches is the development of a consistent format in which the data and analyses are presented. At the outset, organizational officials who are seen as the key users of these data should be interviewed to gather suggestions as to which indices to include in the system and how best to report these indicators. A list of topics and a presentation format should be agreed upon, but some flexibility should be afforded during the initial iterations to add to or refine the indices as new topics are suggested by the review of these data. Year-to-date figures often provide the most useful basis for comparisons, but monthly figures, comparisons with budgeted amounts, and year-end totals may also be important in monitoring certain activities. Targeted estimates, projections, and extrapolations of data to identify trends may also be appropriate, as sketched below.
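
The calculations behind these comparisons are straightforward. The following sketch shows a year-to-date comparison against budget and a naive straight-line projection to year end; the figures and the twelve-month fiscal year are illustrative assumptions.

    # A minimal sketch of year-to-date comparison and trend extrapolation.
    # All dollar figures are hypothetical.
    def year_to_date(monthly_values):
        """Sum of the months reported so far."""
        return sum(monthly_values)

    def budget_variance(ytd_actual, ytd_budget):
        """Positive means actuals are running ahead of budget."""
        return ytd_actual - ytd_budget

    def project_year_end(monthly_values, months_in_year=12):
        """Extrapolate the average monthly run rate to a full year."""
        return year_to_date(monthly_values) / len(monthly_values) * months_in_year

    actuals = [1.10, 1.05, 1.22, 1.18]  # $M per month, four months reported
    budget_ytd = 4.40                   # $M budgeted for the same four months

    print(f"Year-to-date variance: {budget_variance(year_to_date(actuals), budget_ytd):+.2f} $M")
    print(f"Projected year-end total: {project_year_end(actuals):.2f} $M")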

Time spent up-front in the design of the IMS report format to facilitate data entry and analysis and to ensure quick turnaround in the access/printing of the report is effort well invested. The data are likely to be drawn from a number of sources within the organization, and some manipulation of these data may be necessary to ensure that comparisons will be valid and consistent. A pre-printed data-collection form may assist in the gathering of these data. This form should include the data reported by the unit in several previous time periods and provide space for comments regarding any notable changes in the latest data entries when compared to the entries for the previous time periods. When the changes are deemed to be significant, representatives from the areas of the organization responsible for the activities may be called upon to make a further presentation to explain these data.
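
Such a form can be generated automatically from the prior periods already on file. The sketch below prints one line per indicator with its history, a blank for the latest entry, and space for comments; the unit, indicators, figures, and period labels are hypothetical.

    # A minimal sketch of a pre-printed data-collection form. The unit name,
    # indicators, figures, and period labels are illustrative assumptions.
    def print_collection_form(unit, items, periods):
        print(f"Data-collection form: {unit}")
        print(f"{'Indicator':30}" + "".join(f"{p:>10}" for p in periods) + f"{'Latest':>10}")
        for name, history in items:
            print(f"{name:30}" + "".join(f"{v:>10}" for v in history) + f"{'______':>10}")
            print(f"{'':32}Comments on notable changes: " + "_" * 28)

    print_collection_form(
        "Admissions Office",
        [("Applications received", [3650, 3480, 3710]),
         ("Deposits paid", [920, 880, 905])],
        ["FY-2", "FY-1", "FY"],
    )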

Periodic meetings (e.g., monthly) to discuss the key indicators are an important feature of this approach. The data should be provided a few days before these meetings, together with an executive summary of the key issues to be discussed. The participants can then focus on those items most pertinent to their areas of responsibility and should be prepared to comment on and discuss problems and trends that are evident from their perspectives. It is likely that some of the data which were deemed appropriate for inclusion as key indicators do not exist or are not readily available in the format desired. Where monthly data have not been collected in the past, it may be necessary to reconstruct such data, or at least to start collecting them, in order to have the necessary data points from which to draw comparisons. When data are not consistent from one year to the next (for example, because of a change in data categories), it may be necessary to recompute (or "crosswalk") the prior year's data to make them comparable, as illustrated below. In some cases, the units that provide the data may feel that the presentation format should be modified, and some negotiation may be necessary to arrive at an agreed-upon format which both satisfies users' needs and addresses the concerns of the units supplying the data.
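
A crosswalk is simply a mapping from the old category scheme onto the new one, applied to the prior year's totals. The sketch below illustrates the recomputation; the category names, the mapping, and the amounts are hypothetical.

    # A minimal sketch of a category crosswalk. The old-to-new mapping and
    # the amounts (in $ thousands) are illustrative assumptions.
    OLD_TO_NEW = {
        "Instruction - Day": "Instruction",
        "Instruction - Evening": "Instruction",
        "Library": "Academic Support",
        "Computing": "Academic Support",
    }

    def crosswalk(prior_year_totals):
        """Re-aggregate prior-year figures into the new category scheme."""
        recomputed = {}
        for old_category, amount in prior_year_totals.items():
            new_category = OLD_TO_NEW[old_category]
            recomputed[new_category] = recomputed.get(new_category, 0) + amount
        return recomputed

    prior = {"Instruction - Day": 8200, "Instruction - Evening": 1400,
             "Library": 2100, "Computing": 1700}
    print(crosswalk(prior))  # {'Instruction': 9600, 'Academic Support': 3800}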

Two or more units within an organization may track the same data and may provide different analyses and even conflicting information based on these data. In many instances, these different perspectives are useful, provided that the assumptions on which the data analyses are based are clearly identified and understood. In some cases, it may be necessary to agree upon one data set over another to avoid misunderstanding and confusion among the information users.

Particular effort should be made from the outset to maintain the accuracy of the data and thereby to ensure the credibility of the reports from the IMS. Presentations of the data in the meetings should be focused and to the point to maintain everyone's attention. The main purpose of this approach is to raise questions at an early stage, before problems get "out of control," and to alert senior management to significant trends that need to be factored into future decision-making. Major changes in an index should prompt questions and may lead to changes in policies or procedures.

The IMS encourages officials to focus on the same information and helps to educate senior managers concerning areas of the organization outside their direct responsibility. Those who attend the periodic meetings should become more attuned to areas of concern for the overall organization. Offices providing data should become more aware of their accountability and should develop a sense of participation by providing not only data but also answers to important questions in areas for which they have responsibility. As with many of these management techniques, the process of developing the system may provide valuable contributions to the overall well-being of the organization.

Endnotes

[1] Samuel Solomon, as cited in Martin Lasden, "Computer-Aided Decision-Making," Computer Decisions, 14:11 (November 1982), p. 157.

[2] Ralph H. Sprague, as cited in Lasden, op. cit., p. 157.

[3] Alan Walter Steiss, Public Budgeting and Management (Lexington, Mass.: Lexington Books--D.C. Heath Co., 1972), p. 79.

[4] Steven L. Alter, Decision Support Systems: Current Practice and Continuing Challenges (Reading, Mass.: Addison-Wesley, 1980).

[5] Lasden, op. cit., pp. 160, 162, 168.

[6] Steiss, op. cit., pp. 79-80.

[7] Ralph Sprague and Eric Carlson, Building Effective Decision Support Systems (Englewood Cliffs, N.J.: Prentice-Hall, 1982).

[8] Alter, op. cit.

[9] Jeffrey Moore and Michael Chang, in Building Decision Support Systems, ed. John L. Bennett (Reading, Mass.: Addison-Wesley, 1983).

[10] J.F. Rockart, "Chief Executives Define Their Own Data Needs," Harvard Business Review (March-April 1979).

[11] A.C. Boynton and R.W. Zmud, "An Assessment of Critical Success Factors," Sloan Management Review, 25:4 (Summer 1984).

[12] J.K. Leidecker and A.V. Bruno, "Identifying and Using Critical Success Factors," Long Range Planning, 17:1 (1984).

[13] Peat Marwick, critical success factors model for higher education (1988).

[14] M.G. Dolence, Planning for Higher Education (1989).
