RESEARCH INTERESTS

Building Information Modeling

I would like to continue the work started in my dissertation, which concerns the representation of architecturally oriented "protean elements" (e.g., walls, floors, columns, and rooms), their characteristics, and some aspects of their behavior. Similar representations are used in what is now known as Building Information Modeling (BIM). The elements developed in my dissertation are intended to closely follow the mental representations used by architects, be easily changed (hence the term "protean"), de-emphasize "model correctness" requirements, and be appropriate both for visualization and for performing various analyses (structural, thermal, lighting, etc.).

A scheme for representing such elements is explored in my dissertation, and a prototype implementation was constructed to investigate certain key issues. In recent years, BIM software has taken the concept much farther. Still, several sub-topics deserve further exploration:


Conceptual-Level vs. Product-Level Modeling

There is a widely recognized need to include product-level data in a Building Information Model. It is necessary because an architectural design cannot be fully specified without going to such a level of detail, and because data about off-the-shelf products is necessary for many sorts of analyses. Inclusion of product-level data facilitates more complete specification of designs, with better coordination between plans/models, details, and many analyses.

However, there is also a need for a model to include "conceptual-level" elements that correspond to conceptual architectural elements like walls, rooms, and floors, but do not correspond one-to-one with components that can be bought "off-the-shelf." Such elements are needed because of the important role they play in the mental representation of designs as they are developed, and because no architect really wants to explicitly specify every stud, joist hanger, brick, and bolt in a building. They are also needed because they embody certain functional roles that are relevant for certain analyses. For instance, a thermal analysis relies a great deal on properties associated with walls and rooms, even if the properties of a wall are ultimately based on the particular components used.

Resolving this paradox is complex. Conceptual- and product-level elements frequently occupy the same spatial coordinates. There are questions about how to produce product-level components of a conceptual element in a semi-automatic manner. There are questions about how product-level components should respond to conceptual-level changes, and how conceptual-level elements should respond to product-level changes (which is far more complicated).
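The questions above can be made concrete with a small sketch. The following toy Python classes are entirely my own invention for illustration (the `Wall` and `Stud` types and the 406 mm spacing are assumptions, not any existing BIM schema); they show one way a conceptual-level wall might derive its product-level studs semi-automatically:

```python
from dataclasses import dataclass

@dataclass
class Stud:
    """A product-level component: one stud placed along the wall."""
    offset_mm: float  # distance from the wall's start point

@dataclass
class Wall:
    """A conceptual-level element; its studs are derived, not hand-placed."""
    length_mm: float
    stud_spacing_mm: float = 406.0  # roughly 16 in. on center

    def generate_studs(self) -> list[Stud]:
        # Semi-automatic derivation: place a stud at every spacing
        # interval, plus one closing stud at the far end of the wall.
        offsets = []
        pos = 0.0
        while pos < self.length_mm:
            offsets.append(pos)
            pos += self.stud_spacing_mm
        offsets.append(self.length_mm)
        return [Stud(o) for o in offsets]

wall = Wall(length_mm=2400.0)
studs = wall.generate_studs()
print(len(studs))
```

Under this arrangement, a conceptual-level change (say, lengthening the wall) simply regenerates the studs; the harder direction, propagating a hand-edited stud layout back up to the conceptual wall, has no comparably simple answer.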

Product modeling has been driven, in part, by manufacturers of building products like steel members or precast concrete elements. Such products are relatively discrete and self-contained. They can be added to a design individually or in an array without much difficulty. However, this is not true of all building components. Few architects want to explicitly specify the placement of each stud or brick in a design; they deal with elements like walls and leave the exact placement of the components to the contractor, carpenters, and masons. So to some extent, manufacturers of these materials are excluded from having their products included in building models. I believe that these manufacturers have an interest in seeing advances in the area of conceptual-level modeling, and I hope to approach them regarding funding or joint research projects.

Mental Representation and Visual Imagery

The capacity of verbal short-term memory is reasonably well established. We can remember about 7 plus or minus 2 words, digits, or "chunks"; or about 2 seconds' worth of speech. But what about visual short-term memory, which we use when imagining a scene or performing other visually oriented tasks? What kind of information is stored in this short-term visual memory, how is it organized, and how much of it can be stored there at any given moment?

Kosslyn and Pomerantz (1977) argued that mental images were in some sense "visual." They argued that mental images were not stored as something like a photo, but rather that mental images were composed of partially processed visual "chunks," like the arms, legs, head, and torso of a person. This fits well with the findings of Verstijnen, Hennessy, van Leeuwen, Hamel, and Goldschmidt (1998), indicating that mental images cannot easily be reconfigured (to reflect emergent shapes, or to depict a different configuration of the same elements). In a similar vein, Biederman (1987) found evidence that people recognize forms based on configurations of simple extruded (or tapered) forms, which he called "geons." Similarly, inspired by DeGroot (1965; 1966) and Chase and Simon (1973), Akin (1986) found evidence that imagery in architects includes "visual chunks"--like steps, corners, and walls--as unitary elements in their short-term memory.

Yet, this is likely not all there is to it. Such research does not really address relations between components of an image, like relative positions, points of attachment, symmetry, etc., or higher-level "chunks" like entire people or rooms, or overall building shapes. Do these count as "visual chunks"? What about the capacity of visual short-term memory? How many "visual chunks" can we remember or work with at one time? Do relations between chunks somehow figure into this capacity? How is visual information that does not easily fit into chunks stored? I would be interested in collaborating with other design researchers or psychologists to investigate such topics.

Providing Appropriate Tools

As an architecture student, I often heard--both from instructors and from students--that "architects don't create buildings, architects create drawings." I believe this view has been implicit in the creation of most computer aids for design, and I believe that those aids have suffered as a result--as have the architects who have used such software.

Until recently, computer aids for design have been based on a premise that architecture, engineering disciplines, and other professions were similar in that all involved making drawings. As a result, CAD software vendors provided tools for making drawings--out of points, lines, arcs, splines, and so forth.

But this premise is wrong. The drawings (and models) that architects create are just means to an end, and that end is the creation of architecture. The cognitive activities of architects, as revealed in protocols from architects at work and from books on architectural design, deal for the most part with architectural matters, not graphical ones. Architects might point to drawings, but more often than not, they're thinking and talking about architecture when they do so.

This resulted in what psychologist and user-interface expert Donald Norman would probably call "getting the mappings wrong": the mappings between intentions and possible actions, actions and actual effects, actual effects and perceived effects, and perceived effects and expectations. In traditional CAD software, user intentions, like changing the width of a door, must be mapped into commands for moving, trimming, stretching, and/or otherwise modifying graphical entities. In other cases, the software provides information in a form that is difficult for an architect to process. A triangulated mesh, for example, might fit the form of terrain more accurately than a contour model (of the sort that can be made from chipboard or using Form*Z), but it is not as informative for an architect. It is difficult to look at it and gauge the actual elevation of the ground surface, or the direction of water runoff.

But the problems with software tools are procedural as well as representational. Software is often better suited for depicting finished designs than for helping in the development of unfinished ones. It is usually better suited for work by a single user than for collaborative work by numerous individuals. Neither multiple people trying to manipulate the same information (e.g., several people trying to edit the same model), nor multiple people trying to manipulate different information (e.g., one person trying to extract energy-related data from a construction drawing created by another person), is supported very well.

The tools are getting better. Commercial BIM software based on building components is starting to supplant CAD software based on lines and other graphic entities. Sharing of information is also starting to be addressed. But there is still far to go. An understanding of cognition, the methods and social dynamics of design, and the data needed for architectural applications can lead to the development of more appropriate tools for design.

Representations for Genetic Algorithms

I remain skeptical of the ability of generative systems (rule-based systems, case-based systems, genetic algorithms, etc.) to design buildings with a quality comparable to what human designers can produce, except within very specific and well-constrained situations. Nonetheless, it is difficult to avoid being fascinated by genetic algorithms. There is just something "cool" about systems that solve problems by producing myriads of random potential solutions, and preserving, "mutating," or "breeding" the better ones until good solutions emerge.

Yet, using such an approach for architectural design presupposes some sort of representation of architecture that can be manipulated in a genetic manner. "Genes" must somehow map to architectural data, or must map to scripts that somehow specify how to produce architectural data. Determining such a mapping is fundamental to any application of genetic algorithms. Yet it is a difficult problem. If we mapped genes to AutoCAD entities and their attributes, for instance, we would probably not produce anything remotely resembling an architectural drawing or model. We would likely produce a mess of random graphic (or solid) elements, which would not even be recognizable (or evaluatable) as architecture. Even if we mapped genes to elements of architectural form, like walls, columns, and floors, it would be difficult to produce anything recognizable or evaluatable as architecture.

Looking at how biological genes "represent" biological form and ontology might be a source of valuable inspiration. Certain biological mutations, like the appearance of an extra digit or vertebra, or the growth of legs where there would normally be antennae, might be analogous to certain desirable architectural mutations, like the appearance of an extra window or column, or the growth of a conference room where there would normally be an office.

One aspect of genetics and developmental biology that might prove to be a source of inspiration is the role of protein "factors" in a lifeform's development. I have only just begun to learn about factors, but if I understand them correctly, they help a developing lifeform develop the right parts in the right places by acting as a sort of coordinate system that is used in the expression of genes. As a lifeform develops, its genes trigger the production of certain proteins ("factors"), which form gradients of protein concentrations in the development environment. The concentration of these factors in a cell's area can affect which genes get "expressed" in that cell, e.g., whether it follows genetic instructions describing how to develop into a liver cell or a skin cell.
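As a rough illustration of that mechanism (the gene names, decay constant, and threshold below are invented for the sketch, not taken from biology), one can model a row of cells reading a single factor gradient and expressing different "genes" depending on the local concentration:

```python
import math

# Toy morphogen gradient: a "factor" diffuses from a source at position 0,
# its concentration decaying exponentially along a row of cells.  Each cell
# expresses one of two (invented) genes depending on local concentration.

def factor_concentration(x, decay=0.3):
    return math.exp(-decay * x)

def expressed_gene(x, threshold=0.5):
    # High concentration near the source -> "conference_room" gene;
    # low concentration farther away -> "office" gene (echoing the
    # architectural analogy above).
    return "conference_room" if factor_concentration(x) >= threshold else "office"

row = [expressed_gene(x) for x in range(10)]
print(row)
```

The same genome thus yields different outcomes in different locations, which is exactly the property an architectural genetic encoding would need: a single set of "genes" producing conference rooms near the core and offices at the perimeter.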

It would be interesting to conduct an experiment to see if a genetic algorithm could generate architectural form in some manner analogous to the way biological genes trigger biological form.


Last update: May 3, 2009
Scott E. Johnson (sven@umich.edu)