
Sherif El-Tawil, PhD, PE

Professor and Associate Chair
Department of Civil & Environmental Engineering | The University of Michigan

Virtual Reality in Finite Element Analysis

Advances in Virtual Reality (VR) software and hardware have enabled a leap forward in the visualization of FE data. The ease of navigation in the VR environment offers faster, more intuitive interpretation of FE data and allows even non-specialist users to understand and appreciate the results. The objective of this project is to develop new means of employing VR for the interpretation of large finite element data sets. Click here for software.

The Finite Element (FE) method is a hugely popular computational simulation technique. Since its emergence in the late 1940s as a method for structural analysis, it has continually developed into a sophisticated, generic simulation method with applications in many engineering and scientific disciplines. As with any numerical technique, however, FE simulations produce amounts of data proportional to the size of the numerical model. For time-dependent simulations or iterative analyses that involve multiple simulation steps, the amount of information produced can be overwhelming.

Visualization techniques have also evolved significantly in the past several decades and have become indispensable for interpreting finite element output data. The earliest FE-related visualization tools channeled FE data files into plotting programs, which plotted meshes along with various other types of spatial information. The next generation of FE visualization tools incorporated command-based graphical user interfaces (GUIs), in which a user could type commands to manipulate a graphical representation of the data. Continual refinement of GUIs has led to present-day 3D post-processing tools that rely on menu-driven, point-and-click functionality. However, most currently available FE post-processors are still cumbersome to operate because of their limited capability to navigate and interact with the model.

A particularly effective way to develop cost-effective, open-source training tools is to use readily available open standards and non-proprietary software such as the Virtual Reality Modeling Language (VRML). VRML is the first international standard of its kind for the description of 3D scene data. Because VRML was developed as an Internet format, VRML viewers are freely available as plugins for popular Internet browsers, which means VRML can serve as the basis for a true cross-platform standard. The capabilities of VRML can be enhanced through Java Applets (JA), which control the virtual environment through an External Authoring Interface (EAI).
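
For illustration, the snippet below is a minimal sketch of how an embedded Java Applet might attach to a VRML scene through the EAI. It assumes the classic EAI Java packages (vrml.external) shipped with common VRML browser plugins; the DEF name "PART_1" is a hypothetical placeholder and not taken from FEMvrml.

```java
import java.applet.Applet;

import vrml.external.Browser;
import vrml.external.Node;

// Minimal sketch: a Java Applet embedded in the same HTML page as the
// VRML plugin attaches to the 3D scene through the EAI. The DEF name
// "PART_1" is a placeholder for a node defined in the generated .wrl file.
public class SceneLink extends Applet {
    private Browser browser;  // handle to the VRML world exposed by the plugin
    private Node    part;     // a scene-graph node looked up by its DEF name

    public void start() {
        // Locate the VRML browser plugin running on the same page as the applet.
        browser = Browser.getBrowser(this);
        // Retrieve a scene-graph node by the name it was given with DEF.
        part = browser.getNode("PART_1");
    }
}
```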

The objective of this research effort is to develop a VRML-based post-processor (FEMvrml), enhanced with a JA through an EAI, to aid in the interpretation of FE results for structural analysis applications. The proposed system is suitable for use in training or education and is similar to other existing systems in that VRML is used to create the visualization environment. It differs from other systems in its ability to handle part-based geometry and in its strategies for dealing with elements that are deleted in inelastic simulations. A JA-based Play Tool Box is introduced to increase interactivity and functionality.

Structure of Developed Software: FEMvrml runs within a container written in Visual C++ and consists of two parts. As shown in Figure 1, the first part is designated "Data Processing". Its role is to read in FE simulation results and translate them into a suitable VRML geometric format. A major part of a typical VRML file is a hierarchical 3D description of a scene, called the scene graph. Its elements are nodes, of which more than 50 types are defined; each node contains a set of fields that describe predefined data. The second part of the FEMvrml container is designated "Shell Display" in Figure 1. Its primary role is to load and display the web-compatible environment in which the VRML browser and JA are embedded. The file generated by the conversion code can also be loaded into any standard Internet browser. Events may be sent from the JA to the world and received from it, and can therefore be used to change node attributes. Additionally, Java callback methods may be registered to run specific code when an event occurs in the VRML model. The EAI provides a flexible approach to linking VRML with Java.
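
As an illustration of this event mechanism, the sketch below sends an event from the JA into the world and registers a callback for an event coming back out. It assumes the classic EAI field classes; the node and field names shown (e.g., "PART_1_SENSOR") are placeholders rather than FEMvrml's actual names.

```java
import vrml.external.Browser;
import vrml.external.Node;
import vrml.external.field.EventInSFBool;
import vrml.external.field.EventOut;
import vrml.external.field.EventOutObserver;
import vrml.external.field.EventOutSFTime;

// Sketch of the two EAI directions described above: the applet sends an
// event into the scene (here, toggling a TouchSensor), and registers a
// callback that runs when the scene sends an event back out.
public class EventBridge implements EventOutObserver {

    public void wire(Browser browser) {
        // EventIn direction: applet -> world.
        Node sensor = browser.getNode("PART_1_SENSOR");   // DEF name (assumed)
        EventInSFBool enabled =
            (EventInSFBool) sensor.getEventIn("set_enabled");
        enabled.setValue(false);                           // switch the sensor off

        // EventOut direction: world -> applet. 'advise' registers this
        // object as the callback target for touchTime events.
        EventOutSFTime touchTime =
            (EventOutSFTime) sensor.getEventOut("touchTime");
        touchTime.advise(this, null);
    }

    // Called by the EAI whenever the observed eventOut fires in the scene.
    public void callback(EventOut event, double timestamp, Object userData) {
        double when = ((EventOutSFTime) event).getValue();
        System.out.println("Part clicked at scene time " + when);
    }
}
```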

Figure 1: User interface in shell display.

Conversion of Simulation Results into VRML: Input and output data for the general-purpose simulation code LS-DYNA are synthesized into the required VRML format. Input information is read from the input file, while output data are obtained from the displacement history file and message files produced by the software.
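
The actual conversion code is part of the Visual C++ container; the Java sketch below merely illustrates the basic idea: nodal coordinates become a VRML Coordinate node and shell element connectivity becomes an IndexedFaceSet, written out as plain VRML text. The class and DEF names are hypothetical.

```java
// Illustrative sketch only (the real converter lives in the Visual C++
// container): FE nodal coordinates are emitted as a Coordinate node and
// element connectivity as an IndexedFaceSet in VRML's text syntax.
public class VrmlWriter {

    // xyz: one {x, y, z} triple per FE node (0-based indexing assumed);
    // quads: four node indices per shell element.
    public static String facePart(double[][] xyz, int[][] quads) {
        StringBuilder w = new StringBuilder();
        w.append("Shape {\n  geometry IndexedFaceSet {\n");

        // Coordinate node holding the nodal point coordinates.
        w.append("    coord DEF PART_COORD Coordinate { point [\n");
        for (double[] p : xyz) {
            w.append("      ").append(p[0]).append(' ')
             .append(p[1]).append(' ').append(p[2]).append(",\n");
        }
        w.append("    ] }\n");

        // coordIndex lists the node indices of each element, -1 terminated.
        w.append("    coordIndex [\n");
        for (int[] q : quads) {
            w.append("      ");
            for (int n : q) {
                w.append(n).append(", ");
            }
            w.append("-1,\n");
        }
        w.append("    ]\n  }\n}\n");
        return w.toString();
    }
}
```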

Controlling the Animation: The overall schematic structure of the nodes employed in FEMvrml, and how the EAI transmits JA events back and forth to the appropriate nodes, is shown in Figure 2. The JA controls animation and visualization and comprises three parts: the Play Tool Box, the Interactive Display Box, and the Sensor Switch. The Play Tool Box controls the animation process and permits users to start, stop, pause, reverse, slow down, or speed up the animation. The Interactive Display Box controls the visibility of each part. Individual parts can be made visible or invisible to aid in navigation and in understanding the simulation results. Parts of interest can be selected from a menu in the JA, or they can be clicked on - through the TouchSensor - in the VRML browser. However, parts that house deleted elements cannot be handled in this manner. A reset button is available to make all parts visible again in case the user becomes disoriented. Parts sensed by the TouchSensor can create problems when navigating with a 2D mouse; for example, an inadvertent mouse click can remove a part without the user intending to do so. To alleviate this issue, the user can use the Sensor Switch to turn the touch sensors on or off.
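
A hedged sketch of how such controls might drive the scene through the EAI is given below: a TimeSensor's fields pause the animation or change its speed, and a Switch node's whichChoice field hides or shows an individual part. The DEF names ("ANIM_CLOCK", "PART_12_SWITCH") are placeholders, not FEMvrml's actual node names.

```java
import vrml.external.Browser;
import vrml.external.Node;
import vrml.external.field.EventInSFBool;
import vrml.external.field.EventInSFInt32;
import vrml.external.field.EventInSFTime;

// Hedged sketch of the Play Tool Box / Interactive Display Box idea:
// the animation is driven by a TimeSensor, and each part sits under a
// Switch node so it can be hidden or shown individually.
public class PlayToolBox {
    private final Browser browser;

    public PlayToolBox(Browser browser) { this.browser = browser; }

    // Pause or resume the animation by enabling/disabling the TimeSensor.
    public void setPlaying(boolean playing) {
        Node clock = browser.getNode("ANIM_CLOCK");
        ((EventInSFBool) clock.getEventIn("set_enabled")).setValue(playing);
    }

    // Slow down or speed up: a longer cycleInterval means slower playback.
    public void setCycleSeconds(double seconds) {
        Node clock = browser.getNode("ANIM_CLOCK");
        ((EventInSFTime) clock.getEventIn("set_cycleInterval")).setValue(seconds);
    }

    // Show (choice 0) or hide (choice -1) a part wrapped in a Switch node,
    // e.g. setPartVisible("PART_12_SWITCH", false).
    public void setPartVisible(String switchDefName, boolean visible) {
        Node sw = browser.getNode(switchDefName);
        ((EventInSFInt32) sw.getEventIn("set_whichChoice")).setValue(visible ? 0 : -1);
    }
}
```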

Figure 2: Implementation with VRML nodes.

Applications: An example is provided to demonstrate the capabilities of FEMvrml. Figure 3a shows the finite element model, which is comprised of 42,425 nodal points, 39,339 elements, and 58 parts. As a result of extensive element deletion, an additional 242 parts are created to represent elements that disappear at specific times. Five time steps are selected to represent the animation. Data processing time for converting the finite element results into VRML is approximately 8.0 CPU minutes, while loading time for display is less than 5.0 seconds. Steps from the collapse simulation are displayed in Figures 3b through 3d. Figure 4 shows a different viewpoint in which the user is positioned at ground level near a corner of the building.

Figure 3: 8-story building collapse example.

Figure 4: Zoom-in near ground of collapsed building.