Continuous Simulation
Stanislaw Raczynski , in Encyclopedia of Information Systems, 2003
I. Introduction
Roughly speaking, continuous simulation is one of the two main fields of computer simulation and modeling, the other being discrete event simulation. Continuous models include those of concentrated parameter systems and distributed parameter systems. The former group includes models for which the power of the set of all possible states (or, more precisely, the number of classes of equivalence of inputs) is equal to the power of the set of real numbers, and the latter refers to systems for which that set is greater than the set of reals. These classes of dynamic systems are described in more detail in the next section. The most common mathematical tools for continuous modeling and simulation are ordinary differential equations (ODEs) and partial differential equations (PDEs).
First of all, we must remember that in the digital computer nothing is continuous, so the process of using continuous simulation with this hardware is an illusion. Historically, the first (and only) devices that did realize continuous simulation were the analog computers. Those machines are able to simulate truly continuous and parallel processes. The development of digital machines made it necessary to look for new numerical methods and their implementations in order to get good approximations for the solution of both ordinary and partial differential equations. This aim has been achieved to some extent, so we have access to quite good software tools for continuous simulation.
In the present article some of the main algorithms are discussed, like the methods of Euler, Runge–Kutta, multistep, predictor-corrector, Richardson extrapolation, midpoint for the ODEs, and the main finite difference and finite element methods for the PDEs.
To illustrate the very elemental reason why continuous simulation on a digital computer is only an imperfect approximation of the real system dynamics, consider a simple model of an integrator. This is a continuous device that receives an input signal and provides as output the integral of the input. The differential equation that describes the device is
(1) dx(t)/dt = u(t)
where u is the input and x is the output. The obvious and simplest algorithm that can be applied on a digital computer is to discretize the time variable and advance the time from 0 to the desired final time in small intervals h. The iterative formula can be
(2) x(t + h) = x(t) + hu(t)
given the initial condition x(0). This is a simple "rectangle rule" that approximates the area below the curve u(t) by a series of rectangles. The result always carries some error. From the mathematical point of view this algorithm is quite good for regular input signals, because the error tends to zero as h approaches zero, so we can obtain any required accuracy.
Suppose now that our task is to simulate the integrator over the time interval [0,1] with u = const = 1. We want to implement the above algorithm on a computer on which real numbers are stored to a resolution of eight significant digits. To achieve high accuracy we execute the corresponding program of Eq. (2) several times, with h approaching zero. One might expect that the error will also approach zero. Unfortunately, this is not the case. Observe that if h < 0.000000001, the result of the sum on the right-hand side of Eq. (2) is equal to x(t) instead of x(t) + hu(t), because of the arithmetic resolution of the computer. So the error does not tend to zero as h becomes small, and the final result may be zero instead of one (the integral of 1 over [0,1]). This example is rather primitive, but it shows the important fact that we cannot construct a series of digital simulations of a continuous problem that tends to the exact solution, at least theoretically. Of course, we have a huge number of numerical methods that guarantee sufficiently small errors and are used with good results, but we must be careful with any numerical algorithm and be aware of its requirements on the simulated signals to avoid serious methodological errors. A simple fact that we must always take into account is that real numbers do not exist in a digital computer; they are always represented by rough approximations.
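This effect is easy to reproduce. The sketch below is a hypothetical illustration (not from the article): it emulates a machine that keeps eight significant digits by rounding every stored result, then applies the rectangle rule of Eq. (2) to u = 1 over [0, 1].

```python
def fl(v):
    """Round v to eight significant digits, emulating the article's machine."""
    return float(f"{v:.8g}")

def integrate_rect(h, t_end=1.0, u=1.0):
    """Rectangle rule x(t+h) = x(t) + h*u(t), every stored result rounded by fl()."""
    x = 0.0
    for _ in range(round(t_end / h)):
        x = fl(x + fl(h * u))
    return x

print(integrate_rect(1e-3))   # 1.0: the rule works well for moderate h
# But once x(t) is of order 1, an increment below the eighth significant
# digit is rounded away entirely, so the update becomes a no-op:
print(fl(1.0 + 1e-9) == 1.0)  # True: x + h*u collapses back to x
```

Shrinking h below the machine's resolution therefore makes each increment vanish rather than improving accuracy, which is exactly the failure mode described above.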
URL: https://www.sciencedirect.com/science/article/pii/B0122272404000186
Simulation Analysis
Paul J. Fortier , Howard E. Michel , in Computer Systems Performance Evaluation and Prediction, 2003
8.3.2 Continuous modeling
Continuous simulations deal with the modeling of physical events (processes, behaviors, conditions) that can be described by some set of continuously changing dependent variables. These in turn are incorporated into differential, or difference, equations that describe the physical process. For example, we may wish to determine the rate of change of speed of a projectile shot from a catapult (see Figure 8.1) and its distance, R, from the catapult. Neglecting wind resistance, the equations for this are as follows. The velocity, v, at any time is found as:
Figure 8.1. Projectile motion.
(8.1) vy(t) = v0 sin θ − gt
and the distance in the x direction is:
(8.2) R(t) = (v0 cos θ)t
These quantities can be formulated into equations that can be modeled in a continuous language to determine their state at any period of time t.
Using these state equations, we can build state-based change simulations that provide us with the means to trigger on certain occurrences. For example, with these equations we may wish to trigger an event (shoot back when vy is equal to 0). That is, when the projectile is no longer climbing and has reached its maximum height, fire back. In this event the equation may look like this:
(8.3) fire back when vy(t) = v0 sin θ − gt = 0, i.e., at t = (v0 sin θ)/g
Another example of this type of triggering is shown in Figure 8.2. In this example, two continuous formulas are being computed over time; when their results are equivalent (crossover event), schedule some other event to occur. This type of operation allows us to trigger new computations or adjust values of present ones based on the relationship of continuous equations with each other.
Figure 8.2. Continuous variable plot.
Using combinations of self-triggers and comparative triggers (less than, greater than, equal to, etc.) we can construct ever more involved simulations of complex systems. The main job of a simulator in this type of simulation model is to develop a set of equations that define the dynamics of the system under study and determine how they are to interact.
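The trigger mechanism above can be sketched in a few lines. The following is a hypothetical illustration (the parameter values are not from the text): Euler integration of projectile motion that watches for the instant the vertical velocity crosses zero and records the apex, where a "fire back" event would be scheduled.

```python
import math

def simulate(v0=50.0, angle_deg=60.0, g=9.81, h=1e-3):
    """Euler-integrate projectile motion; trigger an event when vy crosses 0."""
    vx = v0 * math.cos(math.radians(angle_deg))  # horizontal velocity (constant)
    vy = v0 * math.sin(math.radians(angle_deg))  # vertical velocity
    x = y = t = 0.0
    apex = None
    while y >= 0.0:                       # run until the projectile lands
        vy_next = vy - g * h              # d(vy)/dt = -g
        if apex is None and vy > 0.0 >= vy_next:
            apex = (t + h, x)             # crossover detected: schedule "fire back"
        x, y = x + vx * h, y + vy * h     # dx/dt = vx, dy/dt = vy
        vy, t = vy_next, t + h
    return apex

t_apex, x_apex = simulate()               # analytically t_apex = v0*sin(θ)/g ≈ 4.41 s
```

The comparison `vy > 0.0 >= vy_next` is the crossover test of Figure 8.2 in miniature: the event fires on the step where one continuously computed quantity passes another (here, zero).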
URL: https://www.sciencedirect.com/science/article/pii/B9781555582609500084
Simulation Languages
Edward J. Dudewicz , Zaven A. Karian , in Encyclopedia of Information Systems, 2003
III.A. A Continuous Modeling Environment—CSMP
Since the mathematical foundation for simulations of continuous systems is based on the simultaneous solution of differential equations, continuous simulation languages are designed to facilitate this process. If the system of differential equations can be solved through analytical means, then it is unnecessary to develop a simulation. Consequently, continuous simulation languages are designed to provide numerical solutions to differential equations, making them useful when analytic solutions are not available but also making them susceptible to errors due to limitations of computational precision. To show the rudiments of how a continuous simulation language functions, we choose a very simple problem, with a readily available analytic solution, and use a particular language (CSMP) to simulate the situation described by the problem.
Suppose that from a height of 1400 feet, an object is thrown vertically up with a velocity of 250 ft/sec and we wish to track the position (relative to ground level) of the object for 20 seconds. Mathematically, if we let a(t), v(t), and p(t) be the acceleration, velocity, and position of the object at time t, respectively, then
(1) dv/dt = a(t) = −32, dp/dt = v(t)
Moreover, we have the boundary conditions v(0) = 250 ft/sec and p(0) = 1400 ft. The solution of dv/dt = −32 with condition v(0) = 250 gives v(t) = −32t + 250, and the solution of dp/dt = v(t) with condition p(0) = 1400 gives p(t) = −16t² + 250t + 1400. A plot of v(t) and p(t) (see Fig. 1, where the parabolic curve represents p(t)) gives us a detailed view of the velocity and position of the moving object at various times.
Figure 1. Velocity and position of moving object.
If the two differential equations in (1) had not been simple to solve, or if their solution was not available in closed form, we would resort to a simulation of the motion. A CSMP program that simulates this motion is given in Fig. 2.
Figure 2. CSMP program to simulate motion.
The program of Fig. 2, because of its simplicity, is mostly self-explanatory. Statements that begin with "*" are comments, and through the CONSTANT statement any number of constants can be specified; in this program the gravitational constant, G, the initial velocity, V0, and the initial position, P0, are defined in line 3 of Fig. 2. The next three statements form the core of the program that solves the differential equations given in Eq. (1). The INTGRL commands perform numerical integrations to obtain successive values of velocity and position at the times t = 0, DELT, 2 DELT, …, FINTIM, where DELT and FINTIM are defined to be 0.01 and 20 on the subsequent TIMER statement. Following the establishment of initial values, these specifications lead to 2000 computational iterations, at times t = 0.01, 0.02, …, 20.
The PRINT statement causes the production of a three-column output consisting of time (always included), VEL, and POS. The value of 1 specified for PRDEL is the time increment that controls program output. In this case, excluding headers, there will be 21 output lines for times 0, 1, 2, …, 20. The TITLE statement places a heading on the program output.
The output associated with the program of Fig. 2 is given in Fig. 3. It is clear that this tabular output reflects the graphic description of the motion given in Fig. 1 (e.g., maximum height is attained near t = 8 and the object hits the ground at t = 20). An unusual aspect of this output is that there are no errors due to the approximations that result from numerical integration. Simulation of this model requires the integration of the constant function a(t) = −32 and the linear function v(t) = −32t + 250, and almost all numerical integration techniques (trapezoidal rule, Simpson's rule, Runge–Kutta, etc.) integrate constant and linear functions without producing errors.
Figure 3. Output of program given in Fig. 2.
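The CSMP source in Fig. 2 is not reproduced here, so the following Python fragment is a hypothetical equivalent of its three core statements: the two INTGRL integrations, driven by the TIMER values, with output every PRDEL seconds. The trapezoidal rule is used so that, as noted above, the constant and linear integrands are handled without error.

```python
G, V0, P0 = -32.0, 250.0, 1400.0        # the CONSTANT statement
DELT, FINTIM, PRDEL = 0.01, 20.0, 1.0   # the TIMER statement

vel, pos = V0, P0
rows = [(0.0, vel, pos)]
per_print = round(PRDEL / DELT)
for step in range(1, round(FINTIM / DELT) + 1):
    vel_new = vel + G * DELT             # INTGRL(V0, G): exact, G is constant
    pos += 0.5 * (vel + vel_new) * DELT  # INTGRL(P0, VEL): exact, VEL is linear
    vel = vel_new
    if step % per_print == 0:            # PRINT every PRDEL seconds
        rows.append((step * DELT, vel, pos))

for t, v, p in rows:                     # 21 lines, t = 0, 1, ..., 20
    print(f"{t:5.1f} {v:9.2f} {p:10.2f}")
```

The final row reproduces the behavior read off Fig. 3: v(20) = −390 ft/sec and p(20) = 0 ft, the instant the object hits the ground.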
Obviously, as a programming language, CSMP and similar languages are far more powerful and flexible than this example shows. What they all have in common is the ability to solve simultaneous differential equations and in most cases the user is allowed considerable flexibility in choosing from a variety of numerical integration techniques and obtaining a variety of numeric and graphic output.
The simple example that we just considered does not give much of the details associated with the construction of CSMP models. For additional examples we refer the reader to Speckhart and Green and for a comprehensive discussion of CSMP we suggest the 1985 IBM publication. Among other languages for the simulation of continuous systems we mention ACSL (Advanced Continuous Simulation Language), DYNAMO (DYNAmic MOdeling), NDTRAN (Notre Dame TRANslator), and STELLA, and refer the reader to Chapter 9 of Aburdene.
URL: https://www.sciencedirect.com/science/article/pii/B012227240400160X
Sustainability of Products, Processes and Supply Chains
Hangzhou Wang , ... Jinsong Zhao , in Computer Aided Chemical Engineering, 2015
6.8 Conclusion
We propose a novel biochemical fermentation design method that permits constrained bifurcations and oscillations. The method is applied to typical fermentation by the ethanol producer Z. mobilis. Experimentally and in simulations, continuous Z. mobilis fermentation undergoes periodic fluctuations in biomass, product, and substrate concentrations under certain fermentation conditions. These oscillations have previously been attributed to Hopf singularities in the fermentation process. Because they reduce the fermentation yield, system stability, and controllability (and thereby the product quality), they must be suppressed by a suitable approach. The proposed design method first locates all Hopf singularities in this dynamic process, while varying the operating parameter values. It then simulates the oscillatory behavior to formulate the relationship between the amplitude/period of the oscillating product concentration and the operating conditions. Finally, it incorporates the oscillatory effects into an optimization model, and calculates the optimal result. The new method accepts solutions close to Hopf points, provided that the oscillatory behavior remains within an acceptable level. This approach differs from previously published methods, which maintain the operating point far from the Hopf curves. The new method offers a deep analysis of the dynamic behavior, and may improve the productivity, product quality, and stability of dynamic fermentation processes. The method is also extendible to other fermentation processes developing sustained oscillatory phenomena.
URL: https://www.sciencedirect.com/science/article/pii/B9780444634726000069
Software Architectures and Tools for Computer Aided Process Engineering
D. Hocking , ... S. Sama , in Computer Aided Chemical Engineering, 2002
3.3.1.1 General approaches to simulation
Simulation tools have broadly followed one of the following approaches:
- Some are simulation-specific programming languages, such as Simnon from SSPA Maritime Consulting AB (Goteborg, Sweden) or the Advanced Continuous Simulation Language (ACSL) from MGA Software (Concord, Mass.).
- Others are graphical tools in which the user builds and connects blocks through the inputs and outputs of each block. The type of information flowing through these connections is not predefined but is defined in each case. Examples of this type are Simulink from The MathWorks and VisSim from Visual Solutions.
- Still others are domain-specific simulators, such as HYSYS from Hyprotech [3], Aspen Plus from Aspen Technology [4], or PRO/II from Simulation Sciences [5], in the field of process simulation. This type of tool is specifically tailored to a certain domain: process simulation. This specificity leads to ease of use, albeit at the expense of a certain loss of generality.
The field of process simulation is to a large extent dominated by this latter type of software tool. Since the early works on computer applications to chemical processes by L. Lapidus [6] (1962), E.M. Rosen [7] (1962), Ravicz and Norman [8] (1964), and other authors, significant advances have been made in this area over the past thirty years, and several software packages have been developed. HYSYS [3], HYSIM [3], ASPEN [9], PRO/II [5], SpeedUp [10], DIVA [11], gPROMS [12], and ABACUSS [13] are just some examples of products designed to free engineers from the burden of having to deal with numerical algorithms and allow them to focus instead on model formulation and application.
URL: https://www.sciencedirect.com/science/article/pii/S1570794602800104
Customizing SysML for Specific Domains
Sanford Friedenthal , ... Rick Steiner , in A Practical Guide to SysML (Third Edition), 2015
15.7 Applying Stereotypes when Building a Model
Once a user model has a profile applied to it, the stereotypes from the profile may be applied to model elements within that user model. How stereotypes are used depends on whether the intended purpose of the profile is a domain-specific language or a source of ancillary data and rules to support a particular aspect of the model. Although nothing in the specification of a profile differentiates the two cases, often tool vendors will add custom support tailored to the intended use when building the profile.
For a given stereotype, its extension relationships define the model elements that it can validly extend, subject to the model element satisfying any additional constraints that the stereotype specifies. A model element may have any number of valid stereotypes applied to it, in which case it must satisfy the constraints of each stereotype.
Although the intention of the SysML graphical notation for stereotypes—and the intention of many tool vendor implementations of profiles—is to hide these details and to provide a visualization that matches the modeler's expectation, the mechanics of how stereotypes are applied is worthy of some explanation. When a stereotype is applied to a model element in the user model (i.e., a metaclass instance), an instance of the stereotype is created in the user model and is related to the model element. Once an instance of the stereotype exists, the modeler can then add values for the stereotype's properties to the instance. An instance of a stereotype cannot exist without a related metaclass instance to extend, and therefore when a model element is deleted, all its related stereotype instances are also deleted.
Subject to these basic rules, how the modeler actually applies stereotypes is often governed by a modeling tool, based on the intended use of the stereotype. For example, the tool may create an instance of the stereotype and an instance of the base metaclass at the same time, or it may allow the modeler to create a model element first and then add and potentially remove the stereotype as separate actions.
Information from a stereotype is shown as part of the symbol of the model element to which it is applied or in a callout attached to the symbol. A stereotyped model element is shown with the name of the stereotype in guillemets (e.g., «stereotypeName»), followed by the name of the model element. The stereotype name may be capitalized and may contain spaces in its definition. However, the convention in SysML is for the stereotype name to be shown as a single word using camel case (the first letter of the name is lowercase, while the first letter of the second and subsequent words in the name are capitalized) when applied to a model element in a user model.
If a model element is represented by a node symbol (e.g., rectangle), the stereotype name is shown in the name compartment of the symbol. If the model element is represented by a path symbol (e.g., a line), the stereotype name is shown in a label next to the line and near the name of the element. Stereotype keywords can also be shown for elements in compartments before the element name.
If a model element has more than one stereotype applied, by default each stereotype name is shown on a separate line in a name compartment. If no stereotype properties are shown, multiple stereotype names can appear in a comma-separated list within one set of guillemets. See Figure 15.16 for an example of the application of multiple stereotypes. Whenever stereotypes are applied to a model element whose symbol normally has a keyword, its standard keyword is displayed before/above the stereotype keywords. The properties for a stereotype may be displayed in braces after the stereotype label or, if the symbol supports compartments, in a separate compartment with the stereotype name as the compartment label.
A stereotyped model element may also be shown with a special image that is part of the stereotype definition. For node symbols, that image may appear in the top right corner of the symbol, in which case it is often shown instead of the stereotype keyword. Alternatively, the image may replace the entire symbol.
Figure 15.12 shows some of the elements in the Flow Simulation Elements model library. They all have the Flow Simulation Element stereotype applied so that their version and compatibility properties can be specified. In this case Derivative and Integrator are only compatible with continuous simulations; the rest are compatible with discrete and continuous simulations. They all have version "7.5" except the Signal Generator, which has version "7.6." Note that because the underlying model elements are all activities, the keyword «activity» is shown, as described in Chapter 9, Section 9.12. These elements can be used in the construction of flow-based simulations.
FIGURE 15.12. Defining a library of flow-based simulation elements using stereotypes to add simulation details.
The activity diagram in Figure 15.13 shows a simulation model of the motion of the Moving Thing block, first shown in Figure 15.6, using continuous semantics (the «continuous» keyword is elided in the figure). The activity Motion Simulation is the classifier behavior of Moving Thing, so the model shows what happens to it over its lifetime. The simulation calculates the values of acceleration, velocity, and distance over time. The algorithm first calculates the acceleration from the mass of the object (inherited from Physical Thing) and the force applied. It then integrates the acceleration to get the velocity. Finally, it integrates the sum of the velocity due to acceleration and the initial velocity to get the distance traveled, which is stored in data store distance (the initial state of the integrator activity is 0, so the initial value for distance is 0). The current values of acceleration and velocity from the simulation are used to update the relevant properties of a Moving Thing. In this simulation model, time is implicit to the calculation and is not shown.
FIGURE 15.13. Using flow-based simulation stereotypes and library elements in the definition of a simulation.
Three probes are used over time to display the values of acceleration, velocity, and distance. The first two values are obtained via probes on object flows, and the third by a probe on a data store.
Figure 15.14 shows Motion Simulation as an activity hierarchy (note that the adjunct keyword described in Chapter 9, Section 9.12.1 is not shown in this figure). This view is useful because it shows the properties of the simulation elements. Motion Simulation and its children in the activity hierarchy satisfy all the constraints imposed by the stereotypes Flow-Based Simulation and Flow Simulation Element, as defined in Figure 15.9:
FIGURE 15.14. Block definition diagram showing the activity hierarchy for Motion Simulation.
- All the invoked activities of Motion Simulation are stereotyped by Flow Simulation Element.
- All the invoked activities have version numbers at least as high as that of Motion Simulation itself.
- The ode45 solver is appropriate for a variable step continuous simulation.
- Motion Simulation is a continuous simulation, so both discrete and continuous Flow Simulation Elements are allowed.
- Data store distance is typed by the value type m (meters).
Instead of showing the keyword «flowBasedSimulation» for Motion Simulation, this figure shows the stereotype's image in the top right corner of the symbol. The image is part of the stereotype's definition and is stored as part of the profile.
15.7.1 Specializing Model Elements with Applied Stereotypes
A potential area of confusion is the effect of specializing a classifier, such as a block, that has a stereotype applied to it in the user model. Applying a stereotype to a classifier does not imply that the stereotype is applied to subclasses of that classifier. If this is desired, the stereotype definition should include a constraint ensuring that the stereotype is also applied to each subclass of any classifier to which it is applied.
Even when a constraint forces subclasses to have the same stereotype as their superclasses, they do not inherit values for stereotype properties. If this is desired, the stereotype should include an additional constraint that every subclass has the stereotype applied and also inherits the values of the stereotype's properties.
Figure 15.15 and Figure 15.16 describe an example in which neither the applied stereotypes nor the values of their properties are inherited. Figure 15.15 shows two stereotypes from the profile Quality Assurance. The stereotype Audited Item, which extends the metaclass Classifier and can be applied to blocks among other model elements, is used when a classifier has been audited for quality, typically when it reaches a certain level of maturity. It has properties to capture the audit date, the auditor, and the quality level, which may take values from low to high. The stereotype Configured Item contains configuration properties and must be applied to every classifier, hence the presence of the {required} property.
FIGURE 15.15. Definitions of two stereotypes used as part of quality assurance on a model.
FIGURE 15.16. Application of quality-assurance stereotypes to two blocks, one of which specializes the other.
Figure 15.16 shows the Audited Item and Configured Item stereotypes in use. In this case the block General Block has been audited and so has values for audit date, auditor, and quality level. Its subclass Specialized Block is still in early design, so it has not yet been audited. It clearly does not make sense to assume that, just because General Block has the Audited Item stereotype applied to it, Specialized Block will also have this stereotype applied.
Even when a stereotype, such as Configured Item, is required and therefore applied to all blocks, it clearly is not the case that the configuration properties of a block (e.g., General Block) will be inherited by a subclass like Specialized Block. The information stored in the properties of Configured Item is specific to the model element to which it is applied.
Note that General Block has two stereotypes applied to it, demonstrating one of the notations that can be used where multiple stereotypes are applied. The keywords representing the two applied stereotypes both appear separated by a comma inside a single set of guillemets. The properties of the two stereotypes appear in separate compartments, labeled using the keyword of their owning stereotype.
URL: https://www.sciencedirect.com/science/article/pii/B9780128002025000151
Software Process Simulation
David L. Olson , in Encyclopedia of Information Systems, 2003
III.D.2. Systems Dynamics
The perspective of systems dynamics allows more of the complexity of the waterfall process to be modeled. The basic sequence of planned activity is a core, leading from specification to design to coding to testing and on to project completion. This is the path that would occur if everything went according to plan. The spreadsheet model presented in Fig. 2 would consider variance in duration, but would assume only this path. However, there are possibilities of the need for changes and corrections at all phases of this model. Things could break down during the specification phase if agreement on project features cannot be obtained. During the design codes phase, it might become apparent that the specifications developed by the user are either infeasible or unrealistic. This would lead to sending the project back for respecification. Such problems could be reviewed by the funding authority, who would need to decide among the options of revising the specifications, paying for a more expensive project, or canceling the project. During the implemented codes phase, defects identified would need to be recycled back to the design code phase (and might call for recycling, in turn, back to the specified codes phase, again querying the funding authority for authorization of additional budget). During testing it is quite possible that bugs will be uncovered, calling for rework and recycling back to the implemented codes phase (and possibly further back). Systems dynamics simulations modeling dynamics similar to those presented by this problem have been applied in many settings. They are continuous simulations, measuring the levels of effort required as well as the times required to complete phases. Note that in our example many elements of possible complexity have been left out.
Systems dynamics provides a means to map the dynamic interactions of a system. For instance, a map of the waterfall model with its feedback looping among and between components might be as given in Fig. 4. Systems dynamics models employ a continuous system of differential equations as their meta-model of stocks (for example, of defects to be detected) and flows (for example, lines of code completed).
Figure 4. Systems dynamic map of a waterfall model.
Systems dynamics models focus on simulating the flow of activities. They are continuous models, and the change in states is the focus of interest. Figure 4 shows the basic flow of a project, with potential relooping when defects are encountered. Here we model defects flowing one step back, but recognize that if a defect is identified in implementing code, returning to designing code can result in further looping back to code specification. The primary benefit of systems dynamics models is the ability to include factors affecting flow, such as the scope of the project affecting the rate of work accomplished in the specification phase. Likewise, defects identified at testing are accumulated as a variable and fed back to implemented codes. Any feedback loop is a potential bottleneck in the system. For example, if testing keeps finding defects, this prolongs the development process, as shown in Fig. 5. It therefore clearly would be important to lower the defect rate in prior processes or the time required to correct defects. These parameters govern this systems dynamics model, and estimation accuracy would depend on experience and learning rates. The systems dynamics model requires input rates; Table I gives the input rates considered in this model.
Figure 5. Systems dynamics model work accomplished over time.
Table I. Systems Dynamics Model Input Parameter Values
| Parameters | Values | Units |
|---|---|---|
| Scope | 10,000 | LOC |
| Specification time | 40 | Hours |
| Design time | 200 | Hours |
| Implementation time | 280 | Hours |
| Testing time | 80 | Hours |
| Respecification time | 40 | Hours |
| Redesign time | 40 | Hours |
| Reimplementation time | 120 | Hours |
| Retesting time | 80 | Hours |
| Defect ratio at design | 20 | Percentage |
| Defect ratio at implementation | 30 | Percentage |
| Defect ratio at testing | 10 | Percentage |
In this case, we assumed a project with 10,000 lines of code. The durations of the four basic waterfall activities are the same as in the Monte Carlo model given earlier. Here we used normal distributions in the software package VENSIM (which did not support lognormal distributions), although in principle lognormal distributions could be applied. The amounts of time for relooping activities are given, as well as the percentage of defects detected at each phase (in terms of lines of code).
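The effect of the testing feedback loop can be sketched numerically. The fragment below is an illustrative deterministic stock-and-flow model, not the stochastic VENSIM model of the text: it takes only the scope, testing time, reimplementation time, and testing defect ratio from Table I, and tracks how rework prolongs completion.

```python
DT = 1.0                     # time step, hours
SCOPE = 10_000.0             # project scope, LOC
test_rate = SCOPE / 80.0     # testing time: 80 hours for the full scope
rework_rate = SCOPE / 120.0  # reimplementation time: 120 hours for the full scope
defect_frac = 0.10           # defect ratio at testing

untested, rework, done, t = SCOPE, 0.0, 0.0, 0.0
while done < SCOPE - 0.5:    # run until all but half a line is accepted
    tested = min(test_rate * DT, untested)
    fixed = min(rework_rate * DT, rework)
    untested += fixed - tested              # stock of code awaiting test
    rework += defect_frac * tested - fixed  # defects loop back for rework
    done += (1.0 - defect_frac) * tested    # accepted LOC
    t += DT

print(f"completed in about {t:.0f} hours "
      f"(vs. {SCOPE / test_rate:.0f} hours of testing with no defects)")
```

Even this crude version shows the bottleneck behavior discussed above: a 10% defect loop stretches an 80-hour testing phase to roughly 93 hours, and raising the defect fraction or the rework time stretches it further.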
Running the simulation yielded the following output. Figure 5 shows the lines of code (LOC) accomplished over time.
Other flows could also be monitored. Figure 6, for instance, shows the LOC with defects detected during testing over time.
Figure 6. Systems dynamics model output—defects detected during testing.
Figure 7 shows another continuous measure, the rate of LOC tested per hour.
Figure 7. Testing rate in LOC per hour.
Systems dynamics has proven very effective in modeling software processes where production rates in terms of LOC (or in function points) as well as defect detection rates are important. There are other aspects of interest better captured by discrete event simulation, to be presented next. Even so, systems dynamics is often applied in hybrid models along with discrete event simulation.
URL: https://www.sciencedirect.com/science/article/pii/B0122272404001635
Computational Issues in Intelligent Control: Discrete-Event and Hybrid Systems
XENOFON D. KOUTSOUKOS , PANOS J. ANTSAKLIS , in Soft Computing and Intelligent Systems, 2000
6.1 Parallel Discrete-Event Simulation
We have investigated the advantages of this approach for parallel discrete-event simulation (PDES). It should be clear that, in our view of intelligent control, PDES is an essential part of the design of intelligent control applications. Our intention is not to study new techniques for PDES, but rather to show that results that have appeared in the literature can be incorporated into application development using the proposed parallel architecture. Discrete-event simulations are very useful for the evaluation of an intelligent control system at a level of abstraction where discrete-event system models or event-based control of hybrid systems [78] are used. Discrete-event system representations in intelligent control have also been used in [86]. A discrete-event simulation model assumes that the system being simulated changes state only at discrete points in simulated time. When we choose to model a real-world system using discrete-event simulation, we give up the ability to capture the degree of detail that can only be described as smooth continuous change. In return, we get a simplicity that allows us to capture important features of interest that are too complex to capture with continuous simulations.
Discrete-event simulations have been studied in [22, 24, 46, 55] and typically require significant computational effort. A discrete-event simulation discretizes the observation of the simulated system at event occurrence instants. When executed sequentially, a discrete-event simulation repeatedly processes the occurrence of events in simulated time, often called virtual time. It maintains a time-ordered event list holding time-stamped events scheduled to occur in the future, a (global) clock indicating the current time, and state variables defining the current state of the system. A simulation engine drives the simulation by repeatedly taking the first event out of the event list (i.e., the one with the lowest time-stamp), simulating its effect by changing the state variables, and scheduling new events in the event list. This continues until some predefined end-time is reached, or until there are no further events to occur. The objective of parallel discrete-event simulation is to accelerate the execution of simulations using P processors. The parallelism in discrete-event simulations can be exploited at different levels. At the function level, execution time is reduced by distributing the subroutines constituting a simulation experiment to the available processors. At the component level, the simulation model is decomposed into submodels to reflect the inherent model parallelism. At the next lower level, the event level, single events are distributed among processors for concurrent execution. The event list can be a centralized data structure maintained by a master processor. A higher degree of parallelism can be exploited in strategies that allow the concurrent simulation of events with different time-stamps; in this scheme, each node maintains its own decentralized event list. Schemes following this idea require protocols for local synchronization, which in turn may increase communication costs.
The main idea for all simulation strategies at the event level is to partition the discrete-event model into a set of communicating logical processes (LPs). The objective is to exploit the parallelism inherent among the model components with the concurrent execution of the logical processes. A parallel discrete-event simulation can be viewed as a collection of communicating and synchronizing simulations of submodels.
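The sequential engine described above can be illustrated with a short sketch (not the authors' implementation): a heap-based, time-ordered event list, a global clock, and a loop that pops the event with the lowest time-stamp until a predefined end-time.

```python
import heapq
import itertools

class Simulator:
    """Minimal sequential discrete-event engine (illustrative sketch)."""

    def __init__(self):
        self.now = 0.0                 # global clock (virtual time)
        self._events = []              # heap-ordered event list
        self._seq = itertools.count()  # tie-breaker for equal time-stamps

    def schedule(self, delay, action):
        # Time-stamped events are scheduled to occur in the future.
        heapq.heappush(self._events, (self.now + delay, next(self._seq), action))

    def run(self, end_time=float("inf")):
        # Repeatedly take the event with the lowest time-stamp, advance
        # the clock, and let the event change state / schedule new events,
        # until the list is empty or the predefined end-time is reached.
        while self._events and self._events[0][0] <= end_time:
            self.now, _, action = heapq.heappop(self._events)
            action(self)

# Usage: a component that fails every 5 time units and is repaired after 1.
log = []

def fail(sim):
    log.append(("fail", sim.now))
    sim.schedule(1.0, repair)

def repair(sim):
    log.append(("repair", sim.now))
    sim.schedule(5.0, fail)

sim = Simulator()
sim.schedule(5.0, fail)
sim.run(end_time=12.0)
# log now records the fail/repair alternation at t = 5, 6, 11, 12
```

An event-level parallelization would partition such a model into logical processes, each running its own copy of this loop over a decentralized event list, with a synchronization protocol between them.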
Using the proposed parallel architecture, most of the difficulties in parallel discrete-event simulation can be addressed very efficiently. Consider the case when timed Petri nets are used for discrete-event simulation. The performance of the simulation depends on how well the partition of the overall system into logical processes captures the inherent parallelism of the involved processes. To achieve high performance, automated PDES must measure workload at run time and perform dynamic remapping when needed; for example, dynamic remapping algorithms have been proposed in [56] to address load imbalance. As discussed above, the nodes of the Petri net can be viewed as mobile objects connected using mobile pointers. The initial partition of the Petri net model results in a distribution of mobile objects to different processors. Several dynamic remapping algorithms can be implemented with migration policies for the mobile objects, so that the designer does not have to keep track of the location of the objects. The additional communication overhead due to remote service requests has been measured in certain applications and is only 7–10%.
URL: https://www.sciencedirect.com/science/article/pii/B9780126464900500068
GIS Applications for Environment and Resources
Brian Deal , ... Youshan Zhuang , in Comprehensive Geographic Information Systems, 2018
2.19.4.1.2 Models
In the following, we introduce the functionality and dynamics of each model involved in our model integration example, and then our approach to model integration. The models utilized in this analysis are existing models that have already been usefully applied.
Land-use Modeling. Demographic output and the future demand for space are derived from the econometric model described below. This output is fed into a dynamic spatial LUC model. The LUC model simulates future LUC and its consequences using a modified cellular automata approach in which cells evolve over a surface defined by biophysical factors such as hydrology, soil, geology, and landforms, and socio-economic factors such as administrative boundaries, census districts, and planning areas. Fundamentally, the LUC model comprises two major parts: (1) a dynamic LUC model (at a 30 × 30 m resolution), driven by a set of submodels that describe the local causality of LUC and allow the creation of what-if scenarios, and (2) impact assessment models that use these LUC scenarios to analyze the impacts generated by these changes. One example of such a model is the University of Illinois LEAM. The approach enables loosely and tightly coupled links with other models that might operate at a different spatial scale (Deal and Schunk, 2004; Pallathucheril and Deal, 2007). LEAM has been loosely coupled with economic forecasting models (the Chicago Regional Econometric Model—CREIM), bidirectional travel demand models in Chicago (Deal et al., 2013), water quality models (Choi and Deal, 2008), water quantity models (Sun et al., 2009), and social cost models (Deal and Pallathucheril, 2008). LEAM has previously been applied in Chicago, Stockholm, and Washington DC.
Hydrometeorological/Hydraulic Modeling. There are two potential procedures for hydrologic/hydraulic modeling. One is for river channels at the watershed scale, using the Variable Infiltration Capacity model to calculate river discharge for continuous flow processes and HEC-RAS to analyze one-dimensional river/channel hydraulics (for example, flow stage and velocity), each coupled with an event-based model, HEC-HMS (USACE, 1998), that simulates rainfall-runoff processes (Liang et al., 1994, 1996; Cherkauer and Lettenmaier, 2003). Urban water movement (at the sewershed level, with sewer overflows and urban floods) will be modeled using SWMM, a 1D, unsteady hydrology (rainfall-runoff) and hydraulic model of open-channel and closed-conduit systems typically used for single-event or continuous simulation across a variety of scales, pervious and impervious surfaces, and engineered drainage infrastructures (EPA, 2013; Cantone and Schmidt, 2011).
For hydrometeorological modeling, we employ a General Circulation Model (GCM) for climate modeling, which simulates the response of global circulation to large-scale forcings (synoptic-scale systems), and an RCM to account for sub-GCM grid-scale forcings (for example, local circulations, complex topographical features, and land-cover inhomogeneity), which provides long-term climate prediction. A statistical or dynamic downscaling approach will be used to produce data more suitable for hydrological models (Bárdossy and Pegram, 2011).
Virtual Water Trade Flows. The concept of virtual water trade flows builds on the idea of Allan (1993, 1994, 1998) and the interregional input-output framework developed by Leontief (1953, 1956, 1970). In essence, it determines how much virtual water is embodied in the production of goods and services made in a region and traces whether that water (that is, the associated commodities and services) is consumed locally or externally (through export to the rest of the country or the world). The same idea applies to a region's imports, so that one can determine whether it is a net importer or exporter of virtual water. If a region is found to be a net exporter, significant changes in water policy would likely be required to ensure that the best economic and ecological use is made of this scarce resource.
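The input-output bookkeeping behind this idea can be sketched with a toy two-sector example (hypothetical coefficients, not taken from the chapter): solve the Leontief system (I − A)x = f for the output needed to satisfy each final-demand component, then weight that output by direct water use per unit to obtain the virtual water it embodies.

```python
def solve2(A, b):
    """Solve the 2x2 Leontief system (I - A) x = b by Cramer's rule."""
    m = [[1 - A[0][0], -A[0][1]],
         [-A[1][0], 1 - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(b[0] * m[1][1] - m[0][1] * b[1]) / det,
            (m[0][0] * b[1] - m[1][0] * b[0]) / det]

# Hypothetical data: technical coefficients, direct water intensities,
# and final demand split into local consumption and exports.
A = [[0.2, 0.1],        # inter-industry requirements per unit of output
     [0.3, 0.4]]
w = [5.0, 1.0]          # m^3 of water per unit of sector output

exports = [10.0, 0.0]   # final demand from the rest of the world
local   = [20.0, 30.0]  # final demand consumed locally

# Output required for each demand component, and the virtual water in it.
x_exp = solve2(A, exports)
x_loc = solve2(A, local)
vw_exported = sum(wi * xi for wi, xi in zip(w, x_exp))
vw_local    = sum(wi * xi for wi, xi in zip(w, x_loc))
```

Comparing `vw_exported` against the virtual water embodied in the region's imports (computed the same way from import data) shows whether the region is a net importer or exporter of virtual water.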
Catchment Water Quality Modeling. Water quality is a critical issue in water policy. Both point source and nonpoint source pollution should be accounted for. The Hydrological Predictions for the Environment model is a spatially semidistributed model, which can be used to explicitly account for the spatially distributed pollution sources including total suspended solids, nitrate, and phosphate (Donnelly et al., 2016; Lindström et al., 2010). However, it does not consider the social–economic factors. To address this, we will apply the Catchment Land Use for Environmental Sustainability model developed by NIWA, which is a GIS-based modeling system capable of assessing the effects of LUC on water quality and socio-economic indicators (Semadeni-Davies and May, 2014). The model has been coupled with OVERSEER, the Soil Plant Atmosphere System Model, and the Spatial Regional Regression on Watershed Attributes model. Land-use scenarios and socio-economic model inputs will be generated by the LEAM model and the economic and policy analysis model, described in subsection Model Integration.
Economic and Policy Analysis Models. We integrate an input-output modeling framework with a demographic component that helps make up a regional econometric model used for impact analysis and forecasting. Examples of details of the system can be found in Israilevich et al. (1997) and its application to Chicago (CREIM) in Kim et al. (2015). The model provides information on production, income, and employment for several sectors, population cohorts, migration, and ultimately water demand data for use in subsequent models. This annual model, with a current forecasting horizon, is complemented by shorter-term indices that mimic leading indicators and business cycles, thus providing the opportunity to integrate analysis over the shorter and longer terms. The economic model is synthesized to include water demand, flood damage assessments, and the costs of water degradation. It also feeds a market-based, dynamic optimization model that derives optimal adaptive flood management and pollution policy.
Synthesized models estimate the damages associated with simulated flood events, where the value of damages depends on how frequently severe flooding occurs and how much economic activity is present in the affected areas. The cost of water pollution is estimated by the cost of treating polluted water or replacing it with clean water. We will use standard benefit transfer methods (Brouwer and Bateman, 2005), applying the results of previous analyses to estimate the values of changes in flood and pollution control regulation.
URL: https://www.sciencedirect.com/science/article/pii/B9780124095489096536
Continuous simulation for design flood estimation—a review
W. Boughton , O. Droop , in Environmental Modelling & Software, 2003
Continuous simulation is unlikely to be used for the design of minor works in the foreseeable future. Event-based methods using generalised rainfall statistics will continue to be the main approach to design of minor works whose cost and consequences of failure will not justify the effort of data preparation and calibration of a continuous simulation system. In the longer term, continuous simulation could be used to produce generalised flood statistics for a range of catchment sizes and characteristics in locations where a need exists, and this might replace some event-based methods. There is a lot of water yet to pass under the bridge (to use a good hydrological analogy) before that point is reached.
URL: https://www.sciencedirect.com/science/article/pii/S1364815203000045
Source: https://www.sciencedirect.com/topics/computer-science/continuous-simulation