- 1Department of Information and Communication Technology, University of Agder, Grimstad, Norway
- 2Department of Nordic and Media Studies, University of Agder, Kristiansand, Norway
- 3Department of Information Systems, University of Agder, Kristiansand, Norway
- 4Department of Informatics, University of Oslo, Oslo, Norway
There are many different notions of models in different areas of science that are often not aligned, making it difficult to discuss them across disciplines. In this study, we look at the differences between physical models and mental models as well as the difference between static and dynamic models. Semiotics provides a philosophical underpinning by explaining meaning-making. This allows for identifying a common ground between models in different areas. We use examples from natural sciences and linguistics to illustrate different approaches and concepts and to find commonalities. This study distinguishes between systems, models, and descriptions of models. This distinction allows us to understand the commonalities of mental and physical models in different areas.
1. Introduction
For the most part, scientific fields have developed in isolation from each other. Recently, the rise of interdisciplinarity has promised the diffusion of knowledge from one field to the other (Bammer, 2013). This is crucial, as it gives philosophically inclined fields the opportunity to pass their insights to more instrumental fields and thereby increase their societal impact. In our study, we combine understandings from two different fields: computer science and semiotics. Even though computer science and software engineering have developed a theoretical basis, most researchers focus on practical aspects. In contrast, semiotics is mostly a theoretical field that explores everything that makes meaning and stands for something else. We believe that it is very beneficial to use the rich theoretical perspective of semiotics to better understand models and their applications in model-driven software and system development.
Models are used in many places, both in everyday life and in different scientific contexts (Vynnycky and White, 2010). How models can be understood has been discussed within several disciplines. A common way to define models across disciplines is as representations of reality (Vynnycky and White, 2010; Chamizo, 2011; Grüne-Yanoff and Mäki, 2014; Brughmans et al., 2019), while sometimes they are described as something more than just a representation (Taber, 2017) or as abstractions of reality (Heemskerk et al., 2003; Friedman et al., 2008). There has also been work on identifying common aspects of models across disciplines (Thalheim and Nissen, 2015).
This study is cross-disciplinary and looks at the differences between static and dynamic models and between mental and physical models. These are among the major differences between models in the natural sciences (including engineering) and models in linguistics. They are therefore a good starting point for unifying models across disciplines.
A static model is a model without state changes. It always stays in one state, and it is not important how this state was produced. A static model has a structure, which we can explore and observe; in principle, this structure can be vast. We have to keep in mind that a static model does not evolve but stays as it is. A dynamic model also has a model structure, but in this case, things happen in the model, and the state changes. The time scale of change can be long (years or millennia) or short (minutes, seconds, or less). We consider that the general structure of the model does not change even though the elements and properties of the model change.
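To make the distinction concrete, the following minimal sketch contrasts the two kinds of models in Python code. It is our own illustration, not taken from the cited literature; the bridge and pendulum examples and all their numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StaticBridgeModel:
    """A static model: a single state that never changes; we can only
    explore and observe its structure."""
    span_m: float
    n_pillars: int

@dataclass
class DynamicPendulumModel:
    """A dynamic model: the general structure (one angle, one velocity)
    stays fixed, but the state changes with every step."""
    angle: float
    velocity: float

    def step(self, dt: float = 0.01) -> None:
        # Crude undamped pendulum dynamics (small-angle approximation).
        self.velocity -= self.angle * dt
        self.angle += self.velocity * dt

bridge = StaticBridgeModel(span_m=120.0, n_pillars=3)  # stays as it is
pendulum = DynamicPendulumModel(angle=0.3, velocity=0.0)
for _ in range(100):
    pendulum.step()  # state changes; structure does not
print(bridge, pendulum.angle)
```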
Ideas about what static and dynamic models mean vary in different studies and possibly depend on the scientific background. For instance, in a study where AI was applied for medical purposes, dynamics meant the inclusion of real-time variables (Mistry and Koyner, 2021). In another study on flood models, static referred to the models that calculated tide heights using only the hydraulic connections among locations while dynamic models included more processes (Ramirez et al., 2016). Finally, in chemical engineering and the study of chemical plants, static means a steady state, and dynamic implies processes that create changing states with the potential of ending in a steady state (Ingham et al., 2007).
A physical model is a model that exists in the physical world and that can be observed. We also consider digital models, which likewise exist physically. In this way, physical models can be scale models, dolls, computer simulations, or even living persons. A mental model, sometimes called an abstract model, is a model composed of thoughts and ideas. All thoughts and ideas have a physical basis as well, but for mental models, the focus is on the connections between the thoughts and ideas.
We will also discuss the concept of a model description, which is not a model itself but a description of one. This may sound strange at first, but we will argue for it. A description is an artifact that describes something we are interested in. A model description describes a model. This could be as simple as a description of how to build a bridge, an architectural drawing of a building, or a simulation program that can be executed in order to simulate a situation. In the case of the drawing, the use of other colors or scales would have led to another description of the same model.
Obviously, models are connected to reality. For a complete understanding of models, we need to understand how they relate to reality. An important tool for that is the concept of a system.
We proceed with this article by diving into (dynamic) systems in Section 2, before we look into static systems in Section 3. Based on systems, we discuss models in Section 4 including static and dynamic models. Thereafter, we introduce descriptions in Section 5 and how they connect to models and systems, before exploring the issue of communication in Section 6. We discuss our approach in Section 7 and conclude in Section 8.
2. Systems
Before jumping into models, we first want to explore the way we look at reality. We discuss reality and introduce systems as a way to relate to reality. The tool used in this context is called perspective. We exemplify perspective with physical, digital, and mental systems.
2.1. Reality
Building on phenomenological theory (Husserl, 1960), we hold that the world exists, but we only have access to it through our senses and our imagination. What we observe are phenomena that we abstract into concepts, see Figure 1. This approach can be called object-oriented when we consider that phenomena and concepts are represented by objects and classes.
Building on the epistemological position called social constructivism (e.g., Berger and Luckmann 1967; Kjeldstadli 1997; Falkenberg et al. 1998), we distinguish between the external world, the social world (which we ourselves have created), our perception and conceptualization of the external and the social world, and the semiotic expressions (representations) we use to communicate around these perceptions and conceptualizations. In this study, the difference between the external world and the social world is irrelevant, but we will consider physical realities, digital realities, and mental realities. Communication and the expressions used for this will be discussed when we look at descriptions in Section 5 and communication in Section 6.
From reality, we only see what our senses and our imagination provide, see again Figure 1. Our perception limits what we can see of reality, which is further limited by the concepts we use, see e.g., Roberson et al. (2006). Moreover, our purpose of looking at reality introduces another kind of limitation, providing focus and bias. In this study, we use the term “perspective” for the collected filter (sense limitations, concepts, and purpose) that is used to look at reality, see also Guarino et al. (2019).
Definition 1 (Perspective). A perspective is a filter applied to reality by our perception and a structure imposed on reality using our concepts. In other words, a perspective is an interpretation of reality using our concepts.
A perspective (called “conception” in FRISCO; see Bjeković et al., 2014) can be thought of as a pair of glasses that is used to observe reality. It includes the limits of our perception and the focus coming from our purpose in looking at reality. Please note that the purpose is often connected to models (Thalheim and Nissen, 2015), while in this study, purpose is attached to the perspective, which is then later used for models. In general, it is possible to employ different perspectives on the same part of reality, both to reduce the complexity of reality and to view reality in relation to different purposes. The psychological and social process of constructing a perspective is beyond the scope of this study; it is handled in Schütz (1962), Kress and Van Leeuwen (2006), and Barrett (2017).
Figure 2 shows how the sheep of reality can be seen through different perspectives: from the perspective of a (simplified) taxonomy of organisms, from the perspective of skeletal anatomy, or as part of an ecosystem whose main elements are sheep, grass, and wolves.
Figure 2. Sheep seen using three different perspectives (Sheep skeleton from Museum of Veterinary Anatomy FMVZ USP, Creative Commons BY-SA 4.0).
In the literature, the perspective is often implicit, see Apostel (1960) and Rothenberg et al. (1989), such that the discussion starts directly with the objects and their properties. The advantage of an explicit perspective is that the purpose is captured already when looking at reality, even before models are considered.
2.2. Physical systems
As we cannot work directly with reality, we use the term “system” to identify the observed part of reality together with the applied perspective. This implies that a system has boundaries, telling us what is inside and what is outside the system. Moreover, a system has parts (called objects) that have relations to each other. The objects may be existing entities like animals and rocks, or created entities like tools or machines. Objects might have properties, which have measurable values. Both the objects and their properties are given by our perspective. The idea of a system implies that a system is evolving, because reality is evolving all the time. This leads to the following definition of a system, see also Fischer et al. (2020).
Definition 2 (System). A system is a potentially changing set of objects and their properties. These objects interact with each other and with entities in the environment of the system, resulting in changes of the objects and properties. In this way, a system is a set of possible progressions of its objects, where each progression is a set of object configurations that exist at different time points. The objects and their properties are parts of reality observed using a perspective.
As an example, let us look at a wolf–sheep ecosystem. We consider an area in nature that is bounded in some way, maybe by some mountains. Inside the area, we find wolves, sheep, and grass. Our perspective ignores all other elements of the ecosystem like butterflies, soil, and weather. We only consider wolves, sheep, and grass, where wolves and sheep have a location while the grass is given by its height. In such a system, we can track the wolves and sheep and also the amount of grass, and we can observe their development over time as shown in Figure 3. Please observe how our perspective removes all details of reality that are irrelevant and only counts the number of wolves, sheep, and the total amount of grass.
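A minimal sketch of such a tracked system in Python follows. The update rules and all numbers are invented for illustration and are much cruder than those of the digital system discussed below; the point is only that the perspective reduces reality to three observed quantities per time step.

```python
import random

random.seed(1)

# The perspective: we only observe the number of wolves, the number of
# sheep, and the total amount of grass. Everything else is ignored.
wolves, sheep, grass = 10, 100, 500.0
history = []

for tick in range(50):
    grass = min(1000.0, grass + 20.0)             # grass regrows
    eaten = min(sheep, int(grass // 10))          # sheep eat grass
    grass -= eaten * 5.0
    sheep += random.randint(0, eaten // 10 + 1)   # well-fed sheep reproduce
    caught = min(sheep, random.randint(0, wolves))
    sheep -= caught                               # wolves catch sheep
    wolves = max(0, wolves + caught // 5 - random.randint(0, 2))
    history.append((tick, wolves, sheep, grass))  # development over time (cf. Figure 3)

for tick, w, s, g in history[:5]:
    print(f"t={tick}: wolves={w} sheep={s} grass={g:.0f}")
```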
2.3. Digital systems
We can consider another system, a digital system, where the objects in the system are digital entities. We are still interested in wolves and sheep, but now, these are digital wolves and digital sheep, as shown in Figure 4, see also Wilensky (1997). We see a green patch of grass and we see the current position of the sheep and wolves. The different shades of green indicate the status of the grass. We consider the situation in Figure 4 to be the starting situation for the digital wolf–sheep system. As we are at the starting stage, nothing has happened so far.
After this starting situation, the system state changes, and Figure 5 shows a later situation. We see the current situation including some graphical history information on the left side, which shows the development of the wolf and sheep population over time. In addition, there are some buttons to control the system, allowing the system to be reset and to run in different configurations. This amount of control is often not available in physical systems, but digital systems are still a special kind of physical system.
It might seem as if there is no perspective used in this case, but that is not true. Please observe that the actual symbol for sheep and for wolves is irrelevant, and we only observe the number of wolves and sheep and their position. Moreover, even though each patch of grass has a lot of green pixels, only their joint color is relevant for the status of that patch.
An even deeper level of reality that is missing in our perspective is that there is some running computer program and some internal representation of the situation consisting of bits and bytes and, in this case, forming objects with attributes and values. The bits and bytes are again constructed out of electrical potentials, and we could go even further there. All these are ignored due to our perspective.
2.4. Mental systems
Even before the advent of computers, a knowledgeable and experienced person would be able to predict the development of wolves and sheep. They would use a mental system (sometimes called a mental model, see e.g., Johnson-Laird, 1983) of mental wolves and mental sheep. In such a mental system, the contained objects are thoughts1. It is possible to simulate this system as well and to run some experiments in it. The results may be a bit more vague, but there could be rules such as: every 6 years, there is a peak in the sheep population.
In a similar way, for an experienced programmer, it is possible to predict the outcome of the digital wolf–sheep system by running it in her brain. This would also produce a mental wolf–sheep system with the possibility of running experiments.
Mental systems can provide a prediction of future systems. For example, the designer of the digital wolf–sheep system might have started with an idea (a mental system) of the user interface for the future digital system. Such a mental system will still be somewhat blurry, like a blurry version of Figure 5. It will guide the development of the future digital system. It is a prediction, and it constitutes a mental system.
This way, a mental system is a simulation (or imagination) of reality in the mind of a person (e.g., Schütz, 1962). It has objects and properties and evolves over time. A mental system will often allow a great deal of control over the evolution of the system, such that different options can be explored, and the system can be stopped and restarted at different points in time. It might also be possible to consider several options at the same time.
Obviously, a mental system is based on the concepts available in the mind of the person holding it, such that the same notion of perspective applies as we observed for real systems. In other words, we cannot think about something we do not have concepts for.
There can be different possible perspectives available depending on the current focus, such that a picture of a sheep can trigger different perspectives (different mental systems) in different persons. In Figure 6, this is illustrated with one person, perhaps a biologist, employing a mental system based on the sheep's embedding into an ecosystem, and another, perhaps a cook, with a mental system based on cooking and preparing sheep meat.
Figure 6. A picture of a sheep can trigger different mental systems, for example, one based on ecosystems and another based on cooking and eating.
3. Static systems and snapshots
We will now look into systems that do not change, such that the system state is constant over the lifetime of the system. Please remember that a system is based on a perspective that has a selective aspect and that could mean that the changing elements of reality are not considered. As an example, think of the mountain in the wolf–sheep system. If we only look at the mountains, they might stay constant over a long period of time. Physics tells us that even mountains are composed of atoms with electrons that are moving all of the time. If our perspective is on a higher level than atoms, then this is not visible in our system.
In the same sense, the earth is moving all the time within the solar system. If our perspective is smaller than the solar system, this change is again not visible. Finally, over long periods of time, even mountains change by erosion or continental drift. If our system time scale is smaller than that or the precision is low, we will not see these changes. We therefore conclude that the perspective can make a difference between static and dynamic systems. This means that the purpose of the system determines whether the dynamicity of the system is relevant.
Formally, a static system is different from a single system state, which we also call a snapshot. A snapshot shows the structure of the system and the properties of its parts at a given point in time. A static system is then a long sequence of identical snapshots. As the snapshots are equal, we often identify them with the system itself.
When we look again at the wolf–sheep system, we can imagine that an evolution of the system leads to the situation shown in Figure 7. Here, the wolves have caught all the sheep and have subsequently died of hunger themselves. From this point onward, the system is a static system, as no change happens in the system any longer. This means that if we started the system at this point in time, it would be static from the outset. Admittedly, we are stretching the term system a bit here, but we have to remember that there are still patches of grass, so there are some structures in the system. Please note that it is the actual changes in the system, not the possible ones, that are relevant for the distinction between static and dynamic. If no changes happen, then the system is static.
Figure 8 shows another kind of stability in a system. Now, there are no more wolves, and the sheep eat more or less all available grass. The situation is stable. In the diagrams, the graphs of the wolves, sheep, and grasses will stay horizontal. Even then, the system is not static as the sheep change their positions all the time. A slight change of perspective—looking only at the total number of sheep and not at their position—will turn this system into a fully static system.
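The following sketch makes this concrete in Python (our own illustration with invented positions): under the fine-grained perspective the snapshots differ because the sheep move, while projecting each snapshot onto the coarser perspective of a total count yields a sequence of equal snapshots, i.e., a static system.

```python
# Three snapshots of a wolf-free system: each sheep is observed by position.
snapshots = [
    {"sheep_positions": [(1, 2), (4, 4), (7, 1)]},
    {"sheep_positions": [(2, 2), (4, 5), (6, 1)]},  # the sheep have moved
    {"sheep_positions": [(2, 3), (5, 5), (6, 2)]},
]

def is_static(run):
    # A static system is a sequence of identical snapshots.
    return all(state == run[0] for state in run)

def coarser_perspective(state):
    # Project a snapshot onto the perspective that only counts sheep.
    return {"sheep_count": len(state["sheep_positions"])}

print(is_static(snapshots))                                    # False: positions change
print(is_static([coarser_perspective(s) for s in snapshots]))  # True: the count is constant
```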
At first, static systems might not seem interesting, but they are in fact of considerable value because they can capture ground truths about the world. Let us take a more abstract perspective on the wolf–sheep system, in which we only consider the existence of wolves, sheep, and grasses. A snapshot then simply states that there are wolves, sheep, and grasses in the valley. This creates a static system, as over a long time, the state will remain the same, with wolves, sheep, and grass being present. In a similar way, a concept is a static truth. Essentially, we look out for concepts and general truths in order to create static systems working as background knowledge for the dynamic systems we create.
There is a close correspondence between static systems and the perspective because the concepts we employ to look at the world form a static mental system. Our mental concepts allow us to consider parts of reality as fixed such that we can focus on the things that are changing. Look, for example, at Figure 9, which presents a (static) mental system connecting concepts of sheep, grass, wolves, and eating. Please observe that the concepts are static and how they abstract elements of reality, thereby contributing to possible perspectives on reality. In this way, mental concept systems can also provide a background for creating systems, see also the discussion in Section 7.1.
Static systems are therefore very present in science. For example, all formulas in physics and chemistry are static systems. Similar static systems are available in basically all scientific areas. Figure 10 shows a taxonomy that could be part of a biological mental system of concepts related to the classification of organisms. In a similar way, static connections between concepts are available everywhere around us. You might want to recheck Figure 1, which is a depiction of some concepts relevant to this very article.
It can be argued that those systems are not totally static because these theories are changing, but for this study, we consider them as static systems. This is again an example of perspective making a difference between static and dynamic systems.
The current (heliocentric) understanding of our solar system has been static for a long time now. However, when we extend the time frame 2,000 years into the past, we can see that the understanding (i.e., the mental system) is dynamic, because back then a geocentric understanding was considered correct, see Figure 11.
Figure 11. Models and descriptions of the geocentric and the heliocentric view (Image credits: Top left: Tony Freeth; Top right: Armagh Observatory).
In this context, a static geocentric understanding means that the planets, the moon, and the sun are placed around the earth in their own orbits, see the bottom left of Figure 11.2 This is a static system even though the underlying process is dynamic because the planets move. We can also consider a dynamic system that is related to the geocentric understanding of the solar system, see for example the Antikythera mechanism at the top left of Figure 11.3 This shows that the system we look at can be static or dynamic depending on whether our perspective includes dynamic aspects or not.
4. Models
When we consider the physical wolf–sheep system, the digital wolf–sheep system, and the mental wolf–sheep system, we observe that there is a certain match among the three systems. Somehow, these systems capture the same essence. We capture this match between systems with the concept of “model” as follows.
Definition 3 (Model). A model is a system that is in the model-of relationship to a referent system, existing or planned, where the model-of relationship means that the model is analogous to the referent system.
Comparing two systems implies that they employ the same or a similar perspective (see Section 2) to be comparable. Otherwise, they cannot be in the model-of relationship. The analogy between the two systems is given by the perspective chosen, which again has a close relation to the purpose of the model. In our case, Figure 9 provides a shared perspective for all three wolf–sheep systems. Based on this observation, we can say that the digital wolf–sheep system is a model of the mental wolf–sheep system, which again is a model of the physical wolf–sheep system.
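As an illustrative sketch, the analogy underlying the model-of relationship can be operationalized in code: observe both systems through the same perspective and compare the resulting runs. This operationalization is our own simplification, not a definition from the cited literature, and the runs below are invented.

```python
def observe(run, perspective):
    # Apply the shared perspective to every state of a run.
    return [perspective(state) for state in run]

def analogous(run_a, run_b, perspective, tolerance):
    # Crude check: two runs are analogous under a perspective if the
    # observed values never differ by more than the tolerance.
    return all(
        abs(a - b) <= tolerance
        for a, b in zip(observe(run_a, perspective), observe(run_b, perspective))
    )

# Hypothetical runs: sheep counts from a physical and a digital system.
physical_run = [{"sheep": 100}, {"sheep": 96}, {"sheep": 103}]
digital_run = [{"sheep": 100}, {"sheep": 95}, {"sheep": 105}]

print(analogous(physical_run, digital_run, lambda s: s["sheep"], tolerance=5))  # True
```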
It might seem that relying only on analogy for the model-of relationship is too restrictive compared to other definitions, such as Thalheim (2011), Bjeković et al. (2014), and Thalheim and Nissen (2015). However, we cover additional aspects in other ways. A model is a system, which has a perspective based on its purpose. Because the perspective already restricts the referent system, we do not need the model to be more focused. The analogy would also need to come with a justification, as observed in Thalheim and Nissen (2015). Well-formedness and language do not relate to the model itself but to its description and are handled in Section 5.
It is useful to consider models and systems to be similar, see Apostel (1960) and Falkenberg et al. (1998), as this allows transitivity for the model-of relationship. Interestingly, FRISCO (Falkenberg et al., 1998) defines systems as special cases of models and not the other way around, as we do. See Section 7.1 for a more detailed discussion.
Sometimes, a model also applies a stronger focus than the original system, such that it abstracts away elements and properties that were available in the original system. In our wolf–sheep case, the focus is already given by the perspective employed, such that all three systems are on the same level of abstraction. This means that we can also invert the model-of relationship for our systems, saying that the physical wolf–sheep system is a model of the digital wolf–sheep system, which again is a model of the mental wolf–sheep system. Which system we select as the original and which as the model is decided by the purpose of the modeling.
At this point, we want to make a distinction between mental models and formal models. Figure 10 can be considered to be a picture of a mental model containing a taxonomy, i.e., a hierarchical classification of concepts. However, a taxonomy can also be a formal way to organize and index knowledge based on an agreement. For example, we can depict Figure 10 in the Unified Modeling Language (UML, OMG Editor, 2017), see Figure 12.
Figure 12. The taxonomy of Figure 10 as a UML class diagram.
There are two ways to look at Figure 12. We can consider the underlying mental system of some person as the starting point and see the UML diagram as depicting this mental model. However, as UML has formal semantics, it is more appropriate to consider the system generated from the UML description, as we will discuss in Section 5. This system can be a digital system or a mental system depending on whether a tool is employed or not. The generated system can now be compared with the original mental system (the concepts and their classification in the brain). This means that the formal system depicted in Figure 12 can be a model of the mental system, as depicted in Figure 10.
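As a sketch of how such a formal description generates a system, the inheritance hierarchy below renders a small taxonomy as Python classes. The concrete classification is our guess at the content of Figures 10 and 12, chosen only for illustration.

```python
class Organism: ...
class Animal(Organism): ...
class Mammal(Animal): ...
class Sheep(Mammal): ...
class Wolf(Mammal): ...

# Instantiating the description yields objects whose classification
# can be compared with the original mental system.
dolly = Sheep()
print(isinstance(dolly, Mammal), isinstance(dolly, Organism))  # True True
```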
We illustrate the modeling process in Figure 13. The starting point is reality, which is perceived and filtered using a perspective. In this case, we start with physical reality containing sheep, wolves, and grasses seen in an ecosystem perspective. Thereafter, a digital system is constructed which is similar to the physical system, also containing sheep, wolves, and grasses. Comparing the two systems shows that the digital system is a model of the physical system.
We have to remember that both the digital system and the physical system are parts of reality seen using a perspective. The model-of connection will normally be lost when we use a different perspective.
Throughout this process, a mental model of the physical wolf–sheep system was also involved, which enabled the construction of the digital wolf–sheep system. Finally, during the construction of the digital wolf–sheep system, a mental model of the digital system was used to guide its construction.
We illustrate the idea of a model again with the solar system. We use a perspective of the sun, the moon, the planets, and the earth, which gives us a physical system that we call O (original system). This is a system and not reality as is, see also the discussion in Section 2. We consider two different models of O, with M1 being geocentric, see the top left of Figure 11, and M2 being heliocentric, see the top right of Figure 11.4 Both models match the observations of the actual solar system viewed from Earth (our referent system) very well when the purpose is to identify the positions of the elements of the system.
5. Descriptions
There is one more element that has to be taken into account, namely descriptions, called denotations in FRISCO (Falkenberg et al., 1998). More often than not, we are not dealing with the models themselves but with descriptions of them. Already the fact that we can use figures in this article indicates that these are not the systems and models themselves but descriptions thereof. In fact, even the very text we are writing is a description of the content we want to provide. Therefore, we define descriptions of systems as follows.
Definition 4 ((System) Description). A system description is a set of statements about a system, given in some language. Normally, we expect the description to be complete such that a system description is a well-formed description of the structure of the system together with the possible dynamic development of its objects and properties. Conversely, a description prescribes (or implies or defines) one or more systems, depending on its level of precision and correctness.
In general, descriptions can come in various forms, for example, text, pictures, diagrams, formulas, or icons. Sometimes, descriptions are called representations or signs because they represent something else or stand for something else. We use the term “description” in this article and discuss how it relates to representations in Section 7. The main difference between systems and descriptions is therefore that descriptions describe (or create or refer to) systems. When a description is incomplete, whether on purpose or because it is under development, it will typically prescribe several systems. Sometimes, two descriptions can describe the same system. An example of this is programming code that is formatted differently without changing the semantics of the code.
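As a minimal sketch of the last point, the two Python fragments below are different descriptions (different formatting and comments, and different names so that both can live in one file) that prescribe the same system behavior.

```python
# Description 1
def grass_growth(height, rate):  # growth per tick
    return height + rate

# Description 2: formatted and commented differently
def grass_growth_2(
    height,
    rate,
):
    # Grass grows linearly with its growth rate.
    return height + rate

assert grass_growth(3, 2) == grass_growth_2(3, 2)  # same semantics
```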
Let us start with our wolf–sheep example and consider the word “sheep.” This word denotes a category of objects in the real world, which is somehow defined. There are clearly some cases that might bring up discussion, for example, the white objects in Figure 8. Are these also sheep? Whatever the answer is to this question, we can see that there is a difference between the idea (or concept) of sheep and the word (or description) of sheep. The distinction between a description and the real item is famously highlighted in the painting "The Treachery of Images" by René Magritte, see Figure 14, which shows a pipe, and the text “Ceci n'est pas une pipe,” French for “This is not a pipe.” It is indeed not a pipe, but it is a visual description of a pipe in the form of a painting.
Figure 14. This is not a pipe (Magritte) collected from Wikipedia (2022).
Continuing with the wolf–sheep example, there is a considerable amount of code (written in the language NetLogo) describing the digital wolf–sheep system, see Figure 15. This code is not the digital system itself; it just describes the system. The system, as shown in Figure 5, is given by the running code, which comes alive on a computer by means of an interpreter of the code. In our context, a description is a set of statements about the system, or some other form of semiotic representation referring to it. There are also meta-descriptions making statements about the description of the system, for example, “The description is written in German,” “The photograph is 13 cm wide,” or “The text is written in blue.” In this article, we only look into descriptions of systems and do not consider descriptions of descriptions.
Following our previous discussion, we have concepts that form our systems and models, while we use descriptions (among them words) to describe the models. Descriptions can be formal, for example, NetLogo code, or informal, for example, a sketch on a napkin. Model descriptions can be descriptive, which means that they describe an existing or imagined system, such as a sketch or 3D model of a building; or prescriptive, which means that they state how the system should be when it is created. Examples of prescriptive descriptions are architectural drawings and computer programs. A descriptive description can be formal or informal depending on the intended use, while prescriptive descriptions usually are formal.
When we look at the examples we have seen so far, we see how descriptions are always involved. Figure 4 shows a description of a situation in the digital wolf–sheep system. Incidentally, the same figure could also show a description of a real wolf–sheep system, where each patch of grass is monitored with the height of grass, and each wolf and each sheep has position sensors that let us depict their current position on the display. This means that the same description can describe several different systems or system states. Conversely, different descriptions can prescribe the same system(s) if the meaning does not depend on these differences, for example, different comments in programming code.
How does the description create a system? This is connected to the language that is used to describe the system, see Figure 16. The meaning is defined on the language level (L), where for each description, it is defined what it means. This meaning can still have several alternatives, which are options to choose from. Instantiation then creates a system out of the possible alternatives: meaning provides the set of possible systems, and instantiation selects one of them. In this way, we distinguish the time when the description is created (between L and D, often called design time) from the time when the description is used to create a system (between D and S, often called run time).
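A minimal sketch of this two-step process in Python follows; the toy description format and all names are our own invention. The meaning of a description yields the set of systems it allows, and instantiation selects one of them at run time.

```python
def meaning(description):
    # Design time: map a description to the set of systems it allows.
    # Here a "system" is simply an initial configuration (wolves, sheep).
    lo, hi = description["initial_sheep_range"]
    return [{"wolves": description["wolves"], "sheep": s} for s in range(lo, hi + 1)]

def instantiate(alternatives, choice=0):
    # Run time: select one concrete system from the alternatives.
    return alternatives[choice]

description = {"wolves": 10, "initial_sheep_range": (95, 105)}  # D in Figure 16
alternatives = meaning(description)                             # defined on level L
system = instantiate(alternatives, choice=5)                    # one system S
print(len(alternatives), system)  # 11 possible systems; one of them chosen
```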
It is very common to disregard the difference between descriptions and systems. This is done by implicitly considering the description to be the same as its meaning, see Apostel (1960), Rothenberg et al. (1989), and Guarino et al. (2019). This works in simple cases, but as we see in Figure 16, there is not always a one-to-one connection between description and implied system. Interestingly, Guarino et al. (2019) have a strong focus on semantics without separating descriptions from systems.
Figure 5 is a more advanced description of the digital wolf–sheep system. Apart from showing the current state, it also shows several essential system properties on the left side. There are also buttons to start and stop the system. Furthermore, there is a history of the system in terms of the number of wolves, sheep, and grass.
We use this history (also shown in Figure 3) to come back to our discussion of static and dynamic systems. This history has both static and dynamic aspects. We can look at the history as a figure, and this is a static artifact. It also appears to be static when we stop the system. Of course, it is dynamic while the system evolves as the diagram is extended to the right with increasing time.
Interestingly, the diagram—even in its static form—is a description of a complete run, and therefore, we can extract the run out of it (at a certain level of abstraction). As the diagram includes the time dimension, we can trace the numbers of wolves, sheep, and the amount of grass for each time point from the diagram, thereby essentially constructing a run of a simplified wolf–sheep system. In this way, Figure 3 is a static description (even a prescription) of a dynamic system. In our understanding, all descriptions are static and do not change while we use them, see also Figure 16.
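In code, extracting the run from such a static description amounts to reading off one simplified system state per time point (a sketch with invented numbers):

```python
# A static description of a complete run: the data behind a history
# diagram like Figure 3, with invented numbers.
history = {
    "ticks": [0, 1, 2, 3],
    "wolves": [10, 12, 9, 7],
    "sheep": [100, 90, 95, 110],
    "grass": [500.0, 420.0, 460.0, 510.0],
}

# Reconstruct the run: one simplified system state per time point.
run = [
    {"wolves": w, "sheep": s, "grass": g}
    for w, s, g in zip(history["wolves"], history["sheep"], history["grass"])
]
print(run[2])  # the system state at tick 2
```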
We continue our example with the solar system using the description at the bottom right of Figure 11. This description induces a mental system that can be used to understand the movements of the elements of the solar system. It can also be used to build a physical model like the one in the top right of Figure 11. There will also be a mental system induced by the physical model at the top right of Figure 11, which will probably be very similar to the one induced by the description at the bottom right of Figure 11.
Now, we can connect the descriptions with the models as shown in Figure 17. There are two steps between the description and the original system. First, the meaning is used to get the system out of the description. Now, we have two systems: one system coming from the description, which we call the model in the diagram, and the original system. These two systems are related by the model-of relationship, i.e., they are analogous in some way.
6. Communication and semiotics
So far, we have tried to avoid the whole issue of communication: we have introduced systems and a relation between systems, which we call model-of. All these different systems exist independently of each other and independently of whether we talk about them or not. This is clear for the physical and the digital systems, as they run without any person being present. The mental system seems to be different because it is created through social interaction. However, once it is established in your brain, it does not need interaction; you can think of wolves and sheep without being in contact with anyone else. In this way, the mental system exists independently of communication. However, communication is needed to create the concepts that shape the mental system.
This is even more visible when we look at descriptions. Of course, computers can turn code (descriptions) into running systems without human interpretation, but in general, descriptions and their meaning are deeply rooted in human interaction and communication.
This brings us to the field of semiotics which is about meaning-making. To connect our study with semiotics, we start with an explanation of the concepts in semiotics before explaining how they relate to our study.
Semiotics looks at the relations between the world, the sign, and the (human) users of the sign. A representation is something that stands for something else. A representation is thus a sign or an organized collection of signs. A sign consists of an expression (what we can sense) and a content (what the sign refers to/represents) (Saussure, 1974). The systematic connection between expression and content is called a code. To understand German, you must know the code for German. To interpret a chart, you need to understand the code that connects lines, planes, and colors to a particular content. While a photograph is interpreted from a naturalistic coding system (the expression looks like the object it refers to), a diagram is interpreted from an abstract coding system (where relations, patterns, and numerical values are given a graphic expression) (see Kress and Van Leeuwen, 2006 about coding systems).
In this article, we emphasize the difference between the expression (called description) and the content (called system) of the sign, which is brought about by the code (called meaning) of the language.
Semiotics is mostly interested in communication between (human) users, and therefore, all representations point to some mental system, i.e., our notions (perspectives, concepts, and imaginaries) of the world—notions that are partly common and partly individual. Therefore, we never know for sure whether we interpret a picture or a diagram in exactly the same way as another person. But the more similar our cultural backgrounds are (e.g., education), the greater the probability of an equal interpretation. The similarity of interpretation is not only related to the cultural background of the users but also to the actual situation of use, where the context enables the alignment of interpretations (see also the discussion of communication later in this section).
According to Charles S. Peirce, the sign, which he called representamen, has a relation to the object that it refers to (Hartshorne and Weiss, 1932). The human interpretation of the sign he calls the interpretant.
A sign, or representamen, is something which stands to somebody for something in some respect or capacity. It addresses somebody, that is, creates in the mind of that person an equivalent sign, or perhaps a more developed sign. That sign which it creates I call the interpretant of the first sign.
The Danish semiotician Jørgen Dines Johansen (1979: p. 151) depicts Peirce's semiology as we see in Figure 18.
Figure 18. A model of the semiology of Charles Sanders Peirce, re-drawn from Johansen (1979).
A sign is an abstract entity (like the word “horse”) that can be realized (expressed) in many ways. A realized version of a sign is the result of choices that take place in a process of design and production: the thickness of the lines, the choice of colors, etc. The system implied by the description in Figure 18 would be the same if the lines were drawn thicker or the plane were colored yellow. In semiotics, the term “description” is used for the details, relationships, and functions of the realized version of the sign, which means such descriptions concern the expression and not the content, see also Section 7.3.
Peirce distinguishes between “the mediate object”—which exists independent of human imagination—and “the immediate object”—which is the object as experienced by humans. In Peirce's semiology, there is a connection between the sign and the mediate object although it is only the immediate, experienced object that the sign refers directly to. This connection is the same in our study. The sign refers to a mental system via the meaning, while the mental system can have a model-of relationship to reality given by a physical system. This way, a sign is a description of a (mental) system, which in itself can be a model of some other system.
In Figure 19, we highlight how Peirce's semiology model relates to our approach to modeling, with Peirce's model to the left and our view to the right.
Figure 19. The connection between Peirce's model of semiology and our understanding. Empty boxes/clouds with dashed lines represent elements not covered in that model.
Following Peirce's perspective on the sign as a means for mediation, we can look at a model of human communication. Shannon and Weaver provided a classic transmission model of communication, in which the message was seen as a “package” transmitted from a sender to a receiver in the form of a text (Shannon and Weaver, 1949).
Later communication models, inspired by Peirce's model, focus more strongly on the interaction between the sign, the users of the sign, and the world. Among the most well-known is Karl Bühler's triangle model from 1934 (Bühler, 1934). Both models, Shannon and Weaver's linear transmission model and Bühler's triangle model, are combined in the model shown in Figure 20 from Engebretsen (2001).
Figure 20. Model that combines a linear and a triangle model of communication (Engebretsen, 2001: p. 26).
In the figure, the arrows represent the direction of signals, while all lines show relations relevant to the interpretation of these signals. As with the previous models in semiotics, Figure 20 provides an expression (a description) of a (mental) system of concepts and their connection. This mental system is the content of the model.
The system described in Figure 20 is a model of communication because it relates to our understanding of human communication processes in reality. The model itself is static, in the sense that the elements it depicts, and the relations between them are static. But the system it refers to is dynamic, in the sense that the effect of the communication process is dependent on many factors, e.g., how the receiver sees the world (the topic of the text) and the sender, in addition to how (s)he interprets the text itself.
How is this now connected to meaning? In linguistics, semantics belongs to grammar, together with morphology (how to make words) and syntax (how to make sentences). It refers to a formal system of signification: rose is a sub-group of flower, warm is the opposite of cold, etc. Semantics includes neither context nor individual interpretation and association. A similar understanding is present in technology, where formal languages have (formal) semantics that do not depend on individual interpretation.
For technology, meaning stops here, but in human communication, individual understanding is relevant; beyond this point, we talk about meaning and not semantics. What a sentence means to an individual language user in a specific context is the result of the combination of many factors, among them semantics, context, personal experience, and intentions.
Although the process of communication and meaning-making is complex, as indicated above, the focus in this study is on the part of the process where a person forms a mental system based on a verbal or non-verbal description of a system. Communication is deeply involved in the process of creating the individual mechanisms for assigning meaning, but in this study, we only look at the result of the meaning-making and not at the way the meaning-making develops.
7. Discussion
In this section, we discuss the connection between mental models and perspective. After that, we look at how UML understands the concepts explained in this study. Thereafter, we look into the difference between representations and descriptions. Finally, we discuss whether there are dynamic descriptions and whether the distinction between static and dynamic models is meaningful at all.
7.1. Perspective vs. mental model
From the presentation so far, it might be unclear what the relation is between perspective and mental model, as both are in the head of a person. We use perspective to provide the context of a system, i.e., the parts of reality we want to focus on (and the other parts that we leave out). In this study, we look at reality in a passive mode, in that we only observe what is happening. This is the reason why FRISCO (Falkenberg et al., 1998) identifies models to be conceptions. We do not directly connect to reality but rather to our perspective of it. We call this a system, while FRISCO calls it a model.
The perspective shapes what is possible to observe in reality and might even provide a basis to communicate what is observed. In this sense, the observation leads to a set of data points that are given based on the concepts of the observer. In this way, the concepts and the purpose shape the way we look at reality, i.e., they shape the physical system we use. In this context, concepts are only used for (passive) classification.
Modern research indicates that concepts only exist in relation to other concepts (Barsalou et al., 2018), but this still allows the static idea of concepts to guide the perception of reality (thereby creating systems) and to enable the construction of mental models.
After we have created or determined a physical system, we have many observations, and we are able to detect connections between the data points. In this way, we might be able to predict certain outcomes or understand causality. Adding these connections to the concepts turns them into an active mental system, which has a system state and includes dynamic changes. This mental system can help us to handle the reality that is captured in the system.
The reason that the mental system could help us to handle reality is that it corresponds to reality, which means that it is a model of the physical system we look at. This model informs our actions by predicting events. The mental model of the physical system exists in our mind independent of any processes of (immediate) communication, meaning that the concepts build a framework that is used for observation and which can be extended with rules and connections when we want to simulate reality.
Now, we see that there is a difference between a (physical) system, which is the (passive) interpretation of reality using conceptions, and a mental model, which is an active mental system that can be compared with the real system. In the context of FRISCO, both are called models, with the idea that both are mental entities. However, many systems are real, and many models are real as well; it is only the model-of connection between them that is mental. The most important issue with models is that they model something else, and with FRISCO, they point to reality, which is somewhat outside the framework. We want models to relate to something inside the framework, namely, systems. Therefore, we consider models to be special cases of systems, having a model-of relationship to their referent. The boundedness of systems in FRISCO applies, for us, to both systems and models.
Using a description of a mental model (as words, drawings, diagrams, etc.), it is possible to discuss this mental model with other people and align it with their models. As we discussed before, this needs a certain overlap in the related concepts, i.e., our perspectives need to be aligned. Otherwise, there is some danger that the description of the mental system leads to different mental systems in different people, with the result that the description does not describe a shared reality and does not work as a tool for human interaction and cooperation.
In communication, each linguistic term is a concept in the brain by means of its expression (e.g., its phonetics in the context of verbal communication) (Hofstadter, 2013). The concepts in the brain are representations (Slaney and Racine, 2011), which translate into networks of neurons (Barrett, 2017). So each linguistic term is a concept in the brain, but the content of a word or of a collection of words translates to another concept in the brain by way of the code related to the word.
As Gelman (2009) discussed, we set up our conceptual system as children—mostly based on our experiences. We do that by creating summaries of our experiences and finding commonalities between them. Later, as adults, we continue to develop our concepts by adjusting our neural connections rather than by creating new concepts. In this way, our concepts are tied to our experiences. At the same time, we use our concepts to build our mental models or, in other words, make sense of our world (Hofstadter, 2013). Thus, our understanding of the world influences our perspective of the world. Since our mental models rely on our concepts, as a consequence, our mental models relate to our perspective.
7.2. UML
UML (OMG Editor, 2017) is defined by the Object Management Group (OMG). The UML language is embedded in the four-layer architecture as defined by OMG:
- M3: the meta-metamodel layer, containing MOF, a subset of UML in which UML itself is defined;
- M2: the metamodel layer, containing the UML metamodel, which defines the language UML;
- M1: the model layer, containing UML models;
- M0: the object layer, containing the objects created according to the models.
In this architecture, models are formulated in modeling languages defined by meta-models, and models are then used to create objects. For readers more familiar with programming languages, the architecture applies to grammars in the same way: a program (M1) is written according to a grammar (M2), which in turn is defined in EBNF (M3), while the running program's objects populate M0.
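As a loose analogy (ours, not OMG's), the same layering can be observed directly in Python, where classes are themselves objects created by a metaclass:

```python
class Sheep:  # a class definition: a description on layer M1
    pass

dolly = Sheep()  # an object: layer M0

print(type(dolly))  # <class '__main__.Sheep'>: the M1 entity describing dolly
print(type(Sheep))  # <class 'type'>: the metaclass, playing the role of M2/M3
```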
Looking at the OMG architecture above, it becomes obvious that systems as discussed in Section 2 are composed of objects and therefore belong to layer M0. This applies to physical systems in the same way as to mental systems and digital systems. Some of these systems might be models of other systems, as discussed in Section 4. This means that models also belong to layer M0.
Layer M1 then includes “UML models,” which are collections of UML diagrams (class diagrams, object diagrams, interaction diagrams, ...). As we discussed in Section 5, diagrams and other expressions are not the models themselves, but descriptions of the models. This relates to the distinction between the expression of a sign (a description) and the content of a sign (the model). Therefore, we prefer to call the entities on layer M1 “model descriptions.” Their content is given by the meaning as provided by the language they are written in.
This understanding is also supported by the engineering use of UML models (which are descriptions). Experiments with systems, like testing and simulations, are experiments with an execution, not with descriptions. For such experiments, a model description (UML model) must be interpreted or executed, and the result will be a system consisting of objects according to the classes described in the model description.
Validation is the process of finding out whether a system has the right model-of relation to an existing or planned real system. For establishing this model-of relation, it might help to look at system states of the model and of the reference system. System states can be shown using system state descriptions, and UML provides object diagrams containing instance specifications for this purpose. Object diagrams are also UML models, i.e., descriptions, and instance specifications are, as the term indicates, part of descriptions. This consideration shows that system states are placed on the layer M0, the same place where systems are situated. Object diagrams are descriptions and are therefore placed on the layer M1. Their content is given by their meaning and connects them with the system states on layer M0.
7.3. Description vs. representation
The term “description” used in this study could be misunderstood in semiotics, and the term “representation” could be a better choice. In this subsection, we want to discuss why we have chosen “description” and how representations come into the picture.
A description is generally understood as a set of statements about something else; Oxford Languages defines it as “a spoken or written account of a person, object, or event”; the Collins dictionary as “a description of someone or something is an account which explains what they are or what they look like”; Merriam-Webster as “a statement or account giving the characteristics of someone or something: a descriptive statement or account”; and the Cambridge dictionary as “something that tells you what something or someone is like.” This means that a description of a system is a set of statements about this system, as also indicated in OMG Editor (2017).
A representation is something that stands for something else; Oxford Languages defines it as “the description or portrayal of someone or something in a particular way”; the Collins dictionary as “you can describe a picture, model, or statue of a person or thing as a representation of them”; Merriam-Webster as “one that represents, such as an artistic likeness or image, a statement or account made to influence opinion or action ...,” where represent means “to bring clearly before the mind / to serve as a sign or symbol of / to serve as the counterpart or image of / to take the place of in some respect / to describe as having a specified character or quality”; and the Cambridge dictionary as “the way that someone or something is shown or described / a sign, picture, model, etc. of something.” This means that a representation of a system is a sign or a set of signs denoting the system.
In semiotics, the sign has an expression and a content, which are connected by a code. Therefore, it is natural to speak of a representation because the expression represents the content. It gets more difficult when the description is composed of several signs and the content is a dynamic system; then the use of the term “representation” is not quite as natural. In addition, computer science has another use of the term, as in the representation of, say, an integer on a machine in terms of bits and bytes, which makes the use of the term “representation” awkward.
While “representation” denotes the sign itself in semiotics, the term “description” is there used about signs, i.e., we describe signs or their expression. This means that a description (in the semiotic sense) would be a meta-description (a description of a description) in our context. As an example, a partial description of a system (an expression) is the sentence “occasionally, wolves eat sheep.” A meta-description would be “the sentence has four words.”
Another argument to be considered is that we already have a relationship that represents something else, namely the model-of relationship. This is also the reason that the original system is called the “referent system.” In this way, “represent” is often used as a synonym for “model.” We would typically say that in a system being a model of, e.g., a library, there will be objects that represent (model) the real books, and there will be loan objects representing loans. Connecting this with Figure 17, the vertical connection would be called “describe,” while the horizontal relation would be called “refer,” see also Figure 19.
From a broader view, “representation” intuitively sounds more formal and complete. While a description, in the popular understanding, can be complete and formal, it can also be less useful or even completely irrelevant, e.g., “a sheep usually has more than 1 leg,” “a wolf consists of wolf-stuff,” and “grass is long and thin.” A representation, by contrast, sounds like it carries some kind of “obligation” to represent something in a useful way.
Based on these arguments, we have decided to use the term “description” in this article, while being fully aware that in some contexts it could be misunderstood.
7.4. Dynamic descriptions
In Section 5, we claimed that descriptions are static. This works very well when thinking of descriptions as texts, drawings, or photos. However, there are other modes of description whose placement between static and dynamic is less clear.
Let us start with the case of animated charts. They are descriptions, and they change, which might mean that they are dynamic descriptions. On the other hand, they are also static in the sense that they are created once and are never changed in their binary form; only their presentation changes.
In media studies, we learn that radio (talk and music), TV, and film are dynamic media, while print-based media are static. Talk is dynamic (it develops over time), while writing is static. Whether or not the reception and use unfold in time does not change this. Web-based media are thus a convergence of dynamic and static media. At this point, we might sense that this distinction is hard to make crystal clear, because the receiver seems to be involved in it.
To open our minds, let us look at some more examples. First, consider static drawings that are perceived as dynamic (so-called optical illusions, see Kitaoka and Ashida, 2003). We could claim them to be static, but they appear dynamic to the viewer. Second, an ordinary book might seem static. However, nobody can read a book all at once; normally, one must read through it (sequentially or in some other way), so the reading experience is again very dynamic. The situation becomes even more intricate with an eBook: the book is still a static file, but the eBook reader presents a dynamic screen. One could call this dynamic, yet it is very similar to the case of the printed book.
As a third example, consider a DVD containing a complete movie. The movie is considered dynamic, while the DVD appears very much like a static object. We can extend this example and connect it to the book: when a regular book is read by a blind person, enabling technology reads the book aloud to them; a similar scenario is simply reading a book to someone else. The book is static, but the audio of the reading is dynamic.
A final dimension in this context is the dynamics of the creation process of the description. A book, a DVD, or an animated chart is created in a stepwise process, and the current state might not be the final one. This means we have dynamics on several levels, which makes the issues even more complex. It might indicate that static vs. dynamic is a dichotomy that is hard to defend as a pair of exclusive categories and is merely a fruitful starting point for elaboration and theoretical reflection.
Looking at these examples, it turns out that we already have a tool to resolve this ambiguity: the notion of a system. We recall that a system is a part of reality observed using a perspective. In our examples, we did not fix the perspective, and thereby we did not fix the system either. Thus, we can consider a DVD static under one perspective and dynamic under another. The DVD itself is physical reality and cannot enter our considerations before we apply a perspective.
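The following Python sketch (our own hedged illustration, with invented data standing in for the bytes on a DVD) shows how two perspectives turn the same artifact into a static or a dynamic system.

```python
# Sketch: the same artifact yields a static or a dynamic system,
# depending on the perspective applied to it.
movie_file = bytes(range(10)) * 3  # stand-in for the bytes on a DVD

# Perspective 1: the artifact as a fixed byte sequence -> a static system
# with exactly one state that never changes.
static_state = movie_file

# Perspective 2: the artifact as playback over time -> a dynamic system
# that passes through a sequence of states (here: "frames").
def playback(data, frame_size=10):
    """Yield successive frames, i.e., a sequence of system states."""
    for i in range(0, len(data), frame_size):
        yield data[i:i + frame_size]

for state in playback(movie_file):
    print(state)  # three successive states over time
```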
8. Conclusions
We have found several elements of importance for models, as shown in Figure 19. First, we distinguish between reality and systems, where systems are parts of reality observed using a perspective. Depending on the chosen perspective, we can observe systems that do or do not change, i.e., dynamic or static systems. A dynamic system is a system that has several different system states over time, while a static system has the same system state at every point in time.
This easily leads to models, where a model is just a system that is analogous to another system, called a referent system (original). The similarity is typically brought about by using a matching perspective for the two systems.
Finally, it is important to consider descriptions of systems, which are different from the systems themselves. A description of a system is a collection of statements about the system, shaped by the language it is presented in (words, images, etc.). Descriptions of systems are necessary in order to communicate about systems. Descriptions lead to systems via their implicit or explicit meaning, which comes from the language used for the description. The implied system can be compared to another system, establishing the model-of relationship. The descriptions are not themselves the models; they are descriptions of the models.
These elements provide a comprehensive framework for understanding models in diverse domains.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Author contributions
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Footnotes
1. ^There is a difference between what is mental (and individual) and what is social (and shared). In this study, both are subsumed in the concept of a mental system.
2. ^The diagram is the “Figure of the Heavenly Bodies”—an illustration of the Ptolemaic geocentric system by Portuguese cosmographer and cartographer Bartolomeu Velho, 1568 (Bibliothèque Nationale, Paris).
3. ^The picture shows an "exploded" view of a 2021 computer model of the Antikythera mechanism, showing how it might have worked.
4. ^The heliocentric model is a mechanical orrery by Gilkerson, in the Armagh Observatory. Credit: star.arm.ac.uk.
References
Apostel, L. (1960). Towards the formal study of models in the non-formal sciences. Synthese 12, 125–161. doi: 10.1007/BF00485092
Bammer, G. (2013). Disciplining Interdisciplinarity: Integration and Implementation Sciences for Researching Complex Real-World Problems. Canberra, ACT: ANU Press.
Barrett, L. (2017). How Emotions Are Made: The Secret Life of the Brain. Boston, MA: Houghton Mifflin Harcourt.
Barsalou, L. W., Dutriaux, L., and Scheepers, C. (2018). Moving beyond the distinction between concrete and abstract concepts. Philos. Trans. R. Soc. B 373, 20170144. doi: 10.1098/rstb.2017.0144
Bjeković, M., Proper, H. A., and Sottet, J.-S. (2014). “Embracing pragmatics,” in Conceptual Modeling, eds E. Yu, G. Dobbie, M. Jarke, and S. Purao (Cham: Springer International Publishing), 431–444.
Brughmans, T., Hanson, J., Mandich, M., Romanowska, I., Rubio-Campillo, X., Carrignon, S., et al. (2019). Formal modelling approaches to complexity science in roman studies: a manifesto. Theor. Roman Archaeol. J. 2, 1–19. doi: 10.16995/traj.367
Bühler, K. (1990). Theory of Language: The Representational Function of Language. Translated by Donald Fraser Goodwin. Amsterdam: John Benjamins Publishing Company. (Original work published 1934).
Chamizo, J. A. (2011). A new definition of models and modeling in chemistry's teaching. Sci. Educ. 22, 1613–1632. doi: 10.1007/s11191-011-9407-7
Engebretsen, M. (2001). Nyheten som Hypertekst: Tekstuelle Aspekter Ved Møtet Mellom en Gammel Sjanger og ny Teknologi. Kristiansand: IJ-forlaget.
Falkenberg, E., Hesse, W., Lindgreen, P., Nilsson, B., Han Oei, J., Rolland, C., et al. (1998). “FRISCO: A framework of information system concepts: the FRISCO report (WEB edition),” in International Federation for Information Processing (IFIP).
Fischer, J., Møller-Pedersen, B., and Prinz, A. (2020). “Real models are really on M0 - or how to make programmers use modeling,” in Proceedings of the 8th International Conference on Model-Driven Engineering and Software Development - MODELSWARD (Valletta: INSTICC, SciTePress), 307–318.
Friedman, L., Friedman, H., and Pollack, S. (2008). The role of modeling in scientific disciplines: a taxonomy. Rev. Bus. 29, 61–67.
Gelman, S. A. (2009). Learning from others: children's construction of concepts. Annu. Rev. Psychol. 60, 115–140. doi: 10.1146/annurev.psych.59.103006.093659
Grüne-Yanoff, T., and Mäki, U. (2014). Introduction: Interdisciplinary model exchanges. Stud. History Philos. Sci. A 48, 52–59. doi: 10.1016/j.shpsa.2014.08.001
Guarino, N., Guizzardi, G., and Mylopoulos, J. (2019). “On the philosophical foundations of conceptual models,” in Information Modelling and Knowledge Bases XXXI (Amsterdam), 1–15.
Hartshorne, C., and Weiss, P. (1932). Collected Papers of Charles Sanders Peirce, Volumes I and II, Principles of Philosophy and Elements of Logic. Cambridge, MA: Harvard University Press.
Heemskerk, M., Wilson, K., and Pavao-Zuckerman, M. (2003). Conceptual models as tools for communication across disciplines. Conservat. Ecol. 7, 308. doi: 10.5751/ES-00554-070308
Hofstadter, D. R. (2013). Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. New York, NY: Basic Books.
Husserl, E. (1960). Cartesian Meditations: An Introduction to Phenomenology (D. Cairns, Trans.). The Hague: Martinus Nijhoff.
Ingham, J., Dunn, I. J., Heinzle, E., Přenosil, J. E., and Snape, J. B. (2007). Chemical Engineering Dynamics: An Introduction to Modelling and Computer Simulation. Weinheim: Wiley-VCH.
Johansen, J. D. (1979). “Sign concepts/semiosis/meaning,” in Danish Semiotics, eds J. D. Johansen and M. Nöjgaard (København: Munksgaard).
Johnson-Laird, P. (1983). Mental Models: Toward a Cognitive Science of Language, Inference and Consciousness. Cambridge, UK: Cambridge University Press.
Kitaoka, A., and Ashida, H. (2003). Phenomenal characteristics of the peripheral drift illusion. Vision 15, 261–262. doi: 10.24636/vision.15.4_261
Kjeldstadli, K. (1997). “Det fengslende ordet,” in Valg og vitenskap: Festskrift til Sivert Langholm, eds K. Kjeldstadli, J. Myhre, and T. Pryser (Oslo: Den Norske Historiske Forening).
Kress, G., and Van Leeuwen, T. (2006). Reading Images: The Grammar of Visual Design. New York, NY: Routledge.
Mistry, N. S., and Koyner, J. L. (2021). Artificial intelligence in acute kidney injury: from static to dynamic models. Adv. Chronic Kidney Dis. 28, 74–82. doi: 10.1053/j.ackd.2021.03.002
OMG Editor (2017). Unified Modeling Language: Infrastructure Version 2.5.1 (OMG Document Formal/2017-12-05). OMG Document. Published by Object Management Group. Available online at: http://www.omg.org
Ramirez, J. A., Lichter, M., Coulthard, T. J., and Skinner, C. (2016). Hyper-resolution mapping of regional storm surge and tide flooding: comparison of static and dynamic models. Natural Hazards 82, 571–590. doi: 10.1007/s11069-016-2198-z
Roberson, D., Davidoff, J., Davies, I., and Shapiro, L. (2006). “Colour categories and category acquisition in Himba and English,” in Progress in Colour Studies, Volume II: Psychological Aspects (Amsterdam; Philadelphia, PA: John Benjamins Publishing Company), 159–172.
Rothenberg, J. (1989). The Nature of Modeling. A RAND Note, prepared for the U.S. Defense Advanced Research Projects Agency. Santa Monica, CA: RAND Corporation.
Saussure, F. de (1974). Course in General Linguistics, eds C. Bally and A. Sechehaye. Glasgow: Collins.
Schütz, A. (1962). Collected Papers, Volume I: The Problem of Social Reality, chapter On Multiple Realities. The Hague: Martinus Nijhoff.
Shannon, C. E., and Weaver, W. (1949). The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.
Slaney, K. L., and Racine, T. P. (2011). On the ambiguity of concept use in psychology: is the concept “concept” a useful concept? J. Theor. Philos. Psychol. 31, 73–89. doi: 10.1037/a0022077
Taber, K. S. (2017). Science Education: An International Course Companion, chapter Models and Modelling in Science and Science Education. Rotterdam: SensePublishers.
Thalheim, B. (2011). “The theory of conceptual models, the theory of conceptual modelling and foundations of conceptual modelling,” in Handbook of Conceptual Modeling: Theory, Practice, and Research Challenges (Berlin; Heidelberg: Springer), 543–577.
Thalheim, B., and Nissen, I. (2015). “Wissenschaft und Kunst der Modellierung: Kieler Zugang zur Definition, Nutzung und Zukunft,” in Deutsche Bibliothek der Wissenschaften: Philosophische Analyse (Berlin: De Gruyter).
Vynnycky, E., and White, R. (2010). An Introduction to Infectious Disease Modelling. Oxford: Oxford University Press.
Keywords: model, system, description, meaning, snapshot, static, dynamic
Citation: Prinz A, Engebretsen M, Gjøsæter T, Møller-Pedersen B and Xanthopoulou TD (2023) Models, systems, and descriptions–A cross-disciplinary reflection on models. Front. Comput. Sci. 5:1031807. doi: 10.3389/fcomp.2023.1031807
Received: 30 August 2022; Accepted: 14 February 2023;
Published: 05 April 2023.
Edited by: Heinrich C. Mayr, University of Klagenfurt, Austria
Reviewed by: Henderik Proper, Vienna University of Technology, Austria; András J. Molnár, Computer and Automation Research Institute (MTA), Hungary
Copyright © 2023 Prinz, Engebretsen, Gjøsæter, Møller-Pedersen and Xanthopoulou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Andreas Prinz, andreas.prinz@uia.no