OPINION article

Front. Psychol., 11 August 2022
Sec. Theoretical and Philosophical Psychology
This article is part of the Research Topic Credition - An Interdisciplinary Approach to the Nature of Beliefs and Believing.

Deontic-doxastic belief revision and linear system model

Andrea Vestrucci1,2

  • 1Faculty of Information Systems and Applied Computer Sciences, University of Bamberg, Bavaria, Germany
  • 2Starr King School, Oakland, CA, United States

Introduction

The article presents a doxastic-nested-deontic formalization of epistemic deontology (Feldman, 2000; Forrai, 2021) for static and dynamic belief revision, in AGM theory (and extensions) and Dynamic Epistemic Logic, respectively. The article also introduces a linear system model for beliefs1.

Doxastic and deontic logics axiomatize propositions about beliefs (“it is believed that”) and prescriptions (“it is obligatory that”), respectively. They belong to the family of modal logics.

Static and dynamic belief revisions follow from adding conflicting information to a belief database: in the static setting the doxastic value of the information is fixed (revision is non-iterated); in the dynamic setting information can be revised (revision can be iterated). In light of this, static belief revision might seem incompatible with belief update (Katsuno and Mendelzon, 1992) since update deals with information change (Seitz et al., 2018). This position has been variously challenged (Friedman and Halpern, 1994; Peppas and Williams, 1995; Gabbay, 1999; Aucher, 2004). Belief revision theories do relate to models for database update (Val and Shoham, 1994; Williams, 1997; Ditmarsch et al., 2008).

The article's outputs address: doxastic voluntarism; a paradox in strong epistemic deontology; the specificity of religious beliefs (Oviedo and Szocik, 2020).

Static belief revision

Beliefs are elements of a set B (Alchourrón et al., 1985; Gärdenfors, 1988) over which three relations are defined: (1) logical consequence “⊢” (Gärdenfors, 1984; Alchourrón et al., 1985); (2) epistemic entrenchment “≼” (Gärdenfors and Makinson, 1988), and (3) spheres inclusion “≥” (Grove, 1988).

Concerning 1, elements of B are logical consequences of other elements (e.g., believing that it will rain tomorrow follows from believing in the reliability of weather forecasts).

B = Cn(B) = {α : B ⊢ α}

(Huber, 2013; Hansson, 2022).

When new information ϑ contradicts some elements of B, B is revised (“∗”) by clearing B of all elements contradicted by ϑ and adding ϑ:

B ∗ ϑ = Cn((B − ¬ϑ) ∪ {ϑ})

(Levi, 1977).
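This static revision step can be illustrated computationally. The following is a minimal toy sketch (the representation and names are mine, not part of the formalism): beliefs are propositional literals of the form `p` / `~p`, so that contradiction reduces to literal negation, and the full consequence operator Cn is elided.

```python
# Toy sketch of static revision via the Levi-style recipe above: clear B
# of everything contradicted by theta, then add theta. Beliefs are bare
# literals, so "contradiction" is just literal negation; Cn is elided.

def neg(lit):
    """Negate a literal: 'p' <-> '~p'."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def revise(beliefs, theta):
    """Remove everything contradicted by theta, then add theta."""
    return {b for b in beliefs if b != neg(theta)} | {theta}

B = {"rain_tomorrow", "forecast_reliable"}
print(revise(B, "~rain_tomorrow"))
# keeps "forecast_reliable", swaps "rain_tomorrow" for "~rain_tomorrow"
```

Real AGM revision operates on logically closed sets; this toy drops closure to keep the clearing-then-adding pattern visible.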

Concerning 2, entrenchment is a preorder on B (Peppas, 2008) based on belief firmness: the more entrenched a belief is, the more it costs to give it up. This also applies to logical consequences; thus 1 and 2 are related: α ⊢ β → α ≼ β (Dominance postulate). Belief revision consists in clearing B of anything that is at most as entrenched as the elements contradicted by ϑ, and adding ϑ.

B ∗ ϑ = Cn({ψ ∈ B : ¬ϑ ≺ ψ} ∪ {ϑ}).

Revision consists in the “minimal mutilation” (Rott, 2000; Leitgeb, 2010) of B [keeping as many old beliefs as possible (Ditmarsch et al., 2008)], and the addition of ϑ.
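The entrenchment-based clearing can be sketched as follows (a hedged toy: the numeric ranks and belief names are illustrative, not from the article): each belief carries an entrenchment rank, and only beliefs strictly more entrenched than the negation of the incoming information survive.

```python
# Sketch of entrenchment-based revision. Ranks are illustrative numbers:
# higher = more entrenched = costlier to give up. Following the formula
# above, revision keeps only beliefs strictly more entrenched than the
# negation of the new information, then adds the new information.

def revise_by_entrenchment(beliefs, theta, rank_of_not_theta, theta_rank):
    kept = {b: r for b, r in beliefs.items() if r > rank_of_not_theta}
    kept[theta] = theta_rank
    return kept

B = {"earth_orbits_sun": 9, "forecast_reliable": 3, "rain_tomorrow": 1}
# "~rain_tomorrow" arrives; its negation "rain_tomorrow" sits at rank 1,
# so everything at rank <= 1 is cleared before adding the new belief.
print(revise_by_entrenchment(B, "~rain_tomorrow", 1, 2))
```

Note that the minimal-mutilation idea appears here as the strict inequality: everything above the threshold is untouched.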

Concerning 3, worlds w in which elements from B are true are placed on spheres ordered by inclusion. Given a Kripke model M, [B]M = {w ∈ WM : M,w ⊨ φ for all φ ∈ B}. Inclusion can be read as a plausibility order (Peppas, 2008): the most plausible possible worlds are located on the spheres with the least radius. Thus, 3 is related to 2: φ ≼ ψ ≅ [φ]M ≥ [ψ]M. Considering [ϑ]M = {w ∈ WM : M,w ⊨ ϑ}, agent a's belief in φ conditioned on ϑ (“Baϑφ”) is true in the minimal-radius spheres (i.e., the most plausible worlds) in which ϑ is also true. Simplifying Baltag and Renne (2016):

M,w ⊨ Baϑφ ⟺ mina([ϑ]M) ⊆ [φ]M.

This corresponds to making φ a safe belief (Baltag et al., 2008):

M,w ⊨ Baϑφ ⟺ M,w ⊨ □aφ.

Thus, the formula for epistemic deontology of belief revision in a static setting corresponds to:

(□aϑ ∧ ∃x ∈ Ba : ϑ ⊢ ¬x) → ◇O(∀φ ∈ Ba, φ → □aφ).

Since Baϑφ is a doxastic conditional, ◇O is a conditioned obligation presupposing that agent a has at least a safe belief in ϑ, and that ϑ stands in a negative relation to at least one element of B. The formula represents the duty to increase the epistemic degree of the set B: it formalizes Kant's “Sapere aude!” (Kant, 2013).
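The conditional-belief clause Baϑφ used above can be sketched over a finite model. This is an illustrative toy (the representation of worlds and the example valuations are mine): worlds are (rank, valuation) pairs, where a smaller rank means a smaller sphere, i.e., greater plausibility, and the agent believes φ conditional on ϑ iff φ holds in all most-plausible ϑ-worlds.

```python
# Sketch of conditional belief B_a^theta(phi) over a finite sphere model:
# worlds are (rank, valuation) pairs, smaller rank = more plausible.
# Belief in phi conditional on theta holds iff phi is true at every
# minimal-rank theta-world.

def believes_conditionally(worlds, theta, phi):
    theta_worlds = [(r, v) for r, v in worlds if theta(v)]
    if not theta_worlds:
        return True  # vacuously satisfied if theta holds nowhere
    best = min(r for r, _ in theta_worlds)
    return all(phi(v) for r, v in theta_worlds if r == best)

worlds = [(0, {"rain": False, "wet": False}),
          (1, {"rain": True,  "wet": True}),
          (2, {"rain": True,  "wet": False})]
# Conditional on rain, the most plausible rain-world (rank 1) is wet:
print(believes_conditionally(worlds, lambda v: v["rain"], lambda v: v["wet"]))
```

The `min`/`all` pair mirrors the mina([ϑ]M) ⊆ [φ]M clause directly.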

The nested formula applies to negative doxastic voluntarism (NDV), the idea that we have control not over belief formation but over belief withdrawal (Rott, 2017). The formula translates “belief withdrawal” into an “epistemic-degree-increase duty,” and it associates the notion of “negative control” with the whole spectrum of duty realizations, including duty non-realization; thus, voluntarism also pertains to the refusal to increase epistemic degree. Moreover, since the duty is conditional, the formula expands NDV to include (or even presuppose) belief expansion (□aϑ).

Dynamic belief revision

In static belief revision, information ϑ is included in the revised set (Success postulate: ϑ ∈ B ∗ ϑ). Thus, static revision assumes the epistemic value of ϑ to be unchangeable. This is problematic, e.g., in the case of Moore sentences involving higher-order beliefs (Baltag et al., 2008). To amend this, ϑ must be considered susceptible to revision too. Research in dynamic belief revision distinguishes at least three epistemic degrees of ϑ (van Benthem, 2007; Baltag and Smets, 2009; Baltag et al., 2014): (1) ϑ is “hard information” issued by an infallible source: it is neither revisable nor revocable; (2) ϑ is “soft information” from a fallible, yet highly reliable source; (3) ϑ is “soft information” from a barely trusted source (its truthfulness can easily be given up).

To these three doxastic degrees correspond three types of dynamic belief revision:

1. Radical revision [!ϑ]: all ¬ϑ-worlds are eliminated, and the previous plausibility order is preserved among the remaining worlds.

2. Lexicographic (radical) revision [⇑ϑ]: all ϑ-worlds are made more plausible than ¬ϑ-worlds, and the rest of the order is unchanged.

3. Conservative (neutral) revision [↑ϑ]: the most plausible ϑ-worlds become the most plausible worlds overall, and the rest is unchanged.
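The three policies can be sketched as operations on a finite plausibility order. This is an illustrative toy (the (rank, world) representation and the weather worlds are mine): smaller rank means more plausible, and each policy returns the re-ranked order.

```python
# Toy sketch of the three dynamic revision policies over a plausibility
# order, given as (rank, world) pairs with smaller rank = more plausible.

def radical(worlds, theta):
    """[!theta]: eliminate all not-theta worlds; keep the rest's order."""
    return [(r, w) for r, w in worlds if theta(w)]

def lexicographic(worlds, theta):
    """[lex theta]: every theta-world becomes more plausible than every
    not-theta world; relative order within each group is unchanged."""
    offset = 1 + max(r for r, _ in worlds)
    return sorted(((r if theta(w) else r + offset), w) for r, w in worlds)

def conservative(worlds, theta):
    """[cons theta]: only the most plausible theta-worlds move to the top."""
    best = min(r for r, w in worlds if theta(w))
    return sorted(((-1 if theta(w) and r == best else r), w) for r, w in worlds)

order = [(0, "sunny"), (1, "rainy"), (2, "stormy")]
wet = lambda w: w != "sunny"
print(radical(order, wet))        # "sunny" eliminated
print(lexicographic(order, wet))  # both wet worlds promoted above "sunny"
print(conservative(order, wet))   # only "rainy", the best wet world, promoted
```

The three functions make the differing strengths visible: radical deletes, lexicographic reorders globally, conservative reorders minimally.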

Thus, the formula for the epistemic deontology of belief revision in a dynamic setting corresponds to (the lexicographic formula is a generalization of the conservative one; van Benthem, 2011):

◇O/O(([!ϑ]Baφ ⟺ (ϑ → Baϑ([!ϑ]φ))) ∧ (M,w ⊨ [⇑ϑ]Baφ ⟺ M⇑ϑ,w ⊨ Baφ))

The deontic operator might not be conditioned, since the doxastic conditions for revision lie within the obligation. This would introduce a strong epistemic deontology: under no condition is a belief allowed to be held if no sufficient evidence supports it.

This leads to a paradox in strong epistemic deontology. Assume two scenarios: (1) the revision process halts; (2) it does not halt. In 1, the revision halts because a belief has received sufficient evidence to be no longer revisable. But then it is not even a (safe) belief: it is infallible and indefeasible knowledge resisting any information, true or false (Baltag and Smets, 2008). In 2, the repeatedly deferred halt means that the collection of evidence never ends: the belief is never allowed to be held. Thus, under a strong epistemic deontology, no belief is ever legitimate, regardless of its doxastic degree: either a belief is transformed into knowledge, or it is never sufficiently justified. Hence the paradox: believing is always wrong by the very fact of believing.

The deontic encapsulation of dynamic belief revision might address this paradox by including not only belief revision, but also information ϑ, in the deontic environment. The duty of belief revision is not unconditioned, but conditioned by the duty of evaluating the very object of duty (collecting ϑ) either positively or negatively. This might include the rejection or neglect of information ϑ as forms of satisfying epistemic deontology.

Linear system model

The aforementioned theories conceive of beliefs as elements of a set. This set model imposes at least three requirements: (1) the elements of a belief set must be orderable according to some (pre)orders; (2) the belief set must be somehow coherent, and belief revision corresponds to the maximal preservation of this coherency; (3) new information is needed which contradicts at least one belief. The weight of the revision work is proportional to the number of beliefs connected to the new information, and to the negativity of this connection.

Do these three requirements apply to all belief set revisions? If we take the case of religious beliefs, then: (1) An ordering relation implies a comparability between the set elements, which in turn implies a homogeneity of the elements' epistemic bases. However, religious beliefs cover different epistemic spheres: metaphysical, moral, aesthetic, pragmatic, etc. It is not clear how beliefs referring to such different epistemic spheres can be fully comparable. (2) The issue of theodicy is evidence against the (at least prima facie) coherency of religious beliefs, since theodicy tries to address the incompatibility between the belief in divine omnipotence and the belief in divine justice. (3) The revision of a religious belief set may start not only from external information, but also from introspection, i.e., the internal evaluation and investigation of one's faith.

Thus, I propose an alternative model, in which beliefs are elements of a system of linear equations. This linear system model has at least two advantages compared to the set model:

1. Bottom-up organization. In the set model, a belief's relevance depends on its being an element of a set, i.e., the belief's characteristics are deduced from the set definition. This is why in the set model beliefs constitute a coherent unity and are orderable: conditions 1 and 2 follow from the application of the set model to beliefs (and their revision). In the linear system model, the solution of the system is given by the linear equations (the elements) constituting the system. Thus, the beliefs' characteristics precede (rather than follow from) the system including them: instead of selecting beliefs in light of a certain model (a certain definition of belief set), the model is constructed and constantly readjusted in light of the elements we aim to investigate. This bottom-up organization respects the epistemic “matter” by building the model upon this matter.

2. Representation of belief stratification. Beliefs are stratified vertically and horizontally. The vertical stratification is the succession of beliefs, represented by the order of the equations in the linear system; this succession is not necessarily a preorder since the equations' order does not change the system's solution. However, the vertical stratification has a procedural function: it eases the substitution of the variables that become known step by step. Moreover, the system might allocate different epistemic spheres to different vertical strata, thus not overlapping epistemically distinct beliefs. The horizontal stratification is the composition of a belief as a sum of sub-beliefs: each sub-belief is a part of the greater belief, and their order in the summation corresponds to their relevance within the whole belief. For example, the belief in the Ten Commandments is composed of the sub-beliefs in each single commandment, each sub-belief doxastically introducing the next.

This linear system model is:

R = { a1,1□1 + a1,2□2 + a1,3□3 + … + a1,n□n = w1
      ⋮
      am,1□1 + am,2□2 + am,3□3 + … + am,n□n = wm

R = ⋂i=1..m ( ∑j=1..n ai,j□j = wi )

A system R represents the vertical stratification of m beliefs: it is the intersection of m linear equations in n variables. In each equation, the coefficient ai,j is the content of a belief or sub-belief; e.g., an equation with ten coefficients might represent the belief in the Ten Commandments. The variable □j is the doxastic value associated with the belief content in position j. The doxastic value is the same for all coefficients in position j since it follows the horizontal stratification: a sub-belief in j is the doxastic “step” to reach the sub-beliefs in positions k > j. The constant term wi expresses the possible-world plausibility of the entire equation.

This model permits a more “economic” belief revision. In the set model (in both static and dynamic scenarios), revision consists in a modification of the set structure: a subset is eliminated or displaced in light of new information. In the linear system model, the elimination of a belief (equation) does not necessarily affect the system: the condition is that the number of equations remain at least equal to the number of doxastic values; one can achieve this by readjusting some coefficients (sub-beliefs), e.g., expunging the filioque belief without touching other religious-metaphysical beliefs, but perhaps modifying some religious-aesthetic beliefs.
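This “economic” revision can be illustrated numerically. A sketch with NumPy (the concrete coefficients and the redundancy of the third belief are my illustrative assumptions, not the article's): three beliefs (equations) over two doxastic values, where dropping one belief leaves the doxastic values untouched because enough equations remain.

```python
# Illustrative sketch of the linear system model: rows = beliefs,
# columns = doxastic values. Coefficients are invented for the example.
import numpy as np

A = np.array([[2.0, 1.0],    # belief 1: contents a_{1,1}, a_{1,2}
              [1.0, 3.0],    # belief 2
              [3.0, 4.0]])   # belief 3 (sum of beliefs 1 and 2: redundant)
w = np.array([5.0, 5.0, 10.0])  # possible-world plausibilities

# Doxastic values solving the full system (least squares, since m > n):
full, *_ = np.linalg.lstsq(A, w, rcond=None)

# "Economic" revision: expunge belief 3. The remaining 2x2 system still
# determines both doxastic values (as many equations as values remain).
revised = np.linalg.solve(A[:2], w[:2])

print(full, revised)  # both give the same doxastic values [2., 1.]
```

The rank condition in the text (at least as many equations as doxastic values) is exactly what keeps `np.linalg.solve` applicable after the deletion.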

This model also permits a simpler procedure for comparing different belief systems. For example, an orthodox and a non-orthodox Christian might have belief systems (R1 and R2, respectively) which differ only in the third equation (non-filioque in R1) but are otherwise identical.

R1 = { a1,1□1 + a1,2□2 + a1,3□3 + … + a1,10□10 = w1
       b2,1□1 = w2
       c3,1□1 + 0□2 + c3,3□3 = w3

R2 = { a1,1□1 + a1,2□2 + a1,3□3 + … + a1,10□10 = w1
       b2,1□1 = w2
       c3,1□1 + c3,2□2 + c3,3□3 = w3

Matrix form is even clearer:

R1 = [ a1,1  a1,2  a1,3  …  a1,10
       b2,1  0     0     …  0
       c3,1  0     c3,3  …  0 ]

R2 = [ a1,1  a1,2  a1,3  …  a1,10
       b2,1  0     0     …  0
       c3,1  c3,2  c3,3  …  0 ]
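The matrix comparison can be mechanized. A sketch (the numeric values are placeholders for the symbolic a, b, c coefficients, which the article leaves abstract): the two coefficient matrices are compared entrywise, and the differing entries localize exactly where the belief systems diverge.

```python
# Compare two belief systems entrywise via their coefficient matrices.
# Numeric values are placeholders; only the zero/non-zero pattern matters.
import numpy as np

R1 = np.array([[1.0, 1.0, 1.0],
               [1.0, 0.0, 0.0],
               [1.0, 0.0, 1.0]])   # c_{3,2} = 0: non-filioque
R2 = np.array([[1.0, 1.0, 1.0],
               [1.0, 0.0, 0.0],
               [1.0, 1.0, 1.0]])   # c_{3,2} != 0: filioque

rows, cols = np.nonzero(R1 != R2)
diff = [(int(r) + 1, int(c) + 1) for r, c in zip(rows, cols)]
print(diff)  # [(3, 2)]: the systems diverge only at the filioque entry
```

A count of differing entries, normalized by matrix size, would give a crude distance between two belief systems.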

The linear system model offers an intuitive way to grasp, at a glance, the relationship between belief systems. Thus, the model might better capture the limits and extent of ecumenical and interreligious dialogues.

Discussion

Aspects of future investigation include: establishing the doxastic-nested-deontic grammar; assessing the approach it provides to doxastic voluntarism; presenting a deontic investigation of the epistemic deontology paradox; deepening the potentialities and weaknesses of the linear system model for beliefs; exploring belief translatability from the linear model to the set model and vice-versa.

Author contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Funding

This paper was funded by Dr. Rüdiger Seitz, via the Volkswagen Foundation, Siemens Healthineers, and the Betz Foundation.

Acknowledgments

I thank Martin Aleksandrov, Christoph Benzmüller, Alessio Pirastu, and the reviewer for valuable comments on a previous version of this article.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^The limits of this article do not allow consideration of other belief revision theories – e.g., ranking theory (Spohn, 1988, 2012; Huber, 2006, 2021) and Bayesian model (Brown et al., 2019) – nor discussion of AGM theory being an idealization of actual human doxastic agents in light of the logical, epistemological, and empirical simplifications involved in AGM (Wassermann, 1999; Berto, 2019). However, this idealization is useful to formalize belief revision (Hansson, 2022), thus paving the way to models more adherent to real doxastic situations, such as Dynamic Epistemic Logic (Section 3) and the linear system model (Section 4). Any adherence is nevertheless affected by the distinction between model and modeled object. AGM's clarity and logical and computational versatility (Delgrande et al., 2013; Spurkeland et al., 2013) make it a good candidate to introduce the doxastic-nested-deontic grammar in this article.

References

Alchourrón, C., Gärdenfors, P., and Makinson, D. (1985). On the logic of theory change: partial meet contraction and revision functions. J. Symbolic Logic 50, 510–530. doi: 10.2307/2274239

Aucher, G. (2004). “A combined system for update logic and belief revision,” in Pacific Rim International Workshop on Multi-Agents (Berlin: Springer), 1–17.

Baltag, A., Fiutek, V., and Smets, S. (2014). “DDL as an “Internalization” of dynamic belief revision,” in Krister Segerberg on Logic of Action, ed R. Trypuz (Berlin: Springer), 253–280.

Baltag, A., and Renne, B. (2016). “Dynamic epistemic logic,” in The Stanford Encyclopedia of Philosophy, Eds E. N. Zalta. Available online at: https://plato.stanford.edu/archives/win2016/entries/dynamic-epistemic/ (accessed July 26, 2022).

Baltag, A., and Smets, S. (2008). “A qualitative theory of dynamic interactive belief revision,” in TLG 3: Logic and the Foundations of Game and Decision Theory, Eds G. Bonanno, W. van der Hoek, and M. Wooldridge (Amsterdam: Amsterdam University Press), 11–58.

Baltag, A., and Smets, S. (2009). “Group belief dynamics under iterated revision: fixed points and cycles of joint upgrades,” in Proceedings of Theoretical Aspects of Rationality and Knowledge TARK (New York: Association for Computing Machinery), 41–50.

Baltag, A., van Ditmarsch, H. P., and Moss, L. S. (2008). “Epistemic logic and information update,” in Philosophy of Information, Eds P. Adriaans and J. van Benthem (Cambridge, MA: MIT Press), 361–456.

Berto, F. (2019). Simple hyperintensional belief revision. Erkenntnis 84, 559–575. doi: 10.1007/s10670-018-9971-1

Brown, W., Gyenis, Z., and Rédei, M. (2019). The modal logic of bayesian belief revision. J. Philos. Logic 48, 809–824. doi: 10.1007/s10992-018-9495-9

Delgrande, J., Peppas, P., and Woltran, S. (2013). “AGM-style belief revision of logic programs under answer set semantics,” in Logic Programming and Nonmonotonic Reasoning. LPNMR 2013, Eds P. Cabalar and T. C. Son (Berlin: Springer), 264–276.

Ditmarsch, H., Hoek, W., and Kooi, B. (2008). Dynamic Epistemic Logic. Berlin: Springer.

Feldman, R. (2000). The ethics of belief. Philos. Phenomenol. Res. 60, 667–695. doi: 10.2307/2653823

Forrai, G. (2021). Doxastic deontology and cognitive competence. Erkenntnis 86, 687–714. doi: 10.1007/s10670-019-00126-1

Friedman, N., and Halpern, J. Y. (1994). “A knowledge-based framework for belief change, part II: revision and update,” in Proceedings of the Fourth International Conference on Principles of Knowledge Representation and Reasoning (KR'94) (San Francisco, CA, USA: Morgan Kaufmann Publishers), 190–201.

Gabbay, D. M. (1999). “Compromise update and revision: a position paper,” in Dynamic Worlds, eds R. Pareschi and B. Fronhöfer (Dordrecht: Springer), 111–148.

Gärdenfors, P. (1984). Epistemic importance and minimal changes of belief. Australas. J. Philos. 62, 136–157. doi: 10.1080/00048408412341331

Gärdenfors, P. (1988). Knowledge in Flux: Modeling the Dynamics of Epistemic States. Cambridge, MA: The MIT press.

Gärdenfors, P., and Makinson, D. (1988). “Revisions of knowledge systems using epistemic entrenchment,” in Proceedings of Theoretical Aspects of Reasoning About Knowledge (San Francisco (CA): Morgan Kaufmann Publishers), 83–95.

Grove, A. (1988). Two modellings for theory change. J. Philos. Logic 17,157–170. doi: 10.1007/BF00247909

Hansson, S. O. (2022). “Logic of belief revision,” in The Stanford Encyclopedia of Philosophy, Ed E. N. Zalta. Available online at: https://plato.stanford.edu/archives/spr2022/entries/logic-belief-revision/ (accessed July 26, 2022).

Huber, F. (2006). Ranking functions and rankings on languages. Artif. Intell. 170, 462–471. doi: 10.1016/j.artint.2005.10.016

Huber, F. (2013). Belief revision I: the AGM theory. Philos. Compass 8, 604–612. doi: 10.1111/phc3.12048

Huber, F. (2021). Belief and Counterfactuals: A Study in Means-End Philosophy. Oxford: Oxford University Press.

Kant, I. (2013). An Answer to the Question: “What is Enlightenment?”. London: Penguin Books.

Katsuno, H., and Mendelzon, A. (1992). “On the difference between updating a knowledge base and revising it,” in Belief Revision, Ed P. Gärdenfors (Cambridge: Cambridge University Press), 183–203.

Leitgeb, H. (2010). On the Ramsey Test without triviality. Notre Dame J. Formal Logic 51, 21–54. doi: 10.1215/00294527-2010-003

Levi, I. (1977). Subjunctives, dispositions and chances. Synthese 34, 423–455. doi: 10.1007/BF00485649

Oviedo, L., and Szocik, K. (2020). Religious – And Other Beliefs: How Much Specificity? Sage Open. [Epub ahead of print].

Peppas, P. (2008). “Belief revision,” in Handbook of Knowledge Representation, Eds F. van Harmelen, V. Lifschitz, and B. Porter (Amsterdam: Elsevier), 317–359.

Peppas, P., and Williams, M. A. (1995). Constructive modelings for theory change. Notre Dame J. Formal Logic 36, 120–133. doi: 10.1305/ndjfl/1040308831

Rott, H. (2000). Two dogmas of belief revision. J. Philos. 97, 503–522. doi: 10.2307/2678489

Rott, H. (2017). Negative Doxastic Voluntarism and the concept of belief. Synthese 194, 2695–2720. doi: 10.1007/s11229-016-1032-1

Seitz, R. J., Paloutzian, R. F., and Angel, H. F. (2018). From believing to belief: a general theoretical model. J. Cogn. Neurosci. 30, 1254–1264. doi: 10.1162/jocn_a_01292

Spohn, W. (1988). “Ordinal conditional functions: a dynamic theory of epistemic states,” in Causation in Decision, Belief Change, and Statistics II, Eds W. L. Harper and B. Skyrms (Dordrecht: Kluwer), 105–134.

Spohn, W. (2012). The Laws of Belief: Ranking Theory and Its Philosophical Applications. Oxford: Oxford University Press.

Spurkeland, J. S., Jensen, A. S., and Villadsen, J. (2013). Belief revision in the GOAL agent programming language. Int. Scholarly Res. Notices 2013:632319. doi: 10.1155/2013/632319

Val, A. D., and Shoham, Y. (1994). A unified view of belief revision and update. J. Logic Comput. 4, 797–810. doi: 10.1093/logcom/4.5.797

van Benthem, J. (2007). Dynamic logic for belief revision. J. Appl. Non Classical Logics 17, 129–155. doi: 10.3166/jancl.17.129-155

van Benthem, J. (2011). Logical Dynamics of Information and Interaction. Cambridge: Cambridge University Press.

Wassermann, R. (1999). Resource bounded belief revision. Erkenntnis 50, 429–446. doi: 10.1023/A:1005565603303

Williams, M.-A. (1997). “Belief revision as database update,” in Proceedings Intelligent Information Systems, 410–414.

Keywords: belief revision, Dynamic Epistemic Logic, AGM theory, doxastic voluntarism, deontic logic, doxastic logic, belief models, logic of belief revision

Citation: Vestrucci A (2022) Deontic-doxastic belief revision and linear system model. Front. Psychol. 13:948330. doi: 10.3389/fpsyg.2022.948330

Received: 19 May 2022; Accepted: 19 July 2022;
Published: 11 August 2022.

Edited by:

Rüdiger J. Seitz, Heinrich Heine University of Düsseldorf, Germany

Reviewed by:

David Kronemyer, University of California, Los Angeles, United States

Copyright © 2022 Vestrucci. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Andrea Vestrucci, avestrucci@sksm.edu
