
METHODS article

Front. Artif. Intell., 21 December 2023
Sec. Machine Learning and Artificial Intelligence
This article is part of the Research Topic Symmetry as a Guiding Principle in Artificial and Brain Neural Networks, Volume II

A topological model for partial equivariance in deep learning and data analysis

  • 1Department of Mathematics, University of Bologna, Bologna, Italy
  • 2Department of Electrical, Electronic, and Information Engineering (DEI) and WiLab-National Laboratory for Wireless Communications, National Inter-University Consortium for Telecommunications (CNIT), University of Bologna, Bologna, Italy
  • 3Department of Mathematics, Royal Institute of Technology (KTH), Stockholm, Sweden

In this article, we propose a topological model to encode partial equivariance in neural networks. To this end, we introduce a class of operators, called P-GENEOs, that transform data represented by measurements while respecting, in a non-expansive way, the action of certain sets of transformations. When the acting set of transformations is a group, we recover the so-called GENEOs. We then study the spaces of measurements, whose domains are subject to the action of certain self-maps, and the space of P-GENEOs between these spaces. We define pseudo-metrics on them and show some properties of the resulting spaces. In particular, we show that such spaces have convenient approximation and convexity properties.

1 Introduction

Over the past decade, several geometric techniques have been incorporated into Deep Learning (DL), giving rise to the new field of Geometric Deep Learning (GDL) (Cohen and Welling, 2016; Masci et al., 2016; Bronstein et al., 2017). This geometric approach to deep learning serves a dual purpose. On one hand, geometry provides a common mathematical framework in which to study neural network architectures. On the other hand, a geometric bias, based on prior knowledge of the data set, can be incorporated into DL models. In this second case, GDL models take advantage of the symmetries imposed by the observer who encodes and processes the data. Group equivariance is the general blueprint by which many deep learning architectures encode such properties. If we consider measurements on a data set and a group encoding their symmetries, i.e., transformations taking admissible measurements to admissible measurements (for example, rotations or translations of an image), group equivariance is the property guaranteeing that such symmetries are preserved after an operator (e.g., a layer in a neural network) is applied to the observed data. In particular, let us assume that the input measurements Φ, the output measurements Ψ and, respectively, their symmetry groups G and H are given. Then the agent F: Φ → Ψ is T-equivariant if F(φg) = F(φ)T(g) for any φ in Φ and any g in G, where T is a group homomorphism from G to H. In the theory of Group Equivariant Non-Expansive Operators (GENEOs) (Camporesi et al., 2018; Bergomi et al., 2019; Cascarano et al., 2021; Bocchi et al., 2022, 2023; Conti et al., 2022; Frosini et al., 2023; Micheletti, 2023), as in many other GDL models, the collection of all symmetries is represented by a group. In some applications, however, the group axioms do not necessarily hold, since real-world data rarely follow strict mathematical symmetries, due to noise, incompleteness, or symmetry-breaking features. As an example, we can consider a data set that contains images of digits, with the group of rotations acting on it. Rotating an image of the digit “6” by a straight angle returns an image that the user would most likely interpret as “9”. At the same time, we may want to be able to rotate the digit “6” by small angles while preserving its meaning (see Figure 1).
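To make the notion of T-equivariance concrete, the following minimal sketch (our own illustration, not part of the original article) checks the identity F(φg) = F(φ)T(g) numerically for discretized images: the acting group is generated by 90° rotations, the operator F is a pointwise clipping, and T is the identity homomorphism.

```python
import numpy as np

# Minimal sketch (assumed setting, not from the paper): measurements are sampled images,
# the group G is generated by 90-degree rotations, F is a pointwise clipping operator,
# and T is the identity homomorphism, so equivariance reads F(phi g) = F(phi) T(g).

def act(phi, g):
    """Right action of the rotation group on a sampled image: rotate by g * 90 degrees."""
    return np.rot90(phi, k=g)

def F(phi):
    """A simple non-expansive operator: clip grey values to [0.2, 0.8]."""
    return np.clip(phi, 0.2, 0.8)

phi = np.random.default_rng(0).random((28, 28))   # a toy measurement on a 28x28 grid

for g in range(4):                                # the four rotations in the group
    lhs = F(act(phi, g))                          # F(phi g)
    rhs = act(F(phi), g)                          # F(phi) T(g), with T(g) = g
    assert np.allclose(lhs, rhs)                  # equivariance holds exactly here
```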


Figure 1. Example of a symmetry-breaking feature. Applying a rotation g of π/4, the digit “6” preserves its meaning (left). The rotation g^4 of π is, instead, not admissible, since it transforms the digit “6” into the digit “9” (right).

It is then desirable to extend the theory of GENEOs by relaxing the hypotheses on the sets of transformations. The main aim of this article is to give a generalization of the results obtained for GENEOs to a new mathematical framework, where the property of equivariance is maintained only for some transformations of the measurements, encoding a partial equivariance with respect to the action of the group of all transformations. To this end, we introduce the concept of Partial Group Equivariant Non-Expansive Operator (P-GENEO).

In this new model, there are some substantial differences with respect to the theory of GENEOs:

1. The user chooses two sets of measurements in input: the one containing the original measurements and another set that encloses the admissible variations of such measurements, defined in the same domain. For example, in the case where the function that represents the digit “6” is being observed, we define an initial space that contains this function and another space that contains certain small rotations of “6” but excludes all the others.

2. Instead of considering a group of transformations, we consider a set containing only those that do not change the meaning of our data, i.e., only those associating with each original measurement another one inside the set of its admissible variations. Therefore, by choosing the initial spaces, the user defines also which transformations of the data set, given by right composition, are admissible and which ones are not.

3. We define partial GENEOs, or P-GENEOs, as a generalization of GENEOs. P-GENEOs are operators that respect the two sets of measurements in input and the set of transformations relating them. The term “partial” refers to the fact that the set of transformations does not necessarily need to be a group.

With these assumptions in mind, we will extend the results proven in the study by Bergomi et al. (2019) and Quercioli (2021a) for GENEOs. We will define suitable pseudo-metrics on the spaces of measurements, on the set of transformations, and on the set of non-expansive operators. Building on their induced topological structures, we prove compactness and convexity of the space of P-GENEOs under the assumption that the function spaces are compact and convex. These are useful properties from a computational point of view. For example, compactness guarantees that the space can be approximated, up to any given accuracy, by a finite set of operators. Moreover, convexity allows us to take convex combinations of P-GENEOs in order to generate new ones.

2 Related work

The main motivation for our study is that observed data rarely follow strict mathematical symmetries. This may be due, for example, to the presence of noise in the data measurements. The idea of relaxing the hypothesis of equivariance in GDL and data analysis is not novel, as shown by the recent increase in the number of publications in this area (see, for example, Weiler and Cesa, 2019; Finzi et al., 2021; Romero and Lohit, 2022; van der Ouderaa et al., 2022; Wang et al., 2022; Chachólski et al., 2023).

We identify two main ways to transform data via operators that are not strictly equivariant, owing to the lack of strict symmetries of the measurements. On one hand, one could define approximately equivariant operators, i.e., operators for which equivariance holds up to a small perturbation. In this case, given two groups G and H acting on the spaces of measurements Φ and Ψ, respectively, and a homomorphism T: G → H between them, we say that F: Φ → Ψ is ε-equivariant if, for any g ∈ G and φ ∈ Φ, ‖F(φg) − F(φ)T(g)‖ ≤ ε. Alternatively, when defining operators transforming the measurements of certain data sets, equivariance may be replaced by partial equivariance. In this case, equivariance is guaranteed only for a subset of the group acting on the space of measurements, with no guarantee that this subset is a subgroup. Among the previously cited articles on relaxing the property of equivariance in DL, the approach by Finzi et al. (2021) is closer to an approximate equivariance model: the authors use a Bayesian approach to introduce an inductive bias in their network that is sensitive to approximate symmetry. The authors of Romero and Lohit (2022) follow a partial equivariance approach, in which a probability distribution is associated with each group convolutional layer of the architecture, and the parameters defining it are learnt from data, so that each layer can achieve either full or partial equivariance. The importance of choosing equivariance with respect to different acting groups in each layer of a CNN was first observed in the study by Weiler and Cesa (2019) for the group of Euclidean isometries of ℝ².

The point of view of this article is closer to the latter. Our P-GENEOs are indeed operators that preserve the action of certain sets ruling the admissibility of the transformations of the measurements of our data sets. Moreover, non-expansiveness plays a crucial role in our model. This is, in fact, the feature allowing us to obtain compactness and approximability in the space of operators, distinguishing our model from the existing literature on equivariant machine learning.

3 Mathematical setting

3.1 Data sets and operations

Consider a set X and the normed vector space (b^X, ‖·‖), where b^X is the space of all bounded real-valued functions on X and ‖·‖ is the usual uniform norm, i.e., ‖f‖ := sup_{x∈X} |f(x)| for any f ∈ b^X. On the set X, the space of transformations is given by the elements of Aut(X), i.e., the group of bijections from X to itself. Then, we can consider the right group action R defined as follows (we represent composition as a juxtaposition of functions):

\[ R\colon b^X \times \mathrm{Aut}(X) \to b^X, \qquad (\varphi, s) \mapsto \varphi s. \]

Remark 3.1. For every s ∈ Aut(X), the map R_s: b^X → b^X defined by R_s(φ) := φs preserves the distances. In fact, for any φ_1, φ_2 ∈ b^X, by the bijectivity of s, we have that

\[ \|R_s(\varphi_1) - R_s(\varphi_2)\| = \sup_{x \in X} |\varphi_1(s(x)) - \varphi_2(s(x))| = \sup_{y \in X} |\varphi_1(y) - \varphi_2(y)| = \|\varphi_1 - \varphi_2\|. \]

In our model, a data set is represented by two sets Φ and Φ′ of bounded real-valued measurements on X. In particular, X represents the space where the measurements are made, Φ is the space of permissible measurements, and Φ′ is a space into which Φ can be transformed without changing the interpretation of its measurements. In other words, we want to be able to apply some admissible transformations on the space X so that the resulting changes in the measurements in Φ are contained in the space Φ′. Thus, in our model, we consider operations on X in the following way:

Definition 3.2. A (Φ, Φ′)-operation is an element s of Aut(X) such that, for any measurement φ ∈ Φ, the composition φs belongs to Φ′. The set of all (Φ, Φ′)-operations is denoted by Aut_{Φ,Φ′}(X).

Remark 3.3. We can observe that the identity function id_X is an element of Aut_{Φ,Φ′}(X) if Φ ⊆ Φ′.

For any s ∈ Aut_{Φ,Φ′}(X), the restriction of the map R_s to Φ takes values in Φ′, since R_s(φ) := φs ∈ Φ′ for any φ ∈ Φ. We can then consider the restriction of the map R to Φ × Aut_{Φ,Φ′}(X) (for simplicity, we will continue to use the same symbol to denote this restriction):

\[ R\colon \Phi \times \mathrm{Aut}_{\Phi,\Phi'}(X) \to \Phi', \qquad (\varphi, s) \mapsto \varphi s, \]

where R(φ, s) = R_s(φ), for every s ∈ Aut_{Φ,Φ′}(X) and every φ ∈ Φ.

Definition 3.4. Let X be a set. A perception triple is a triple (Φ, Φ′, S) with Φ, Φ′ ⊆ b^X and S ⊆ Aut_{Φ,Φ′}(X). The set X is called the domain of the perception triple and is denoted by dom(Φ, Φ′, S).

Example 3.5. Given X = ℝ², consider two rectangles R and R′ in X. Assume Φ := {φ: X → [0, 1] : supp(φ) ⊆ R} and Φ′ := {φ′: X → [0, 1] : supp(φ′) ⊆ R′}. We recall that, for a function f: X → ℝ, the support of f is the set of points of the domain where the function does not vanish, i.e., supp(f) = {x ∈ X | f(x) ≠ 0}. Consider S as the set of translations that bring R into R′. The triple (Φ, Φ′, S) is a perception triple. If Φ represents a set of gray-level images, S determines which translations can be applied to our pictures.
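To make Example 3.5 concrete, here is a small discretized sketch (our own illustration; the grid size, rectangles, and translation vector are arbitrary assumptions, not values from the paper) in which a grey-level image supported in R is carried, by the admissible translation acting through right composition, to a measurement supported in R′.

```python
import numpy as np

# Discretized sketch of Example 3.5 (illustrative assumptions): X is a 32x32 grid,
# Phi contains images supported in the rectangle R, Phi' those supported in R',
# and the admissible transformation shifts the support from R into R'.

GRID = 32

def supported_in(phi, top, left, h, w):
    """Check that the sampled image phi vanishes outside the given rectangle."""
    mask = np.zeros_like(phi)
    mask[top:top + h, left:left + w] = 1.0
    return np.all(phi * (1 - mask) == 0)

# phi in Phi: supported in R = [0, 10] x [0, 10] (grid coordinates).
phi = np.zeros((GRID, GRID))
phi[2:8, 3:9] = np.random.default_rng(0).random((6, 6))
assert supported_in(phi, 0, 0, 10, 10)

# The admissible operation acts by right composition; on the sampled grid it shifts
# the image by the vector (5, 5), carrying the support from R into R' = [5, 15] x [5, 15].
phi_transformed = np.roll(phi, shift=(5, 5), axis=(0, 1))
assert supported_in(phi_transformed, 5, 5, 10, 10)   # the transformed measurement is in Phi'
```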

3.2 Pseudo-metrics on data sets

In our model, considering a generic set X, data are represented by a space Ω ⊆ b^X of bounded real-valued functions. We endow the real line ℝ with the usual Euclidean metric and the space X with an extended pseudo-metric induced by Ω:

\[ D_X^{\Omega}(x_1, x_2) = \sup_{\omega \in \Omega} |\omega(x_1) - \omega(x_2)| \]

for every x_1, x_2 ∈ X. The choice of this pseudo-metric on X means that two points can be distinguished only if they take different values under some measurement. For example, if Ω contains only a constant function and X contains at least two points, the distance between any two points of X is always null.
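For a finite sample of points and measurements, the supremum in the definition of D_X^Ω becomes a maximum, as in the following sketch (ours; the array shapes are illustrative assumptions). It also reproduces the degenerate case just mentioned, in which a single constant measurement cannot separate any two points.

```python
import numpy as np

# Sketch (illustrative assumptions): X is a finite set of points and Omega a finite
# family of bounded measurements, sampled as an array of shape (n_measurements, n_points).

def pseudo_metric_X(omega_values, i, j):
    """D_X^Omega(x_i, x_j) = sup over omega in Omega of |omega(x_i) - omega(x_j)|."""
    return np.max(np.abs(omega_values[:, i] - omega_values[:, j]))

rng = np.random.default_rng(1)
omega_values = rng.random((4, 6))                 # four measurements on six points
print(pseudo_metric_X(omega_values, 0, 3))        # a generic pseudo-distance

# If Omega contains only a constant function, the pseudo-metric cannot distinguish points.
constant_only = np.full((1, 6), 0.7)
assert pseudo_metric_X(constant_only, 0, 3) == 0.0
```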

The pseudo-metric space XΩ:=(X,DXΩ) can be considered as a topological space with the basis

\[ \mathcal{B}_{\Omega} = \{ B_{\Omega}(x_0, r) \}_{x_0 \in X,\, r \in \mathbb{R}^+} = \big\{ \{ x \in X : D_X^{\Omega}(x, x_0) < r \} \big\}_{x_0 \in X,\, r \in \mathbb{R}^+}, \]

and the induced topology is denoted by τΩ. The reason for considering a topological space X, rather than just a set, follows from the need of formalizing the assumption that data are stable under small perturbations.

Remark 3.6. In our case, there are two collections of functions Φ and Φ′ in b^X representing our data, both of which induce a topology on X. Hence, in the model, we consider two pseudo-metric spaces X_Φ and X_{Φ′} with the same underlying set X. If Φ ⊆ Φ′ ⊆ b^X, the topologies τ_Φ and τ_{Φ′} are comparable and, in particular, τ_{Φ′} is finer than τ_Φ.

Now, given a set Ω ⊆ b^X, we will prove a result about the compactness of the pseudo-metric space X_Ω. Before proceeding, let us recall the following lemma (e.g., see Gaal, 1964):

Lemma 3.7. Let (P,d) be a pseudo-metric space. The following conditions are equivalent:

1. P is totally bounded;

2. Every sequence in P admits a Cauchy subsequence.

Theorem 3.8. If Ω is totally bounded, XΩ is totally bounded.

Proof: By Lemma 3.7, it will suffice to prove that every sequence in X admits a Cauchy subsequence with respect to the pseudo-metric D_X^Ω. Consider a sequence (x_i)_{i∈ℕ} in X_Ω and a real number ε > 0. Since Ω is totally bounded, we can find a finite subset Ω_ε = {ω_1, …, ω_n} such that, for every ω ∈ Ω, there exists ω_r ∈ Ω_ε for which ‖ω − ω_r‖ < ε. We can now consider the real sequence (ω_1(x_i))_{i∈ℕ}, which is bounded since ω_1 ∈ b^X. From the Bolzano-Weierstrass Theorem, it follows that we can extract a convergent subsequence (ω_1(x_{i_h}))_{h∈ℕ}. Again, we can extract from (ω_2(x_{i_h}))_{h∈ℕ} another convergent subsequence (ω_2(x_{i_{h_t}}))_{t∈ℕ}. Repeating the process, we are able to extract a subsequence of (x_i)_{i∈ℕ}, which for simplicity of notation we denote by (x_{i_j})_{j∈ℕ}, such that (ω_k(x_{i_j}))_{j∈ℕ} is a convergent, and hence Cauchy, sequence in ℝ for every k ∈ {1, …, n}. Since Ω_ε is finite, we can find an index ȷ̄ such that, for any k ∈ {1, …, n},

\[ |\omega_k(x_{i_\ell}) - \omega_k(x_{i_m})| \le \varepsilon \quad \text{for every } \ell, m \ge \bar{\jmath}. \]

Furthermore, we have that, for any ω ∈ Ω, any ωk ∈ Ωε, and any ℓ, m ∈ ℕ

\[ \begin{aligned} |\omega(x_{i_\ell}) - \omega(x_{i_m})| &\le |\omega(x_{i_\ell}) - \omega_k(x_{i_\ell})| + |\omega_k(x_{i_\ell}) - \omega_k(x_{i_m})| + |\omega_k(x_{i_m}) - \omega(x_{i_m})| \\ &\le \|\omega - \omega_k\| + |\omega_k(x_{i_\ell}) - \omega_k(x_{i_m})| + \|\omega_k - \omega\|. \end{aligned} \]

We observe that the choice of ȷ̄ depends only on ε and Ω_ε, and not on the specific ω ∈ Ω. Then, choosing an ω_k ∈ Ω_ε such that ‖ω_k − ω‖ < ε, we get |ω(x_{i_ℓ}) − ω(x_{i_m})| < 3ε for every ω ∈ Ω and every ℓ, m ≥ ȷ̄. Then,

\[ D_X^{\Omega}(x_{i_\ell}, x_{i_m}) = \sup_{\omega \in \Omega} |\omega(x_{i_\ell}) - \omega(x_{i_m})| < 3\varepsilon \quad \text{for every } \ell, m \ge \bar{\jmath}. \]

Then (x_{i_j})_{j∈ℕ} is a Cauchy sequence in X_Ω. By Lemma 3.7, the statement holds.

Corollary 3.9. If Ω is totally bounded and XΩ is complete, XΩ is compact.

Proof: From Theorem 3.8, we have that XΩ is totally bounded, and since by hypothesis it is also complete, it is compact.

Now, we will prove that the choice of the pseudo-metric DXΩ on X makes the functions in Ω non-expansive.

Definition 3.10. Consider two pseudo-metric spaces (P, d_P) and (Q, d_Q). A non-expansive function from (P, d_P) to (Q, d_Q) is a function f: P → Q such that d_Q(f(p_1), f(p_2)) ≤ d_P(p_1, p_2) for any p_1, p_2 ∈ P.

We denote as NE(P, Q) the space of all non-expansive functions from (P, dP) to (Q, dQ).

Proposition 3.11. Ω ⊆ NE(XΩ, ℝ).

Proof: For any ω ∈ Ω and any x_1, x_2 ∈ X, we have that

\[ |\omega(x_1) - \omega(x_2)| \le \sup_{\omega' \in \Omega} |\omega'(x_1) - \omega'(x_2)| = D_X^{\Omega}(x_1, x_2). \]

Then, the topology on X induced by D_X^Ω naturally makes the measurements in Ω continuous. In particular, since the previous results hold for a generic Ω ⊆ b^X, they are also true for Φ and Φ′ in our model.

Remark 3.12. Assume that (Φ, Φ′, S) is a perception triple. A function φ′ ∈ Φ′ may not be continuous from X_Φ to ℝ, and a function φ ∈ Φ may not be continuous from X_{Φ′} to ℝ. In other words, the topology on X induced by the pseudo-metric of one of the function spaces does not necessarily make the functions in the other space continuous.

Example 3.13. Assume X = ℝ. For every a, b ∈ ℝ, the functions φ_a: X → ℝ and φ′_b: X → ℝ are defined by setting

\[ \varphi_a(x) = \begin{cases} 0 & \text{if } x \le a \\ 1 & \text{otherwise,} \end{cases} \qquad \varphi'_b(x) = \begin{cases} 0 & \text{if } x \ge b \\ 1 & \text{otherwise.} \end{cases} \]

Suppose Φ := {φ_a : a ≥ 0} and Φ′ := {φ′_b : b ≤ 0}. Consider the symmetry with respect to the y-axis, i.e., the map s(x) = −x. Clearly, s ∈ Aut_{Φ,Φ′}(X), since φ_a s = φ′_{−a} for every a ≥ 0. We can observe that the function φ_1 ∈ Φ is not continuous from X_{Φ′} to ℝ; indeed, D_{X_{Φ′}}(0, 2) = 0, but |φ_1(0) − φ_1(2)| = 1.
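A quick numerical check of Example 3.13, under the reconstruction of φ_a and φ′_b written above (our own sketch; the sampled values of b are an arbitrary choice):

```python
# Sketch of Example 3.13 (sampled parameters are illustrative assumptions):
# phi_a(x) = 0 if x <= a else 1, with a >= 0;  phi'_b(x) = 0 if x >= b else 1, with b <= 0.

def phi(a):
    return lambda x: 0.0 if x <= a else 1.0

def phi_prime(b):
    return lambda x: 0.0 if x >= b else 1.0

# D_X^{Phi'}(0, 2), approximated over finitely many b <= 0: every phi'_b vanishes at both
# points, so the pseudo-distance between 0 and 2 is zero ...
d = max(abs(phi_prime(b)(0) - phi_prime(b)(2)) for b in [0.0, -0.5, -1.0, -10.0])
assert d == 0.0

# ... while phi_1 separates the two points, so phi_1 is not continuous on X_{Phi'}.
assert abs(phi(1)(0) - phi(1)(2)) == 1.0
```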

However, if Φ ⊆ Φ′, we have that the functions in Φ are also continuous on X_{Φ′}; indeed:

Corollary 3.14. If Φ ⊆ Φ′, then Φ ⊆ NE(X_{Φ′}, ℝ).

Proof: By Proposition 3.11, the statement trivially holds, since Φ ⊆ Φ′ ⊆ NE(X_{Φ′}, ℝ).

3.3 Pseudo-metrics on the space of operations

Proposition 3.15. Every element of Aut_{Φ,Φ′}(X) is non-expansive from X_{Φ′} to X_Φ.

Proof: Considering a bijection s ∈ Aut_{Φ,Φ′}(X), we have that

\[ D_{X_\Phi}(s(x_1), s(x_2)) = \sup_{\varphi \in \Phi} |\varphi(s(x_1)) - \varphi(s(x_2))| = \sup_{\psi \in \Phi s} |\psi(x_1) - \psi(x_2)| \le \sup_{\varphi' \in \Phi'} |\varphi'(x_1) - \varphi'(x_2)| = D_{X_{\Phi'}}(x_1, x_2) \]

for every x_1, x_2 ∈ X, where Φs := {φs : φ ∈ Φ} ⊆ Φ′. Then, s ∈ NE(X_{Φ′}, X_Φ) and the statement is proved.

Now, we are ready to put more structure on Aut_{Φ,Φ′}(X). Considering a set Ω ⊆ b^X of bounded real-valued functions, we can endow the set Aut(X) with a pseudo-metric inherited from Ω:

\[ D_{\mathrm{Aut}}^{\Omega}(s_1, s_2) := \sup_{\omega \in \Omega} \|\omega s_1 - \omega s_2\| \]

for any s1, s2 in Aut(X).

Remark 3.16. Analogously to what happens in Remark 3.6 for X, the sets Φ and Φ′ can endow Aut(X) with two possibly different pseudo-metrics, D_Aut^Φ and D_Aut^{Φ′}. In particular, we can consider Aut_{Φ,Φ′}(X) as a pseudo-metric subspace of Aut(X) with the induced pseudo-metrics.

Remark 3.17. We observe that, for any s1, s2 in Aut(X),

\[ D_{\mathrm{Aut}}^{\Omega}(s_1, s_2) := \sup_{\omega \in \Omega} \|\omega s_1 - \omega s_2\| = \sup_{x \in X} \sup_{\omega \in \Omega} |\omega(s_1(x)) - \omega(s_2(x))| = \sup_{x \in X} D_X^{\Omega}(s_1(x), s_2(x)). \tag{3.3.1} \]

In other words, the pseudo-metric DAutΩ, which is based on the action of the elements of Aut(X) on the set Ω, is exactly the usual uniform pseudo-metric on XΩ.
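On a finite set X, where bijections are permutations of indices, the two expressions for D_Aut^Ω in Remark 3.17 can be compared directly, as in the sketch below (our own illustration; sizes and permutations are arbitrary assumptions).

```python
import numpy as np

# Sketch (illustrative assumptions): X has n points, a bijection of X is stored as a
# permutation of indices, and Omega is sampled as an array of shape (n_measurements, n_points).

def d_aut(omega_values, s1, s2):
    """D_Aut^Omega(s1, s2) = sup over omega of the uniform distance between omega∘s1 and omega∘s2."""
    return np.max(np.abs(omega_values[:, s1] - omega_values[:, s2]))

def d_aut_via_dx(omega_values, s1, s2):
    """Equivalent form from Remark 3.17: sup over x of D_X^Omega(s1(x), s2(x))."""
    per_point = np.max(np.abs(omega_values[:, s1] - omega_values[:, s2]), axis=0)
    return np.max(per_point)

rng = np.random.default_rng(2)
omega_values = rng.random((4, 6))              # four measurements on six points
s1 = np.array([1, 2, 3, 4, 5, 0])              # two permutations of X
s2 = np.array([0, 2, 1, 3, 5, 4])
assert np.isclose(d_aut(omega_values, s1, s2), d_aut_via_dx(omega_values, s1, s2))
```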

3.4 The space of operations

Since we are only interested in transformations of functions in Φ, it would be natural to just endow Aut_{Φ,Φ′}(X) with the pseudo-metric D_Aut^Φ. However, it is sometimes necessary to consider the pseudo-metric D_Aut^{Φ′} in order to guarantee the continuity of the composition of elements in Aut_{Φ,Φ′}(X), whenever such a composition is admissible. Consider two elements s, t in Aut_{Φ,Φ′}(X) such that st is still an element of Aut_{Φ,Φ′}(X), i.e., φst ∈ Φ′ for every function φ ∈ Φ. Then, for any φ ∈ Φ, we have that

\[ \bar{\varphi} := \varphi s \in \Phi s \subseteq \Phi', \qquad \bar{\varphi} t \in \Phi'. \]

Therefore, t is also an element of Aut_{Φs,Φ′}(X). By definition, Φs is contained in Φ′ for every s ∈ Aut_{Φ,Φ′}(X), and this justifies the choice of considering on Aut_{Φ,Φ′}(X) also the pseudo-metric D_Aut^{Φ′}. We have shown, in particular, that if s, t are elements of Aut_{Φ,Φ′}(X) such that st is still an element of Aut_{Φ,Φ′}(X), then t is an element of Aut_{Φs,Φ′}(X); this is one implication of the following proposition:

Proposition 3.18. Let s, t ∈ Aut_{Φ,Φ′}(X). Then, st ∈ Aut_{Φ,Φ′}(X) if and only if t ∈ Aut_{Φs,Φ′}(X).

Proof: If the composition st belongs to Aut_{Φ,Φ′}(X), we have already proven that t ∈ Aut_{Φs,Φ′}(X). On the other hand, if t ∈ Aut_{Φs,Φ′}(X), we have that φ̄t ∈ Φ′ for every φ̄ ∈ Φs. Since φ(st) = (φs)t, it follows that φ(st) ∈ Φ′ for every φ ∈ Φ. Therefore, st ∈ Aut_{Φ,Φ′}(X) and the statement is proved.

Remark 3.19. Let t ∈ Aut_{Φ,Φ′}(X). We can observe that if s ∈ Aut_{Φ,Φ}(X), i.e., if φs ∈ Φ for every φ ∈ Φ, then Φs ⊆ Φ and st ∈ Aut_{Φ,Φ′}(X).

Lemma 3.20. Consider r, s, t ∈ Aut(X). For any Ω ⊆ b^X, it holds that

\[ D_{\mathrm{Aut}}^{\Omega}(rt, st) = D_{\mathrm{Aut}}^{\Omega}(r, s). \]

Proof: Since R_t preserves the distances, we have that:

\[ D_{\mathrm{Aut}}^{\Omega}(rt, st) := \sup_{\omega \in \Omega} \|\omega r t - \omega s t\| = \sup_{\omega \in \Omega} \|\omega r - \omega s\| = D_{\mathrm{Aut}}^{\Omega}(r, s). \]

Lemma 3.21. Consider r, s ∈ Aut(X) and t ∈ Aut_{Φ,Φ′}(X). It holds that

\[ D_{\mathrm{Aut}}^{\Phi}(tr, ts) \le D_{\mathrm{Aut}}^{\Phi'}(r, s). \]

Proof: Since Φt ⊆ Φ′, we have that:

\[ D_{\mathrm{Aut}}^{\Phi}(tr, ts) = \sup_{\varphi \in \Phi} \|\varphi t r - \varphi t s\| = \sup_{\psi \in \Phi t} \|\psi r - \psi s\| \le \sup_{\varphi' \in \Phi'} \|\varphi' r - \varphi' s\| = D_{\mathrm{Aut}}^{\Phi'}(r, s). \]

Let Π be the set of all pairs (s, t) such that s, t, st ∈ Aut_{Φ,Φ′}(X). We endow Π with the pseudo-metric

\[ D_{\Pi}\big((s_1, t_1), (s_2, t_2)\big) := D_{\mathrm{Aut}}^{\Phi}(s_1, s_2) + D_{\mathrm{Aut}}^{\Phi'}(t_1, t_2) \]

and the corresponding topology.

Proposition 3.22. The function ∘: Π → (Aut_{Φ,Φ′}(X), D_Aut^Φ) that maps (s, t) to st is non-expansive and hence continuous.

Proof: Consider two elements (s_1, t_1), (s_2, t_2) of Π. By Lemma 3.20 and Lemma 3.21,

\[ \begin{aligned} D_{\mathrm{Aut}}^{\Phi}(s_1 t_1, s_2 t_2) &\le D_{\mathrm{Aut}}^{\Phi}(s_1 t_1, s_2 t_1) + D_{\mathrm{Aut}}^{\Phi}(s_2 t_1, s_2 t_2) \\ &\le D_{\mathrm{Aut}}^{\Phi}(s_1, s_2) + D_{\mathrm{Aut}}^{\Phi'}(t_1, t_2) = D_{\Pi}\big((s_1, t_1), (s_2, t_2)\big). \end{aligned} \]

Therefore, the statement is proved.

Let Υ be the set of all s such that s, s^{-1} ∈ Aut_{Φ,Φ′}(X).

Proposition 3.23. The function (·)^{-1}: (Υ, D_Aut^{Φ′}) → (Aut_{Φ,Φ′}(X), D_Aut^Φ), that maps s to s^{-1}, is non-expansive, and hence continuous.

Proof: Consider two bijections s_1, s_2 ∈ Υ. Because of Lemma 3.20 and Lemma 3.21, we obtain that

\[ \begin{aligned} D_{\mathrm{Aut}}^{\Phi}(s_1^{-1}, s_2^{-1}) &= D_{\mathrm{Aut}}^{\Phi}(s_1^{-1} s_2, s_2^{-1} s_2) = D_{\mathrm{Aut}}^{\Phi}(s_1^{-1} s_2, \mathrm{id}_X) = D_{\mathrm{Aut}}^{\Phi}(s_1^{-1} s_2, s_1^{-1} s_1) \\ &\le D_{\mathrm{Aut}}^{\Phi'}(s_2, s_1) = D_{\mathrm{Aut}}^{\Phi'}(s_1, s_2). \end{aligned} \]

We have previously defined the map

\[ R\colon \Phi \times \mathrm{Aut}_{\Phi,\Phi'}(X) \to \Phi', \qquad (\varphi, s) \mapsto \varphi s, \]

where R(φ, s) = R_s(φ), for every s ∈ Aut_{Φ,Φ′}(X) and every φ ∈ Φ.

Proposition 3.24. The function R is continuous by choosing the pseudo-metric D_Aut^Φ on Aut_{Φ,Φ′}(X).

Proof: We have that

\[ \begin{aligned} \|R(\varphi, t) - R(\bar{\varphi}, s)\| &= \|\varphi t - \bar{\varphi} s\| \le \|\varphi t - \varphi s\| + \|\varphi s - \bar{\varphi} s\| \\ &= \|\varphi t - \varphi s\| + \|\varphi - \bar{\varphi}\| \le D_{\mathrm{Aut}}^{\Phi}(t, s) + \|\varphi - \bar{\varphi}\| \end{aligned} \]

for any φ, φ̄ ∈ Φ and any t, s ∈ Aut_{Φ,Φ′}(X). This proves that R is continuous.

Now, we can give a result about the compactness of (Aut_{Φ,Φ′}(X), D_Aut^Φ) under suitable assumptions.

Proposition 3.25. If Φ and Φ′ are totally bounded, then (Aut_{Φ,Φ′}(X), D_Aut^Φ) is totally bounded.

Proof: Consider a sequence (s_i)_{i∈ℕ} in Aut_{Φ,Φ′}(X) and a real number ε > 0. Since Φ is totally bounded, we can find a finite subset Φ_ε = {φ_1, …, φ_n} such that, for every φ ∈ Φ, there exists φ_r ∈ Φ_ε for which ‖φ − φ_r‖ < ε. Now, consider the sequence (φ_1 s_i)_{i∈ℕ} in Φ′. Since Φ′ is also totally bounded, from Lemma 3.7 it follows that we can extract a Cauchy subsequence (φ_1 s_{i_h})_{h∈ℕ}. Again, from (φ_2 s_{i_h})_{h∈ℕ} we can extract another Cauchy subsequence (φ_2 s_{i_{h_t}})_{t∈ℕ}. Repeating the process for every k ∈ {1, …, n}, we are able to extract a subsequence of (s_i)_{i∈ℕ}, which for simplicity of notation we denote by (s_{i_j})_{j∈ℕ}, such that (φ_k s_{i_j})_{j∈ℕ} is a Cauchy sequence in Φ′ for every k ∈ {1, …, n}.

Since Φε is finite, we can find an index ȷ̄ such that for any k ∈ {1, …, n}

\[ \|\varphi_k s_{i_\ell} - \varphi_k s_{i_m}\| \le \varepsilon \quad \text{for every } \ell, m \ge \bar{\jmath}. \tag{3.4.1} \]

Furthermore, we have that for any φ ∈ Φ, any φk ∈ Φε, and any ℓ, m ∈ ℕ

\[ \begin{aligned} \|\varphi s_{i_\ell} - \varphi s_{i_m}\| &\le \|\varphi s_{i_\ell} - \varphi_k s_{i_\ell}\| + \|\varphi_k s_{i_\ell} - \varphi_k s_{i_m}\| + \|\varphi_k s_{i_m} - \varphi s_{i_m}\| \\ &= \|\varphi - \varphi_k\| + \|\varphi_k s_{i_\ell} - \varphi_k s_{i_m}\| + \|\varphi_k - \varphi\|. \end{aligned} \]

We observe that the choice of ȷ̄ in (3.4.1) depends only on ε and Φ_ε, and not on the specific φ ∈ Φ. Then, choosing a φ_k ∈ Φ_ε such that ‖φ_k − φ‖ < ε, we get ‖φ s_{i_ℓ} − φ s_{i_m}‖ < 3ε for every φ ∈ Φ and every ℓ, m ≥ ȷ̄. Hence, for every ℓ, m ≥ ȷ̄,

\[ D_{\mathrm{Aut}}^{\Phi}(s_{i_\ell}, s_{i_m}) = \sup_{\varphi \in \Phi} \|\varphi s_{i_\ell} - \varphi s_{i_m}\| < 3\varepsilon. \]

Therefore, (s_{i_j})_{j∈ℕ} is a Cauchy sequence in (Aut_{Φ,Φ′}(X), D_Aut^Φ). By Lemma 3.7, the statement holds.

Corollary 3.26. Assume that S ⊆ Aut_{Φ,Φ′}(X). If Φ and Φ′ are totally bounded and (S, D_Aut^Φ) is complete, then it is also compact.

Proof: From Proposition 3.25, we have that S is totally bounded, and since by hypothesis it is also complete, the statement holds.

4 The space of P-GENEOs

In this section, we introduce the concept of Partial Group Equivariant Non-Expansive Operator (P-GENEO). P-GENEOs allow us to transform data sets, preserving symmetries and distances and maintaining the acceptability conditions of the transformations. We will also describe some topological results about the structure of the space of P-GENEOs and some techniques used for defining new P-GENEOs in order to populate the space of P-GENEOs.

Definition 4.1. Let X, Y be sets and (Φ, Φ′, S), (Ψ, Ψ′, Q) be perception triples with domains X and Y, respectively. Consider a triple of functions (F, F′, T) with the following properties:

  • F: Φ → Ψ, F′: Φ′ → Ψ′, T: S → Q;

• For any s, tS such that stS it holds that T(st) = T(s)T(t);

• For any sS such that s−1S it holds that T(s−1) = T(s)−1;

• (F, F′, T) is equivariant, i.e., F′(φs) = F(φ)T(s) for every φ ∈ Φ, sS.

The triple (F, F′, T) is called a perception map or a Partial Group Equivariant Operator (P-GEO) from (Φ, Φ′, S) to (Ψ, Ψ′, Q).

In Remark 3.3, we observed that id_X ∈ Aut_{Φ,Φ′}(X) if Φ ⊆ Φ′. Then, we can consider a perception triple (Φ, Φ′, S) with Φ ⊆ Φ′ and id_X ∈ S ⊆ Aut_{Φ,Φ′}(X). Now, we will show how a P-GEO from this perception triple behaves.

Lemma 4.2. Consider two perception triples (Φ, Φ′, S) and (Ψ, Ψ′, Q) with domains X and Y, respectively, and with id_X ∈ S ⊆ Aut_{Φ,Φ′}(X). Let (F, F′, T) be a P-GEO from (Φ, Φ′, S) to (Ψ, Ψ′, Q). Then, Ψ ⊆ Ψ′ and id_Y ∈ Q ⊆ Aut_{Ψ,Ψ′}(Y).

Proof: Since (F, F′, T) is a P-GEO, by definition we have that T(st) = T(s)T(t) for any s, t ∈ S such that st ∈ S. Since id_X ∈ S, then

\[ T(\mathrm{id}_X) = T(\mathrm{id}_X \, \mathrm{id}_X) = T(\mathrm{id}_X) T(\mathrm{id}_X) \]

and hence, by cancellation in the group Aut(Y), T(id_X) = id_Y ∈ Q ⊆ Aut_{Ψ,Ψ′}(Y). Moreover, by Remark 3.3, we have that Ψ ⊆ Ψ′.

Proposition 4.3. Consider two perception triples (Φ, Φ′, S) and (Ψ, Ψ′, Q) with domains X and Y, respectively, and with id_X ∈ S ⊆ Aut_{Φ,Φ′}(X). Let (F, F′, T) be a P-GEO from (Φ, Φ′, S) to (Ψ, Ψ′, Q). Then F′|_Φ = F.

Proof: Since (F, F′, T) is a P-GEO, it is equivariant, and by Lemma 4.2, we have that

\[ F'(\varphi) = F'(\varphi \, \mathrm{id}_X) = F(\varphi) T(\mathrm{id}_X) = F(\varphi) \, \mathrm{id}_Y = F(\varphi) \]

for every φ ∈ Φ.

Definition 4.4. Assume that (Φ, Φ′, S) and (Ψ, Ψ′, Q) are perception triples. If (F, F′, T) is a perception map from (Φ, Φ′, S) to (Ψ, Ψ′, Q) and F, F′ are non-expansive, i.e.,

\[ \|F(\varphi_1) - F(\varphi_2)\| \le \|\varphi_1 - \varphi_2\|, \qquad \|F'(\varphi'_1) - F'(\varphi'_2)\| \le \|\varphi'_1 - \varphi'_2\| \]

for every φ_1, φ_2 ∈ Φ and every φ′_1, φ′_2 ∈ Φ′, then (F, F′, T) is called a Partial Group Equivariant Non-Expansive Operator (P-GENEO).

In other words, a P-GENEO is a triple (F, F′, T) such that F, F′ are non-expansive and the following diagram commutes for every s ∈ S:

\[ \begin{array}{ccc} \Phi & \xrightarrow{\;\cdot\, s\;} & \Phi' \\ {\scriptstyle F}\big\downarrow & & \big\downarrow{\scriptstyle F'} \\ \Psi & \xrightarrow{\;\cdot\, T(s)\;} & \Psi' \end{array} \]

i.e., F′(φs) = F(φ)T(s) for every φ ∈ Φ.

Remark 4.5. We can observe that a GENEO (see Bergomi et al., 2019) can be represented as a special case of P-GENEO, considering two perception triples (Φ, Φ′, S), (Ψ, Ψ′, Q) such that Φ = Φ′, Ψ = Ψ′, and the subsets containing the invariant transformations S and Q are groups (and then the map T: SQ is a homomorphism). In this setting, a P-GENEO (F, F′, T) is a triple where the operators F, F′ are equal to each other (because of Proposition 4.3), and the map T is a homomorphism. Hence, instead of the triple, we can simply write the pair (F, T) that is a GENEO.

Considering two perception triples, we typically want to study the space of all P-GENEOs between them with the map T fixed. Therefore, when the map T is fixed and specified, we will simply consider pairs of operators (F, F′) instead of triples (F, F′, T), and we say that (F, F′) is a P-GENEO associated with or with respect to the map T. Moreover, in this case, we indicate the property of equivariance of the triple (F, F′, T) writing that the pair (F, F′) is T-equivariant.

Example 4.6. Let X = ℝ². Take a real number ℓ > 0. In X, consider the square Q_1 := [0, ℓ] × [0, ℓ] and its translation by a vector a = (a_1, a_2) ∈ ℝ², namely Q′_1 := [a_1, ℓ+a_1] × [a_2, ℓ+a_2], obtained by applying the translation s_a. Analogously, let us consider a real number 0 < ε < ℓ and two squares inside Q_1 and Q′_1, Q_2 := [ε, ℓ−ε] × [ε, ℓ−ε] and Q′_2 := [a_1+ε, ℓ+a_1−ε] × [a_2+ε, ℓ+a_2−ε], as shown in Figure 2.


Figure 2. Squares used in Example 4.6.

Consider the following function spaces in b^X:

\[ \begin{aligned} \Phi &:= \{\varphi: X \to \mathbb{R} \mid \operatorname{supp}(\varphi) \subseteq Q_1\}, & \Phi' &:= \{\varphi': X \to \mathbb{R} \mid \operatorname{supp}(\varphi') \subseteq Q'_1\}, \\ \Psi &:= \{\psi: X \to \mathbb{R} \mid \operatorname{supp}(\psi) \subseteq Q_2\}, & \Psi' &:= \{\psi': X \to \mathbb{R} \mid \operatorname{supp}(\psi') \subseteq Q'_2\}. \end{aligned} \]

Let S := {s_a^{-1}}, where s_a is the translation by the vector a = (a_1, a_2). The triples (Φ, Φ′, S) and (Ψ, Ψ′, S) are perception triples. This example could model the translation of two nested gray-scale images. We now want to build an operator between these images in order to obtain a transformation that commutes with the selected translation. We can consider the triple of functions (F, F′, T) defined as follows: F: Φ → Ψ is the operator that keeps the values of functions in Φ at points of Q_2 and sets them to zero outside it; analogously, F′: Φ′ → Ψ′ is the operator that keeps the values of functions in Φ′ at points of Q′_2 and sets them to zero outside it; and T = id_S. The triple (F, F′, T) is then a P-GENEO from (Φ, Φ′, S) to (Ψ, Ψ′, S): the maps are non-expansive, and the equivariance holds,

\[ F'(\varphi s_a^{-1}) = F(\varphi) T(s_a^{-1}) = F(\varphi) s_a^{-1} \]

for any φ ∈ Φ. From the point of view of applications, we are considering two square images and their translations, and we apply an operator that “cuts” the images, keeping only the part of the image that interests the observer. This example justifies the definition of a P-GENEO as a triple of operators (F, F′, T), without requiring F and F′ to be equal on the possibly non-empty intersection of their domains. In fact, if φ is a function contained in Φ ∩ Φ′, its images via F and F′ may be different.
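The following discretized sketch of Example 4.6 (our own illustration; the grid size, ε, and the translation vector are arbitrary assumptions) verifies the equivariance F′(φ s_a^{-1}) = F(φ) s_a^{-1} for the cropping operators, using an integer translation so that the check is exact.

```python
import numpy as np

# Discretized sketch of Example 4.6 (illustrative assumptions): images live on an N x N
# grid, F crops to the inner square Q2, F' crops to its translate Q2', and the admissible
# transformation is an integer translation by the vector a, acting by right composition.

N, eps = 20, 4
a = (3, 2)                                         # translation vector (rows, cols)

def square_mask(top, left, size):
    m = np.zeros((N, N))
    m[top:top + size, left:left + size] = 1.0
    return m

Q2_mask = square_mask(eps, eps, N - 2 * eps)                     # inner square Q2
Q2p_mask = square_mask(eps + a[0], eps + a[1], N - 2 * eps)      # its translate Q2'

F = lambda phi: phi * Q2_mask          # keep values on Q2, zero outside
F_prime = lambda phi: phi * Q2p_mask   # keep values on Q2', zero outside
translate = lambda phi: np.roll(phi, shift=a, axis=(0, 1))       # phi composed with s_a^{-1}

rng = np.random.default_rng(3)
phi = rng.random((N, N)) * square_mask(0, 0, N - max(a) - 1)     # support away from the
                                                                 # border: np.roll never wraps
lhs = F_prime(translate(phi))          # F'(phi s_a^{-1})
rhs = translate(F(phi))                # F(phi) s_a^{-1}
assert np.allclose(lhs, rhs)           # T-equivariance of the cropping P-GENEO
```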

4.1 Methods to construct P-GENEOs

Starting from a finite number of P-GENEOs, we will illustrate some methods to construct new P-GENEOs. First of all, the composition of two P-GENEOs is still a P-GENEO.

Proposition 4.7. Given two composable P-GENEOs, (F_1, F′_1, T_1): (Φ, Φ′, S) → (Ψ, Ψ′, Q) and (F_2, F′_2, T_2): (Ψ, Ψ′, Q) → (Ω, Ω′, K), their composition, defined as

\[ (F, F', T) := (F_2 \circ F_1,\; F'_2 \circ F'_1,\; T_2 \circ T_1)\colon (\Phi, \Phi', S) \to (\Omega, \Omega', K), \]

is a P-GENEO.

Proof: First, one could easily check that the map T = T2T1 respects the second and the third property of Definition 4.1. Therefore, it remains to verify that F(Φ) ⊆ Ω, F′(Φ′) ⊆ Ω′ and the properties of equivariance and non-expansiveness are maintained.

1. Since F1(Φ) ⊆ Ψ and F2(Ψ) ⊆ Ω, we have that F(Φ) = (F2F1)(Φ) = F2(F1(Φ)) ⊆ F2(Ψ) ⊆ Ω. Analogously, F′(Φ′) ⊆ Ω′.

2. Since (F_1, F′_1, T_1) and (F_2, F′_2, T_2) are equivariant, (F, F′, T) is equivariant. Indeed, for every φ ∈ Φ and every s ∈ S, we have that

\[ \begin{aligned} F'(\varphi s) &= (F'_2 \circ F'_1)(\varphi s) = F'_2(F'_1(\varphi s)) = F'_2(F_1(\varphi) T_1(s)) \\ &= F_2(F_1(\varphi))\, T_2(T_1(s)) = (F_2 \circ F_1)(\varphi)\, (T_2 \circ T_1)(s) = F(\varphi) T(s). \end{aligned} \]

3. Since F1 and F2 are non-expansive, F is non-expansive; indeed for every φ1, φ2 ∈ Φ, we have that

\[ \begin{aligned} \|F(\varphi_1) - F(\varphi_2)\| &= \|(F_2 \circ F_1)(\varphi_1) - (F_2 \circ F_1)(\varphi_2)\| = \|F_2(F_1(\varphi_1)) - F_2(F_1(\varphi_2))\| \\ &\le \|F_1(\varphi_1) - F_1(\varphi_2)\| \le \|\varphi_1 - \varphi_2\|. \end{aligned} \]

Analogously, F′ is non-expansive.

Given a finite number of P-GENEOs with respect to the same map T, we illustrate a general method to construct a new operator as a combination of them. Given two sets X and Y, consider a finite set {H_1, …, H_n} of functions from Ω ⊆ b^X to b^Y and a map L: ℝ^n → ℝ, where ℝ^n is endowed with the norm ‖(x_1, …, x_n)‖_∞ := max_{1≤i≤n} |x_i|. We define L^*(H_1, …, H_n): Ω → b^Y as

\[ L^*(H_1, \dots, H_n)(\omega) := [L(H_1(\omega), \dots, H_n(\omega))], \]

for any ω ∈ Ω, where [L(H_1(ω), …, H_n(ω))]: Y → ℝ is defined by setting

\[ [L(H_1(\omega), \dots, H_n(\omega))](y) := L(H_1(\omega)(y), \dots, H_n(\omega)(y)) \]

for any y ∈ Y. Now, consider two perception triples (Φ, Φ′, S) and (Ψ, Ψ′, Q) with domains X and Y, respectively, and a finite set of P-GENEOs (F_1, F′_1), …, (F_n, F′_n) between them associated with the map T: S → Q. We can consider the functions L^*(F_1, …, F_n): Φ → b^Y and L^*(F′_1, …, F′_n): Φ′ → b^Y, defined as before, and state the following result.

Proposition 4.8. Assume that L: ℝ^n → ℝ is non-expansive. If L^*(F_1, …, F_n)(Φ) ⊆ Ψ and L^*(F′_1, …, F′_n)(Φ′) ⊆ Ψ′, then (L^*(F_1, …, F_n), L^*(F′_1, …, F′_n)) is a P-GENEO from (Φ, Φ′, S) to (Ψ, Ψ′, Q) with respect to T.

Proof: By hypothesis, L^*(F_1, …, F_n)(Φ) ⊆ Ψ and L^*(F′_1, …, F′_n)(Φ′) ⊆ Ψ′, so we just need to verify the properties of equivariance and non-expansiveness.

1. Since (F_1, F′_1), …, (F_n, F′_n) are T-equivariant, for any φ ∈ Φ and any s ∈ S, we have that:

\[ \begin{aligned} L^*(F'_1, \dots, F'_n)(\varphi s) &= [L(F'_1(\varphi s), \dots, F'_n(\varphi s))] = [L(F_1(\varphi) T(s), \dots, F_n(\varphi) T(s))] \\ &= [L(F_1(\varphi), \dots, F_n(\varphi))]\, T(s) = L^*(F_1, \dots, F_n)(\varphi)\, T(s). \end{aligned} \]

Therefore, (L^*(F_1, …, F_n), L^*(F′_1, …, F′_n)) is T-equivariant.

2. Since F1, …, Fn and L are non-expansive, for any φ1, φ2 ∈ Φ, we have that:

\[ \begin{aligned} \|L^*(F_1, \dots, F_n)(\varphi_1) - L^*(F_1, \dots, F_n)(\varphi_2)\| &= \sup_{y \in Y} \big| [L(F_1(\varphi_1), \dots, F_n(\varphi_1))](y) - [L(F_1(\varphi_2), \dots, F_n(\varphi_2))](y) \big| \\ &= \sup_{y \in Y} \big| L(F_1(\varphi_1)(y), \dots, F_n(\varphi_1)(y)) - L(F_1(\varphi_2)(y), \dots, F_n(\varphi_2)(y)) \big| \\ &\le \sup_{y \in Y} \big\| \big(F_1(\varphi_1)(y) - F_1(\varphi_2)(y), \dots, F_n(\varphi_1)(y) - F_n(\varphi_2)(y)\big) \big\|_\infty \\ &= \sup_{y \in Y} \max_{1 \le i \le n} |F_i(\varphi_1)(y) - F_i(\varphi_2)(y)| = \max_{1 \le i \le n} \|F_i(\varphi_1) - F_i(\varphi_2)\| \le \|\varphi_1 - \varphi_2\|. \end{aligned} \]

Hence, L^*(F_1, …, F_n) is non-expansive. Analogously, since F′_1, …, F′_n and L are non-expansive, L^*(F′_1, …, F′_n) is non-expansive.

Therefore, (L^*(F_1, …, F_n), L^*(F′_1, …, F′_n)) is a P-GENEO from (Φ, Φ′, S) to (Ψ, Ψ′, Q) with respect to T.

Remark 4.9. The above result describes a general method to build new P-GENEOs, starting from a finite number of known P-GENEOs via non-expansive maps. Some examples of such non-expansive maps are the maximum function, the power mean, and the convex combination (for further details, see Frosini and Quercioli, 2017; Quercioli, 2021a,b).
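As a concrete instance of Proposition 4.8 and Remark 4.9, the sketch below (ours) combines two sampled non-expansive operators through L = max, which is non-expansive with respect to the max-norm on ℝ^n, and checks that the resulting operator is again non-expansive. The toy operators and sample sizes are illustrative assumptions.

```python
import numpy as np

# Sketch of L^*(F_1, ..., F_n) with L = max (illustrative assumptions): measurements are
# sampled on a finite grid, so the combined operator acts pointwise on the samples.

def L_star_max(operators, phi):
    """[max(F_1(phi), ..., F_n(phi))](y), computed samplewise."""
    return np.max([F(phi) for F in operators], axis=0)

F1 = lambda phi: np.clip(phi, 0.0, 0.5)        # two toy non-expansive operators
F2 = lambda phi: 0.5 * phi

rng = np.random.default_rng(4)
phi1, phi2 = rng.random(100), rng.random(100)

lhs = np.max(np.abs(L_star_max([F1, F2], phi1) - L_star_max([F1, F2], phi2)))
rhs = np.max(np.abs(phi1 - phi2))
assert lhs <= rhs + 1e-12                      # non-expansiveness of the combined operator
```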

4.2 Compactness and convexity of the space of P-GENEOs

Given two perception triples, under some assumptions on the data sets, it is possible to show two useful features in applications: compactness and convexity. These two properties guarantee, on the one hand, that the space of P-GENEOs can be approximated by a finite subset of them, and, on the other hand, that a convex combination of P-GENEOs is again a P-GENEO.

First, we define a metric on the space of P-GENEOs. Let X, Y be sets and consider two sets Ω ⊆ b^X and Δ ⊆ b^Y. We can define the distance

\[ D_{NE}^{\Omega}(F_1, F_2) := \sup_{\omega \in \Omega} \|F_1(\omega) - F_2(\omega)\| \]

for every F_1, F_2 ∈ NE(Ω, Δ).

The metric D_P-GENEO on the space F_T^all of all the P-GENEOs between the perception triples (Φ, Φ′, S) and (Ψ, Ψ′, Q) associated with the map T is defined as

\[ \begin{aligned} D_{\text{P-GENEO}}\big((F_1, F'_1), (F_2, F'_2)\big) &:= \max\big\{ D_{NE}^{\Phi}(F_1, F_2),\; D_{NE}^{\Phi'}(F'_1, F'_2) \big\} \\ &= \max\Big\{ \sup_{\varphi \in \Phi} \|F_1(\varphi) - F_2(\varphi)\|,\; \sup_{\varphi' \in \Phi'} \|F'_1(\varphi') - F'_2(\varphi')\| \Big\} \end{aligned} \]

for every (F_1, F′_1), (F_2, F′_2) ∈ F_T^all.
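On finite samples of Φ and Φ′, the metric D_P-GENEO can be evaluated directly, as in the following sketch (ours; the operators and sample sizes are arbitrary assumptions).

```python
import numpy as np

# Finite-sample sketch of D_{P-GENEO} (illustrative assumptions): Phi and Phi' are lists of
# sampled measurements, and each operator of a P-GENEO pair is a callable acting on them.

def d_ne(F1, F2, measurements):
    """Sup over the sampled measurements of the uniform distance between the outputs."""
    return max(np.max(np.abs(F1(phi) - F2(phi))) for phi in measurements)

def d_pgeneo(pair1, pair2, Phi, Phi_prime):
    (F1, F1p), (F2, F2p) = pair1, pair2
    return max(d_ne(F1, F2, Phi), d_ne(F1p, F2p, Phi_prime))

rng = np.random.default_rng(5)
Phi = [rng.random(50) for _ in range(10)]
Phi_prime = [rng.random(50) for _ in range(10)]
pair1 = (lambda phi: 0.5 * phi, lambda phi: 0.5 * phi)
pair2 = (lambda phi: np.clip(phi, 0.0, 0.9), lambda phi: np.clip(phi, 0.0, 0.9))
print(d_pgeneo(pair1, pair2, Phi, Phi_prime))
```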

4.2.1 Compactness

Before proceeding, we need to prove that the following result holds:

Lemma 4.10. If (P, dP), (Q, dQ) are compact metric spaces, NE(P, Q) is compact.

Proof: Theorem 5 in the study by Li et al. (2012) implies that NE(P, Q) is relatively compact, since it is an equicontinuous family of maps. Hence, it will suffice to show that NE(P, Q) is closed. Considering a sequence (F_i)_{i∈ℕ} in NE(P, Q) such that lim_{i→∞} F_i = F, we have that

\[ d_Q(F(p_1), F(p_2)) = \lim_{i \to \infty} d_Q(F_i(p_1), F_i(p_2)) \le d_P(p_1, p_2) \]

for every p_1, p_2 ∈ P. Therefore, F ∈ NE(P, Q). It follows that NE(P, Q) is closed.

Consider two perception triples (Φ, Φ′, S) and (Ψ, Ψ′, Q), with domains X and Y, respectively, and the space F_T^all of P-GENEOs between them associated with the map T: S → Q. The following result holds:

Theorem 4.11. If Φ, Φ′, Ψ and Ψ′ are compact, then F_T^all is compact with respect to the metric D_P-GENEO.

Proof: By definition, F_T^all ⊆ NE(Φ, Ψ) × NE(Φ′, Ψ′). Since Φ, Φ′, Ψ and Ψ′ are compact, by Lemma 4.10 the spaces NE(Φ, Ψ) and NE(Φ′, Ψ′) are also compact, and then, by Tychonoff's Theorem, the product NE(Φ, Ψ) × NE(Φ′, Ψ′) is compact with respect to the product topology. Hence, to prove our statement, it suffices to show that F_T^all is closed. Let us consider a sequence ((F_i, F′_i))_{i∈ℕ} of P-GENEOs converging to a pair (F, F′) ∈ NE(Φ, Ψ) × NE(Φ′, Ψ′). Since (F_i, F′_i) is T-equivariant for every i ∈ ℕ and the action of Q on Ψ is continuous (see Proposition 3.24), (F, F′) belongs to F_T^all. Indeed, we have that

\[ F'(\varphi s) = \lim_{i \to \infty} F'_i(\varphi s) = \lim_{i \to \infty} F_i(\varphi) T(s) = F(\varphi) T(s) \]

for every s ∈ S and every φ ∈ Φ. Hence, F_T^all is a closed subset of a compact set, and therefore it is also compact.

4.2.2 Convexity

Assume that Ψ, Ψ′ are convex. Let (F_1, F′_1), …, (F_n, F′_n) ∈ F_T^all and consider an n-tuple (a_1, …, a_n) ∈ ℝ^n with a_i ≥ 0 for every i ∈ {1, …, n} and \sum_{i=1}^n a_i = 1. We can define two operators F_Σ: Φ → Ψ and F′_Σ: Φ′ → Ψ′ as

\[ F_\Sigma(\varphi) := \sum_{i=1}^n a_i F_i(\varphi), \qquad F'_\Sigma(\varphi') := \sum_{i=1}^n a_i F'_i(\varphi') \]

for every φ ∈ Φ and φ′ ∈ Φ′. We notice that the convexity of Ψ and Ψ′ guarantees that F_Σ and F′_Σ are well defined.
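A minimal sketch (ours) of this convex combination on sampled measurements, checking that non-expansiveness is preserved, as stated in Proposition 4.12 below; the weights and the toy operators are illustrative assumptions.

```python
import numpy as np

# Sketch of the convex combination F_Sigma (illustrative assumptions): operators act on
# sampled measurements, and the weights a_i are non-negative and sum to one.

def convex_combination(operators, weights):
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return lambda phi: sum(a * F(phi) for a, F in zip(weights, operators))

F1 = lambda phi: np.clip(phi, 0.0, 0.7)        # two toy non-expansive operators
F2 = lambda phi: 0.25 * phi
F_sigma = convex_combination([F1, F2], [0.3, 0.7])

rng = np.random.default_rng(6)
phi1, phi2 = rng.random(64), rng.random(64)
lhs = np.max(np.abs(F_sigma(phi1) - F_sigma(phi2)))
assert lhs <= np.max(np.abs(phi1 - phi2)) + 1e-12    # non-expansiveness is preserved
```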

Proposition 4.12. (F_Σ, F′_Σ) belongs to F_T^all.

Proof: By hypothesis, for every i ∈ {1, …, n}, (F_i, F′_i) is a perception map with respect to T, and then:

\[ F'_\Sigma(\varphi s) = \sum_{i=1}^n a_i F'_i(\varphi s) = \sum_{i=1}^n a_i \big(F_i(\varphi) T(s)\big) = \Big(\sum_{i=1}^n a_i F_i(\varphi)\Big) T(s) = F_\Sigma(\varphi) T(s) \]

for every φ ∈ Φ and every s ∈ S. Furthermore, since F_i(Φ) ⊆ Ψ for every i ∈ {1, …, n} and Ψ is convex, we also have F_Σ(Φ) ⊆ Ψ. Analogously, the convexity of Ψ′ implies that F′_Σ(Φ′) ⊆ Ψ′. Therefore, (F_Σ, F′_Σ) is a P-GEO. It remains to show the non-expansiveness of F_Σ and F′_Σ. Since each F_i is non-expansive, for every φ_1, φ_2 ∈ Φ we have that

\[ \begin{aligned} \|F_\Sigma(\varphi_1) - F_\Sigma(\varphi_2)\| &= \Big\|\sum_{i=1}^n a_i F_i(\varphi_1) - \sum_{i=1}^n a_i F_i(\varphi_2)\Big\| = \Big\|\sum_{i=1}^n a_i \big(F_i(\varphi_1) - F_i(\varphi_2)\big)\Big\| \\ &\le \sum_{i=1}^n |a_i| \, \|F_i(\varphi_1) - F_i(\varphi_2)\| \le \sum_{i=1}^n |a_i| \, \|\varphi_1 - \varphi_2\| = \|\varphi_1 - \varphi_2\|. \end{aligned} \]

Analogously, since every F′_i is non-expansive, for every φ′_1, φ′_2 ∈ Φ′ we have that

\[ \|F'_\Sigma(\varphi'_1) - F'_\Sigma(\varphi'_2)\| \le \sum_{i=1}^n |a_i| \, \|\varphi'_1 - \varphi'_2\| = \|\varphi'_1 - \varphi'_2\|. \]

Therefore, we have proven that (F_Σ, F′_Σ) is a P-GEO with F_Σ and F′_Σ non-expansive. Hence, it is a P-GENEO.

Then, the following result holds:

Corollary 4.13. If Ψ, Ψ′ are convex, then the set F_T^all is convex.

Proof: It is sufficient to apply Proposition 4.12 for n = 2 by setting a1 = t, a2 = 1−t for 0 ≤ t ≤ 1.

5 P-GENEOs in applications

The importance of equivariance with respect to a group is becoming clear and widespread in many machine learning applications used for drug design, traffic forecasting, and object recognition and detection (see, e.g., Bronstein et al., 2021; Gerken et al., 2023). In some situations, however, requiring equivariance with respect to a whole group could even become an obstacle to the correct learning process of an equivariant neural network. In the following, we describe a possible application to optical character recognition (OCR), in which partial equivariance might be better suited than full equivariance. Consider a planar transformation that deforms characters. One may notice that if such a transformation is applied too many times, a letter may lose or change its meaning, as shown in Figure 3. Another example is given by a reparameterization of the domain of a sound message. While a limited contraction or dilation of the domain can preserve the meaning attributed to the sound, an iterated application of the same transformation can radically change the perceived message.


Figure 3. Applying a “shape-preserving” homeomorphism twice can change a letter k into a letter x.

Furthermore, the experiments performed in the study by Weiler and Cesa (2019) have shown that tuning the level of equivariance in each layer of a neural network may increase the performance of the model. This tuning is, however, performed manually. The next step, taken in the study by Romero and Lohit (2022), is to learn the level of equivariance of each layer directly from the data, possibly restricting to certain subsets of the group whenever full equivariance prevents good classification performance. The authors of Romero and Lohit (2022) test their results on MNIST. In applications of this type, the use of P-GENEOs could allow partial equivariance to be framed within a precise mathematical model.

6 Conclusion

In this article, we proposed a generalization of some known results in the theory of GENEOs to a new mathematical framework, where the collection of all symmetries is represented by a subset of a group of transformations. We introduced P-GENEOs and showed that they are a generalization of GENEOs. We defined pseudo-metrics on the space of measurements and on the space of P-GENEOs and studied their induced topological structures. Under the assumption that the function spaces are compact and convex, we showed compactness and convexity of the space of P-GENEOs. In particular, compactness guarantees that any operator can be approximated by a finite number of operators belonging to the same space, while convexity allows us to build new P-GENEOs by taking convex combinations of P-GENEOs. Compactness and convexity together ensure that every strictly convex loss function on the space of P-GENEOs admits a unique global minimum. Given a collection of P-GENEOs, we presented a general method to construct new P-GENEOs as combinations of the initial ones.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Author contributions

LF: Writing – original draft. PF: Writing – original draft, Writing – review & editing. NQ: Writing – original draft, Writing – review & editing. FT: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This research has been partially supported by INdAM-GNSAGA. FT was supported by the Wallenberg AI, Autonomous System and Software Program (WASP) funded by Knut and Alice Wallenberg Foundation. PF and NQ carried out this work in the framework of the CNIT National Laboratory WiLab and the WiLab-Huawei Joint Innovation Center.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Bergomi, M. G., Frosini, P., Giorgi, D., and Quercioli, N. (2019). Towards a topological-geometrical theory of group equivariant non-expansive operators for data analysis and machine learning. Nat. Mach. Intellig. 1, 423–433. doi: 10.1038/s42256-019-0087-3

Bocchi, G., Botteghi, S., Brasini, M., Frosini, P., and Quercioli, N. (2023). On the finite representation of linear group equivariant operators via permutant measures. Ann. Mathem. Artif. Intellig. 91, 465–487. doi: 10.1007/s10472-022-09830-1

Bocchi, G., Frosini, P., Micheletti, A., Pedretti, A., Gratteri, C., Lunghini, F., et al. (2022). GENEOnet: a new machine learning paradigm based on Group Equivariant Non-Expansive Operators. An application to protein pocket detection. arXiv.

Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., and Vandergheynst, P. (2017). Geometric deep learning: going beyond euclidean data. IEEE Signal Process. Mag. 34, 18–42. doi: 10.1109/MSP.2017.2693418

Bronstein, M. M., Bruna, J., Cohen, T., and Veličković, P. (2021). Geometric deep learning: grids, groups, graphs, geodesics, and gauges. arXiv.

Camporesi, F., Frosini, P., and Quercioli, N. (2018). “On a new method to build group equivariant operators by means of permutants,” in Machine Learning and Knowledge Extraction: Second IFIP TC 5, TC 8/WG 8.4, 8.9, TC 12/WG 12.9 International Cross-Domain Conference, CD-MAKE 2018. Hamburg, Germany: Springer, 265–272.

Cascarano, P., Frosini, P., Quercioli, N., and Saki, A. (2021). On the geometric and Riemannian structure of the spaces of group equivariant non-expansive operators. arXiv.

Chachólski, W., De Gregorio, A., Quercioli, N., and Tombari, F. (2023). Symmetries of data sets and functoriality of persistent homology. Theory Appl. Categor. 39, 667–686.

Cohen, T., and Welling, M. (2016). “Group equivariant convolutional networks,” in International Conference on Machine Learning. PMLR, 2990–2999.

Conti, F., Frosini, P., and Quercioli, N. (2022). On the construction of group equivariant non-expansive operators via permutants and symmetric functions. Front. Artif. Intellig. 5, 16. doi: 10.3389/frai.2022.786091

Finzi, M., Benton, G., and Wilson, A. G. (2021). “Residual pathway priors for soft equivariance constraints,” in Advances in Neural Information Processing Systems (Cambridge, MA: The MIT Press), 30037–30049.

Frosini, P., Gridelli, I., and Pascucci, A. (2023). A probabilistic result on impulsive noise reduction in topological data analysis through group equivariant non-expansive operators. Entropy 25, 1150. doi: 10.3390/e25081150

Frosini, P., and Quercioli, N. (2017). “Some remarks on the algebraic properties of group invariant operators in persistent homology,” in Machine Learning and Knowledge Extraction, eds. A. Holzinger, P. Kieseberg, A. M. Tjoa, and E. Weippl. Cham: Springer International Publishing, 14–24.

Gaal, S. (1964). Point Set Topology. Amsterdam: Elsevier Science.

Gerken, J., Aronsson, J., Carlsson, O., Linander, H., Ohlsson, F., Petersson, C., et al. (2023). Geometric deep learning and equivariant neural networks. Artif. Intell. Rev. 56, 1–58. doi: 10.1007/s10462-023-10502-7

Li, R., Zhong, S., and Swartz, C. (2012). An improvement of the Arzelà-Ascoli theorem. Topol. Appl. 159, 2058–2061. doi: 10.1016/j.topol.2012.01.014

Masci, J., Rodolà, E., Boscaini, D., Bronstein, M. M., and Li, H. (2016). “Geometric deep learning,” in SIGGRAPH ASIA 2016 Courses. New York, NY: Association for Computing Machinery, 1–50.

Micheletti, A. (2023). A new paradigm for artificial intelligence based on group equivariant non-expansive operators. Eur. Math. Soc. Mag. 128, 4–12. doi: 10.4171/mag/133

Quercioli, N. (2021a). On the Topological Theory of Group Equivariant Non-Expansive Operators (PhD thesis). Bologna: Alma Mater Studiorum - Università di Bologna. Available online at: http://amsdottorato.unibo.it/9770/ (accessed July 12, 2023).

Quercioli, N. (2021b). “Some new methods to build group equivariant non-expansive operators in TDA,” in Topological Dynamics and Topological Data Analysis, 229–238.

Romero, D. W., and Lohit, S. (2022). Learning partial equivariances from data. Adv. Neural Inform. Proc. Syst. 35, 36466–36478.

van der Ouderaa, T., Romero, D. W., and van der Wilk, M. (2022). Relaxing equivariance constraints with non-stationary continuous filters. Adv. Neural Inform. Proc. Syst. 35, 33818–33830.

Wang, R., Walters, R., and Yu, R. (2022). “Approximately equivariant networks for imperfectly symmetric dynamics,” in Proceedings of the 39th International Conference on Machine Learning. PMLR, 23078–23091.

Weiler, M., and Cesa, G. (2019). “General E(2)-equivariant steerable CNNs,” in Advances in Neural Information Processing Systems, Vol. 32, eds. H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett. New York: Curran Associates, Inc.

Keywords: partial-equivariant neural network, P-GENEO, pseudo-metric space, compactness, convexity

Citation: Ferrari L, Frosini P, Quercioli N and Tombari F (2023) A topological model for partial equivariance in deep learning and data analysis. Front. Artif. Intell. 6:1272619. doi: 10.3389/frai.2023.1272619

Received: 04 August 2023; Accepted: 27 November 2023;
Published: 21 December 2023.

Edited by:

Fabio Anselmi, University of Trieste, Italy

Reviewed by:

Maurizio Parton, G. d'Annunzio University of Chieti and Pescara, Italy
Filippo Maggioli, Sapienza University of Rome, Italy

Copyright © 2023 Ferrari, Frosini, Quercioli and Tombari. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Nicola Quercioli, nicola.quercioli@gmail.com
