Granular Relational Calculus: At the Junction of Computing with Relations and Data

  • PEDRYCZ Witold
Department of Electrical and Computer Engineering, University of Alberta, Edmonton T6R 2V4, Canada

CLC: TP181

Updated: 2022-04-24

DOI: 10.11908/j.issn.0253-374x.21544


Abstract

While granular computing, and fuzzy sets in particular, offers a comprehensive conceptual framework for managing conceptual entities (information granules) with the processing mechanisms provided by fuzzy sets, this machinery does not fully engage experimental data, whose characteristics could be incorporated in the obtained results. With this in mind, the fundamental constructs of relational computing and fuzzy relational computing are augmented with mechanisms originating from the area of granular computing. It is advocated that the prescriptive constructs existing in the area of relational calculus be augmented with a descriptive component, which is governed by the available data. It is shown that the principle of justifiable granularity helps enhance the existing results with the experimental evidence residing in the data and deliver the results in the form of information granules.

Research interests: artificial intelligence, data mining. E-mail: wpedrycz@ualberta.ca

Relational computing [1-2] has been one of the focal points of fundamental research in fuzzy sets and interval analysis. The well-known concepts, such as projection, Cartesian product, and composition operators, are prescriptive, viz. they provide universal concepts and a generic way of computing that navigates the processing of sets and fuzzy sets. A number of them can be regarded as aggregation procedures involving many arguments, so it could be beneficial to endow them with abilities to capture the nature of the data they operate on. For instance, the projection operation returns the maximal value of the characteristic or membership function. Adding a descriptive facet to these constructs and giving them a prescriptive-descriptive character is a desirable property. With this in mind, in the present study it is advocated that the results of such aggregation can be conveniently described as information granules reflecting the diversity of the partial results. In other words, the ultimate objective is to revise and augment the mechanisms and results of relational computing by engaging the mechanisms of granular computing [3-7]. It is shown that the principle of justifiable granularity serves as a sound conceptual vehicle to accommodate the characteristics of the data by producing results in a granular format. Interestingly, this avenue of development of relational constructs has not been fully investigated. There have been some initial studies pointing at the statistical properties of logic operators that impact the resulting operator; however, the idea of information granules emerging as a result of such logic operations has not been studied.

In this paper, some generic definitions are presented concerning Cartesian products, projections with the ensuing reconstruction mechanism, and composition operators. In the sequel, fuzzy relational equations are covered along with the topic of the reconstruction of fuzzy relations. In addition, the principle of justifiable granularity, which is central to the ensuing granular constructs, is introduced; both the one-dimensional and the multivariable case are studied. Furthermore, G-operations are presented [4] along with a variety of architectures (such as G-Cartesian products, G-relational composition operations, granular relational equations, and granular logic networks).

1 Generic definitions

Let A and B be two sets or fuzzy sets defined in finite spaces X and Y, respectively, with dim(X) = n and dim(Y) = m. They are described by characteristic functions (sets) or membership functions (fuzzy sets). In what follows, the fundamental definitions and constructs forming the backbone of relational calculus are briefly recalled. Interestingly, these entities are the same for sets and fuzzy sets, meaning that the definitions put forward apply to the same extent to both.

1.1 Cartesian product and projection operations

Relations and fuzzy relations are cornerstones when describing and processing relationships existing among real-world objects.

Cartesian product

A Cartesian product of A and B, denoted A×B, is a relation (fuzzy relation) described by the characteristic or membership function [2, 8-9]

(A×B)(x, y) = min(A(x), B(y)),  x ∈ X, y ∈ Y (1)

For fuzzy sets, the minimum operation is replaced by any t-norm, (A×B)(x, y) = A(x) t B(y). Of course, if A and B are sets, all t-norms return the same result.
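As a small illustration of Eq. (1), the Cartesian product of two finite fuzzy sets can be sketched with NumPy; the membership vectors A and B below are arbitrary illustrative values, not data from the study:

```python
import numpy as np

# Fuzzy sets A (over X, n = 3) and B (over Y, m = 4) as membership vectors
A = np.array([0.2, 0.8, 1.0])
B = np.array([0.5, 1.0, 0.4, 0.0])

# Cartesian product with the minimum t-norm: (A x B)(x, y) = min(A(x), B(y))
R_min = np.minimum.outer(A, B)

# Any other t-norm can be used instead, e.g. the algebraic product A(x) * B(y)
R_prod = np.outer(A, B)
```

Since a·b ≤ min(a, b) on [0, 1], the product-based relation is dominated entrywise by the min-based one, illustrating that the choice of t-norm matters for fuzzy sets (but not for sets).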

To focus attention, relations defined over two spaces are considered. However, the constructs are readily extended to multidimensional situations. A two-dimensional fuzzy relation R is defined over the Cartesian product X ×Y. The well-known definition of projection is presented as

Projection of fuzzy relation

The projections of R on X and Y, respectively, are defined as

proj_X R(x) = sup_y R(x, y) (2)

and

proj_Y R(y) = sup_x R(x, y) (3)

The result of projection is a fuzzy set. From the computational perspective, it is worth noting that the result of projection is determined by the maximal value of the relation across one argument, with the second one fixed. In this sense, projection offers an optimistic view: the membership values of the result depend upon the extreme value of the slice of R reported along the x or y coordinate.

Reconstruction

The results of the projection of R can be used to reconstruct the original fuzzy relation. A typical example arises in image processing, when an original 3D object has to be reconstructed in an efficient way on the basis of its projections.

The common way of realizing the reconstruction is to take the projection results, say A and B, and compute their Cartesian product. If R is an image, it is apparent that the projection operation returns a result based on the maximal value of R for a fixed value of one argument, see Fig. 1.

Fig. 1  Projection of relation on x and y coordinates along with the reconstruction result

There is no guarantee that A× B always returns R. This problem will be expounded later on.
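The projection-reconstruction loop of Eqs. (2)-(3) can be sketched as follows; the relation R is an arbitrary illustrative matrix, and the final assertion confirms that the min-based reconstruction covers R without necessarily reproducing it:

```python
import numpy as np

# An illustrative fuzzy relation R over X x Y
R = np.array([[0.1, 0.7, 0.3],
              [0.6, 1.0, 0.2],
              [0.4, 0.5, 0.9]])

# Projections (2)-(3): maximize over the other coordinate
A = R.max(axis=1)   # proj_X R
B = R.max(axis=0)   # proj_Y R

# Reconstruction as the Cartesian product of the projections (min t-norm)
R_hat = np.minimum.outer(A, B)

# The reconstruction dominates R entrywise but need not reproduce it exactly
assert np.all(R_hat >= R)
```

Here R_hat(0, 0) = min(0.7, 0.6) = 0.6 while R(0, 0) = 0.1, a concrete instance of the reconstruction failing to return R.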

1.2 Composition operators

The composition operators commonly discussed in relational calculus involve the sup-min (max-min) and inf-max (min-max) operations. Given a fuzzy set A in X and a fuzzy relation R, these compositions return another fuzzy set B in Y with the following membership function

sup-min composition

B = A∘R, B(y) = sup_x[min(A(x), R(x, y))] (4)

inf-max composition

B = A•R, B(y) = inf_x[max(A(x), R(x, y))] (5)

The apparent generalizations concern the use of triangular norms and conorms in the realization of the composition operators

sup-t (max-t) composition

B(y) = sup_x[A(x) t R(x, y)] (6)

inf-s (min-s) composition

B(y) = inf_x[A(x) s R(x, y)] (7)

where t stands for a t-norm and s denotes some t-conorm. Finally, the composition operation could be further generalized into the s-t and t-s compositions, which read as

s-t composition

B(y) = S_x[A(x) t R(x, y)] (8)

t-s composition

B(y) = T_x[A(x) s R(x, y)] (9)

It is worth noting that if A is the entire space, A(x) = 1 for all x, then the composition operator returns the projection of R.
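A sketch of the compositions (4)-(6), assuming finite spaces so that sup and inf reduce to max and min over arrays (the fuzzy set A and relation R are illustrative values):

```python
import numpy as np

A = np.array([0.3, 1.0, 0.6])
R = np.array([[0.2, 0.9],
              [0.7, 0.4],
              [1.0, 0.1]])

# sup-min composition (4): B(y) = sup_x min(A(x), R(x, y))
B_supmin = np.minimum(A[:, None], R).max(axis=0)

# inf-max composition (5): B(y) = inf_x max(A(x), R(x, y))
B_infmax = np.maximum(A[:, None], R).min(axis=0)

# sup-t composition (6) with the algebraic product as the t-norm
B_supt = (A[:, None] * R).max(axis=0)

# With A(x) = 1 everywhere, the sup-min composition reduces to the projection of R
assert np.allclose(np.minimum(np.ones_like(A)[:, None], R).max(axis=0), R.max(axis=0))
```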

1.3 Logic aggregation of many fuzzy sets

t-norms and t-conorms are commonly used in the realization of logic aggregation. The or operation applied to fuzzy sets A1, A2, ..., An defined in the same space acts on the individual membership grades A1(x), A2(x), ..., An(x), returning A1(x) s A2(x) s ... s An(x). For the and operation, one considers any t-norm, and the result becomes A1(x) t A2(x) t ... t An(x). One could note that, depending on the t-norm or t-conorm being used, the membership grades of the results are bounded:

or: maximum ≤ A1(x) s A2(x) s ... s An(x) ≤ drastic sum

and

and: drastic product ≤ A1(x) t A2(x) t ... t An(x) ≤ minimum

If the number of arguments increases, the result tends to 0 (and operation) or 1 (or operation).
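The drift of many-argument aggregation towards the limits can be observed numerically; the sketch below uses the algebraic product as the t-norm and the probabilistic sum as the t-conorm, on randomly generated membership grades:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.3, 0.9, size=50)   # 50 membership grades away from 0 and 1

# and-aggregation with the product t-norm: the result collapses towards 0
and_result = np.prod(x)

# or-aggregation with the probabilistic sum s(a, b) = a + b - a*b,
# equivalently 1 - prod(1 - x): the result saturates towards 1
or_result = 1.0 - np.prod(1.0 - x)

assert and_result < 1e-6 and or_result > 1 - 1e-6
```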

1.4 Fuzzy relational equations

Fuzzy relational equations have been studied since the seventies [1-2, 10-11], with significant progress reported over the decades [12-13]. These equations are directly based on the composition operators studied in Section 1.2. The two expressions (4) and (5) could be sought as equations, with A and R treated as unknowns. More specifically, there are two key types of problems:

–estimation problem: Given A and B, determine R

–inverse problem: Given R and B, determine A

For the sup-min and inf-max composition operators, the solutions are well known [1-2]. Equations with sup-t and inf-t compositions are also discussed in the context of their analytical solutions [11]. It is also shown that if the solution set is nonempty, the largest (smallest) solution can be determined. Likewise, the solutions could be determined for the sup-t and inf-s compositions. However, the equations with the s-t and t-s compositions cannot be processed in a general way. Likewise, if the solution set is empty, one has to resort to approximate solutions obtained by invoking optimization methods, say gradient-based techniques.
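For the sup-min estimation problem, the classical result of Sanchez [1] provides the largest solution via the Gödel implication, R(x, y) = A(x) φ B(y), where a φ b = 1 if a ≤ b and b otherwise. A minimal sketch (A and B are illustrative values for which the solution set happens to be nonempty):

```python
import numpy as np

def goedel_implication(a, b):
    # a phi b = 1 if a <= b, else b
    return np.where(a <= b, 1.0, b)

A = np.array([0.3, 1.0, 0.6])
B = np.array([0.7, 0.4])

# Largest candidate solution of A o R = B under the sup-min composition
R_hat = goedel_implication(A[:, None], B[None, :])

# Verify it actually solves the equation (the solution set is nonempty here)
B_check = np.minimum(A[:, None], R_hat).max(axis=0)
assert np.allclose(B_check, B)
```

When the solution set is empty, the same composition produces a B_check different from B, which is the situation where the approximate, optimization-based solutions mentioned above become necessary.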

The estimation problem can be generalized by considering a system of fuzzy relational equations

A1∘R = B1, A2∘R = B2, ..., Ac∘R = Bc

1.5 Reconstruction of fuzzy relations: a parametric augmentation

The previous way of carrying out the projection is generalized in the form of a procedure where the s-t composition is introduced and some parametric flexibility is brought into the calculations

A(x) = S_y[w(y) t R(x, y)] (10)
B(y) = S_x[v(x) t R(x, y)] (11)

In the case of finite spaces X and Y, w and v are vectors of weights assuming values in [0, 1]. Their objective is to calibrate the impact of the individual elements of X and Y when determining the respective contributions to the generated results.

As to the reconstruction, it is augmented by the weights f and g, which yields

R~(x, y) = [A(x) s f(x)] t [B(y) s g(y)] (12)

The overall scheme of projection and reconstruction is visualized in Fig.2.

Fig. 2  Realization of reconstruction of fuzzy relation

As a matter of fact, expressions (10)-(12) are examples of logic neurons: the projection is completed in terms of two OR neurons, whereas the reconstruction is conducted with the aid of an AND neuron.

The selection of the values of the weights w, v, f, and g is completed by solving the following optimization problem

min_{w,v,f,g} ||R - R~|| (13)

where ||·|| is a certain distance function.

2 Principle of justifiable granularity

Building information granules on the basis of experimental data constitutes a pivotal item on the agenda of granular computing, with far-reaching implications for the design methodology of granular models and the ensuing applications. Clustering and fuzzy clustering [14] are often used as a vehicle to produce information granules. The principle of justifiable granularity, rooted in compelling and intuitively appealing arguments, guides the construction of an information granule [15-17]. In a nutshell, the resulting information granule becomes a summarization of data (viz. the available experimental evidence). The underlying rationale behind the principle is to deliver a concise and abstract characterization of the data such that two requirements are addressed: the produced granule is justified in light of the available experimental data, and the granule comes with well-defined semantics, meaning that it can be easily interpreted and becomes distinguishable from the others.

Formally speaking, these two intuitively appealing criteria are expressed by the criterion of coverage and the criterion of specificity. Coverage states how much data is positioned behind the constructed information granule; phrased differently, coverage quantifies the extent to which the information granule is supported by the available experimental evidence. Specificity, on the other hand, is concerned with the semantics (meaning) of the information granule, stressing how well-defined and detailed it is.

In what follows, the development of the principle of justifiable granularity is elaborated on, starting with a generic version. The main assumptions and the features of the environment in which information granules are being formed are also summarized in a concise manner. Information granules are formalized in an interval format, so that an interval [a, b] is constructed. The interval type of information granule is explored for illustrative purposes; however, the method is far more general: the principle can be applied to different granular and numeric experimental data and produce information granules formalized in terms of fuzzy sets, rough sets, and others.

2.1 One-dimensional data

One-dimensional real data x1, x2, ..., xN are considered. The bounds of the data are xmin and xmax: xmin = min_k xk, xmax = max_k xk.

The coverage measure is associated with a count of the number of data embraced by A, namely

cov(A) = (1/N) card{xk | xk ∈ A} (14)

where card(·) denotes the cardinality of A, viz. the number (count) of elements xk belonging to (covered by) A. In essence, coverage has a visible probabilistic flavor. The specificity of A, sp(A), is regarded as a decreasing function g of the size (length) of the information granule. If the granule is composed of a single element, sp(A) attains the highest value of 1. If A is included in some other information granule B, then sp(A) ≥ sp(B). In the limit case, if A is the entire space, sp(A) returns zero. For an interval-valued information granule A = [a, b], a simple implementation of specificity, with g being a linearly decreasing function, comes as

sp(A) = g(length(A)) = 1 - (b - a)/range (15)

where range is defined as range = xmax -xmin.

The criteria of coverage and specificity are in an obvious conflict: an increase in the value of coverage implies lower values of specificity, and vice versa. If both criteria are to be maximized, a sound compromise has to be reached. Let us introduce the following product of the criteria

V = cov(A) sp(A) (16)

The maximization of the performance index V gives rise to an information granule where a trade-off between coverage and specificity is reached. The design of the information granule is accomplished by maximizing the above product of coverage and specificity. Formally speaking, considering that an information granule is described by a vector of parameters p, with V = V(p), the principle of justifiable granularity yields the information granule that maximizes V: popt = arg max_p V(p).

To maximize the index V by adjusting the parameters of the information granule, two different strategies are encountered:

(1) A two-phase development is considered. First, a numeric representative (mean, median, mode, etc.) of the available data is determined; it can be regarded as their initial representation. Next, the parameters of the information granule are optimized by maximizing V. For instance, in the case of an interval, one has two bounds (a and b) to be determined. These two parameters are determined separately, viz. a and b are formed by maximizing V(a) and V(b). The data used in the maximization of V(b) involve the data larger than the numeric representative; likewise, V(a) is optimized on the basis of the data lower than this representative.

(2) A single-phase procedure in which all parameters of information granule are determined at the same time. In the above case, the values of a and b are simultaneously optimized. In comparison with the previous method, here the location of the interval is not biased towards one of the “anchor” points such as the mean or median. This approach yields more flexibility and finally results in possibly higher values of Vp).
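A brute-force sketch of the single-phase strategy for interval granules, replacing the unspecified optimizer with a simple grid search over candidate bounds (Eqs. (14)-(16)); the grid resolution is an arbitrary implementation choice:

```python
import numpy as np

def justifiable_interval(data):
    """Grid search for the interval [a, b] maximizing V = cov(A) * sp(A)."""
    data = np.asarray(data)
    data_range = data.max() - data.min()
    grid = np.linspace(data.min(), data.max(), 100)
    best_bounds, best_v = (data.min(), data.max()), -1.0
    for a in grid:
        for b in grid[grid >= a]:
            cov = np.mean((data >= a) & (data <= b))     # Eq. (14)
            sp = 1.0 - (b - a) / data_range              # Eq. (15)
            v = cov * sp                                 # Eq. (16)
            if v > best_v:
                best_bounds, best_v = (a, b), v
    return best_bounds, best_v

x = np.random.default_rng(1).normal(0.0, 1.0, 500)
(a, b), v = justifiable_interval(x)
```

For standard normal data the resulting interval brackets the origin, in line with the numeric experiments reported below (the exact bounds depend on the sample and the grid).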

As an example, let us consider 500 data governed by a normal distribution N(0, 1). The range is [-3.43, 3.00].

The use of the principle of justifiable granularity where both bounds are optimized at the same time leads to the interval information granule [-0.896, 1.261]. The obtained product of the coverage and specificity criteria is 0.730.

Proceeding with the separate optimization of the bounds and assuming the numeric representative to be the mean value (0.047), the optimal interval [-0.983, 0.846] is obtained and the optimized performance index is 0.654. If the median is used as the numeric representative instead, the result is practically the same as before, [-0.987, 0.847], resulting in the same value of the performance index.

Considering the 500 data generated by the Laplace distribution L(0,1), the obtained results are

–simultaneous optimization of the bounds: [-1.876, 1.497], performance is 0.822

–separate optimization of the bounds, mean as the representative: [-1.493, 1.498], performance 0.774

–separate optimization of the bounds, median as the representative: [-1.480, 1.473], performance 0.768

For the Cauchy distribution (again 500 data), the obtained results are

–simultaneous optimization of the bounds: [-10.123, 14.808], performance is 0.970

–separate optimization of the bounds, mean as the representative: [-5.639, 21.847], performance 0.958

–separate optimization of the bounds, median as the representative: [-5.943, 14.794], performance 0.952

If fuzzy sets are considered as information granules, the definitions of coverage and specificity are reformulated to take into account the notion of partial membership. Here the fundamental representation theorem is invoked, stating that any fuzzy set can be represented as a family of its α-cuts [8,15], namely

A(x) = sup_{α∈[0,1]}[min(α, Aα(x))] (17)

where

Aα = {x | A(x) ≥ α} (18)

The supremum (sup) operation is taken over all values of α. By virtue of the representation theorem, any fuzzy set can be represented as a collection of sets (its α-cuts).

Having this family of α-cuts in mind and considering Eqs. (14) and (15) as a point of departure for constructs of sets (intervals), the following relationships are obtained.

–coverage

cov(A) = ∫_X A(x)dx / N (19)

where X is the space over which A is defined. Moreover, one assumes that A can be integrated. The discrete version of the coverage expression comes in the form of a sum of membership degrees. If each data point is associated with some weight w(x), the calculation of coverage involves these values

cov(A) = ∫_X w(x)A(x)dx / ∫_X w(x)dx (20)

–specificity

sp(A) = ∫_0^1 sp(Aα)dα (21)
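Eqs. (19)-(21) can be evaluated numerically; the sketch below uses an illustrative triangular fuzzy set, a sum-of-memberships coverage over a handful of data points, and a discrete approximation of the α-cut integral for specificity:

```python
import numpy as np

# Illustrative triangular fuzzy set A over X = [0, 4], peak at 2, spread 1.5
x = np.linspace(0.0, 4.0, 401)
A = np.maximum(0.0, 1.0 - np.abs(x - 2.0) / 1.5)

# Discrete coverage: sum of membership degrees of the data, divided by N
data = np.array([1.2, 1.8, 2.1, 2.9, 3.8])
cov_A = np.mean(np.maximum(0.0, 1.0 - np.abs(data - 2.0) / 1.5))

# Specificity via alpha-cuts, Eq. (21): average sp(A_alpha) over alpha in [0, 1]
alphas = np.linspace(0.0, 1.0, 101)
range_x = x.max() - x.min()
sp_cuts = []
for alpha in alphas:
    cut = x[A >= alpha]
    length = cut.max() - cut.min() if cut.size else 0.0
    sp_cuts.append(1.0 - length / range_x)
sp_A = float(np.mean(sp_cuts))
```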

2.2 Multivariable data

Let us consider multivariable data X = {x1, x2, ..., xN}, where xk ∈ Rn. It is assumed that the data are normalized to the unit hypercube. The process is carried out in the same way as described for the two-phase strategy: it is assumed that some numeric representative (prototype) of X, denoted by v, is given. It could be produced, for example, by running some clustering or fuzzy clustering technique.

Around the numeric prototype v, one spans an information granule G = G(v, ρ), whose optimal size (radius) ρ is obtained as the result of the maximization of the already introduced criterion, namely

ρopt = arg max_ρ [cov(G) sp(G)] (22)

where

cov(G) = (1/N) card{xk | ||xk - v|| ≤ ρ} (23)

As before, the specificity is expressed as a linearly decreasing function of ρ

sp(G) = 1 - ρ (24)

The geometry of information granule depends upon the form of the distance function. For instance, the Tchebyshev distance implies a hyperbox shape of the granules.

When using a fuzzy clustering method, the data X come with weights associated with the individual elements of X, say (x1, w1), (x2, w2), ..., (xN, wN), where wk ∈ [0, 1] serves as the degree of membership of the kth data point. The coverage criterion is modified to reflect the existing weights. Introduce the following set

Ω = {xk | ||xk - v|| ≤ ρ} (25)

Then the coverage is expressed in the form

cov(G) = (1/N) Σ_{xk∈Ω} wk (26)

The specificity measure is defined as presented before.
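A sketch of Eqs. (22)-(24) for multivariable data, with the Tchebyshev distance (hence hyperbox granules), the data mean as a stand-in prototype, and a simple sweep over candidate radii in place of an explicit optimizer:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(300, 2))   # data already in the unit hypercube
v = X.mean(axis=0)                          # stand-in numeric prototype

# Tchebyshev distance to the prototype -> hyperbox-shaped granules
dist = np.abs(X - v).max(axis=1)

# Sweep the radius rho, keeping the maximizer of cov * sp, Eqs. (22)-(24)
best_rho, best_v = 0.0, -1.0
for rho in np.linspace(0.0, 1.0, 101):
    cov = np.mean(dist <= rho)              # Eq. (23)
    sp = 1.0 - rho                          # Eq. (24)
    if cov * sp > best_v:
        best_rho, best_v = rho, cov * sp
```

Replacing the Tchebyshev distance with the Euclidean one turns the hyperbox granule into a ball, illustrating how the geometry of the granule follows the distance function.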

3 G-operations

Projections and composition operators are examples of multi-argument aggregation operators: many input variables give rise to a single result. In this paper, it is advocated that the result of aggregating numeric input arguments be an information granule, which helps incorporate the diversity of the input arguments and captures the descriptive aspect of the result.

The classic definition of projection, although conceptually sound, is quite limited when it comes to the incorporation of the data distribution. Both in Eqs. (2) and (3), the result hinges upon the maximal value of the characteristic or membership function. For Boolean relations, the result is either 0 or 1, irrespective of the distribution of values of the input variables. For a fuzzy relation, the result is the maximal membership grade. This is a bit counter-intuitive, as the projections of very different rows of R can yield the same result. The use of a t-conorm gives better insight, as the result captures the membership values of the fuzzy relation for some fixed x, yet it still does not reflect the nature (distribution) of the data.

In sum, it is intuitive to anticipate that the result of projection has to accommodate the diversity of the entries of R forming any row or column, and thus become an information granule of type-2. The result of any operation on fuzzy sets involving many arguments should reflect the existing diversity and return an information granule whose localization and specificity are impacted by the existing data. This gives rise to a slew of constructs such as granular projection (and the associated reconstruction process), granular composition operators, and granular relational equations. It is also shown that this leads to a new class of granular logic networks built on a basis of G-AND and G-OR neurons.

Conceptually, the results of processing the individual inputs, say f(x1), f(x2), ..., f(xn), where f is a certain operator, say max, min, a t-norm, a t-conorm, etc., are considered en bloc and regarded as a certain information granule. In a concise way, the process is described as T = G_f({x1, x2, ..., xn}), with G denoting the procedure of the principle of justifiable granularity returning an interval information granule T = [t-, t+] on the basis of f(x1), f(x2), ..., f(xn).

3.1 G-Cartesian product and a reconstruction problem

G-Cartesian product

Recall that A(x) = proj_X R(x) = sup_y R(x, y). The G-proj_X R returns an information granule A~ with the bounds [a-(x), a+(x)], where these bounds are produced by the principle of justifiable granularity applied to the collection of membership grades {R(x, y1), R(x, y2), ..., R(x, ym)} for some fixed x.

Likewise, B~ is an information granule with the bounds [b-(y), b+(y)], arising through the formation of the information granule which emerges after processing {R(x1, y), R(x2, y), ..., R(xn, y)} for some fixed y.

Reconstruction

The reconstruction, as in the case of type-1 information granules (see Section 1.1), is realized by taking a Cartesian product of A~ and B~. Here, however, A~ and B~ are type-2 information granules. Thus R~ itself is a type-2 fuzzy relation with the entries [a-(x) t b-(y), a+(x) t b+(y)]. An interesting question arises as to the quality of the obtained reconstruction. As R and R~ are of two different types, the performance of the reconstruction is evaluated by using the measures of coverage and specificity, the same ones studied with regard to the principle of justifiable granularity. The coverage is expressed in the form

cov¯ = (1/(nm)) Σ_{x,y} cov(R(x, y), R~(x, y)) (27)

and

sp¯ = (1/(nm)) Σ_{x,y} (1 - |R~(x, y)|) (28)

with |R~(x, y)| denoting the length of the interval R~(x, y).

The coverage operator cov(a, [b, c]) returns 1 once a is included in the interval [b, c], and 0 otherwise. The performance of the reconstruction is expressed as the product cov¯ sp¯; the higher the value of this index, the better the performance of the reconstructed fuzzy relation.
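The G-projection can be sketched by applying an interval-valued justifiable-granularity procedure to every row of R; the helper below restricts the candidate bounds to the observed membership grades, an arbitrary implementation choice:

```python
import numpy as np

def justifiable_interval(values):
    """Interval [t-, t+] maximizing coverage * specificity over membership grades."""
    values = np.sort(np.asarray(values, dtype=float))
    value_range = values[-1] - values[0]
    best_bounds, best_v = (values[0], values[-1]), -1.0
    for lo in values:
        for hi in values[values >= lo]:
            cov = np.mean((values >= lo) & (values <= hi))
            sp = 1.0 - (hi - lo) / value_range if value_range > 0 else 1.0
            if cov * sp > best_v:
                best_bounds, best_v = (lo, hi), cov * sp
    return best_bounds

R = np.array([[0.1, 0.7, 0.3],
              [0.6, 1.0, 0.2],
              [0.4, 0.5, 0.9]])

# G-projection on X: one interval per x, built from the row {R(x, y1), ..., R(x, ym)}
A_granular = [justifiable_interval(R[i, :]) for i in range(R.shape[0])]
```

Each entry of A_granular is an interval [a-(x), a+(x)] rather than a single maximal membership grade, so rows with very different distributions no longer collapse to the same projection value.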

3.2 Granular relational composition operators

The composition operators discussed in Section 1.2 are examples of multi-argument aggregation operations. The operations G-min, G-max, G-t, and G-s are introduced as analogues of the discussed compositions in the following way

G-min and G-t compositions

B(y) = G-min_x(A(x), R(x, y)) (29)
B(y) = G-t_x(A(x), R(x, y)) (30)

The principle of justifiable granularity is applied to the sets {min(A(x1), R(x1, y)), min(A(x2), R(x2, y)), ..., min(A(xn), R(xn, y))} and {A(x1) t R(x1, y), A(x2) t R(x2, y), ..., A(xn) t R(xn, y)}, respectively.

For the inf-max and inf-s compositions, one has

B(y) = G-max_x(A(x), R(x, y)) (31)
B(y) = G-s_x(A(x), R(x, y)) (32)

with the principle of justifiable granularity applied to the sets

{max(A(x1), R(x1, y)), max(A(x2), R(x2, y)), ..., max(A(xn), R(xn, y))} (33)

and

{A(x1) s R(x1, y), A(x2) s R(x2, y), ..., A(xn) s R(xn, y)} (34)

3.3 Granular fuzzy relational equations

The composition operations discussed above, in both their numeric and granular versions, are considered in the generalized versions of relational equations, G-relational equations. As before, there are two types of problems.

Estimation problem. Given fuzzy sets A and B, determine R. By virtue of the G-min or G-t composition, the result of the composition of A and R is B~, with B~(y) = G-min_x(A(x), R(x, y)) or B~(y) = G-t_x(A(x), R(x, y)). The unknown fuzzy relation R is optimized in such a way that the product of the coverage and specificity of B~, cov¯ sp¯, is maximized, where

cov¯ = (1/m) Σ_y cov(B(y), B~(y)) (35)

and

sp¯ = (1/m) Σ_y (1 - |B~(y)|) (36)

The solution (optimal fuzzy relation) Ropt is the one that maximizes the product of the average coverage and specificity, Ropt = arg max_R (cov¯ sp¯).

For a system of equations where pairs of data (Ak, Bk), k = 1, 2, ..., N, are provided, the formulation of the problem is the same as shown above, with the coverage and specificity expressed over all data, meaning that

cov¯ = (1/(Nm)) Σ_{k,y} cov(Bk(y), B~k(y)) (37)

and

sp¯ = (1/(Nm)) Σ_{k,y} (1 - |B~k(y)|) (38)

In the estimation problem, data in the form (Ak, B*k) could be encountered, where B*k is a type-2 information granule. In this case, the performance index quantifies how close B*k and B~k are. A certain performance measure P is expressed in the form

P = (1/N) Σ_k card(B*k ∩ B~k) / card(B*k ∪ B~k) (39)

where card(.) denotes the cardinality of the information granules.

Inverse problem

Here R and B are given, and A has to be determined. One formulates an optimization problem in which A is searched for such that the G-min or G-t composition applied to A and R returns B~, for which the coverage and specificity are calculated in the form given by Eqs. (35) and (36).

The solutions to the estimation and inverse problems cannot be obtained analytically. A sound alternative is to involve some population-based optimization method, such as particle swarm optimization (PSO), genetic algorithms (GA), differential evolution (DE), or alike, and regard the product of coverage and specificity as a suitable fitness function.
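A crude random search can stand in for PSO or DE to sketch the inverse problem under the G-min composition; the coverage and specificity of B~ follow Eqs. (35)-(36), and all numeric values are illustrative:

```python
import numpy as np

def justifiable_interval(values):
    """Interval maximizing coverage * specificity over a set of membership grades."""
    values = np.sort(np.asarray(values, dtype=float))
    value_range = values[-1] - values[0]
    best_bounds, best_v = (values[0], values[-1]), -1.0
    for lo in values:
        for hi in values[values >= lo]:
            cov = np.mean((values >= lo) & (values <= hi))
            sp = 1.0 - (hi - lo) / value_range if value_range > 0 else 1.0
            if cov * sp > best_v:
                best_bounds, best_v = (lo, hi), cov * sp
    return best_bounds

R = np.array([[0.2, 0.9], [0.7, 0.4], [1.0, 0.1]])
B = np.array([0.7, 0.4])

# Random search over candidate fuzzy sets A (a crude stand-in for PSO/DE)
gen = np.random.default_rng(3)
best_A, best_fit = None, -1.0
for _ in range(200):
    A = gen.uniform(0.0, 1.0, R.shape[0])
    fit = 0.0
    for y in range(R.shape[1]):
        lo, hi = justifiable_interval(np.minimum(A, R[:, y]))  # G-min composition
        cov = 1.0 if lo <= B[y] <= hi else 0.0   # coverage of B(y), Eq. (35)
        sp = 1.0 - (hi - lo)                     # specificity, Eq. (36)
        fit += cov * sp
    fit /= R.shape[1]
    if fit > best_fit:
        best_A, best_fit = A, fit
```

A genuine population-based method would replace the independent random draws with particles or individuals evolving under the same fitness, but the structure of the evaluation, composition followed by coverage and specificity, stays the same.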

3.4 Granular logic neurons and granular neural networks

The composition operators form the computing setting of granular neurons. The G-t composition gives rise to a G-OR neuron, and the G-s composition produces a G-AND neuron; refer to Fig. 3.

Fig. 3  Granular logic neurons

In essence, the granular neurons are realized by the G-t or G-s composition operators returning a type-2 information granule Y. The weights play an important role by endowing the neurons with some parametric flexibility, which is central to the realization of the learning capabilities of the neurons.

G-s composition: if all the weights are equal to 1, the original data are used in the principle of justifiable granularity. If the weights get lower, the population of inputs is shifted towards lower values and the resulting information granule moves towards the lower end of the unit interval. For the G-AND neuron, if all the weights are set to 0, the obtained information granule is built on the basis of the original data; if the weights get larger, the obtained information granule migrates towards the higher end of the space. Note that in both classes of neurons, numeric inputs produce a granular output Y.

The generalization of the logic processor discussed in Refs. [9, 15] is a two-layer architecture where the first layer is composed of G-AND neurons followed by an output layer composed of G-OR units. A multiple input-single output topology is illustrated in Fig. 4.

Fig. 4  Architecture of G logic processor

The outputs of the G-AND neurons are information granules; denote them by Z1, Z2, ..., Zm. They are processed by the G-OR neuron. The lower and upper bounds of the Zi are processed separately, which at the end gives rise to an information granule of elevated type (the type of the obtained information granule has been elevated). For the purpose of learning, one can treat them as type-1 information granules and use the bounds of the granule in the performance index, as depicted in Fig. 5.

Fig. 5  Processing in the G logic processor (stressing the elevated level of information granules obtained when moving along successive layers of the architecture)

To realize learning of the network, two components have to be clearly identified: a suitable performance index and a learning algorithm. To focus this discussion, consider that the learning data are given as a family of input-output pairs (xk, targetk), k = 1, 2, ..., N, with n input variables and a single output. With regard to the performance index, because numeric data are confronted with information granules, the optimization process has to be guided by an index that takes into account the diverse nature of the data and of the results of the model.

As we encounter type-2 information granules, two optimization performance indexes are considered, built for the intervals A+ = [yk--, yk++] and A- = [yk-+, yk+-], where A- ⊂ A+.

The optimization problem is formulated for A+ and A- in the following form

cov¯ = (1/N) Σ_k cov(targetk, [yk--, yk++]) (40)

and

sp¯ = (1/N) Σ_k sp([yk--, yk++]) (41)

V1 = cov¯ sp¯ (42)

and

cov¯ = (1/N) Σ_k cov(targetk, [yk-+, yk+-]) (43)

sp¯ = (1/N) Σ_k sp([yk-+, yk+-]) (44)

V2 = cov¯ sp¯ (45)

The solution is wopt = arg max_w V1 or wopt = arg max_w V2, where w is the collection of weights of the G-neurons forming the network.

Given the complexity of the optimized performance index whose gradient with respect to the parameters of the network is difficult to determine, a feasible optimization vehicle comes from a family of population-based optimization techniques.

4 Conclusions

This study develops a new perspective and provides an algorithmic environment for data-augmented constructs of relational computing. It enhances the current considerations from a purely prescriptive ground to embrace the descriptive aspect involving data content and data characteristics. It has been shown that multivariable constructs (projections, compositions, etc.) give rise to results of an elevated level of information granularity. In particular, arguments that are membership grades lead to a granular result, say, an interval of possible membership grades. It can be stated that there is an interesting effect of elevation of the type of information granules: an aggregation (composition) of type-p information granules produces a single type-(p+1) information granule. Granular architectures have been introduced, referred to as G-constructs, say G-projection and G-composition, among others. The idea of fuzzy relational equations is generalized to G-relational equations. As a follow-up to weighted composition operations, a family of granular neurons and neural networks has been established that focuses on numeric processing producing granular results.

While the study opens up a new avenue of research in fuzzy sets and relational calculus, there are a number of promising directions worth pursuing. First, all the investigations have been conducted for interval information granules; however, the proposed framework is far more general and as such deserves further study. The solutions to the G-relational equations are demanding given the underlying processing; thus the way of determining them calls for more thorough investigation as to the efficiency of the optimization methods.

References

1

SANCHEZ E. Resolution of composite fuzzy relation equations[J]. Information and Control, 1976, 30: 38.

2

DI NOLA A, SESSA S, PEDRYCZ W, et al. Fuzzy relation equations and their applications to knowledge engineering[M]. Norwell: Kluwer, 1989.

3

BARGIELA A, PEDRYCZ W. Granular computing: An introduction[M]. Dordrecht: Kluwer Academic Publishers, 2003.

4

BARGIELA A, PEDRYCZ W. Toward a theory of granular computing for human-centered information processing[J]. IEEE Transactions on Fuzzy Systems, 2008, 16(2): 320.

5

PEDRYCZ W. Granular computing[M]. Boca Raton: CRC Press, 2013.

6

PEDRYCZ W. Granular computing for data analytics: A manifesto of human-centric computing[J]. IEEE/CAA Journal of Automatica Sinica, 2018, 5: 1025.

7

ZADEH L A. Towards a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic[J]. Fuzzy Sets and Systems, 1997, 90: 111.

8

KLIR G J, YUAN B. Fuzzy sets and fuzzy logic: Theory and applications[M]. Upper Saddle River: Prentice Hall, 1995.

9

PEDRYCZ W, GOMIDE F. Fuzzy systems engineering: Toward human-centric computing[M]. Hoboken: John Wiley, 2007.

10

CZOGALA E, DREWNIAK J, PEDRYCZ W. Fuzzy relation equations on a finite set[J]. Fuzzy Sets and Systems, 1982, 7: 89.

11

PEDRYCZ W. Numerical and applicational aspects of fuzzy relational equations[J]. Fuzzy Sets and Systems, 1983, 11: 1.

12

YANG S J. An algorithm for minimizing a linear objective function subject to the fuzzy relation inequalities with addition-min composition[J]. Fuzzy Sets and Systems, 2014, 255: 41.

13

YANG X. Solutions and strong solutions of min-product fuzzy relation inequalities with application in supply chain[J]. Fuzzy Sets and Systems, 2020, 384: 54.

14

BEZDEK J C. Pattern recognition with fuzzy objective function algorithms[M]. New York: Plenum Press, 1981.

15

PEDRYCZ W. An introduction to computing with fuzzy sets: Analysis, design, and applications[M]. Springer, 2020.

16

PEDRYCZ W, HOMENDA W. Building the fundamentals of granular computing: A principle of justifiable granularity[J]. Applied Soft Computing, 2013, 13(4): 4209.

17

PEDRYCZ W. From local to global rule-based models: A study in granular aggregation[J]. Journal of Tongji University (Natural Science), 2021, 49(1): 142.

18

PEDRYCZ W. Allocation of information granularity in optimization and decision-making models: Towards building the foundations of granular computing[J]. European Journal of Operational Research, 2014, 232: 137.