via complexity digest
Without reference to this article, my instant thought was ‘swarms… aren’t really very complex, are they?’
Complexity Measures: Open Questions and Novel Opportunities in the Automatic Design and Analysis of Robot Swarms
- Department of Computer Science and Engineering, Campus of Cesena, Alma Mater Studiorum Università di Bologna, Bologna, Italy
- IRIDIA, Université libre de Bruxelles, Brussels, Belgium
Complexity measures, and information-theoretic metrics in general, have recently been attracting the interest of the multi-agent and robotics communities, owing to their capability of capturing relevant features of robot behaviors while abstracting from implementation details. We believe that theories and tools from complex systems science and information theory may be fruitfully applied in the near future to support the automatic design of robot swarms and the analysis of their dynamics. In this paper we discuss opportunities and open questions in this scenario.
Metrics that quantify the complexity of a system and measure information processing are used in a wide range of scientific areas, including neuroscience, physics, and computer science. In the scientific literature, the word complexity is overloaded: it may refer to the effort needed to describe a system, or to create it, or to quantify its structure in terms of both its components and the dynamical relations among its parts. For example, let us consider a swarm of robots: we may ask what is the complexity of a function describing the overall behavior of the swarm, what is the complexity of the problem of optimally assigning tasks to the robots, or what is the complexity of each of the tasks. These objectives require different measures, each addressing a specific question. As a consequence, there is no unique, all-encompassing complexity measure: a plethora of metrics are available. Most come from information theory, which abstracts from system-specific details and focuses on information processing. While notable results have been attained, we believe that the potential of these methods has yet to be fully exploited in the automatic design of robot swarms and in the analysis of their behaviors.
In automatic design methods, the design problem is cast into an optimization problem that is solved either off-line or on-line, i.e., either before the swarm is deployed in its target environment or while the swarm is operating in it. A prominent example of automatic design is evolutionary robotics (ER), where the control software—typically an artificial neural network (ANN)—is optimized by means of an evolutionary algorithm (Nolfi and Floreano, 2000). A number of alternative methods depart from the classical ER by employing control software architectures other than ANNs and/or optimization techniques other than evolutionary computation (Watson et al., 2002; Hecker et al., 2012; Francesca et al., 2014; Gauci et al., 2014). A review of the main studies on automatic design of robot swarms—both off-line and on-line—is provided by Francesca and Birattari (2016).
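To make the optimization framing concrete, here is a minimal sketch of the kind of loop such methods run: a (1+1) evolution strategy mutating a vector of controller weights. The fitness function is a hypothetical stand-in (a toy quadratic); in a real setup it would score the collective behavior produced by the controller in a swarm simulation. None of these names or parameters come from the cited works.

```python
import random

def evaluate(weights):
    """Hypothetical fitness: stand-in for a swarm simulation that would
    score the collective behavior produced by a controller with these
    weights. Here: a toy quadratic with its optimum at all-ones."""
    return -sum((w - 1.0) ** 2 for w in weights)

def one_plus_one_es(n_weights=8, sigma=0.1, generations=200, seed=42):
    """Minimal (1+1) evolution strategy: mutate the parent controller
    with Gaussian noise; keep the child only if it scores at least as
    well. Elitism makes the best fitness monotonically non-decreasing."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(n_weights)]
    parent_fit = evaluate(parent)
    for _ in range(generations):
        child = [w + rng.gauss(0, sigma) for w in parent]
        child_fit = evaluate(child)
        if child_fit >= parent_fit:
            parent, parent_fit = child, child_fit
    return parent, parent_fit
```

Off-line design would run this loop in simulation before deployment; on-line variants would evaluate candidates while the swarm operates.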
The aim of this paper is to outline what we think are the most important open questions and to describe opportunities to use complexity measures for supporting the automatic design of swarms of robots and the analysis of their behaviors. In section 2, we provide an introduction to complexity measures. In section 3, we highlight the main contributions to the robotics field. In section 4, we illustrate our perspective and outline relevant open questions.
2. A Capsule Introduction to Complexity Measures
The notion of complexity is multifaceted. If, by the term “complex,” one means “difficult to predict,” then a suitable metric is provided by information theory with Shannon entropy (Shannon, 1948). Let us consider a simple system whose state we observe at a given time. The observations can be modeled as a random variable X, which can assume values from a finite, discrete domain 𝒳. If the observation is x ∈ 𝒳, which has probability P(x), then the amount of information carried by the observation of x is defined as I(x) = −log₂ P(x). Shannon entropy is defined as the expected value of the information over all symbols: H(X) = −∑ P(x) log₂ P(x), where the sum runs over all x ∈ 𝒳. Intuitively, H(X) measures the amount of surprise—or, equivalently, the lack of knowledge—about the system; we may also observe that Shannon entropy measures the degree of disorder in a system or process. Many complexity measures are based on Shannon entropy. For example, the reciprocal influence between two parts of a system can be estimated by computing their mutual information, defined as I(X; Y) = H(X) + H(Y) − H(X, Y), where H(X, Y) is the joint entropy of the variables X and Y, defined on the basis of the joint probability P(x, y). I(X; Y) measures the information we gain about one variable by observing the other. Information-theoretic metrics are currently widely applied, as they are model independent and able to capture non-linear relations. In practice, probabilities are usually estimated from the observed frequencies.
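In practice, these estimates reduce to counting: probabilities become observed frequencies, and mutual information follows from three entropy terms. A minimal sketch (illustrative, not code from the paper):

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy H(X) in bits, with probabilities estimated
    from the observed frequencies of the samples in xs."""
    counts = Counter(xs)
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples;
    the joint entropy is the entropy of the sequence of pairs."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))
```

For example, a fair-coin sequence gives an entropy near 1 bit per symbol, two independent sequences give a mutual information near zero, and a variable paired with itself gives I(X;X) = H(X).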
When the objective is to measure the complexity of the description of a system, then algorithmic complexity may be used, as proposed by Kolmogorov (1965): the complexity of a string of symbols is defined as the length of the shortest program that produces it. This measure is not computable in general, but approximations are available, such as those based on compression algorithms (Lempel and Ziv, 1976). Shannon entropy and Kolmogorov complexity are conceptually different (Teixeira et al., 2011). The former measures the average uncertainty of a random variable X, and so it estimates the difficulty of predicting the next symbol of a sequence received from a source. Conversely, Kolmogorov complexity measures the length of the minimal (algorithmic) description of a given sequence of symbols σ; it therefore estimates the difficulty of describing or reconstructing the sequence. However, both capture the notion of compressibility of a signal: in particular, both are null when X (resp. σ) is constant and maximal when X (resp. σ) is random.
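The compression-based approximation can be illustrated with an off-the-shelf compressor such as zlib: the compressed length of a string is a crude upper-bound proxy for its Kolmogorov complexity. This is only a rough illustration of the idea, not the exact Lempel–Ziv measure of the cited work.

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Length of the zlib-compressed bytes of s: a crude upper-bound
    proxy for the (uncomputable) Kolmogorov complexity of s."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# A constant string compresses to almost nothing ("print 'a' 1000 times"
# is a short program); a random string barely compresses at all.
constant = "a" * 1000
rng = random.Random(0)
random_str = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz")
                     for _ in range(1000))
```

Comparing `compressed_size(constant)` with `compressed_size(random_str)` shows the two extremes mentioned above: the constant sequence has near-zero complexity, the random one near-maximal.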
Kolmogorov complexity also provides a theoretical framework for the principle known as Occam’s razor, which states that, among all possible explanations of a set of data, the simplest is preferable. A similar argument supports the notion of stochastic complexity, proposed by Rissanen (1986), which is the shortest description of the data with respect to a given probabilistic model.
The term “complex” is often used to capture the notion of structure or pattern observed in data or in the dynamics of a system, once random elements are discarded. This concept is also related to the extent to which correlations distribute across the parts of the system observed (Grassberger, 1986a). The intuition is that high complexity should be associated with conditions characterized by a mixture of order and disorder, structure and randomness, easily predictable dynamics and novelty. Along this line, several measures have been proposed (Grassberger, 1986a; Lindgren and Nordahl, 1988; Li, 1991; Crutchfield, 1994; Gell-Mann and Lloyd, 1996; Shalizi and Crutchfield, 2001). A survey of complexity metrics is beyond the scope of this contribution; we refer the interested reader to prominent works on the subject (Grassberger, 1986a; Lindgren and Nordahl, 1988; Badii and Politi, 1999; Lloyd, 2001; Prokopenko et al., 2009; Lizier, 2013; Moore et al., 2018; Thurner et al., 2018; Valentini et al., 2018).