New paper shows how computational tools can evaluate intelligent systems, from animals to robots.
Reference:
Zenil H, Marshall JAR and Tegnér J (2023) Approximations of algorithmic and structural complexity validate cognitive-behavioral experimental results. Front. Comput. Neurosci. 16:956074. doi: 10.3389/fncom.2022.956074
https://www.frontiersin.org/articles/...
A new paper co-authored by a group of AI scholars from Cambridge, Oxford, KAUST, and Sheffield, published in the journal Frontiers in Computational Neuroscience, shows how natural intelligence can be understood, mimicked, and evaluated using tools grounded in an approach to Artificial General Intelligence that can capture and predict the complexity of behavioral patterns arising from human or animal decision-making. Ultimately, these tools could inform the design of new cognitive strategies for narrower applications of artificial intelligence, including generative AI such as ChatGPT, and robotics.
The team, led by Dr. Hector Zenil and Prof. James Marshall, who also lead the British natural and artificial intelligence startups Oxford Immune Algorithmics and Opteran, respectively, together with Prof. Jesper Tegnér of King Abdullah University of Science and Technology (KAUST), examined studies of the behavior of ants, fruit flies, and rats to determine whether mathematical models based on the theory of algorithmic probability, a theoretical framework underpinning the most powerful form of artificial intelligence, Artificial General Intelligence (AGI), could yield a set of objective tools for characterizing “complexity” in behavioral experiments, both natural and artificial. One insight shared across all the studies is that animals appear to have some as yet unknown mechanism(s) for perceiving, and coping appropriately with, different degrees of complexity in their environments. Beyond merely coping, they appear to harness the environment itself in their internal decision processes and in how they implement their decisions.
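To give a flavor of what such complexity estimators do in practice, the Python sketch below scores two made-up behavioral traces with a simple compression-based proxy for algorithmic complexity. The paper itself relies on estimators derived from algorithmic probability (the Coding Theorem and Block Decomposition methods) rather than compression, so the function, the trace strings, and the resulting scores are illustrative assumptions only, not the authors' method or data.

```python
import zlib

def compression_complexity(sequence: str) -> float:
    """Coarse upper bound on the algorithmic complexity of a symbol sequence,
    estimated as its compressed size relative to its raw size. The published
    work uses finer-grained estimators based on algorithmic probability (the
    Coding Theorem and Block Decomposition methods); compression is only a
    rough stand-in used here for illustration."""
    data = sequence.encode("utf-8")
    return len(zlib.compress(data, 9)) / len(data)

# Hypothetical behavioral traces encoded as left/right turns (L/R).
# These strings are invented for illustration and are not data from the paper.
patterned_trace = "LR" * 32   # highly regular behavior
irregular_trace = "LLRLRRRLRLLRRLRLRRLLLRRLRLRLRRLLRRLRLRRLRLLRRLLRLRRLRLRLLRRLRRLL"

print(compression_complexity(patterned_trace))   # lower score: compressible, low complexity
print(compression_complexity(irregular_trace))   # higher score: harder to compress
```

In this toy setup, a sequence that compresses well is treated as simple and one that resists compression as complex; the estimators used in the paper aim to make that same distinction with much finer resolution on short behavioral sequences, where compression algorithms perform poorly.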
This could be relevant to understanding human decision systems, in particular how humans perceive and exploit randomness and complex behavior. In an earlier paper published in the journal PLoS Computational Biology, which attracted wide media attention, a group led by Dr. Zenil with other collaborators had already shown how the same tools could frame human intelligence, in what amounted to a reverse Turing test, by modeling the peak and decline of human intelligence. Taken together, those earlier results and the ones reported in the newly published paper suggest the existence of an algorithmic bias in human and animal reasoning, one that brains exploit to their advantage in decision processes beyond simple statistical pattern recognition. This is consistent with recent developments in other areas of science, such as Integrated Information Theory (IIT), according to which consciousness necessarily entails an internal experience. Here, one indication of such an experience is the internal computation needed to filter out, or even adopt, non-random strategies that may nonetheless appear random by design, even in the absence of stimuli.