From dissipative structures to machines that learn from experience

In 1977, Ilya Prigogine received the Nobel Prize in Chemistry for his contributions to non-equilibrium thermodynamics and for describing "dissipative structures": systems that, by maintaining a continuous flow of energy and matter, can spontaneously organize their internal order. The idea became a landmark for complexity science, and it offers a fertile frame of thought for a very current question: what might machines that accumulate experience look like in the future?

If we treat learning as a form of self-organization, then a machine that can accumulate its own experience must be conceived not as a static program but as an open system crossed by flows: sensory data, energy, feedback, objectives. From this perspective, the transition from "trained once, used forever" to "learning on the fly, lifelong" is not only about algorithms; it is an engineering of flows.

What does it mean, concretely, to "accumulate experience"?

Accumulating experience goes beyond memorizing examples. It means:

- Lifelong learning: integrating an uninterrupted stream of new situations without forgetting what was already known (avoiding "catastrophic forgetting").
- Multi-timescale memories: sensory (milliseconds), episodic (minutes to days), semantic/procedural (weeks to years).
- World models: the capacity to form cause-and-effect hypotheses, to predict, and to plan.
- Intrinsic motivations: curiosity, exploration, the drive to reduce uncertainty; in other words, the machine builds its own curriculum.

This list suggests that future machines will combine perception, memory, prediction, and action in a closed loop: an informational metabolism.

Design principles inspired by self-organization

- Locality and plasticity: in nature, learning relies on local rules (synaptic plasticity) that, when aggregated, yield coherent global behavior.
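As a toy illustration of such a local rule (a hypothetical sketch in plain NumPy, not tied to any particular framework), Oja's variant of Hebbian learning updates each weight using only the activity of the two units it connects, while a built-in decay term plays the homeostatic role of keeping the weights bounded:

```python
import numpy as np

def oja_step(w, x, lr=0.005):
    """One local plasticity step (Oja's rule).

    Each weight changes using only locally available quantities:
    the presynaptic input x, the postsynaptic activity y, and the
    weight itself (the decay term acts as homeostasis, keeping
    the weight vector near unit norm).
    """
    y = w @ x                         # postsynaptic activity
    return w + lr * y * (x - y * w)   # Hebbian term + homeostatic decay

rng = np.random.default_rng(0)
# Inputs with one dominant direction of variance: purely local
# updates make w converge to that direction (the first principal
# component), with no global objective in sight.
data = rng.normal(size=(5000, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

w = rng.normal(size=2)
for x in data:
    w = oja_step(w, x)

print(np.round(w / np.linalg.norm(w), 2))  # aligns with the dominant axis
```

Run on a correlated input stream, these purely local updates settle on the dominant direction of the data; order emerges from the flow of examples, which is the point of the principle.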
  In technology, the equivalents are local update rules, spiking neural networks, or homeostasis mechanisms (regularization that keeps the model stable over time).
- Redundancy and reconfiguration: robust systems can internally rearrange resources when the environment shifts. Future models will have specialized modules that can attach/detach (a move toward compositional architectures, "expert neurons," modular world models).
- Layered memory: a fast layer for immediate adaptations (on-device fine-tuning), an intermediate layer for episodes (selective replay), and a slow layer for stable knowledge (periodic distillation/compression). Much like an algorithmic sleep, the system consolidates and forgets in a controlled way.
- Prediction before classification: machines that accumulate experience will first and foremost be machines that predict. Prediction forces the construction of causal representations; classification becomes a special case.
- Safe intrinsic motivations: curiosity, novelty, and empowerment can guide exploration, but they must be bounded by rules (safety constraints, formal verification, sandboxing) so the exploration remains beneficial.

Why hardware matters: from NPU to neuromorphic

Returning to the idea of an open system: you cannot have self-organization without a suitable physical substrate.

- Edge NPUs (such as the Hailo accelerator in the Raspberry Pi 5 AI Kit) make real-time inference possible with low power consumption, right next to the data source. The next step is local micro-training: small on-device adjustments to the model, trained from the device's own experiences.
- Neuromorphic architectures (event-driven, spiking) promise energy efficiency and plasticity closer to biology: a "healthier metabolism" for continual learning.
- Compute-in-memory and intelligent sensors (e.g., event cameras) reduce entropy at the source, passing downstream only the significant signals.
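The layered-memory principle above can be sketched as a toy "experience runtime" (all names here are hypothetical, invented for illustration): a small fast buffer for the immediate context, an episodic store kept representative by reservoir sampling, and a periodic consolidation step, the "algorithmic sleep," that compresses episodes into slow running statistics:

```python
import random
from collections import deque

class LayeredMemory:
    """Toy three-timescale memory: fast buffer, episodic reservoir,
    slow consolidated statistics. Illustrative sketch only."""

    def __init__(self, fast_size=8, episodic_size=64, seed=0):
        self.fast = deque(maxlen=fast_size)   # sensory/working memory
        self.episodic = []                    # reservoir sample of episodes
        self.episodic_size = episodic_size
        self.seen = 0
        self.semantic = {}                    # slow layer: label -> (mean, count)
        self.rng = random.Random(seed)

    def observe(self, label, value):
        self.fast.append((label, value))
        self.seen += 1
        # Reservoir sampling keeps an unbiased, bounded episodic sample
        # no matter how long the input stream runs.
        if len(self.episodic) < self.episodic_size:
            self.episodic.append((label, value))
        else:
            j = self.rng.randrange(self.seen)
            if j < self.episodic_size:
                self.episodic[j] = (label, value)

    def consolidate(self):
        """'Algorithmic sleep': replay episodes into slow statistics."""
        for label, value in self.episodic:
            mean, n = self.semantic.get(label, (0.0, 0))
            self.semantic[label] = ((mean * n + value) / (n + 1), n + 1)

mem = LayeredMemory()
for t in range(1000):
    mem.observe("hum" if t % 2 else "vibration", float(t % 10))
    if t % 100 == 99:           # periodic consolidation, like sleep cycles
        mem.consolidate()

print(sorted(mem.semantic))     # slow layer now holds both concepts
```

The design choice that matters here is the bounded episodic reservoir: because its memory cost is fixed while the stream is unbounded, the same mechanism works whether the device runs for an hour or for years, which is exactly the regime predictive-maintenance sensors live in.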
In short: without an appropriate energy metabolism (efficiency, low latency, memory near compute), algorithmic self-organization remains theory.

Where the first "machines with experience" will appear

- Proximity robots (logistics, assistance): they repeat tasks, but each day brings small variations; learning from micro-differences builds expertise.
- Predictive maintenance: sensors that live for years in a factory and learn the "normal sound" of a machine, adapting to wear and season.
- Local assistants (on the phone, on the home gateway): they encounter the user's daily routine and adjust to it, respecting privacy through federated learning.
- Education and prototyping (as in the Embedac lab): students design "artifacts with experience," devices that visibly improve after each iteration.

A plausible 5-10 year path

- Today to 2 years: on the edge, we move from inference-only to micro-adaptation (local fine-tuning, prompt tuning, incremental distillation). Tooling for continual learning becomes standard in the pipeline.
- 3-5 years: "experience runtimes" emerge, a kind of operating system that manages memories across timescales, schedules consolidation, enforces safety rules, and offers an audit API. Education integrates courses on algorithmic metabolism (energy, memory, latency).
- 5-10 years: neuromorphic modules reach niche products; compliance standards require journaling and forgetting control; we see multi-agent systems in which experience is exchanged selectively (federated learning with semantics).

Conclusion

Prigogine taught us that order can emerge from flow, not in spite of it. Machines that accumulate experience will be exactly such systems: open, adaptable, with internal mechanisms that transform the flow of data and energy into durable knowledge. In the short term, NPUs and Edge AI bring this vision to workshops and laboratories.
In the long term, combining plastic architectures, layered memories, and safety rules will produce machines that not only execute but evolve. And the educational stake is clear: to train engineers who know how to design such ecosystems; not just good algorithms, but living systems in the most technical sense of the term, ordered by flow and robust through self-organization.

Prigogine visited our experimental self-organizing system in 1994.