Title: Neuro Evolving Robotic Operatives
Type: Linux Game
Category: Strategy ➤ AI Scripting
Tags: Strategy; AI Scripting
Released: Latest: 2.0 / Dev: 2012-11-28
Played: Single & Multi
Contrib.: Goupil & Louis
ID: 12050
An artificial intelligence game by the Neural Networks Research Group, the NERO 2.0 Team & the OpenNERO Team.
Neuro Evolving Robotic Operatives (NERO, aka OpenNERO) is a multiplayer online artificial intelligence game.
Develop your own army of robots by tuning their artificial brains for difficult tasks, then send them to face your friends' armies in online competitions!
An interactive game mode called "territory capture" and training tools are also available.
NERO is the result of an academic research project in artificial intelligence, based on the rtNEAT algorithm. It is also a platform for future research on intelligent agent technologies.
The NERO project is led by the Neural Networks Group of the Department of Computer Sciences at the University of Texas at Austin.
NERO (which stands for Neuro-Evolving Robotic Operatives) is a new kind of machine learning game being developed at the Neural Networks Research Group, Department of Computer Sciences, University of Texas at Austin. The goals of the project are (1) to demonstrate the power of state-of-the-art machine learning technology, (2) to create an engaging game based on it, and (3) to provide a robust and challenging development and benchmarking domain for AI researchers.
In August 2003 the Digital Media Collaboratory (DMC) of the Innovation Creativity and Capital Institute (IC2) at the University of Texas held its second GameDev conference. The focus of the 2003 conference was on artificial intelligence, and consequently the DMC invited several people from the Neural Networks Research Group at the University's Department of Computer Science (UTCS) to make presentations on academic AI research with potential game applications.
The GameDev conference also held break-out sessions where groups brainstormed ideas for innovative games. In one of these sessions, Ken Stanley came up with an idea for a game based on a real-time variant of his previously published NEAT learning algorithm. On the basis of Ken's proposal, the DMC/IC2 resolved to staff and fund a project to create a professional-quality demo of the game. (See production credits.)
The resulting NERO project started in October 2003; NERO 1.0 was released in June 2005, and NERO 1.1 in November 2005. NERO 2.0 is a major new release, including a new interactive territory mode of game play, a new user interface, and more extensive training tools (for more details about the new features, see the What's new? page).
The NERO project has resulted in several spin-off research projects in neuroevolution techniques and intelligent systems (see the project bibliography). It is also used as the basis for undergraduate research project courses at UT Austin (NSC309, CS370).
The NERO Story
The NERO game takes place in the future as the player tries to outsmart an ancient AI in order to colonize a distant Earth-like planet (read the NERO story).
The NERO Game
NERO is an example of a new genre of games, called Machine Learning Games. Although it resembles some RTS games, there are three important differences: (1) in NERO the agents are embedded in a 3D physics simulation, (2) the agents are trainable, and (3) the game consists of two distinct phases of play. In the first phase individual players deploy agents, i.e. simulated robots, in a "sandbox" and train them to the desired tactical doctrine. Once a collection of robots has been trained, a second phase of play (either battle or territory mode) allows players to pit their robots in a battle against robots trained by some other player, to see how effective their training was. The training phase is the most innovative aspect of game play in NERO, and is also the most interesting from the perspective of AI research.
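The real-time training described above relies on rtNEAT's central idea: instead of evolving the whole population in discrete generations, agents are evaluated continuously while the game runs, and every so often the worst-performing agent is removed and replaced by a mutated offspring of a fit parent. The following is a minimal, hypothetical sketch of that replacement loop; the agent representation, the `mutate` operator, and all parameter names are illustrative simplifications, not the actual rtNEAT implementation (which also evolves network topology and uses speciation).

```python
import random

def mutate(genome, rate=0.1, scale=0.5):
    """Toy stand-in for NEAT mutation: perturb each weight with probability `rate`."""
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in genome]

def rtneat_step(population, min_evals=5):
    """One real-time replacement step (simplified rtNEAT-style loop).

    Removes the worst-fitness agent that has been evaluated for at least
    `min_evals` ticks, then appends a mutated offspring of the fittest
    remaining agent. Agents are plain dicts here for illustration.
    """
    # Only agents evaluated long enough are eligible for removal,
    # so new offspring get a fair chance to prove themselves.
    eligible = [a for a in population if a["evals"] >= min_evals]
    if not eligible:
        return population
    worst = min(eligible, key=lambda a: a["fitness"])
    population.remove(worst)
    # Breed from a fit surviving parent (real rtNEAT selects within species).
    parent = max(population, key=lambda a: a["fitness"])
    child = {"genome": mutate(parent["genome"]), "fitness": 0.0, "evals": 0}
    population.append(child)
    return population
```

In a game loop this step would run every few ticks while agents accumulate fitness from their behavior in the sandbox, which is what lets the player watch the team improve in real time rather than between generations.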