This is a database* of Creative AI tools for those interested in incorporating machine learning (ML) and other forms of artificial intelligence (AI) into their practice. The tools cover a broad spectrum of possibilities presented by current advances in ML, such as enabling users to generate images from their own data, create interactive artworks, draft texts or recognise objects. Most of the tools require some coding skills; however, we’ve noted the ones that don’t. Beginners are encouraged to turn to RunwayML.
The database is an initiative of the Creative AI Lab (a collaboration between Serpentine's R&D Platform and the Department of Digital Humanities at King's College London). It has been customised for Stages to show only tools. For further resources such as publications, essays, courses and interviews, visit the full database here. In 2020 the Lab commissioned Luba Elliott to aggregate the tools listed here. To submit further tools, get in touch with the Lab.
 
VFRAME is an open-source project that develops customized object detection models, visual search engine tools, and synthetic image datasets for training deep convolutional neural networks.
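For readers unfamiliar with what an object detection model does in practice, the sketch below is a generic illustration only, not VFRAME's own code: it loads a pretrained detector from the torchvision library and prints any detections above a confidence threshold for a placeholder image.

```python
# Generic object-detection sketch (NOT VFRAME's own API); torchvision's
# pretrained Faster R-CNN stands in for a customised detector, and
# "example.jpg" is a placeholder image path.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()  # inference mode

image = to_tensor(Image.open("example.jpg").convert("RGB"))
with torch.no_grad():
    prediction = model([image])[0]  # dict with "boxes", "labels", "scores"

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.5:  # keep confident detections only
        print(int(label), float(score), [round(v, 1) for v in box.tolist()])
```

VFRAME's focus, by contrast, is on training such detectors with customised and synthetic datasets rather than the generic pretrained classes used here.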
Jukebox is a neural network that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles.
Forthcoming. OpenAI researchers have released a paper about GPT-3, a language model capable of achieving state-of-the-art results on a range of benchmark and novel natural language processing tasks, from language translation to generating news articles to answering SAT questions. GPT-3 has 175 billion parameters.
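For those who obtain access to the model, the sketch below shows roughly how GPT-3 could be queried for text generation through OpenAI's Python client (the older Completion interface); the engine name, prompt and sampling parameters are illustrative assumptions, not fixed requirements.

```python
# Hedged sketch of querying GPT-3 via the legacy openai Python client.
# The API key, engine name and prompt below are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # obtain a key from OpenAI (placeholder)

response = openai.Completion.create(
    engine="davinci",  # assumed name of a GPT-3 engine
    prompt="Write a short artist statement about a generative sound installation:",
    max_tokens=120,
    temperature=0.8,   # higher values give more varied completions
)
print(response.choices[0].text)
```

Lowering the temperature makes the completion more predictable; raising it makes the output more varied.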
Char-rnn-tensorflow is a character-level language model in Python using TensorFlow. It is free and requires at least intermediate coding skills.
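The sketch below is not char-rnn-tensorflow's own code but a minimal TensorFlow/Keras illustration of the same idea: training a recurrent network to predict the next character of a text. The toy corpus, vocabulary handling and hyperparameters are placeholders.

```python
# Minimal character-level language model sketch in TensorFlow/Keras
# (illustrative only; not the char-rnn-tensorflow repository's code).
import numpy as np
import tensorflow as tf

text = "hello creative ai " * 200              # toy corpus (placeholder)
chars = sorted(set(text))
char_to_id = {c: i for i, c in enumerate(chars)}

seq_len = 40
ids = np.array([char_to_id[c] for c in text])
# Each training target is the input sequence shifted forward by one character.
inputs = np.stack([ids[i:i + seq_len] for i in range(len(ids) - seq_len)])
targets = np.stack([ids[i + 1:i + seq_len + 1] for i in range(len(ids) - seq_len)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 64),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.Dense(len(chars)),          # logits over the character vocabulary
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(inputs, targets, epochs=1, batch_size=64)
```

After training, text is generated by repeatedly sampling a character from the model's predicted distribution and feeding it back in as the next input.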
Magenta Studio is a collection of plugins for music generation with MIDI files. It includes five tools: Continue, Groove, Generate, Drumify, and Interpolate. It is free and available for Windows and macOS, and as an Ableton Live plugin. It does not require coding skills.
Realistic-Neural-Talking-Head-Models is able to generate a moving face from a single image. This is a free implementation and requires advanced coding skills.
iMotions integrates various sensor technologies to track different aspects of human responses to stimuli in many kinds of environments. Pricing on request.