This database* is an ongoing project to aggregate tools and resources for artists, engineers, curators & researchers interested in incorporating machine learning (ML) and other forms of artificial intelligence (AI) into their practice. Resources in the database come from our partners and network; the tools cover a broad spectrum of possibilities opened up by current advances in ML, such as enabling users to generate images from their own data, create interactive artworks, draft texts or recognise objects. Most of the tools require some coding skills; however, we've noted the ones that don't. Beginners are encouraged to turn to RunwayML or entries tagged as courses.
*This database isn’t comprehensive—it's a growing collection of research commissioned & collected by the Creative AI Lab. The latest tools were selected by Luba Elliott. Check back for new entries.
deepdream.c is an artistic experiment that attempts to implement Convolutional Neural Network inference and back-propagation using a minimal subset of the C89 language and standard library features.
Sema lets you compose and perform music in real time using simple live coding languages. It enables you to customise these languages, create new ones, and infuse your code with bespoke neural networks, which you can build and train using interactive workflows and small data sets. All of this with the convenience of a web-based environment.
DALL·E is a 12-billion-parameter version of GPT-3 trained to generate images from text descriptions using a dataset of text-image pairs.
VFRAME is an open-source project that develops customized object detection models, visual search engine tools, and synthetic image datasets for training deep convolutional neural networks.
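To give a sense of the kind of object-detection inference such models perform, here is a minimal sketch using a generic pretrained detector from torchvision. It is not VFRAME's own API; the input file name and confidence threshold are illustrative assumptions.

```python
# Minimal sketch of object-detection inference with a generic pretrained
# detector (torchvision's Faster R-CNN). This is not VFRAME's own API;
# it only illustrates the kind of model the project customises.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = convert_image_dtype(read_image("photo.jpg"), torch.float)  # assumed input file
with torch.no_grad():
    prediction = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:  # assumed confidence threshold
        print(label.item(), round(score.item(), 2), box.tolist())
```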
Jukebox is a neural network that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles.
Char-rnn-tensorflow is a character-level language model implemented in Python using TensorFlow. It is free and requires at least intermediate coding skills.
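For orientation, a character-level language model of this kind can be sketched in a few lines of tf.keras. The vocabulary size and layer sizes below are illustrative assumptions, not the repository's actual configuration.

```python
# Minimal sketch of a character-level language model in tf.keras.
# Sizes and hyperparameters are illustrative, not the repo's defaults.
import tensorflow as tf

vocab_size = 128      # assumed: one id per character in the training corpus
embedding_dim = 64
rnn_units = 256

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    tf.keras.layers.LSTM(rnn_units, return_sequences=True),
    tf.keras.layers.Dense(vocab_size),  # logits over the next character
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# model.fit(dataset, epochs=...) would then train on (input_chars, next_chars) pairs.
```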
Magenta Studio is a collection of plugins for music generation with MIDI files. It includes five tools: Continue, Groove, Generate, Drumify, and Interpolate. It is free and available for Windows and macOS, as well as an Ableton Live plugin. It does not require coding skills.
Realistic-Neural-Talking-Head-Models is able to generate a moving face based on a single image. This is a free implementation and requires advanced coding skills.
iMotions integrates various sensor technologies to track different aspects of human responses to stimuli in many kinds of environments. Pricing on request.
This tool uses GPT-2 to generate film scripts. The underlying model was trained on a database of film scripts from IMSDB (The Internet Movie Script Database).
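The entry does not document the tool's own interface, but generating text with a GPT-2 checkpoint typically looks like the sketch below, using the Hugging Face transformers library. The base "gpt2" checkpoint and the prompt are placeholders; a script generator would load its own fine-tuned weights.

```python
# Sketch of text generation with GPT-2 via Hugging Face transformers.
# "gpt2" is the base checkpoint; a script generator would swap in its
# own fine-tuned weights. The prompt is a placeholder.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "INT. ABANDONED WAREHOUSE - NIGHT\n"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,        # sample rather than greedy decode
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```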