This database* is an ongoing project to aggregate tools and resources for artists, engineers, curators & researchers interested in incorporating machine learning (ML) and other forms of artificial intelligence (AI) into their practice. Resources in the database come from our partners and network; the tools cover a broad spectrum of possibilities presented by current advances in ML, such as enabling users to generate images from their own data, create interactive artworks, draft texts or recognise objects. Most of the tools require some coding skills; however, we've noted the ones that don't. Beginners are encouraged to turn to RunwayML or entries tagged as courses.
*This database isn’t comprehensive—it's a growing collection of research commissioned & collected by the Creative AI Lab. The latest tools were selected by Luba Elliott. Check back for new entries.
Sema lets you compose and perform music in real time using simple live coding languages. It enables you to customise these languages, create new ones, and infuse your code with bespoke neural networks, which you can build and train using interactive workflows and small data sets. All of this with the convenience of a web-based environment.
VFRAME is an open-source project that develops customized object detection models, visual search engine tools, and synthetic image datasets for training deep convolutional neural networks.
Jukebox is a neural network that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles.
Forthcoming. OpenAI researchers have released a paper about GPT-3, a language model capable of achieving state-of-the-art results on a set of benchmark and novel natural language processing tasks that range from language translation to generating news articles to answering SAT questions. GPT-3 has 175 billion parameters.
Char-rnn-tensorflow is a character-level language model implemented in Python using TensorFlow. It is free and requires at least intermediate coding skills.
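To give a sense of what "character-level language model" means: the model learns which character is likely to follow which, then generates text one character at a time. The sketch below is not the repository's code (which trains a recurrent network); it is a minimal stand-in that uses simple bigram counts and only the Python standard library.

```python
import random
from collections import defaultdict, Counter

def train_bigram(text):
    """Count how often each character follows each other character."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def sample(counts, seed, length, rng=None):
    """Generate text one character at a time from the bigram counts."""
    rng = rng or random.Random(0)
    out = seed
    ch = seed[-1]
    for _ in range(length):
        followers = counts.get(ch)
        if not followers:  # no observed continuation; stop early
            break
        chars, weights = zip(*followers.items())
        ch = rng.choices(chars, weights=weights)[0]
        out += ch
    return out

corpus = "hello world, hello machine learning world"
model = train_bigram(corpus)
print(sample(model, "he", 20))
```

A character-level RNN like char-rnn-tensorflow replaces the raw counts with a learned hidden state, which lets it capture much longer-range structure, but the generation loop is conceptually the same.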
Magenta Studio is a collection of plugins for music generation with MIDI files. It includes five tools: Continue, Groove, Generate, Drumify, and Interpolate. It is free and available for Windows, macOS and as an Ableton plugin. It does not require coding skills.
Realistic-Neural-Talking-Head-Models is able to generate a moving face based on a single image. This is a free implementation and requires advanced coding skills.
iMotions integrates various sensor technologies to track different aspects of human responses to stimuli in many kinds of environments. Pricing on request.