MIT researchers use AI to recreate painting techniques


MIT Artificial intelligence: MIT researchers have developed an AI tool that generates time-lapse videos predicting, stroke by stroke, how a human artist might have produced a given watercolor or digital painting.

You can’t go back in time to watch Monet or Van Gogh make their masterpieces, but maybe AI can give you the next best thing. Researchers at MIT CSAIL have developed a machine learning program, Timecraft, that can deduce how a painting was made and reproduce the probable brushstrokes, even for well-known artists. The system was first trained on about 200 time-lapse videos of digital and watercolor paintings, after which the researchers built a convolutional neural network to ‘deconstruct’ artwork based on what it had learned.

[Image credit: Freepik]

The AI is trained on Vimeo and YouTube time-lapse videos of people making art. The probabilistic model can then synthesize plausible in-progress moments of the painting process from just a single image of the finished artwork.

The network is meant to mimic a skilled human artist’s ability to look at a finished piece and infer the series of brushstrokes or steps that went into putting it together.

“Artists paint using unique combinations of brushes, strokes, and colors. There are often many possible ways to create a given painting. Our goal is to learn to capture this rich range of possibilities,” researchers wrote in a paper describing the AI.
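The idea of capturing a distribution over possible painting histories, rather than a single fixed one, can be illustrated with a toy sketch. The code below is purely illustrative and is not the paper’s model: it “paints” a final image onto a blank canvas by revealing random stroke-shaped regions, so different random draws yield different plausible histories that all end at the same finished work. Timecraft learns this distribution with a trained neural network instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_timelapse(final_image, num_frames=5):
    """Toy sketch: build one plausible 'painting history' by
    progressively revealing random rectangular 'stroke' regions of
    the final image on a blank canvas."""
    h, w = final_image.shape[:2]
    revealed = np.zeros((h, w), dtype=bool)
    frames = []
    for _ in range(num_frames):
        # pick a random stroke-shaped region and reveal it
        y, x = rng.integers(0, h), rng.integers(0, w)
        dy = rng.integers(h // 4, h // 2)
        dx = rng.integers(w // 4, w // 2)
        revealed[y:y + dy, x:x + dx] = True
        # revealed pixels show the painting; the rest stay white
        canvas = np.where(revealed[..., None], final_image, 1.0)
        frames.append(canvas)
    frames[-1] = final_image.copy()  # every history ends at the finished work
    return frames

painting = rng.random((32, 32, 3))      # stand-in for a finished painting
history = sample_timelapse(painting, num_frames=4)
```

Each call with a different random state produces a different ordering of “strokes,” which is the rich range of possibilities the researchers describe wanting to capture.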

The authors describe their work as distinct from other types of predictive AI, which typically forecast future frames in a video, concentrate on physical processes such as flowering or human activity, and predict over relatively short time frames.

How Timecraft works

The data set contains 117 time-lapse videos of digital painting, averaging four minutes in length, and 116 videos of watercolor painting, averaging 20 minutes each. Both data sets focus on landscape paintings and still lifes.

As part of the experiment, approximately 150 human evaluators were hired through Amazon’s Mechanical Turk. They were asked to compare videos generated by the MIT model against those produced by visual deprojection, a baseline method for recovering missing frames from video that was introduced at the 2019 Conference on Computer Vision and Pattern Recognition (CVPR).


“We show that human evaluators almost always prefer our method to an existing video synthesis baseline and often find our results indistinguishable from time-lapses produced by real artists,” the paper reads. “To the best of our knowledge, this is the first work that models and synthesizes distributions of videos of the past, given a single final frame.”
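A study like this ultimately reduces to tallying pairwise preferences across many trials. The sketch below is a minimal, hypothetical illustration of that tally (the method names and vote counts are invented for the example, not taken from the paper):

```python
from collections import Counter

def preference_rate(judgments, method):
    """Fraction of two-alternative trials in which evaluators
    preferred `method`. `judgments` lists the winner of each trial."""
    counts = Counter(judgments)
    return counts[method] / len(judgments)

# Hypothetical judgments from a two-alternative forced-choice study
votes = ["timecraft"] * 120 + ["baseline"] * 30
rate = preference_rate(votes, "timecraft")  # 120 / 150 = 0.8
```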

Frame interpolation is used to predict the next frames in the generated time-lapse videos, alongside AI style-transfer techniques. To curate the video data sets, a convolutional neural network removed any frames that include hands, paintbrushes, or shadows.
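The curation step described above can be sketched as a filtering loop. The detector below is a crude stand-in for the learned network the article mentions: it flags a frame if a large fraction of its pixels fall in a rough warm “skin-like” color range. The heuristic, thresholds, and function names are assumptions for illustration only; in practice a trained CNN would do this detection.

```python
import numpy as np

def contains_occluder(frame, threshold=0.2):
    """Hypothetical stand-in for a learned occluder detector: flag a
    frame if many pixels lie in a crude warm 'skin-like' color range
    (a proxy for hands, brushes, or shadows over the canvas)."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    skin_like = (r > 0.4) & (r > g) & (g > b)
    return skin_like.mean() > threshold

def curate(frames):
    """Keep only frames showing the canvas alone."""
    return [f for f in frames if not contains_occluder(f)]

clean = np.zeros((8, 8, 3))                  # canvas-only frame
hand = np.full((8, 8, 3), [0.8, 0.5, 0.3])   # warm, hand-like frame
dataset = curate([clean, hand, clean])       # the hand frame is dropped
```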


If you have any inquiries related to this article, feel free to contact us. We will be happy to assist you.