Creating Intricate Art with Neural Style Transfer
This post was originally authored by Kalai Ramea on Medium.
Kalai Ramea is a data scientist at PARC. She focuses on statistical machine learning and data analytics across various domains. Her research interests include applied machine learning and deep learning, exploration of novel statistical modeling techniques, and big data analytics.
Creating intricate art
Ever since the neural artistic style transfer algorithm was published by Gatys et al., we’ve seen plenty of pictures being turned into artwork. The algorithm uses the feature layers of a pretrained convolutional network to apply the ‘style’ of a painting to a given picture, iteratively optimizing the generated image. We also saw an impressive approach to non-artistic neural style transfer, where “non-paintings” or everyday objects can be tiled as the style image to create art. Later, Johnson et al. improved on this with a fast, feed-forward neural style transfer approach. This paved the way for many mobile applications, most notably Prisma, which lets users turn a picture taken on their phone into an artwork within seconds.
Most of the artwork generated by these applications uses photographs as the content image. This article introduces a novel way of producing intricate art (or designs), with silhouettes as content and doodles as style.
Origin of the intricate art idea
I am a huge fan of artists who can hand-draw “zentangles,” a type of art in which intricate patterns or doodles are fitted into a rectangle or another shape. They look sophisticated and beautiful.

A hand-drawn zentangle
Lately, these have been used in coloring books for adults. As one can imagine, producing this kind of artwork requires a lot of patience. For those who lack patience (including me) but still want to create these sophisticated art pieces, deep learning comes to the rescue.
Intricate style transfer architecture
The architecture is based on Gatys’ style transfer algorithm with a few minor modifications. In this case, the content image is a silhouette, and the style image can be any pattern, ranging from a simple black-and-white doodle to more complex color mosaics. The code also contains a module that inverts the content image to create a mask, which is eventually applied to the generated pattern.
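As a rough sketch of that masking module (the details in the actual code may differ), assuming a dark silhouette on a light background, the mask can be built by thresholding the grayscale content image:

```python
import numpy as np
from PIL import Image

def make_mask(content_path, threshold=128):
    """Binary mask from a silhouette image: True inside the shape.

    Assumes a dark silhouette on a light background; pixels darker
    than `threshold` are treated as part of the shape.
    """
    gray = np.array(Image.open(content_path).convert("L"))
    return gray < threshold  # boolean (H, W) array
```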
Weights from a pre-trained network (VGGNet) are used for this application. Earlier feature layers are used for the style image, and later feature layers for the content image. A Gram matrix over the style features defines the style loss, the content loss is a direct difference between feature activations, and the combined loss is minimized at every iteration.
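In Keras-backend notation, the Gram matrix and the two losses look roughly like the sketch below, which follows common implementations of Gatys’ method; the exact layer choices and loss weights in the author’s code may differ:

```python
from keras import backend as K

def gram_matrix(features):
    """Gram matrix of an (H, W, C) feature map: channel-by-channel
    correlations that capture texture while discarding layout."""
    flat = K.batch_flatten(K.permute_dimensions(features, (2, 0, 1)))
    return K.dot(flat, K.transpose(flat))

def style_loss(style, combination, size, channels):
    """Mean squared difference between Gram matrices at one layer;
    `size` is the number of pixels (H * W) in the feature map."""
    S = gram_matrix(style)
    C = gram_matrix(combination)
    return K.sum(K.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2))

def content_loss(content, combination):
    """Direct feature-space difference; no Gram matrix for content."""
    return K.sum(K.square(combination - content))
```

The total loss is a weighted sum of the content loss at one deep layer and the style losses accumulated across several earlier layers.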

Schematic of the Intricate Art Neural Transfer Architecture
Once the final combined image is generated, the mask transfer is applied and the result is saved as output. The algorithm is implemented in Keras with a TensorFlow backend. A GitHub link to the code, with more details on the implementation, can be found here.
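That final masking step might look something like the following sketch, reusing the `make_mask` function above (illustrative, not the repository’s exact API):

```python
import numpy as np
from PIL import Image

def save_masked_result(generated, mask, out_path, background=(255, 255, 255)):
    """Composite the generated pattern into the silhouette and save it.

    `generated` is an (H, W, 3) uint8 array from the style-transfer loop;
    `mask` is the boolean array produced by make_mask above.
    """
    out = np.empty_like(generated)
    out[:] = background          # flat background color everywhere...
    out[mask] = generated[mask]  # ...keeping the pattern inside the shape
    Image.fromarray(out).save(out_path)
```

Filling `out` from a resized background image instead of a flat color gives the optional background-image behavior described later in this post.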
In keeping with the tradition of machine learning use cases, the algorithm is first tested on a cat silhouette.

Intricate artwork generated from a cat silhouette
Not too bad. This was styled from a simple set of doodles tiled together.
Let’s try more complex color patterns as the style input. These require anywhere between 100 and 250 iterations; if run for fewer, the output may retain some ‘blackness’ left over from the silhouette.

Input style (left), generated art (right)
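For reference, the iteration loop in Gatys-style Keras implementations is typically driven by SciPy’s L-BFGS optimizer, roughly as sketched below; `eval_loss_and_grads` is a placeholder for a compiled function that returns the combined loss and its gradient for a flattened image:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def run_style_transfer(initial_image, eval_loss_and_grads, iterations=250):
    """Refine the image step by step; too few iterations can leave
    dark remnants of the silhouette in the output."""
    x = initial_image.astype("float64").flatten()
    for i in range(iterations):
        # eval_loss_and_grads(x) -> (loss, gradient), both float64
        x, loss, _ = fmin_l_bfgs_b(eval_loss_and_grads, x, maxfun=20)
        print("Iteration %d: loss %.2f" % (i + 1, loss))
    return x.reshape(initial_image.shape)
```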
When geometric patterns are given as the style input, we get an interesting ‘stained glass’ effect in the generated artwork.


Geometric artwork as style (left), generated artwork (right)
Optionally, the code also allows you to specify a background image or a background color.

Artwork generated with text silhouette
Running out of pattern ideas? Take a picture of your grandmother’s quilt!

Quilt pattern (left), Darth Vader quilt art (right)
Now, Darth Vader and your grandmother have something in common!
Why silhouettes?
Why do we have to go through all this trouble? Couldn’t we just clip an existing pattern to a shape? Or generate a new kind of pattern from white noise and then clip it to any shape? Why silhouettes, specifically?
It turns out that using silhouettes as content has its advantages. Take the three generated art pieces below as an example: (X) is the original pattern merely clipped to the shape of the dancer; (Y) has patterns generated from white-noise content and clipped to the shape after the final image was generated; and (Z) used the dancer silhouette as content, with the mask transfer applied afterward to clip the image.

For the first two images, (X) and (Y), the patterns (generated or not) do not complement the shape of the dancer. With (Y), we get new patterns, but they do not align with the shape. In (Z), by contrast, the patterns fit almost like a skeleton within the shape, giving a neat finish.

Pattern fitting along the edge (left), quilt stitches appearing along the edges (right)
The artwork generated from the quilt pattern (see the Darth Vader example above) gives the illusion that ‘stitches’ appear along the edges, as if the fabric had been cut and custom-made! This does not happen when you train on a picture or white noise, or clip directly from the pattern.
Moreover, the silhouettes need not be black. In fact, different colors act as “seeds” that generate different variations of the artwork. One can choose the color depending on which dominant color they want to retain (note: black acts as a sink; it is neutral).

Artwork generated from different silhouette colors with the same style
Applications
- Any graphic design generation (e.g., logo, icon, poster)
- Custom fabric design (fancy a T-shirt with your favorite personality or quote, or want to generate a fabric with a new abstract design?)
- Typography design (just feed in a text silhouette)
- Coloring book designs
Note
This work was originally presented as a poster at the Self-Organizing Conference on Machine Learning, 2017.
References
- Leon A. Gatys, Alexander S. Ecker, Matthias Bethge. A Neural Algorithm of Artistic Style. 2015. https://arxiv.org/abs/1508.06576
- Justin Johnson, Alexandre Alahi, Li Fei-Fei. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. 2016. https://arxiv.org/abs/1603.08155
- Roman Novak and Yaroslav Nikulin. Improving the Neural Algorithm of Artistic Style. 2016. https://arxiv.org/abs/1605.04603
- Some of the code is based on Somshubra Majumdar’s implementation of neural style transfer: https://github.com/titu1994/Neural-Style-Transfer
- Geometric patterns are obtained from artist Rebecca Blair: https://rebeccablairart.tumblr.com/
Acknowledgements
This work would not have been possible without the GPU access provided by PARC and valuable feedback from PARC researchers.