image-to-image translation

Experiments in image-to-image translation with pix2pixHD. The goal/hypothesis is to convert face sketches into synthesized, stylistically realistic images.

For this model I am not training with image labels; the input training set consists only of sketch lines, with the intention of supporting straightforward drawing input. For training, I used the first 6,000 images at 512×512 resolution, as a paired sketch + photo set.
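The paired data preparation can be sketched roughly as follows. This is a hypothetical helper, not the original pipeline: it assumes pix2pixHD's aligned-dataset convention of `train_A` (input domain, here sketches) and `train_B` (target domain, here photos) used when training without labels, and that sketch/photo pairs share filenames. All names and paths are illustrative.

```python
import shutil
from pathlib import Path

def build_paired_dataset(sketch_dir, photo_dir, out_root, limit=6000):
    """Copy the first `limit` sketch/photo pairs into a pix2pixHD-style
    train_A / train_B layout. Pairs are matched by filename; sketches
    without a matching photo are skipped. Returns the number of pairs copied."""
    out_a = Path(out_root) / "train_A"   # input sketches
    out_b = Path(out_root) / "train_B"   # target photos
    out_a.mkdir(parents=True, exist_ok=True)
    out_b.mkdir(parents=True, exist_ok=True)

    copied = 0
    for sketch in sorted(Path(sketch_dir).iterdir())[:limit]:
        photo = Path(photo_dir) / sketch.name
        if not photo.exists():           # skip unpaired files
            continue
        shutil.copy(sketch, out_a / sketch.name)
        shutil.copy(photo, out_b / photo.name)
        copied += 1
    return copied
```

For example, `build_paired_dataset("sketches", "photos", "datasets/faces")` would stage the first 6,000 matched pairs for training.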

To prototype an interactive ML web app, I used Streamlit for the front end, Colab as the backend, and ngrok to expose a temporary public URL. See the video documentation of the web app below.
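The prototype wiring above can be sketched as a launch-configuration fragment. The file name `app.py` is an assumption, as is the use of the ngrok CLI; Streamlit serves on port 8501 by default.

```shell
# On the Colab backend: install and launch the Streamlit front end
# (app.py is a placeholder name for the Streamlit script)
pip install streamlit
streamlit run app.py --server.port 8501 &

# Expose the local port through a temporary public ngrok URL
ngrok http 8501
```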

Result with Face Dataset

Input with color masks → Output
Input (Egon Schiele) → Output

Result with Shoes Dataset

Sketch input → Output (three examples)