Lesson 2: How To Use AI In Your Photobashing Process
AI (Artificial Intelligence) artwork has been a huge subject of experimentation and debate these last few months. Whether this is a fad or the future, who knows. But if we are going to use AI, my goal is to have the AI assist me rather than replace me: I want to find a balance between my own artistic ideas and style and those of the algorithm. And while AI continues to evolve, and workflows will evolve with it, I wanted to share with you my current attempts at finding that balance.
As a visual artist, I've been looking at AI tools and trying to decide how I could use their strengths without reducing my contribution to a few well-chosen keywords. So this tutorial will discuss the methods I've been using to incorporate AI artwork into some of my paintings.
Note: This isn't a tutorial showing you how to use any specific piece of software. It discusses the process at a higher level, and can be applied to pretty much any software you'd like.
Lesson 3: Layer Breakdown: The Final Push
As many of you know, I've been playing a lot recently with AI-generated artwork, especially creating robot designs. However, one thing I've been unable to achieve in Midjourney (the AI I'm currently using) is a 3/4 view of the robot designs; all of them end up being frontal views only. This is likely because the images the AI trained on either weren't in 3/4 view, or they were but that particular attribute wasn't labeled, so the AI hasn't learned what a 3/4 perspective is. While I'm sure this will be fixed in the future, I've been experimenting with giving the robots depth by projecting the artwork onto simple 3D geometry. This tutorial discusses the technique I used to make my painting "The Final Push".
Lesson 4: Keyword Or Placebo? Testing Midjourney Prompts
With the current crop of AI Art Generators, your two major inputs are image prompts and text prompts. This tutorial focuses on text prompts in Midjourney, and asks the question: do all of the keywords people tend to add to their prompts really affect the final image? Or are they actually not contributing at all, and maybe even confusing the AI?
This tutorial shows some experiments I did with the Midjourney AI to figure out which keywords are the most and least useful, and I hope that presenting them gives you a better idea of the best way to turn your ideas into images.
If you want to download the giant tables in the video that show the
different keyword comparisons, they are here and here.
Lesson 5: The 10 Things I've Learned Comparing Midjourney And DALL-E
So I recently got access to the AI Art Generator known as DALL-E 2, and after some initial playing, my next big question was how it compared and contrasted with the other major AI Art Generator I've been using, Midjourney. So I ran both systems through a battery of tests and found some interesting results. In short, each, as you'd expect, has its own advantages and disadvantages. So if you've been playing with Midjourney and are wondering how it compares to DALL-E, I've compiled the 10 things I've learned comparing the two programs.
If you want to download the giant tables in the video that show the different keyword comparisons, they are here and here.
Lesson 6: What Would Concept Artists Want From An AI Tool?
The main focus I've seen so far in the world of Generative AI Artwork is how to replace the artist. But what about tools that help an artist work faster? If the AI field could be anything a concept artist wanted it to be, what would that look like? This discussion will explore some of the things I personally would like to see from an AI tool that helps our work, as opposed to disrupting it entirely.