Generative AI Investigations 2023
By Neil Blevins
Created On: May 8th 2022
Updated On: July 25th 2024
Software: Various

Back in 2022 and 2023, I did some investigations into using Generative AI in my art-making process. While I am interested in the possibilities of the technology for the future, I feel the current technology isn't being pushed in the correct direction, and so I have discontinued my exploration until a number of major issues have been sorted through (such as the use of copyrighted imagery in the training datasets). Below is a set of articles, tests and tutorials I made during that year-long period that I feel still have value, exploring some of the ways AI could potentially be used to enhance the artist's workflow. My hope is to see the tech change course in the coming years, either voluntarily or through regulation, at which point I'll give it another try.

Lesson 1:
AI Artwork, The Road Ahead


If you've been following social media these last few months, you've probably come across a ton of artwork generated by Artificial Intelligence. AI can take two different images and mix them in surprising ways, shade a line drawing, or even generate artwork from a line of descriptive text you provide. Tons of artists are starting to play with this sort of software, including myself, so I felt now was a good time to go over some of what these applications can do and have a brief discussion of where this may all be leading us.

Note: this is not a lesson on how to use a particular piece of AI software; it's about my experiments with these tools, giving you some idea of the sorts of things you can achieve with the techniques.



Lesson 2:
How To Use AI In Your Photobashing Process

AI (Artificial Intelligence) artwork has been a huge subject of experimentation and debate these last few months. Whether this is a fad or the future, who knows. But if we are going to use AI, my goal is to have the AI assist me rather than replace me; I want to find the balance between my own artistic ideas and style and those of the algorithm. And while AI continues to evolve, and workflows will evolve along with it, I wanted to share my current attempts at finding that balance.

As a visual artist, I've been looking at AI tools and trying to decide how I could use their strengths without reducing my contribution to a few well-chosen keywords. So this tutorial discusses the methods I've been using to incorporate AI artwork into some of my paintings.

Note: This isn't a tutorial showing you how to use any specific piece of software; it talks about the process at a higher level, and can be applied to pretty much any software you'd like.



Lesson 3:
Layer Breakdown: The Final Push

As many of you know, I've been playing a lot recently with AI-generated artwork, especially creating robot designs. However, one thing I've been unable to achieve in Midjourney (the AI I've currently been using) is a 3/4 view of the robot designs; all of them end up being frontal views only. This is likely because the images the AI trained on either weren't 3/4 views, or they were but that particular attribute wasn't labeled, so the AI hasn't learned what a 3/4 perspective is. While I'm sure this will be fixed in the future, I've been playing with giving the robots depth by projecting the artwork onto simple 3d geometry. So this tutorial discusses the technique I used to make my painting "The Final Push".
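For readers who like to see the idea in code, here's a rough sketch of the camera-projection math that underlies the trick: each vertex of the simple stand-in geometry is projected back through the camera to find where it lands on the painting, and that position becomes its UV coordinate, so the 2D artwork "sticks" to the 3D surface. This is just an illustrative numpy snippet with made-up camera numbers and vertices, not the actual 3D-software workflow shown in the video.

# Minimal illustration of camera projection: assign UVs to vertices by
# projecting them back into the painting through a pinhole camera.
# The camera parameters and vertex list are hypothetical examples.
import numpy as np

def project_to_uv(vertices, focal_length, image_width, image_height):
    """Project 3D points (camera space, -Z forward) to normalized 0..1 UVs."""
    verts = np.asarray(vertices, dtype=float)
    x, y, z = verts[:, 0], verts[:, 1], verts[:, 2]
    # Perspective divide gives the pixel position each vertex projects to.
    px = focal_length * x / -z + image_width / 2.0
    py = focal_length * y / -z + image_height / 2.0
    # Normalize to UV space (V flipped so the image isn't upside down).
    u = px / image_width
    v = 1.0 - py / image_height
    return np.stack([u, v], axis=1)

# Example: four corners of a box-like robot torso sitting 5 units from the camera.
verts_camera_space = [(-1, -1, -5), (1, -1, -5), (1, 1, -5), (-1, 1, -5)]
uvs = project_to_uv(verts_camera_space, focal_length=800,
                    image_width=1600, image_height=1600)
print(uvs)  # each vertex now knows where to sample the robot painting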



Lesson 4:
Keyword Or Placebo? Testing Midjourney Prompts

With the current crop of AI Art Generators, your two major inputs are image prompts and text prompts. This tutorial focuses on text prompts in Midjourney, and asks the question: do all of the keywords people tend to add to their prompts really affect the final image? Or are they not contributing at all, and maybe even confusing the AI?

This tutorial shows some experiments I did with the Midjourney AI, trying to figure out which keywords are the most and least useful, and I hope presenting them gives you a better idea of the best way to turn your ideas into images.
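If you want to run a similar test yourself, here's a rough sketch of how you might build the prompt variants so each keyword can be judged against the same baseline. The base prompt and keyword list below are just placeholder examples, and since Midjourney runs through Discord, the generated prompts still get pasted in by hand.

# Build one baseline prompt plus variants that each add a single candidate
# keyword, so side-by-side comparisons isolate that keyword's effect.
base_prompt = "a giant robot walking through a desert canyon"

candidate_keywords = [
    "highly detailed",
    "octane render",
    "trending on artstation",
    "8k",
    "cinematic lighting",
]

def build_variants(base, keywords):
    """Return (label, prompt) pairs: the baseline plus one keyword added at a time."""
    variants = [("baseline", base)]
    for kw in keywords:
        variants.append((kw, f"{base}, {kw}"))
    return variants

for label, prompt in build_variants(base_prompt, candidate_keywords):
    print(f"{label:>25}: {prompt}")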



If you want to download the giant tables in the video that show the different keyword comparisons, they are here and here.

Lesson 5:
The 10 Things I've Learned Comparing Midjourney And DALL-E

So I recently got access to the AI Art Generator known as DALL-E 2, and after some initial playing, my next big question was how it compares and contrasts with the other major AI Art Generator I've been using, Midjourney. So I ran both systems through a battery of tests and found some interesting results. In short, each, as you'd expect, has its own advantages and disadvantages. So if you've been playing with Midjourney and are wondering how it compares to DALL-E, I've compiled the 10 things I've learned comparing the two pieces of software.



If you want to download the giant tables in the video that show the different keyword comparisons, they are here and here.

Lesson 6:
What Would Concept Artists Want From An AI Tool?

The main focus I've seen so far in the world of Generative AI artwork has been on replacing the artist. But what about tools that help an artist work faster? If the AI field could be anything a concept artist wanted it to be, what would that look like? This discussion explores some of the things I personally would like to see from an AI tool that helps our work rather than disrupting it entirely.



Lesson 7:
Making Variations Of A Design Using AI

In my last video, "What Would Concept Artists Want From An AI Tool?", I discussed the need for a feature that lets you easily make variations of one of your own pre-existing designs. Well, a couple of hours after I posted the video, the AI "Stable Diffusion" added this exact feature. So in this update video, I show off my tests using the new Initial Image and Variations feature in Stable Diffusion's DreamStudio.
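If you'd rather experiment outside of DreamStudio's web interface, here's a rough sketch of the same initial-image idea using the open-source diffusers library. The model name, file paths, prompt, and strength value are illustrative assumptions rather than what's used in the video.

# Sketch of img2img variations with Stable Diffusion via diffusers.
# Lower strength stays closer to your original design; higher strength
# lets the AI wander further from it.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from one of your own pre-existing designs (placeholder file name).
init_image = Image.open("my_robot_design.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="sleek sci-fi robot, concept art",
    image=init_image,
    strength=0.4,          # how far each variation is allowed to drift
    guidance_scale=7.5,    # how strongly the text prompt is followed
    num_images_per_prompt=4,
)

for i, img in enumerate(result.images):
    img.save(f"variation_{i}.png")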



Lesson 8:
Using AI To Make A Tileable Texture

You may remember a month or so ago I did a video called "What Would Concept Artists Want From An AI Tool?", and one feature I didn't think about at the time was tileable textures, which are especially useful if you're using 3d as part of your concepting process. But since that video's release, AIs like DALL-E and Midjourney have added the ability to do this, and I can certainly say making a texture tileable was a boring job that I have no issue seeing automated away. So here's an example of the technique using DALL-E.
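As a side note, here's a quick, generic way to sanity-check whether a generated texture really tiles: wrap the image halfway in both directions so its original borders meet in the middle, where any seam becomes obvious, and also save a 2x2 tiled preview. The file names are placeholders, and this is just a seam check I find handy, not the DALL-E workflow from the video.

# Seam check for a (supposedly) tileable texture using Pillow.
from PIL import Image, ImageChops

tex = Image.open("ai_texture.png")
w, h = tex.size

# Wrap the image by half in both directions; seams (if any) now cross the center.
offset_preview = ImageChops.offset(tex, w // 2, h // 2)
offset_preview.save("seam_check.png")

# A 2x2 tiled preview, handy for eyeballing repetition at a glance.
tiled = Image.new("RGB", (w * 2, h * 2))
for x in (0, w):
    for y in (0, h):
        tiled.paste(tex, (x, y))
tiled.save("tiled_preview.png")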



