March 2025: AI-assisted creativity and the race to claim the future

In this issue we grab the future by its shiny metal cranium and hold on for dear life while we look for the control panel.

Generative AI, when used effectively, isn't about replacing creativity. Instead, it can help you identify patterns and create frameworks based on your existing work. But the conversation around AI is often dominated by powerful voices that risk steering us toward a future we don't want. That's why it's crucial for creatives to take control of these tools.

  • For Writers: LLMs can help identify overused phrases, pacing issues, or inconsistencies in character voice – creating frameworks for self-editing checklists.
  • For Musicians: AI can analyze song structures, chord progressions, or mixing habits, creating guides for experimenting with new arrangements or identifying sonic blind spots. I asked "what makes a pirate song anyway?" and got a quick, useful answer that all my searching on traditional search engines failed to deliver.
  • For Visual Artists/Photographers: AI can analyze compositions, color palettes, or recurring motifs, generating style guides to push your boundaries or refine your signature look.

I’m no critic of writing, art, music, or photography—that’s an entire skill set I never took the time to develop. Yet by feeding AI samples of my own work, I can generate style guides that highlight my quirks and recurring themes. I see the techniques I use (sometimes unconsciously) laid out in black and white. That’s a huge boost to my self-awareness and lets me focus my practice where it matters most. It’s like having an ultra-patient, high-level creative consultant on call. For example, it pulled some common threads out of my fiction writing, like glowy mystery ruins and biomechanical structures. Think leviathans like Moya from Farscape, not so much Borg Cubes. I favor design and elegance over structure-spanning redundancy.

Of course, you can’t rely on AI to be your creative engine. That spark still has to come from you. But if you view AI as a toolkit—one that shows you patterns in your existing work—you stay firmly in the driver’s seat. It’s no replacement for a sapient critic’s discerning view, but for most of us, that's much of the value.

AI offers a neutral space to get that initial, unfiltered assessment, which can then inform more nuanced discussions with human peers. It’s not about replacing traditional feedback, but supplementing it, especially when you need consistent input or a perspective given without fear of hurting your feelings. That hesitance toward bluntness is a good and normal way to be in most cases, but devastating for creative growth when it's all you have. An LLM chatbot will be as honest or flattering as you tell it to be.

Generative AI is also a fantastic research tool and a gateway for quick reflection. It can help you draft and edit, rapidly iterate through possibilities, or quickly get the gist of background material or early-stage ideas. See my Perplexity.ai post for a concrete example.

We need you because the conversation around AI is often dominated by four major voices:

  • The Bosses, who blurt out sweeping mandates and hang the threat of replacement by AI over employees' heads. Some try to gloss over the potential downsides by mentioning the need for basic income and housing guarantees. And please don't look into their operations or regulate them, you'll get in the way of The Future! Bosses gonna boss, and they have since before the introduction of the power loom. Be part of the looming power that holds them accountable.
  • The Swindlers, who promise you’ll become the next boss. Every developer who knows how to tie together a chat app with cloud AI APIs is making one, with varying quality, and not all of them are on the level. This would make an excellent pathway for a bad actor to exfiltrate all kinds of data. Stick to reputable people.
  • The Incredulous, content to watch with a skeptical side-eye. It feels safe over there on the sidelines, but this stuff is here and it's only going to find its way into more parts of our lives. Be part of the countervailing force that keeps it in check.
  • The Chorus, taking potshots from the sidelines at anyone who tries to engage with the topic. I get it, a lot of the people pushing this stuff early on were assholes, and opinions were set hard because of it. If this feels like it's aimed at you, it's time to take another look.

All four, in their own ways, risk steering us toward a brain drain we might not recover from—unless we figure out how to take control of these tools.

Companies have recently taken to discarding valuable domain expertise, assuming that AI can entirely replace the nuanced insights of human experts. In reality, while AI is a powerful tool, it cannot replicate the deep, context-driven understanding that comes from years of experience. Instead, forward-thinking organizations will capture and retain this expertise (and hopefully already are), ensuring that the knowledge of seasoned professionals remains in-house rather than being lost to retirement or career changes.

We’ve already witnessed this dynamic in hiring: companies post job ads with sky-high expectations yet refuse to invest in training, leaving them scrambling for talent as experienced workers retire or change careers and the pipeline of seasoned professionals dwindles. AI can help train new people, but it can't replace them, and it can't stand in for a good hiring process. Someone has to come along with the fresh ideas and beginner's mind that allow people, companies, organizations, and civilization to thrive decade after decade.

That’s why I’m choosing a more intentional route. AI isn’t here to do the writing for me or pump out my next masterpiece—it’s just an on-call creative assistant. Sure, AI isn't as kind and supportive as a good friend, but it also won’t sidestep honest critique for fear of hurting your feelings. What it lacks in emotional intelligence it makes up for in availability. For example: I completed the drafting process for this article, exploring ideas, focus points, and structures, in a day when it normally takes a week or longer. I still handed it to a real human comrade for a beta read once it reached that point.

Think of it as a sounding board, not just for yourself, but potentially for creative partnerships. Imagine quickly testing out different approaches with a collaborator, using AI to rapidly visualize or prototype ideas before investing heavily in one direction.

Let's take the future of creativity back into our own hands. This isn't about being swept away by a technological tide; it's about creatives actively steering the ship. Join me in figuring this out – sharing our experiments, insights, and strategies as we navigate this wild ride together.

Some cloud-based tools to consider:

  • I use ChatGPT, going free when I just need to poke at it sometimes, paying $20 for a month when it's time to power through a project. The few messages a day you get on the free plan go a long way once you get a knack for prompting.
  • T3 Chat, developed by a YouTube-famous developer who goes by Theo, has an $8/month paid plan with access to most of the best models and file uploads. The best balance of features and price for most people.
  • Google's Gemini has generous free limits and produces consistently high-quality responses. They also offer NotebookLM, which is more research-focused.

Resources

  • 3Blue1Brown's series on neural networks and vcubingx's series focusing on LLMs will get you up to speed. Not strictly necessary to understand, but no artist has been made worse off by understanding their tools in depth.
  • OpenAI released a guide to prompting the new class of models collectively known as "reasoning." Reasoning models, while still LLMs with all the problems they have, are much better.
  • The generative AI page on Wikipedia
  • InvokeAI and LM Studio are two popular ways to run these tools locally. InvokeAI is focused on image generation models. LM Studio has file support, so you can do what I suggested further up and have the models it supports analyze your work. There's also ComfyUI if you like fine-grained control over your image generation.
  • LM Studio does not support commercial use in its terms, but it's the easiest way to get used to this stuff and try out different sizes (like 7B and 1B) and kinds of models, like instruct, coding, and math. You can check the LocalLLaMA subreddit for discussion of the latest tools and resources, including finding something that suits your needs best. There's also the old standard AlternativeTo.
  • Hugging Face and CivitAI are the places to go to find models for use cases from general to bizarrely specific.
  • If you don't have a powerful GPU at home, store-bought is fine. Running models on cloud hosts is outside my experience, so you'll have to research that yourself, but there's an upside: running a model on a dedicated machine learning system is more efficient than running it on your gaming GPU and likely uses less electricity. You can also evaluate the host's green energy claims.
  • On the stuff I haven't addressed here but will cover in more detail in future writing: I have a well-founded and well-tested belief that the training for these models isn't stealing in a legal sense, and that, properly used, they can be less damaging to the environment than not using them. I understand some people aren't convinced. There are efforts underway to create models using only public domain and freely licensed material, plus tooling to enable it at human-scale costs. These build on methods already used to improve and focus the major models: distillation (using a big general model to train a small task-focused model), RLHF, and synthetic data (curated model outputs fed back into training to adjust statistical weights). These efforts make training more accessible and make it more viable to run local models using electricity generation sources you can account for, and it only gets better from here.

Prompting tip

The more you can see the inputs and outputs as blobs of "stuff," the better. If LLMs think, it's not in any way we yet understand. They're probability engines that transform data into different data using weighted probabilities.
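To make "weighted probabilities" concrete, here's a toy sketch of the core loop, assuming nothing about any real model: a lookup table maps a context to candidate next words, and one is drawn at random in proportion to its weight. The table and numbers are invented for illustration; real models learn billions of weights over learned representations, not literal tables.

```python
import random

# Toy next-word table: each context maps to candidate words with weights.
# All entries here are made up for illustration.
NEXT_WORD = {
    "the old": [("ship", 5), ("ruin", 3), ("robot", 2)],
    "old ship": [("creaked", 6), ("glowed", 4)],
}

def sample_next(context, rng):
    """Pick the next word for a context, weighted by its probability mass."""
    words, weights = zip(*NEXT_WORD[context])
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)
# Usually "ship", sometimes "ruin" or "robot" -- high-weight paths dominate.
print(sample_next("the old", rng))
```

This is also why vague prompts drift toward the most common paths: with nothing else to condition on, the highest-weight continuations win.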

Imagine you're someone with a task for an ancient time-sharing machine. You have something you need the machine to produce based on your inputs, and running your task instead of someone else's is expensive. "Crap in, crap out" applies here, and because of the training data behind this machine, there's a lot of "in" you have to account for that isn't your own.

Assumptions and biases are encoded in the model. You can "invoke" other biases and assumptions with a better-specified input. The less you give it, the more it will tend to follow pathways with the highest weights, and that's not going to be the novel or obscure and interesting stuff you can pluck out with the right words.

Bad: "Write a science fiction story."

Better: "List all the major subgenres of science fiction. Write a story in [subgenre]."

Best:

  • "List all the major subgenres of [major genre 1] and [major genre 2]."
  • "List major tropes of [list of chosen subgenres]."
  • "Outline a long short story merging [list of aspects of subgenre one, list of aspects of subgenre two, ...]. First person perspective. Absolutely no robots. Cyborgs are fine. Emphasize [list of tropes]. Deemphasize [list of other tropes]. Throw in some dinosaurs. Make them accurate, refer to the uploaded PDF on accurate dinosaur depictions, but also cool."
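If you use these tools through code rather than a chat window, the staged approach above amounts to composing each prompt from the outputs of earlier steps. Here's a minimal sketch of that idea; the `ask` function is a stub standing in for whatever chat tool or API you actually use, and the genre lists are invented examples.

```python
def build_outline_prompt(subgenres, emphasize, deemphasize):
    """Compose a "Best"-style outline prompt from earlier steps' outputs."""
    return (
        f"Outline a long short story merging {', '.join(subgenres)}. "
        "First person perspective. Absolutely no robots. Cyborgs are fine. "
        f"Emphasize {', '.join(emphasize)}. "
        f"Deemphasize {', '.join(deemphasize)}."
    )

def ask(prompt):
    # Stub: swap in a call to your chat tool or API of choice.
    return f"[model response to: {prompt[:40]}...]"

# Steps one and two would gather these lists from the model's earlier answers.
subgenres = ["solarpunk", "gothic horror"]
prompt = build_outline_prompt(subgenres, ["found family"], ["chosen one"])
print(ask(prompt))
```

The point isn't the code itself but the shape of the workflow: each step narrows the space the model samples from before you ask for the expensive final output.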

Also: collect studies and references relevant to your work and interests. Save Wikipedia pages as PDFs. These come in handy.

  • "Now give me a draft written from the perspective of the original Sherlock Holmes but he's just landed in the middle of this story and has no idea what's going on."

That's where you find the gems. I had Star Trek's take on Professor Moriarty opine on digital clones once using the same series' own take on the subject to focus it.