AI-generated image created using a prompt I designed to explore the themes of style, authorship, and appropriation discussed in this post.
There's been a lot of talk lately about the “Ghibli-fication” of images—people taking personal photos, uploading them into image generators, and getting back something that looks like a frame from a Studio Ghibli film. As the quality of these tools improves, the conversation has heated up. Understandably so.
There are a lot of different questions tangled up here: legal ones, policy ones, ethical ones. I’m not entirely sure where I land on all of them, but I do think we need to separate them out and be clear about what we’re actually talking about.
Let’s start with the legal side.
There’s a reason that style is not copyrightable, and I think that’s a good thing. Copyright protects specific expressions—particular images, phrases, pieces of music—not general style or aesthetic. You can’t copyright “the Ghibli look” any more than you can copyright impressionism, noir lighting, or the way Wes Anderson frames a shot. Trying to lock down a style as intellectual property would make it difficult for people to learn from or build on earlier work. It would be both unworkable and creatively stifling.
This also applies to the ongoing debate about training data. Even if a generative tool were trained on a particular artist's work, there's no realistic way to measure how much that artist influenced the output. These models are trained on billions of data points. Even if you could isolate the impact of one artist, what would attribution or compensation look like? That infrastructure doesn't exist—and frankly, if it were going to exist, it should've been built before the current generation of tools was unleashed. We're past that point now.
Instead, we’re left with something that—for now—looks like fair use. If someone creates something in the style of Ghibli and doesn’t pretend it’s an actual Ghibli film or a lost Miyazaki sketch, there’s nothing illegal about that. They’re not stealing anything. They’re using a tool, applying a recognizable style, and putting their name on the result.
The ethical side is murkier.
You can imagine bad uses. Someone could deliberately mislead people, try to pass off AI-generated work as something it’s not, or use a familiar style to boost credibility they haven’t earned. Those are ethical concerns, and real ones.
But I don’t think most people are doing that. If you’re experimenting, exploring, learning, or just trying to make something interesting—and you’re being honest about what you’re doing—I don’t see that as unethical. Style is not the same thing as authorship. Mimicking a look isn’t the same thing as claiming ownership of someone’s work.
This often comes down to a question of appropriation versus appreciation. I don’t think we’re going to get very far trying to treat appropriation in a legal sense here. If we go down that road, almost anything could be challenged. Someone could say, “You copied my way of drawing trees,” or “That line work looks like mine.” And then what? We’ve closed off entire avenues of artistic growth, experimentation, and exchange.
Now, there’s a related concern that’s worth naming. In most conversations about AI and authorship, the worry is that someone will pass AI-generated work off as their own. But in the Ghibli-style case, the issue is reversed: someone might take their own AI-generated work and try to pass it off as if it came from someone else—a lost Miyazaki sketch, an unreleased Studio Ghibli concept.
That’s a different kind of problem. It’s not plagiarism—it’s forgery.
And forgery has been around forever. People have painted in the style of Van Gogh or Rembrandt and tried to sell their work as the real thing. The issue there isn’t the paintbrush or the canvas—it’s the deception. The same logic applies here. AI tools make it easier to create convincing imitations, which means we’re going to see more of them. Someone will, at some point, pass off their own AI-generated work as something from a famous artist or studio. That’s going to happen. But again, it’s not the tool. It’s the intent to deceive that crosses the ethical and legal line.
We don’t blame the software for the forgery any more than we blame oil paint for art fraud. We look at how the tools are used, and whether someone is trying to mislead others for personal gain—whether that’s money, attention, or credibility. That’s where the line is.
Context also matters. In education, the rules are different. We’re measuring student work—we have to know whether they’re developing skills and understanding. There’s a responsibility there. But in the professional and creative world, it’s less clear-cut. I use AI tools regularly, sometimes to help generate visuals, sometimes to tighten writing, sometimes just to get ideas moving. I don’t always say that upfront. I don’t deny it if someone asks, but I also don’t think it’s necessary to spell out every tool I used in every project.
That’s partly because I believe if I’m responsible for the final product, then it’s mine. If I stand behind the ideas and shape the final message, I own it. If it’s good, people can engage with the substance. If it’s bad, that’s on me.
Still, I recognize that we’re in a period where community norms are unsettled. Disclosure, transparency, intent—these are all part of the conversation, and I want to be in it. I don’t think we need rigid rules, but I do think we need people to model thoughtful use. Especially when our work influences others or shapes decisions, a little transparency can go a long way.
To me, the ethical line is pretty straightforward: don’t deceive people. If you’re not trying to mislead anyone—if you’re not pretending to be someone else or passing someone else’s work off as your own—you’re probably on solid ground.
One last point that often gets missed:
Remix is a natural part of creative work, but AI shifts its scale and visibility.
Artists often develop their voice through imitation, transformation, and experimentation with the work of others. Generative AI tools complicate this by enabling those processes to happen much faster and making them accessible to a much wider group of people, potentially to nearly everyone. That shift raises real questions about effort, authorship, and originality.
But I don't think the presence of these tools automatically makes AI-driven remix unethical or illegitimate. It just means we're in new territory. Context, intent, and transparency matter more than ever. The tools aren't making the choices—we are. The responsibility still lies with the person using them.
As always, I’m thinking out loud here. If you’ve got thoughts, disagreements, or good examples, I’d love to hear them. We’re building these norms in real time.