By Laura Hunt Miller
In the first two parts of this series, I talked about using AI for low-stakes tasks and as a thinking space. There’s a meaningful shift, however, when AI moves from helping you internalize an idea to helping you externalize one.
Once words turn into images, or rough thoughts turn into finished documents, the outputs stop living quietly in your head and enter the real world. This is where questions of ethics begin to surface.
In the context of AI, ethics has less to do with what the technology is capable of, and more to do with how we govern what we allow it to contribute on our behalf, and how those contributions affect others.
Before we dive into ethics, however, let’s review how AI can help us make images and documents, and where its limits lie.
Yep, that is an AI-generated image. Just for you.
Most people experimenting with AI-generated images are using free or low-cost tools. At first these tools feel quite expansive when all you have to do is ask for an image, and voilà, it appears. But free versions typically come with constraints such as:
- limited number of image generations per day
- lower image resolution
- fewer controls over style and revision
- stricter safety filters
Those limits aren’t just about monetization. They’re also part of how companies slow misuse, manage system strain, and identify where problems arise, like coding errors.
A more humorous (or frustrating) reminder of AI’s limits appears when you ask for a complex image with too many variables or vague instructions, and end up with a five-armed person. Remember, AI is a pattern assembler without true understanding, not an art director. When constraints are unclear, results can get strange fast. This quirk is amusing in images, but can be more consequential in documents.
Hm, I wonder what is wrong with this image…
When it comes to document creation, a common misconception is that AI models are constantly scanning all available content on the internet in real time. In reality, most models are trained on large datasets that include older material and are updated periodically rather than continuously. Depending on the system, that core knowledge may lag months behind current events, which is part of why AI can be very confidently wrong sometimes.
If you want up-to-date information, you have to explicitly ask for it. And even then, AI can only access so many sources. Paywalled databases, proprietary research, and private records are typically off-limits.
This is why AI should never be treated as a substitute for real research. But it can be a very effective data aggregator and a tool for revealing gaps in logic, assumptions, or missing information: more a research compass than an academic authority.
I personally like to use AI as a thought whiteboard, as discussed in the last article, then fall back on skills I learned in school: doing my own independent research and producing an outline and a draft before feeding my work back into AI to edit for grammar and logical flow. A singular asset to one who tends to ramble.
I keep my wording and tone, and make sure AI does not remove any information or points I want to convey, as it has a tendency to do when editing. When I am done, I have work that is 100% my thoughts, with about 10–15% help in organizing and expressing them, like having a human editor by my side, 24/7.
As for AI image generation, I have been trained in graphic and studio art, but when writing I am interested in quickly pairing content with visual support rather than producing finished art for each piece. This is not a situation in which I would pay another artist to do the work either, but rather something that allows me to get words and images on the page, then give my kids a bath and take care of life’s other to-dos.
Another fun example of image generation not quite living up to the hype, after being given the prompt to create an image from the previous paragraph.
Do I have to edit a lot of AI images in Photoshop to make them really work for me? Yes, yes I often do. But it is still far faster and more effective than creating images from scratch just so one person can say, “oh yeah, that image does represent what I just read.”
AI image generation has quickly come under scrutiny however, as selfish people misuse it to cause harm. AI can generate convincing likenesses that resemble real people, bodies, and scenes. Used responsibly, this can be playful, illustrative, or creatively useful. Used irresponsibly, it can be deeply violating.
Harm doesn’t require physical contact to be real. Because the technology is so new, non-consensual image generation, especially of sexualized or alarming images, has so far been difficult to assign consequences to. Exploitation like this is not new; the technology just lowered the barrier for many bad actors to access it.
The concerns around AI-generated writing are typically less disturbing, but no less real. Students use AI to shortcut assignments. Employees submit AI-generated drafts as original work. Lines blur between assistance and substitution.
The ethical question isn’t “did AI touch this?” It’s “did the creator still do the thinking?”
We’ve navigated similar questions before. Calculators didn’t eliminate math education. Spellcheck didn’t end writing. Templates didn’t destroy learning. But skipping the learning process altogether has consequences, whether AI is involved or not.
AI is a tool meant to support thinking and labor, not replace it with hollow results.
If AI helps you organize, explore, refine, or express ideas you genuinely understand, that’s assistance. If it produces work meant to demonstrate learning you didn’t do, that’s misrepresentation, regardless of the results. (Yes kids, that is what most schools would call cheating.)
It’s easy to see why some push back on this distinction. Teachers often see assignments as necessary steps students must climb to reach their intellectual potential. Students, on the other hand, may experience some of that work as irrelevant, repetitive, or disconnected from their actual abilities. And sometimes, they’re not wrong.
And there are plenty of jobs with mundane, repetitive work where creative authorship hardly seems the concern of an employer. But bypassing the process doesn’t challenge the system or a dysfunctional work environment; it just avoids it. Using AI to substitute for learning or work isn’t outsmarting anyone; it’s opting out of the very skills the work was meant to build.
Creation is closely tied to identity. Words and images carry trust. When authorship becomes unclear, or is quietly replaced altogether, that erodes trust. People lose confidence not just in the work itself, but in the person presenting it. And that kind of blemish on your character is hard to shake.
What AI thinks guilt looks like.
Feeling worried about people or students misusing AI doesn’t mean you have to be anti-technology. It just means you know there will be misuses that slip through the cracks. But fear of drawbacks doesn’t require dismissing the good these tools can provide.
Once again, AI doesn’t remove responsibility or human effort; it just creates a new creative process that people must understand and take responsibility for. So rather than trying to outlaw AI entirely, here are a few guiding principles:
- Don’t generate or share images of real people without consent.
- Don’t generate harmful or misleading images intended to cloud truth and trust.
- Don’t claim AI images as your own self-made creations from scratch.
- Don’t outsource work meant to teach you something.
- Don’t treat AI output as neutral or objective by default when using it for your own work.
- Don’t publish what you wouldn’t defend as your own decision or point of view.
Used thoughtfully, AI can amplify creativity and clarity. Used carelessly or selfishly, it can amplify harm just as efficiently.
AI Homework Time! Ok, if the ethics talk didn’t scare you off, here are a few things you can try to familiarize yourself with AI image and document creation.
Exercise 1: Do you have kids or work buddies that need a little pick-me-up? Take a picture of yourself, then ask AI to create a cartoon version of yourself delivering some sort of cheesy slogan, like “I love you to the moon and back,” or “Hang in there,” complete with a kitten hanging on a branch if you like. You have permission to ask AI to make you look thinner.
Exercise 2: Pretend you need to deliver a short report to someone, for example, “Reasons Care Bears are as Tough as G.I. Joes,” or perhaps something more applicable, like an argument for a workplace policy you would like to change. Tell AI your thoughts and any relevant facts, then ask it to write a short persuasive piece. Edit it to fit your own voice, remove anything that doesn’t fit, and share it with a friend for funsies.
If either exercise feels helpful or fun, keep experimenting. If not, feel free to keep those jam sessions between you and your sketchbook or Steam account of choice.
In the next piece, we’ll move from individual use toward industrial work and AI as infrastructure, and the challenges and questions those issues raise.