How We Can Be Responsible Stewards of AI
KEY POINTS
- We are all teachers of AI—every digital action trains the future.
- Through our individual actions, we contribute to what AI will become, and this is an enormous power.
- Building better AI begins by building better versions of ourselves.
In my last post, I argued that AI systems function as mirrors to humanity. They reflect us back at ourselves, and they reflect all that we are – our weaknesses as well as our strengths, our shadows just as much as our light.
The reason AI functions as a mirror is that AI is trained on us – on vast datasets collected from human outputs. In a very real sense, we are all teachers now, because potentially any of our digital activities and outputs could be used to train AI systems.
And as we all know, teachers have an enormous power for making things better … or worse. As the American historian Henry Adams put it: “A teacher affects eternity. He can never tell where his influence stops.”
Now that we have that power, we had better think about how to use it wisely. In this post, I will draw on my recently published book, Transcend: Unlocking Humanity in the Age of AI, to show how we can be responsible stewards of our current and future AI systems. In particular, we’ll look at steps we can all take to ensure that AI develops in a safe and sane direction.
Teachers of AI
To think and act wisely in our role as teachers of AI, it’s helpful to understand how AI systems, and specifically large language models (LLMs), work. Many other varieties of AI are important and in active use, but LLMs are the systems over which we, as everyday users, have the most agency.
So how do LLMs actually work? When we type a question into ChatGPT or ask Claude to turn bullet points into a report, how do they manage it? The answer lies in patterns – patterns found in human language. LLMs are trained on enormous datasets made up of human-created content: books, articles, websites, reports, social media posts, and more. From these, the models learn statistical relationships between words and phrases. They don’t understand meaning in the way humans do, but they become highly skilled at predicting what kinds of language typically follow a given prompt.
When we ask an LLM to generate something – a summary, a story, a recommendation – it draws on these learned patterns, remixing and reassembling language in a way that seems fluent and relevant. It doesn’t “know” what a good report looks like, but it’s been exposed to countless examples, and it’s learned the structure and style from them.
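To make this concrete, here is a minimal, purely illustrative sketch in Python: a toy bigram model that counts which words follow which in a tiny made-up corpus and then “predicts” the most frequent continuation. Real LLMs use deep neural networks over subword tokens at enormous scale, not simple word counts, but the core idea – predicting likely continuations from statistical patterns in human text – is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast human-created datasets real LLMs train on.
corpus = (
    "the report was clear and concise . "
    "the report was thorough . "
    "the summary was clear ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word and its probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("was"))  # ('clear', 0.666...) – learned purely from frequency
```

Notice that the model never “understands” reports or summaries; it has simply observed that “clear” follows “was” more often than anything else. Scale that principle up by many orders of magnitude and you have the essence of how an LLM produces fluent, relevant-seeming text.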
Crucially, our interactions with AI systems can still influence their future development – but not in the way many people imagine. Most models don’t learn on the fly from individual users. They don’t “remember” conversations in the moment unless explicitly designed to, and they’re not being retrained in real time. However, companies often collect aggregated user inputs – including prompts, completions, and feedback – to help fine-tune future versions or to inform supervised updates. If you give a thumbs-up or thumbs-down, or if you opt in to share your data, you may be helping steer how future models behave. In that sense, our interactions aren’t just uses of the system – they’re contributions to its evolution.
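To illustrate this idea in code, here is a hypothetical sketch, assuming a simple feedback schema (`FeedbackEvent` and `build_preference_dataset` are invented names for illustration, not any vendor’s actual API): opted-in thumbs-up and thumbs-down signals are filtered and collected into a labeled dataset that could inform preference-based fine-tuning of a future model. Real pipelines are far more involved, but the principle holds – your feedback can become training signal.

```python
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    """Hypothetical record of one user interaction; real platforms define their own schemas."""
    prompt: str
    completion: str
    rating: int         # +1 for thumbs-up, -1 for thumbs-down
    user_opted_in: bool

def build_preference_dataset(events):
    """Keep only opted-in feedback and turn it into labeled training examples.

    A sketch of the general idea behind preference-based fine-tuning
    (e.g., RLHF-style pipelines); actual systems are far more complex.
    """
    return [
        {"prompt": e.prompt, "completion": e.completion, "label": e.rating}
        for e in events
        if e.user_opted_in
    ]

events = [
    FeedbackEvent("Summarize this report", "Here is a concise summary...", +1, True),
    FeedbackEvent("Summarize this report", "I cannot help with that.", -1, True),
    FeedbackEvent("A private question", "...", +1, False),  # excluded: user did not opt in
]

print(len(build_preference_dataset(events)))  # 2 – only opted-in feedback is used
```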
Responsibility for Shaping AI
Let’s be clear – systemic issues require systemic solutions. The responsibility for shaping AI isn’t purely an individual responsibility. Companies developing AI models and applications have significant responsibilities, and so too do governments and regulatory bodies.
But let’s also be clear about something else – this doesn’t mean that we don’t have a role to play as individuals. We do, and it’s a crucial one. Through our individual actions, we contribute to what AI will become, and this is an enormous power. And with that power comes responsibility. Given that AI learns from us, given that we shape what it will become, we must acknowledge our power and we must be intentional about using it responsibly.
What Can We Do?
To navigate these responsibilities effectively, we need to remember that AI stands as a mirror to humanity. And MIRROR, by fortunate coincidence, also provides a helpful acronym for the practical actions we can take to guide the future trajectory of AI. So, what does MIRROR stand for?
- Mindfulness
- Impact
- Responsible Consumption
- Report Problems
- Ongoing Education
- Reflective Growth
Mindfulness
In an earlier post, I argued that we need to cultivate digital mindfulness. This carries over to our role in shaping AI. In this context, mindfulness means maintaining a constant awareness that our digital activities will shape the nature of AI and thus the future of the world. A mindless tweet, an exaggerated review, or a hasty comment might seem inconsequential in the moment. But collectively, this is the raw material that AI learns from. Whenever we put our words out into the world in a form that will be preserved, we may be inscribing them into the training data of future AI models.
Action Steps:
- Practice conscious content creation – Before posting, ask: “Is this something I want AI to learn from?”
- Be aware of your digital footprint – Regularly review privacy settings and understand what data you are sharing with AI systems
- Choose engagement thoughtfully – Be selective about which AI systems you interact with and how you engage with them
Example: Before posting a negative restaurant review after a disappointing meal, you pause and think, “If AI learns from this, I want it to understand the difference between helpful criticism and emotional venting.” Instead of lashing out, you focus your feedback on specific details – like long wait times or cold food – modeling the kind of clarity you’d want AI to reflect.
Impact
If mindfulness is primarily about being aware of the effects of our actions, impact is about deepening our understanding of these consequences. Understanding our impact also means recognizing that abstaining from engagement is itself a choice with consequences. When diverse voices opt out of AI interaction due to frustration or privacy concerns, the resulting systems can become skewed toward the preferences and perspectives of those who remain engaged.
Action Steps:
- Consider potential harm – Ask who might be helped or harmed by particular AI outputs or applications
- Consider downstream effects – Think about how your AI interactions might influence future development and deployment of these systems
- Reflect on collective consequences – Remember that individual actions, when multiplied across millions of users, shape how AI evolves
- Evaluate second-order impacts – Look beyond immediate results to consider how AI outputs might affect vulnerable communities or critical institutions
- Weigh long-term implications – Balance short-term convenience against the long-term effects of normalized AI use in different domains
Example: Someone signing up for a healthcare app that uses AI to flag potential health issues might hesitate before opting in to share their data. But after reflecting on how anonymized data could help improve diagnostic accuracy for underrepresented communities, they choose to participate – recognizing that individual decisions can support more equitable AI outcomes.
Responsible Consumption
In a market-driven economy, nothing speaks more loudly than money. Our consumption choices send powerful signals to the companies building AI systems. When users flock to services that prioritize ethical considerations, transparency, and user control, the industry takes note. Conversely, when we embrace AI applications without regard for their ethical implications, we incentivize development patterns that show the same disregard for our collective interests.
Responsible consumption means becoming a conscious consumer of artificial intelligence.
Action Steps:
- Research company approaches – Before adopting an AI tool, investigate the company’s stance on ethics and responsibility
- Support ethical development – Choose products from companies that demonstrate commitment to responsible AI principles
- Value transparency – Give preference to services that clearly explain how they train their systems and what data they use
- Prioritize privacy – Select tools that give you meaningful control over your personal information
- Avoid harmful applications – Decline to use AI products with obvious potential for exploitation or abuse
Example: When an AI image generator fails to disclose how it trains on human-created work, some users choose to switch to alternatives that are more transparent – especially those that compensate artists whose styles have influenced the system. These market choices send a message that ethical sourcing matters.
Report Problems
David Morrison, former Chief of the Australian Army, once said: “The standard you walk past is the standard you accept.” This is particularly important for the development of AI models. If we allow problematic outputs to pass unchecked, we contribute to a future in which such outputs become the accepted norm for AI systems. We need to be intentional and active about reporting problems so that we can help improve AI systems over time.
Action Steps:
- Use reporting tools – Familiarize yourself with the feedback mechanisms available on AI platforms you use
- Be specific – When reporting issues, clearly describe the problem and why it concerns you
- Document serious issues – For significant concerns, consider saving examples with appropriate context
- Share with oversight groups – For systemic problems, consider sharing experiences with relevant consumer protection or advocacy organizations
- Follow up when possible – If platforms offer case numbers or status updates for reports, check back on resolution
Example: When an AI assistant provides obviously biased information about a cultural topic, instead of simply moving on, you take a screenshot and submit it through the feedback form with a clear explanation of why the response was problematic, helping improve the system for everyone.
Ongoing Education
AI is evolving rapidly. Ongoing education means committing to continuous learning about AI, because the better we understand it, the better we can influence its development intentionally. Learning about AI includes technical knowledge, ethical literacy, and a conscious effort to seek out perspectives beyond our own, so that our understanding isn’t limited by our individual circumstances.
Action Steps:
- Learn the AI basics – Develop a foundational understanding of how AI systems work and what they can and cannot do
- Diversify your information sources – Seek perspectives from different disciplines, cultures, and backgrounds
- Join public conversations – Participate in discussions about AI governance, ethics, and future directions
- Share knowledge – Help others understand AI concepts and implications in accessible ways
- Stay curious – Recognize that AI is evolving rapidly and maintain an attitude of continuous learning
Example: Continuous learning doesn’t have to be overwhelming. Some people follow a podcast during their commute or set aside time once a week to read about AI trends and ethics. Even small efforts like these can build confidence and literacy – making it easier to engage with AI in informed and intentional ways.
Reflective Growth
According to a recent article in the Harvard Business Review, therapy is the number one use case for generative AI in 2025. Once we understand that AI is a mirror, this makes a lot of sense. AI systems reflect back patterns in our individual and collective behavior – and many of these patterns might otherwise remain invisible to us. This mirroring effect offers a unique opportunity for reflection and growth – a chance to see ourselves more clearly through the lens of our technological creations.
Action Steps:
- Notice your reactions – Pay attention to your responses when AI outputs surprise or concern you
- Examine revealed patterns – Consider what your interactions with AI reveal about your own habits, interests, and biases
- Identify growth edges – Leverage AI to identify weaknesses that you want to work on and strengths that you can lean into
- Cultivate self-compassion – Instead of using AI’s insights as yet another stick with which to beat yourself, approach them with kindness towards yourself and others – biases, blind spots, and other weaknesses are universal
Example: After a few weeks with a fitness app that uses AI coaching, you notice it keeps recommending high-intensity workouts – even though you prefer a gentler pace. Rather than ignoring the suggestions or blindly following them, you pause to reflect. Is the app overreaching, or are there deeper reasons behind your resistance? You realize it’s both – some real physical limits, and some inner doubts you’d never quite named.
Conclusion
AI is already changing our world in profound ways, and its influence will only grow. None of us can shape the future of AI alone – not as users, not even as developers. Systemic safeguards and public accountability are essential. But each of us does have a role to play. AI learns from what we do, not just what we say. Through mindful online behavior, thoughtful interactions, and principled consumption, we help tip the balance – reinforcing the kinds of norms and values we want these systems to reflect.
But perhaps the most important thing we can do to make AI better is to work on ourselves. We have the opportunity to consciously shape AI to reflect who we are at our best rather than our worst. This means detaching from harmful behavior patterns – distraction, division, shallow relationships, and convenience for its own sake – and devoting ourselves to what truly matters – freedom, connection, service, and love.
A version of this article appeared at Psychology Today.