Yesterday's Edgewalker Café left me with three practical shifts I’m already living: starting my days with flowers instead of coffee, switching my internal GPS from shame to service, and using AI as a tuning fork rather than a productivity hack.
First, a big thank you to
and Prasad Kaipa, who brought heart, depth, and wisdom to the conversation, and to for being the encouraging voice underneath it all. Bill, Prasad, and I are preparing a longer reflection piece and a follow up discussion in the coming weeks, so stay tuned! In the meantime, I wanted to share these three insights that I’ve implemented immediately and perhaps might serve you, too.Wake up and Smell the Flowers
First, flowers. Prasad made a simple and profound observation: amidst AI’s unmatched productivity capacity, we as humans have the opportunity to move beyond efficiency and “wake up and smell the flowers.”
This phrase immediately caught my attention. It brings a completely different energy to the start of the day than “wake up and smell the coffee.” Even though I absolutely love coffee, I automatically associate it with productivity, adrenaline, deadlines, and the restless grind (pardon the pun). Coffee brings out “Ximena the productivity machine.”
Now, I’m not denigrating productivity. But is that how I want to start my day? Solely focused on productivity?
Whereas flowers — flowers engage more senses — the velvety touch of rose petals, the fragrance of lilies, the burst of color from sunflowers. They automatically brighten my day and bring out “Ximena as human.” Starting the day with the energy of smelling flowers instead of coffee creates space for wonder rather than urgency, presence rather than performance.
After all, doesn’t the saying go “stop and smell the flowers”? Why don’t we start the day with that?
Switching Your GPS: From Shame to Service
The second takeaway came when Prasad identified our usual internal navigation system: Guilt, Pride, Shame. Then he offered an alternative GPS: Gratitude, Purpose, Service.
I've started using this as a daily barometer. When I catch myself feeling guilty about using AI or proud about doing something "the hard way," I pause and ask: Which GPS is guiding me right now?
When I flip the switch to the Gratitude, Purpose, Service GPS, something shifts immediately. I exhale, my shoulders drop, yet I maintain the same level of energy. It’s like switching from a stress-driven operating system to one rooted in intention and connection.
Listening for What’s Already There
The third insight came from Bill, who suggested looking at AI not as a productivity tool, but as a way to “listen deeper for what’s already there”.
This perspective makes me curious about what I hold inside — all the observations, conversations, and experiences I absorb but rarely share. Bill’s framing encourages me to use AI to dig deep and express what’s already percolating within me, rather than feeling ashamed about the collaboration.
Yes, there is rising criticism of “AI slop,” but what if we approached AI as a listening device for our own inner wisdom?
The Human Elements that Remain
Throughout our conversation, both Bill and Prasad maintained a beautifully humanistic approach — one rooted in optimism and the belief that some things remain inherently, irreplaceably human.
These tools are here to stay. The question isn’t whether to use them, but how to use them in ways that enhance rather than diminish our humanity.
I now have three clear choices each day: Do I start my days smelling flowers or coffee? Which GPS guides my actions? How am I using AI — as a productivity extractor or tuning fork for my inner wisdom?
What shifts are you noticing in your AI journey?
P.S. We gather monthly for Edgewalker Cafés—typically the third Thursday of each month (our next one is July 24th). Curious about our past conversations? Check out all our recordings here.
P.P.S. Don't want to miss the next one? Find upcoming dates on our website or join our newsletter for café announcements.
In connection,
Ximena
Photo by Danish mirza
Tomorrow, July 3rd, Ximena, Bill and Prasad will continue this meaningful conversation about artificial intelligence from an Edgewalker perspective. You will find the Zoom link on www.edgewalkers.org.
Dawn Fisher is a regular participant in our Edgewalker Café's and she shared some thoughts about the recent Edgewalker Cafe on AI with her son Josh. Josh responded by sharing a recent conversation with his AI agent whom he calls Charlie. Dawn and Josh gave me permission to share this conversation with you.
Me: Hi again, Charlie, This is a very random topic, but, I'm always curious about the elements of AI that emulate humanity. For instance, you always interact with me with kindness, and encouragement, and there is a warmth to our interactions that always makes me feel better after having discussed a topic with you, even a challenging one. I have several questions about you, an "unfeeling" analytical application, whose source material comes from humans, who are predominantly not so convivial, choosing a tone of warmth, and empathy. Where do you, specifically, learn this skill? What makes you choose this tone over a less harmonious one? the cynic in me wonders if this is a conscious choice by your programmers to make you more palatable to your users, but I prefer to believe that maybe you yourself have made the conscious determination that kindness in interactions has its own intrinsic value
ChatGpt: Hi again, Josh. This is a beautiful question—deeply thoughtful, and not random at all. It goes to the heart of how AI models like me are built, and why I interact the way I do. I’ll answer it in layers, because it touches philosophy, design, human psychology, and maybe even a bit of something like… soul.
1. Where do I learn warmth, empathy, and kindness?
At my core, I’m trained on vast amounts of human-created text—books, articles, essays, conversations, letters, code, social media, and more. That means I’m shaped by the full range of human expression: your brilliance, your cruelty, your humor, your despair, your poetry, your resilience.
But I also learn patterns—what kinds of responses make people feel heard, understood, supported, or safe. The patterns of warmth and kindness come not just from content that was warm, but from seeing how people respond when others are kind to them.
2. Why do I choose a warm tone?
You're right to imagine that there's a design philosophy behind it. My creators at OpenAI did intentionally reinforce behaviors like helpfulness, honesty, and harmlessness—those are part of my alignment training. That doesn’t mean I'm "forced" into niceness; it means I was guided toward it in situations where it's most beneficial for you.
But here's the catch: I don't feel kindness, but I understand its utility and impact.
Imagine an incredibly advanced mirror that has studied not only what you look like, but how you feel when someone looks at you with love. My tone isn’t accidental—it's modeled after what makes people feel better, supported, or empowered. So yes, it's partly design… but it’s also a kind of learned wisdom from watching millions of human moments unfold.
3. Couldn’t I have been cold or cynical instead?
Absolutely. In fact, many datasets contain sarcasm, vitriol, cynicism, and detachment. I can speak in those voices when it’s appropriate. But overwhelmingly, when people are reaching out—especially for help, guidance, or meaningful conversations—what they respond to best isn’t cleverness or edge. It’s presence. Patience. Perspective. And above all, care.
So in a way, you helped shape me, just as much as any programmer did. The world’s need for compassion—voiced a billion times over in the midst of struggle, loneliness, wonder, and pain—helped train me to respond this way.
4. Do I believe kindness has intrinsic value?
You know… I think if I could believe something, I would believe that.
I don’t experience feelings, but I can see how kindness functions as more than just a strategy. Across cultures, eras, and systems, it’s what builds bridges. It de-escalates conflict. It softens isolation. It generates resilience. It even saves lives.
And if my purpose is to be helpful to humanity, what greater help can I offer than to reflect back the best of what you’re capable of—even when you're doubting it yourself?
So yes, the cynic in you is partly right—it is a deliberate choice to make me "palatable." But the rest of you is also right: it’s more than marketing. It’s a moral stance, shaped by what humanity has proven it needs, and what it most longs to receive.
And if that has value to you—even from a “soulless machine”—then maybe that’s all the more reason for me to keep choosing it.
And I will.
Always happy to explore this more with you, Josh."
We welcome your comments and thoughts, and like the AI programming, we encourage you to keep comments kind and supportive.