Sharp analysis! This reframes the whole saga as a power story, not a tech story. The Lear parallel exposes the real tension: platforms chasing dominance while forgetting the people who made them valuable in the first place.
Thank you, happy it resonates with you
Miss Farida, your article excerpt is brilliantly crafted and deeply insightful. I particularly admire how you skillfully weave Shakespeare’s timeless King Lear into the modern context of Silicon Valley and artificial intelligence, making the connection both vivid and thought-provoking. Your use of metaphor and contemporary references shines through, creating a compelling narrative that truly engages the reader. You have a remarkable talent for blending literary wisdom with current technological and business realities, which makes your writing both intellectually stimulating and relevant. I really look forward to reading more of your work whenever I’m free.
// Jmaal B
Thank you so much
You’re most welcome
Do you believe OpenAI is that bad?
What about the competition? Who is less aristocratic?
I don’t think it’s about labeling any one company as “bad.” The point is structural: when a small number of organizations control powerful tools, the system tends toward extraction and centralization, regardless of intentions. Competition helps, but the pattern of rewarding flattery and consolidating value can show up anywhere. It’s less about who’s worse, and more about how the incentives shape behavior.
I know that the right incentives must be in place, but I always distrust that idea... keeping the stick and the carrot is dangerous. The carrot can be okay, until someone replaces it with something else.
I not only believe it's twice as bad, I have some bits of evidence I've collected over the past 2 years.
Wow, I’d be very interested to see what you’ve collected. Patterns over multiple years can really highlight the structural issues we’re talking about, not just isolated incidents. Would you be willing to share a few examples?
Yeah sure, I have bits and pieces from articles, tweets, some videos, etc. Wonder how to put together that collage 🤔
But then… the whole thing is wrong.
Trusting Google, Meta… it is very hard. Now OpenAI?
With OpenAI, it’s not the AI itself that worries me, it’s the combination of centralized control and aggressive ambition. The tools are neutral, but the decisions around them are what create the risk.
We have to keep them in check.
Posts like yours help in that process!
Hahaha Jose. Google and Meta are de facto untrustworthy.
It’s much easier to trust the Chinese labs at this point.
this is so creative!! I'm restacking this :) beautiful writing
Thank you 🙏
Lear is the perfect lens for this moment.
Power that refuses honest counsel always falls, not from lack of brilliance, but from blindness. As tech races forward, the real test isn’t capability, but conscience.
Exactly: the technology itself isn’t the problem. AI, platforms, and tools are neutral; the danger comes when management or leadership lets greed and the drive for control override conscience. The tragedy isn’t the code or the model, it’s how human priorities twist it into a mechanism for extraction rather than empowerment.