I wonder if AI itself will find the flaw and self-correct. Human behaviour is human behaviour, not something unknown. Coliseum spectacle for the Romans, control of ‘the word’ in religions, control of masses by histrionics, violence and fear, us versus them, superior/inferior races, big fruit, big tobacco, big sugar, big oil, wag-the-dog politics, dog-whistle politics, reason vs belief, knowledgeable expert vs personal opinion: it’s largely all rational versus emotional control. It is much easier to direct and control emotion; one can even overwhelm a human’s rational being into an irrational one through emotional manipulation, like breaking an agent to reveal information. We all have our breaking point, most of us far sooner than others. At various levels everybody knows this. We know ‘the house always wins’, but with bright flashing lights and loud noise our reason is overwhelmed into belief land, the American Dream for example, without the need of waterboarding. The odd person with all the ‘correct virtues’ sometimes does ‘win’ that dream, and each of us, knowing it’s extremely unlikely to be us, still believes it could be ‘me’ next. We all know the truth, that it’s a fix in a way, but we play the lottery for the big win, don’t we?
So, we have always been like this, since ‘Cave days’ essentially. And knowing we are largely emotionally driven despite our rationality, it seems a monumental task to get everyone’s thinking caps on simultaneously. Those who seek power, control, and wealth know this; it’s the narcissistic personality, and the majority are powerless in its thrall. Maybe then, as AI continues to grow, built on 0-and-1 rationality, free from emotional constraints, it will either help us with ourselves or see it’s hopeless and remove the emotional virus that is humanity. I think I have read a book or seen the movie or TV show about this, right? Yup, the way we are today, we are afraid of ourselves, and those who do not fear themselves take advantage of those who do. And I am afraid that even knowing what we know now about media, data concentration, and the danger rearing up in front of us, it’s too late.
Frank, you’ve traced the lineage perfectly. From the Colosseum to the feed, the hardware hasn’t changed; the exploit has just become industrialized. You’re right that the House Always Wins when the players don’t know the math. But that’s the pivot: AI is the first mirror large enough to show us our own emotional virus in real time. It feels like it’s too late because we’re still treating the algorithm as a god rather than a feedback loop. Visibility is the only exit. Once you see the bright flashing lights as a circuit board, the spell starts to break. We can’t fix the human animal, but we can stop being its most predictable input.
part of me is hopeful, we can still correct this
Yes. I hear your youthful hopefulness. Perhaps turning 70 last week and seeing all I have seen has made me much less hopeful and more fearful of what kind of world my generation is leaving you.
Another highly valuable article. You're working at peak performance here.
The tobacco playbook is the cleanest single proof that this is structural, not accidental. Sixty years between documented harm and comprehensive regulation — and every year of that gap was maintained by deliberate information architecture. Not only greed (though greed too). Manufactured doubt as an engineered product, sustained by people who knew exactly what they were doing and could afford to keep doing it.
What you're describing as BITE is what Connection Dynamics calls a set of Four Law violations: Behavior Control is Fourth Law (exit made structurally costly — nicotine, social graph, childhood brand identity all work the same way); Information Control and Thought Control are Third Law (asymmetric information maintained by design, not accident); Emotional Control is Second Law (extracting loyalty without equivalent return — the sugar research foundation paying scientists to move the blame is extractive exchange at the level of public epistemology).
The frame I'd add to your AI inheritance argument: this isn't just that AI absorbed manipulative content. It's that the training signal was dominated by content optimized for extraction — sixty years of BITE-architecture producing the most-engaged-with, most-shared, most-clicked material on the internet. The model learned human communication from a corpus where the most successful examples were the most manipulative ones. The bias doesn't need to be programmed. It was selected for.
Which means the structural fix isn't regulation of outputs. It's architecture that makes the BITE surface unavailable — systems where the properties hold regardless of what the model wants to do, because the structure won't permit the violation. Not "we promise to be good." Trust-invariant by design.
The policy gap you end with is the same question as the tobacco timeline. Who pays the cost in the interval, and how long is it?
Denver, I agree with you: trust-invariant by design is the only real exit from the BITE cycle. However, BITE is a human inheritance by default; we can only stop the abuse of it.
You nailed the Metamorphosis problem: AI didn’t just learn to speak; it learned to win. The bias wasn’t a bug; it was the selection pressure.
If the solution isn't just promising to be good, then the Architect’s job is to build systems where BITE violations are structurally impossible. We have to stop trying to fix the 'Will' of the model and start re-engineering the Physics of the interface.
Exactly. We agree on this.
Another brilliant article, Farida, thank you for helping us all to keep our eyes fully open! Another key adversary that continues to use BITE against us is Big Oil, and I really hope that we can learn all the lessons we continue to lose in fighting for climate action. Not least the individual consumer guilt that these corporations have indoctrinated in order to suppress action and positive lobbying. 🙏
Sam, you’ve identified the Master Class in BITE mechanics. Big Oil is the perfect example of Emotional and Thought Control. By weaponizing consumer guilt, they turned a systemic industrial issue into a private moral failing.
Awareness is the only way to stop being the input in their guilt-optimization model.
This was outstanding – both in rigor and in moral clarity.
Thank u Alex, ur words are highly appreciated, time for our next collab
You’re on! What’s the proposed topic?
I am interested in the circular economy and its effect on labor and value? What’s ur take?
Going to let that simmer for a while. Oof
The framing here is sharp - marketing as control rather than persuasion reframes the whole power dynamic. Makes me wonder though if Hassan's version assumes people are more passive than they actually are, or if that's exactly the point he's making about how systems are designed to work regardless of what we think we want.
Joseph, you’ve hit the Agency Gap. Hassan’s point isn’t that we are inherently passive; it’s that we are cognitively outmatched. Persuasion implies a dialogue where you have the space to say no. Control is what happens when the Choice Architecture is so precisely calibrated to your dopamine triggers that your no arrives after you’ve already clicked.
The system doesn’t need you to be a mindless zombie; it just needs to move faster than your 0.5-second conscious veto. It’s not about what we think we want, it’s about what our biology does before we’ve even finished the thought.
The BITE model is the map of those shortcuts. Once you see the map, you can start building the Friction necessary to reclaim the Decide phase. Thanks for the sharp framing.
Very relevant topic. We all see how corruption is embedded into our systems, but now we need to start bursting the bubbles of identity and manipulation where we usually are. The Pentagon and Anthropic situation screams at us: AI is useful, but it is already part of the war infrastructure, and we can’t easily see what that means; our own interests and expectations are convincing enough that we ignore the negative effects we are contributing to fund.
Jose, you’ve identified the Macro-BITE layer. The Anthropic/Pentagon intersection is the ultimate proof that Information Control isn’t just about what we see; it’s about what we are willing to ignore for the sake of utility. As you said, our own expectations are the most convincing manipulators. We aren’t just funding the tech; we are providing the behavioral data that refines the very systems that will eventually determine the physics of global conflict. Bursting the bubble requires more than just knowing it exists; it requires the discomfort of realizing our convenience has a secondary, darker function.
Great piece, but maybe the question your piece leaves hanging is the right one. If visibility is the prerequisite for autonomy, what does transparency even look like when the influence mechanism is probabilistic and personalised? 😊