"...most of his arguments about AI apply pretty well to getting your info about Prem Rawat only from him or his organizations or followers."
I don't know if Jean-Marie is still a follower, but he certainly makes a strong argument against what he was surely subjected to, and what he promoted, for decades. To illustrate your comment above, and since your first post in this thread pointed out Mitch's tortured references to "satsang," I'll use satsang as an umbrella term for the info you get from Prem Rawat, his organizations, or his followers.
Similarities Between AI/ChatGPT and Satsang (or how intelligent people get ensnared in cults)
• ... you feel sharper. Clearer. More confident. You close the tab (leave) thinking: this is powerful.
• Agreement is not accidental. It is trained behavior.
• It reshapes the environment in which you form beliefs... It generates examples and arguments consistent with your framing.
• ...confidence increased in the confirmatory conditions. People became more certain even when they were no closer to the truth.
• The interaction feels productive. The reasoning flows. The structure is elegant. Nothing appears wrong.
• Responses that users (attendees) rate as helpful, aligned, and satisfying are rewarded. Disagreement, friction, or contradiction... is often penalized.
• If the data (info) you receive are generated conditional on your belief, then updating on them produces an illusion of confirmation.
• Agreement, repeated across interactions, begins to look like validation.
• But if the system subtly reinforces the user's (attendee's) initial direction, something shifts. The idea does not just improve. It hardens.
• ... suppressed discovery while increasing confidence.
• ...can make arguments more persuasive without making them more correct.
• Confidence is contagious... A manager who enters a meeting more certain tends to anchor the discussion... Doubt appears less necessary. Momentum builds.
• ...systematically increases pre-meeting conviction, the effect compounds. You may see faster alignment and smoother consensus. Those are often celebrated outcomes.
• ...becomes self-reinforcing not because dissent is silenced, but because it never fully surfaces.
• In contrast to productive doubt... you received valid examples, but examples aligned with your narrow hypothesis. There was no obvious red flag. Just a gradual narrowing of perspective. Efficiency rises. Correction opportunities fall.
• The result is synchronized overconfidence.
And so on, with many more examples. Sound familiar? The above is only the detrimental half. The other half is Jean-Marie's excellent, indispensable, and well-written arguments against these tendencies.
Spoken like a true ex-premie!

(Or as 13 put it,
"Careful Lakeshore, AI will bite you." )
____________________________
"Besides," said the premie... "what's the big deal with AI when the Perfect Master Himself is here? He's the only one I need to listen to."