I initially thought along the lines of having multiple AI backends answer the same problem, with the resulting answers then voted on by multiple AI backends. If one of them starts hallucinating, the other two will say that guy's hallucinating. If two of them are hallucinating, a red flag gets thrown up and the answer is presented as untrustworthy. But if you trigger all three, then your problem is really s******* and I'm sorry.
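
Roughly what I had in mind, as a minimal Python sketch. The backends here are stand-in lambdas, not real API clients, and a real system would need semantic matching (embeddings or an LLM judge) instead of the exact string comparison used below:

```python
from collections import Counter

# Hypothetical backends: each is just a callable that takes a question
# and returns an answer string. Real ones would wrap API clients.
backends = {
    "model_a": lambda q: "42",
    "model_b": lambda q: "42",
    "model_c": lambda q: "17",  # this one is "hallucinating"
}

def query_all(question, backends):
    """Send the same question to every backend."""
    return {name: ask(question) for name, ask in backends.items()}

def vote(answers):
    """Naive majority vote over normalized answers. No majority means
    the red flag goes up and the answer is marked untrustworthy."""
    counts = Counter(a.strip().lower() for a in answers.values())
    best, n = counts.most_common(1)[0]
    trusted = n > len(answers) // 2
    return best, trusted

answers = query_all("What is six times seven?", backends)
best, trusted = vote(answers)
print(best, "(trusted)" if trusted else "(RED FLAG: untrustworthy)")
```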

So then the Mixture-of-Agents concept showed up, simply to try to get the best answers possible. It sends the same question to multiple backends and then aggregates the results. It answers your question and simultaneously presents a variety of alternative answers to educate you on the possibilities.
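
Something like this, as a rough sketch of a single Mixture-of-Agents layer. The `proposers` and `aggregator` callables are hypothetical stand-ins for real chat-completion clients, not any particular library's API:

```python
def mixture_of_agents(question, proposers, aggregator):
    """One MoA layer: proposer models answer independently, then an
    aggregator model synthesizes a final answer from the drafts."""
    drafts = [ask(question) for ask in proposers]
    prompt = (
        f"Question: {question}\n\n"
        + "\n".join(f"Candidate {i + 1}: {d}" for i, d in enumerate(drafts))
        + "\n\nSynthesize the single best answer from the candidates above."
    )
    # Return the synthesized answer plus the drafts, so the alternatives
    # can be shown to the user alongside the final answer.
    return aggregator(prompt), drafts
```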

https://youtu.be/aoikSxHXBYw?si=JnjVH0oO8roS9kQN

I don't know if it handles hallucinations right now, but I can guarantee you they're thinking about it. Sooner or later there will be some type of boundary checking when multiple AI agents are working in concert but using different backends.

It failed on the snake game, though it's excellent on logic puzzles in general. I saw Claude 3.5 do the snake game in a single shot, and it was incredible.