PolyThink: A Multi-Agent AI System to Eliminate Hallucinations
Excited to announce PolyThink Alpha's early access! Our multi-agent AI system fights hallucinations with consensus-driven, accurate answers from multiple models. I'd love for you to join the waitlist at https://www.polyth.ink/ as I'm planning to randomly roll out invites starting in May. Feedback will shape our final launch! I'd love thoughts and suggestions too! What would you like to see here?
Isn’t this basically the Swiss cheese model? If your two input AIs hallucinate, or your consensus AI misunderstands the input, you will still have confabulations in the output?
From all my testing, this has honestly never happened even once. Plus, the judge model (which I've kept strictly a reasoning model) also evaluates each worker's answer individually before "judging" the consensus.
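For anyone curious what that pipeline might look like, here's a minimal sketch: independent worker models answer, then a judge checks each answer individually before ruling on the majority. All function names are hypothetical and the model calls are stubbed; a real version would call actual LLM APIs and use a reasoning model as the judge.

```python
from collections import Counter

def worker_a(question):
    # Stand-in for a first worker model (e.g. a chat LLM API call)
    return "4"

def worker_b(question):
    # Stand-in for a second, independent worker model
    return "4"

def judge(question, answers):
    """Stand-in for the judge: screen each answer individually,
    then rule on whether a strict majority agrees."""
    individually_ok = [a for a in answers if a.strip()]  # per-answer check
    majority, count = Counter(individually_ok).most_common(1)[0]
    agreed = count > len(answers) // 2
    return {"answer": majority, "consensus": agreed}

def polythink_round(question, workers):
    # One round: gather independent answers, then judge them
    answers = [w(question) for w in workers]
    return judge(question, answers)

verdict = polythink_round("What is 2 + 2?", [worker_a, worker_b])
```

With real models you'd loop `polythink_round` until `consensus` is true (or a round cap is hit), feeding disagreements back to the workers.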
I have this same thought, and have tried similar approaches.
OP: Have you trained or fine-tuned a model that specifically reasons about the worker models' outputs against the user input? Or is this basically just taking a model and turning the temperature down to near 0?
Honestly, personal use cases. I'm a STEM student and deal with a lot of "hard" questions that LLMs miscalculate maybe 60% of the time. I used to manually paste approaches from, say, ChatGPT into DeepSeek (and now Grok) and ask which one was better. I built this out of necessity to automate that, then realized how cool it could be if it scaled further haha
Eventually yes, that's the plan! It's extremely good with code too, especially with vaguer requests; it tends to take about 2-3 rounds but almost always lands on a great approach.