In this video, we thoroughly test Mistral 7B OpenOrca GPTQ. It’s a remarkably capable small model that points to the future of locally hosted edge models. How good is it? Does it beat LLaMA 13B? Let’s find out!
Enjoy 🙂
Become a Patron – https://patreon.com/MatthewBerman
Join the Discord – https://discord.gg/xxysSXBxFW
Follow me on Twitter – https://twitter.com/matthewberman
Follow me on TikTok – https://www.tiktok.com/@matthewberman60
Subscribe to my Substack – https://matthewberman.substack.com
Media/Sponsorship Inquiries – https://bit.ly/44TC45V
Links:
LLM Leaderboard – https://bit.ly/3qHV0X7
Runpod (Affiliate) – https://bit.ly/3OtbnQx
Runpod Tutorial – https://www.youtube.com/watch?v=_59AsSyMERQ
Runpod Textgen Template – https://bit.ly/3EqiQdl