TTL #175 - Running LLMs locally: practical setups from development to production
Large Language Models are changing how we build applications that work with content. However, integrating them into the development cycle is not trivial: privacy, performance, and infrastructure are the main parts of the equation.
In this session, we will focus on practical ways to run LLMs on-premises, from development all the way to production.