simonwillison.net
              www.allenpike.com
            
              www.markhneedham.com
In this post, we'll learn how to use LLMs on the command line with Simon Willison's llm library.
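The llm tool described above is driven from the shell (e.g. `llm "your prompt"`, with `-m` to pick a model). As a minimal sketch, it can also be wrapped from Python by shelling out to the CLI; the `ask_llm` helper below is hypothetical, not part of the llm library, and returns None when the tool is missing or the call fails:

```python
import shutil
import subprocess

def ask_llm(prompt, model=None):
    """Send a prompt to Simon Willison's `llm` CLI, if it is installed.

    Returns the model's reply as a string, or None when the `llm`
    executable is missing or the call fails (e.g. no API key configured).
    Hypothetical helper for illustration; not part of the llm library.
    """
    if shutil.which("llm") is None:
        return None  # llm CLI not installed
    cmd = ["llm"]
    if model:
        cmd += ["-m", model]  # llm's flag for selecting a model
    cmd.append(prompt)
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        return None
    return result.stdout.strip()

reply = ask_llm("Suggest two names for a pet pelican")
```

Because the helper degrades to None instead of raising, it is safe to call in environments where llm is not set up.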
            
              www.modular.com
The adoption of AI by enterprises has surged over the last couple of years, particularly with the advent of generative AI (GenAI) and large language models (LLMs). Most enterprises start by prototyping and building proof-of-concept products (POCs) using all-in-one API endpoints provided by big tech companies such as OpenAI and Google. As they transition to full-scale production, however, many look for ways to control their own AI infrastructure, which requires the ability to manage and deploy PyTorch effectively.
            
              blog.chand1012.dev
LangChain is a powerful library for Python and JavaScript/TypeScript that allows you to quickly prototype large language model applications. It lets you chain together LLM tasks (hence the name) and even run autonomous agents quickly and easily. Today we will go over the basics of chains, so you can hit the ground running with your newest LLM projects! Prerequisites: Python 3.9 (3.10 and up have some issues with some of LangChain's modules)...
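The "chaining" idea LangChain is named for can be sketched without the library itself: each step's output becomes the next step's input. The snippet below is a library-free illustration of that concept only; `make_chain`, `template`, and `fake_llm` are hypothetical stand-ins, not LangChain's actual API:

```python
def make_chain(*steps):
    """Compose callables left to right: each step's output feeds the
    next step's input. Illustrates the concept behind LangChain's
    name; this is not LangChain's actual API."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Hypothetical stand-ins for a prompt template and an LLM call.
def template(topic):
    return f"Write a one-line tagline about {topic}."

def fake_llm(prompt):
    return f"[model reply to: {prompt!r}]"

chain = make_chain(template, fake_llm)
result = chain("pelicans")
```

In LangChain proper, the steps would be prompt templates, models, and output parsers rather than plain functions, but the data flow is the same.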