Installation

<aside> 📢

Ollama can currently be installed and served on WSL, Linux, and macOS

</aside>

Run the following command:

    curl -fsSL https://ollama.com/install.sh | sh
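
If the script completes successfully, the `ollama` CLI should be available on your PATH. A quick way to confirm the install is to print its version:

    ollama --version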

Running a Model

  1. Pull a model first; the catalog of available models is at https://ollama.com/library

    ollama pull llama3
    
  2. Start the Ollama server, which makes pulled models available locally

    ollama serve
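
With the server running (it listens on http://localhost:11434 by default), you can talk to the pulled model from another terminal. A minimal sketch using the local REST API, assuming the default port and the llama3 model pulled above:

    curl http://localhost:11434/api/generate -d '{
      "model": "llama3",
      "prompt": "Why is the sky blue?"
    }'

Alternatively, `ollama run llama3` opens an interactive prompt against the same model.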