Build Your Own LLM App on Your Laptop

Johnny Chen
4 min read · May 9, 2024


Image Generated by DreamStudio

Ollama is transforming the accessibility of Large Language Models (LLMs), making them available to anyone with a regular laptop. This open-source framework lets you run state-of-the-art models such as Meta's Llama 3 efficiently, on your own machine or in the cloud. Here's a closer look at what Ollama offers and how it can benefit you:

  1. Manage Models: Ollama lets you browse, download, and organize AI models. You can see a list of available models and download the ones you want to use (the core commands are sketched after this list).
  2. Run Models Locally: You can use Ollama to run AI models directly on your computer. This allows you to work with these models without needing to rely on internet connectivity or external servers.
  3. Simple Interface: Ollama provides a user-friendly command-line interface, making it easy to manage and interact with AI models. This interface doesn’t require extensive technical knowledge, making it accessible to a broad range of users.
  4. Customize Models: Through its Modelfile format, Ollama lets you build customized variants of a model, layering your own system prompt and sampling parameters on top of a base model, so you can adjust its behavior to your data or goals without retraining it (see the sketch after this list).
  5. Efficient Performance: Ollama builds on llama.cpp and serves quantized models, with GPU acceleration (such as Metal on Apple Silicon) where available, so models run smoothly within your machine's resources.
  6. Community Support: Ollama has a community of users who share knowledge, tips, and support. This community can help you troubleshoot problems and learn how to use Ollama more effectively.

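To make point 1 concrete, here is a minimal sketch of the model-management commands (llama3 is just an example; the full catalog lives at ollama.com/library):

ollama list         # show the models you have already downloaded
ollama pull llama3  # download a model without starting a session
ollama rm llama3    # delete a model to free disk space
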
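And for point 4, a minimal sketch of a Modelfile-based customization, with a made-up model name and system prompt; this layers behavior on top of llama3 rather than retraining it:

cat > Modelfile <<'EOF'
FROM llama3
PARAMETER temperature 0.3
SYSTEM "You are a concise assistant for cloud infrastructure questions."
EOF

ollama create cloud-helper -f Modelfile   # build the customized variant
ollama run cloud-helper                   # chat with it like any other model
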
In practice, Ollama can be used to experiment with different AI models for projects such as content generation, language translation, chatbots, or other tasks that involve natural language processing. Once you’ve found the right model for your needs, you can use Ollama to manage and deploy it, whether on your own computer or in the cloud.

This article walks you through deploying the Llama 3 model on a MacBook Pro.

System Requirements:

A 4-core CPU with 16 GB of RAM is recommended for running the Llama 3 8B model, which has 8 billion parameters and is about 4.7 GB in size. My M2 MacBook Pro with 16 GB of RAM runs llama3 efficiently. The 70B model, however, may slow your laptop considerably. In a future article, I'll show you how to deploy Ollama in the cloud so you can easily scale up your infrastructure.

Installing Ollama:

The installation is straightforward and is documented in the Ollama GitHub repository. I downloaded the macOS version directly to my desktop.
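
If you prefer installing from the command line, there are alternatives to the desktop download (assuming you have Homebrew on macOS; the Linux one-liner is the official script from the Ollama docs):

brew install ollama                             # macOS, via Homebrew
curl -fsSL https://ollama.com/install.sh | sh   # Linux, official install script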

Open a terminal and list all the available commands with ollama --help.

To pull and run a specific model, such as llama3, type ollama run llama3. Just like ChatGPT, you can enter any prompt and receive a response.
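
For example (the conversation below is illustrative; the first run downloads the model, about 4.7 GB for llama3):

ollama run llama3
>>> Why is the sky blue?
(the model streams its answer here)

Under the hood, Ollama also serves a local REST API on port 11434, which is what frontends like Open-WebUI talk to. You can call it directly:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'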

Open-WebUI for a ChatGPT-like Experience:

Running LLMs locally is convenient, but using the terminal isn’t always the most visually appealing option. Luckily, you can use Open-WebUI to get a ChatGPT-like user interface for interacting with LLMs. To install it on your laptop, use the following Docker command:

docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

This maps the container's port 8080 to port 3000 on your machine; the --add-host flag lets the container reach the Ollama server running on your laptop, and the named volume keeps your chats and settings across restarts. Go to http://localhost:3000 in your browser, and you'll have a well-designed UI for your LLMs.
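
If the page does not load, two standard Docker commands are enough to check on the container:

docker ps --filter name=open-webui   # confirm the container is running
docker logs open-webui               # look for startup errors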

Now you can run prompts, such as calculating factorials with Python, and see the output in a clean format similar to ChatGPT's. A selector at the top of the UI lets you switch between the LLMs you've downloaded.

Together, Ollama and Open-WebUI have transformed how LLMs can be used, making personal AI far more accessible. For someone like me, running my own business with strict data-privacy requirements, having an offline LLM app is both useful and secure for daily work. As open-source LLMs advance and close the gap with big players like GPT-4, Claude 3, and Gemini, the era of private LLMs looks increasingly within reach.


Johnny Chen

Co-founder of HAZL, a one-stop platform for cloud and AI services. Visit us at hazl.ca