GitHub Copilot has revolutionized the way developers code with AI-powered suggestions, but it's no longer the only option available. Ollama is a powerful, free alternative that gives you complete control by running locally on your own hardware, without sharing your data. In this guide, we'll show you how to set up Ollama on a Windows desktop equipped with an NVIDIA RTX 4060 Ti GPU (8GB VRAM) and a Ryzen 5 5600G processor, and integrate it with the Cursor IDE on a different machine for an enhanced development experience. This effectively turns the Windows machine into an LLM server.
Why Choose Ollama over GitHub Copilot?
- High Performance: Since Ollama runs on your local machine or server, you'll experience lower latency and faster response times compared to cloud-based solutions.
- In my testing on this hardware, completions take under a second on average and chat responses appear almost instantly. Results will vary based on your hardware.
- Privacy Guaranteed: Ollama doesn’t use telemetry, ensuring that your data stays secure and private.
- Model Flexibility: You can choose and configure any model. In this setup we'll use `llama3.1:8b` for chat and `qwen2.5-coder:1.5b` for autocompletion.
- SOC 2 Compliance: Because everything runs on infrastructure you control, your code never leaves your environment, which makes it easier to satisfy security and reliability requirements such as SOC 2.
- Cost-Free: Unlike GitHub Copilot, Ollama is completely free to use.
- Cross-Platform Compatibility: It works on any operating system, making it accessible to all developers.
- Simple Installation: Setting up Ollama is straightforward, even on advanced hardware configurations.
Prerequisites
Before starting, make sure you have the following:
- A Windows desktop with:
- NVIDIA RTX 4060 Ti GPU (8GB VRAM)
- Ryzen 5 5600G processor
- At least 16GB of RAM (recommended)
- Note: some experimentation may be necessary, since performance depends heavily on your hardware. If your hardware is less capable, you may have better success with a different or smaller model; if it is more capable, you can run bigger models. A quick way to check how much VRAM you have to work with is shown after this list.
- Cursor IDE, or any other VSCode-based IDE, installed on a secondary machine
- The Continue extension installed in your IDE
- A local network connecting the two machines
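If you're unsure how large a model your GPU can handle, NVIDIA's `nvidia-smi` tool reports total and used VRAM. A quick check, run on the machine that will act as the server (exact output varies by driver version):

```powershell
# Report the GPU name, total VRAM, and VRAM currently in use
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
```

As a very rough guide, `qwen2.5-coder:1.5b` only needs a couple of GB, while `llama3.1:8b` in its default quantization wants roughly 5-6GB plus room for context, which is why it fits comfortably on an 8GB card.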
Steps
Server:
In this example we're using Windows as the server's OS, but Ollama also runs on macOS and Linux.
- Download Ollama from ollama.com and run the installer on your Windows machine
  - To verify the installation, open a terminal and type `ollama`. You should see a help section. You can also open `localhost:11434` in a browser, where you should see `Ollama is running`.
- Run in your terminal:
ollama pull llama3.1:8b
ollama pull qwen2.5-coder:1.5b
- Add `OLLAMA_HOST=0.0.0.0` to your environment variables.
  - This makes Ollama listen on all network interfaces so other machines on the network can reach it.
- If needed, add an exception to your Windows firewall for port 11434.
- Find the local IP address of the Windows machine and save it for later. The commands for these steps are sketched below.
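A minimal sketch of these server-side steps as terminal commands on Windows (the firewall rule needs an elevated prompt, the rule name "Ollama" is just an arbitrary label, and each step can also be done through the Windows GUI instead):

```powershell
# Verify the install: prints the CLI help and the installed version
ollama
ollama --version

# Make Ollama listen on all network interfaces instead of just localhost
setx OLLAMA_HOST 0.0.0.0

# Allow inbound connections on Ollama's default port (run as Administrator)
netsh advfirewall firewall add rule name="Ollama" dir=in action=allow protocol=TCP localport=11434

# Show the machine's local IPv4 address to use in the client config
ipconfig
```

After setting the variable, quit and restart the Ollama app so it picks up `OLLAMA_HOST` and starts listening on all interfaces.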
Second Computer
- Install the Continue extension in Cursor (or whichever VSCode-based IDE you're using)
- Open the Continue config by opening the Continue chat and clicking the gear icon in the top right of the panel
- Edit your config with the following parameters, replacing `<LOCAL IP>` with the server address you saved earlier
{
"models": [
{
"model": "llama3.1:8b",
"title": "Ollama (llama)",
"completionOptions": {},
"apiBase": "http://<LOCAL IP>:11434",
"provider": "ollama"
}
],
"tabAutocompleteModel": {
"model": "qwen2.5-coder:1.5b",
"title": "Ollama (Quen)",
"completionOptions": {},
"apiBase": "http://<LOCAL IP>:11434",
"provider": "ollama"
},
"allowAnonymousTelemetry": false,
...
}
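Before relying on the IDE integration, it's worth a quick connectivity check from the second machine. A minimal sketch, assuming `<LOCAL IP>` is the address you noted on the server (the same commands work on Windows, macOS, and Linux):

```sh
# Confirm the server is reachable over the local network
ping <LOCAL IP>

# The root endpoint should respond with "Ollama is running"
curl http://<LOCAL IP>:11434

# List the models the server has pulled; llama3.1:8b and qwen2.5-coder:1.5b should appear
curl http://<LOCAL IP>:11434/api/tags
```

If you prefer editing the file directly instead of using the gear icon, Continue typically stores this JSON at `~/.continue/config.json`, though the exact location and format can vary by extension version.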
Local
Running Ollama locally, on the same machine as your IDE, is also an option! Simply follow steps 1 and 2 above to download Ollama and pull the models. Then install the Continue extension and follow its setup wizard to connect your local Ollama instance to Continue. If you want to change models, you will need to revisit the Continue config to specify which models you'd like to use; a minimal local config is sketched below.
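For reference, here is a minimal local config sketch. It mirrors the networked config above, except `apiBase` points at `localhost`; for the `ollama` provider Continue usually defaults to `http://localhost:11434`, so the field can often be omitted entirely:

```json
{
  "models": [
    {
      "model": "llama3.1:8b",
      "title": "Ollama (llama)",
      "provider": "ollama",
      "apiBase": "http://localhost:11434"
    }
  ],
  "tabAutocompleteModel": {
    "model": "qwen2.5-coder:1.5b",
    "title": "Ollama (Qwen)",
    "provider": "ollama",
    "apiBase": "http://localhost:11434"
  }
}
```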
Conclusion
Switching to Ollama offers significant advantages over GitHub Copilot, including lower latency, enhanced privacy, zero ongoing cost, and far more flexibility over which models you run. By running locally, you retain full control over your coding environment. Follow this guide to set up Ollama on your Windows desktop and connect it to a secondary machine running Cursor IDE with the Continue extension for a streamlined, networked development workflow.