Looking for a powerful, free, and private alternative to GitHub Copilot? Ollama offers local AI coding capabilities without telemetry, using models like llama3.1:8b for chat and qwen2.5-coder:1.5b for autocompletion. Learn how to set up Ollama on a Windows PC with an NVIDIA RTX 4060 Ti and integrate it with Cursor IDE on a MacBook Pro for cross-network…
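The setup this post describes comes down to a few Ollama CLI commands. A minimal sketch — the model names come from the excerpt above; the bind address and the example LAN IP are assumptions for making the Windows box reachable from the MacBook:

```shell
# On the Windows PC with the NVIDIA GPU:
# bind Ollama to all interfaces so other machines on the LAN can reach it
# (0.0.0.0 is an assumption -- restrict to your LAN interface if preferred)
set OLLAMA_HOST=0.0.0.0:11434
ollama serve

# pull the models mentioned in the post
ollama pull llama3.1:8b          # chat model
ollama pull qwen2.5-coder:1.5b   # autocomplete model

# From the MacBook, verify the server is reachable
# (192.168.1.50 is a hypothetical address -- use the Windows PC's LAN IP)
curl http://192.168.1.50:11434/api/tags
```

The `/api/tags` endpoint lists installed models, so a JSON response confirms the cross-network connection works before pointing an IDE at it.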
While getting ready to roll out DrivnBye into private beta, I spent some time building pipelines to automate much of our release work. I originally implemented the recommended approach of using Expo's cloud service to build our app binaries for Android and iOS within a GitHub Action; however, it took less than 3…
After years of being on the fence about buying a racing sim, I finally decided "it's now or never" and pulled the trigger. After a couple of weeks of settling in, I realized there is a lot to be desired in terms of realism; I also wanted an excuse to break out the 3D printer…
Recently I came across a need to add a watermark to photos in my React Native + Expo app when photos are shared outside of my app. My goal was to make watermarks customizable, reusable, and, most importantly, compatible with Expo & EAS. There are a couple of options on npm that…
Social media applications require live data if you want a good user experience, but building your own WebSocket server is tedious. Enter Centrifugo: a prebuilt, production-ready WebSocket server ready to scale with you.
Lately I have been spending my free time working on a mobile application for car enthusiasts – DrivnBye. This application has forced me out of my comfort zone with every feature I work on. We're a couple of months from release on the iOS and Android app stores, and we decided to start stress testing our…
Large language models (LLMs) are remarkable tools in artificial intelligence and natural language processing: they can follow context and nuance and generate strikingly human-like text. The exciting news is that you can now take full advantage of LLMs by running them on your…