AI for Business — Practical Guides & Reviews
Artificial intelligence is moving from buzzword to business tool. This section cuts through the hype and focuses on how UK businesses — particularly in wholesale distribution, building supplies, and electrical wholesale — can use AI to work smarter right now.
What We Cover
- AI for Wholesale Distribution — Smarter stock management, pricing decisions, and customer query handling using AI tools
- AI for Builders Merchants — From yard to counter, how AI is changing operations for builders merchants across the UK
- AI for Electrical Wholesalers — Managing thousands of SKUs, pricing complexity, and customer service with AI assistance
- AI Guides — Step-by-step guides on using ChatGPT, AI tools, and automation for real business tasks
- Sales & Pricing Intelligence — How to use AI to analyse sales trends, predict stock shortages, and make smarter pricing decisions
Our Approach
We focus on practical application over theory. Every guide is written with real business workflows in mind — the kind run by UK SMEs that don't have dedicated data science teams but still want to use AI effectively from day one.
DeepSeek R1 is one of the most significant open-source AI models released in 2025. It’s a reasoning model — like OpenAI’s o1 — that thinks through problems step by step before ...
LangChain is the most widely used framework for building LLM-powered applications. Combining it with Ollama gives you a fully local, private AI pipeline — no API keys, no data leaving your machi...
Running Ollama on a Raspberry Pi lets you have a private, always-on local AI server that costs pennies to run. It won’t match the speed of a desktop GPU, but for small models and non-time-critic...
Running Ollama in Docker lets you deploy local LLMs on any machine or server without installing anything directly on the host. It’s the cleanest approach for server deployments, CI pipelines, or...
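As a sketch of what that setup looks like, the official `ollama/ollama` image can be started with a persistent volume for downloaded models and the API port published (the volume name and the `llama3` model tag below are illustrative choices, not requirements):

```shell
# Start the Ollama server in the background, keeping downloaded models in a
# named volume so they survive container restarts, and exposing the API on
# Ollama's default port 11434 (CPU-only; NVIDIA GPU setups add --gpus=all)
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Pull and chat with a model inside the running container
docker exec -it ollama ollama run llama3
```

Once the container is up, anything on the host can talk to the models at `http://localhost:11434` — no Ollama install on the host itself.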
Running AI code assistance locally with Ollama and VS Code gives you GitHub Copilot-style autocomplete and chat — without sending your code to any external server. This guide covers two main app...
Ollama’s local REST API makes it straightforward to call local LLMs from Python — either directly with the requests library, via the official Ollama Python package, or through the OpenAI S...
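As a minimal sketch of the direct-HTTP route — using only the Python standard library rather than `requests` — the call boils down to a JSON POST against Ollama's documented `/api/generate` endpoint on its default port 11434 (the `llama3` model tag is just an example; use whatever model you have pulled):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="llama3"):
    """Build the JSON body the /api/generate endpoint expects.

    stream=False asks Ollama to return one complete JSON object
    instead of a stream of incremental chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="llama3"):
    """Send a prompt to a locally running Ollama server and return the reply text."""
    body = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=body,  # supplying data makes this a POST request
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With an Ollama server running locally, `generate("Why is the sky blue?")` returns the model's full reply as a string; setting `"stream": True` in the payload instead yields newline-delimited JSON chunks you can read incrementally.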
When it comes to running local LLMs with Ollama, two models come up in almost every conversation: Llama 3 from Meta and Mistral from Mistral AI. Both are excellent open-source models that run well on ...
Ollama and GPT4All are two of the most popular ways to run AI models locally — but they serve quite different audiences. This comparison covers everything you need to know to pick the right tool...
