Olmo2 7B and Mistral 7B: two viable LLM options for (Un)Perplexed Spready on low-spec HW
The Ollama platform provides a multitude of LLM models that you can use with the (Un)Perplexed Spready software, depending on your hardware constraints.
Our testing focused on which models deliver the best performance on old, low-spec hardware. Two models showed the best ratio of result quality to speed, making them ideal for low-grade hardware such as old office laptops: Mistral 7B (https://ollama.com/library/mistral) and Olmo2 7B (https://ollama.com/library/olmo2:7b).
Comparative Analysis of Mistral 7B and OLMo2 7B on the Ollama Platform
The rapid evolution of open-source large language models (LLMs) has created a dynamic landscape where models like Mistral 7B and OLMo2 7B compete for dominance in performance, efficiency, and accessibility. This report provides a comprehensive comparison of these two 7-billion-parameter models within the context of the Ollama platform, focusing on architectural innovations, benchmark performance, computational efficiency, and practical applications.
Architectural Innovations and Training Methodologies
Mistral 7B: Efficiency Through Attention Mechanisms
Mistral 7B, developed by Mistral AI, employs two key attention mechanisms to optimize performance. Grouped-query attention (GQA) reduces memory bandwidth requirements during inference by grouping queries, enabling faster token generation without sacrificing accuracy[1][3]. Sliding window attention (SWA) allows the model to process sequences of arbitrary length by focusing on a sliding window of tokens, effectively balancing computational cost and context retention[3][8]. These innovations enable Mistral 7B to outperform larger models like Llama 2 13B while maintaining lower hardware requirements[3][8].
The model was trained on 2 trillion tokens and fine-tuned on publicly available instruction datasets, resulting in strong generalization capabilities[3][13]. Its Apache 2.0 license ensures broad accessibility for both commercial and research use[3][8].
OLMo2 7B: Transparency and Staged Training
OLMo2 7B, released by the Allen Institute for AI, prioritizes full transparency by providing access to training data (Dolma 1.7), model weights, and training logs[4][10]. The model introduces a two-stage training process: an initial phase focused on data diversity and a subsequent phase emphasizing data quality through precise filtering[4][7]. This approach, combined with architectural refinements, enables OLMo2 7B to achieve a 24-point improvement on MMLU compared to its predecessor[4][10].
Key architectural upgrades include an expanded context window of 4,096 tokens (double Mistral’s 2,048) and optimized transformer layers that reduce memory usage during training[4][7]. The model’s training on up to 5 trillion tokens ensures robust performance across academic benchmarks, particularly in mathematical reasoning and world knowledge[5][10].
Performance Across Benchmark Categories
Commonsense Reasoning and Knowledge Retention
- Mistral 7B: Excels in commonsense reasoning tasks, outperforming Llama 2 13B by 15% on aggregated benchmarks like HellaSwag and ARC-Challenge[3][8]. However, its smaller parameter count limits knowledge compression, resulting in performance parity with Llama 2 13B on trivia-based benchmarks[3].
- OLMo2 7B: Demonstrates superior performance in knowledge-intensive tasks, scoring 52 on MMLU compared to Mistral’s 48.5[4][10]. This advantage stems from Dolma 1.7’s diverse data sources, including academic papers and curated web content[4][7].
Mathematical and Coding Proficiency
- Mistral 7B: Achieves 45.2% accuracy on GSM8K (8-shot) and approaches CodeLlama 7B’s performance on HumanEval, making it suitable for code-generation tasks[3][13].
- OLMo2 7B: Outperforms Llama 2 13B on GSM8K (52% vs. 48%) but lags behind Mistral in coding benchmarks due to less emphasis on code-specific datasets[4][10].
Instruction Following and Chat Optimization
- Mistral 7B Instruct: Fine-tuned for dialogue, this variant scores 7.6 on MT-Bench, surpassing all 7B chat models and matching 13B counterparts[3][8].
- OLMo2 7B-Instruct: While detailed benchmarks are scarce, early user reports indicate strong performance in structured output generation, though it requires explicit prompt engineering to match Mistral’s conversational fluidity[5][17].
Computational Efficiency and Hardware Requirements
Memory and Throughput
- Mistral 7B: Requires 8GB of RAM for baseline operation, generating ~90 tokens/second on an M1 MacBook Pro with 16GB RAM[6][15]. The GQA architecture reduces VRAM usage by 30% compared to standard attention mechanisms[3][8].
- OLMo2 7B: Demands 10GB of RAM due to its larger context window, achieving ~65 tokens/second on equivalent hardware[10][17]. However, its efficient gradient checkpointing allows training on consumer GPUs with 24GB VRAM[4][7].
Quantization Support
Both models support 4-bit quantization via Ollama:
- Mistral’s Q4_K_M variant maintains 98% of base model accuracy[1][14].
- OLMo2’s Q4_0 quantization shows a 5% drop in MMLU scores but remains viable for real-time applications[10][17].
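The memory savings from 4-bit quantization follow from simple arithmetic. A minimal sketch, assuming the roughly 4.5 bits per weight that Q4_K_M-style schemes average (the exact figure varies by scheme):

```python
# Back-of-envelope estimate of model size at different quantization levels.
# The bits-per-weight figures are approximate averages (assumption), not
# exact values for any specific Ollama build.

def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB for a model with n_params weights."""
    return n_params * bits_per_weight / 8 / 1e9

PARAMS_7B = 7.0e9

fp16 = approx_size_gb(PARAMS_7B, 16)   # unquantized half precision
q4 = approx_size_gb(PARAMS_7B, 4.5)    # ~4.5 bits/weight for 4-bit schemes

print(f"FP16: ~{fp16:.1f} GB, Q4: ~{q4:.1f} GB")  # → FP16: ~14.0 GB, Q4: ~3.9 GB
```

This roughly 3.5x reduction is what makes 7B models fit on 8GB office laptops at all.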
Practical Applications on Ollama
Deployment Workflows
- Mistral 7B:
  ollama run mistral
  curl -X POST http://localhost:11434/api/generate -d '{
    "model": "mistral",
    "prompt": "Explain quantum entanglement"
  }'
  Supports function calling via raw mode for API integrations[1][6].
- OLMo2 7B:
  ollama run olmo2:7b
  curl -X POST http://localhost:11434/api/generate -d '{
    "model": "olmo2:7b",
    "prompt": "Summarize the causes of the French Revolution"
  }'
  Requires explicit system prompts for optimal performance[7][10].
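The same /api/generate calls can be scripted instead of typed into curl. A minimal Python sketch that builds the request body (setting "stream" to false so the server returns one JSON object rather than a token stream); actually sending it assumes a local Ollama server is running:

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str, stream: bool = False) -> str:
    """Serialize a request body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

payload = build_generate_payload(
    "olmo2:7b", "Summarize the causes of the French Revolution")
print(payload)

# To actually send it (requires a running Ollama server):
# import urllib.request
# req = urllib.request.Request(
#     OLLAMA_URL, data=payload.encode(),
#     headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```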
Use Case Comparison
| Category | Mistral 7B Strengths | OLMo2 7B Advantages |
| --- | --- | --- |
| Real-time Chat | Lower latency, better dialogue flow | Higher factual accuracy |
| Code Generation | Near-CodeLlama performance | Limited code-specific optimization |
| Academic Research | Sufficient for most tasks | Superior in MMLU/STEM benchmarks |
| Hardware Constraints | Runs on 8GB RAM | Requires 10GB+ RAM for full context |
Community Reception and Ecosystem Support
Mistral 7B Adoption
- Ollama Integration: Downloaded 4.1 million times, with extensive community tutorials for M1/M2 deployment[6][15].
- Fine-tuning Ecosystem: Over 200 derivative models on Hugging Face, including MedLlama2 for medical QA[12][14].
OLMo2 7B Research Impact
- Transparency Push: Full training data release has enabled 50+ academic papers analyzing data biases[9][18].
- Benchmark Contributions: Introduced OLMES evaluation framework, providing granular metrics for model comparison[5][10].
Conclusion and Recommendations
Mistral 7B and OLMo2 7B represent divergent philosophies in LLM development—the former prioritizing real-world efficiency, the latter emphasizing academic rigor and transparency. For Ollama users:
- Choose Mistral 7B for:
- Low-latency chat applications
- Code-assisted development
- Hardware-constrained environments
- Opt for OLMo2 7B when:
- Factual accuracy in STEM domains is critical
- Research reproducibility matters
- Longer context windows (4K tokens) are required
Future developments may narrow these gaps, but as of March 2025, this dichotomy persists, offering users complementary tools depending on their specific needs[8][10][15].
Citations:
[1] https://ollama.com/library/mistral
[2] https://ollama.com/library/llama2:7b
[3] https://mistral.ai/news/announcing-mistral-7b
[4] https://allenai.org/blog/olmo-1-7-7b-a-24-point-improvement-on-mmlu-92b43f7d269d
[5] https://www.youtube.com/watch?v=aVubNJ-e7sw
[6] https://wandb.ai/byyoung3/ml-news/reports/How-to-Run-Mistral-7B-on-an-M1-Mac-With-Ollama--Vmlldzo2MTg4MjA0
[7] https://ollama.com/library/olmo2:7b/blobs/803b5adc3448
[8] https://www.e2enetworks.com/blog/mistral-7b-vs-llama2-which-performs-better-and-why
[9] https://www.reddit.com/r/LocalLLaMA/comments/1agd78d/olmo_open_language_model/
[10] https://ollama.com/library/olmo2:7b
[11] https://mybyways.com/blog/a-game-with-mistral-7b-using-ollama
[12] https://ollama.com/library/medllama2:7b
[13] https://www.promptingguide.ai/models/mistral-7b
[14] https://ollama.com/models
[15] https://news.ycombinator.com/item?id=42877860
[16] https://ollama.com/library
[17] https://ollama.com/darkmoon/olmo:7B-instruct-q6-k
[18] https://github.com/ollama/ollama/issues/2337
[19] https://ollama.com/library/mistral:7b
[20] https://ollama.com/library/mistral-openorca:7b
[21] https://www.reddit.com/r/ollama/comments/1hiqs9r/comparison_llama_32_vs_gemma_2_vs_mistral/
[22] https://patloeber.com/typing-assistant-llm/
[23] https://ollama.com/library/llama2:7b/blobs/8934d96d3f08
[24] https://ollama.com/spooknik/hermes-2-pro-mistral-7b
[25] https://ollama.com/library/mistral:7b-instruct-q5_K_S/blobs/ed11eda7790d
[26] https://ollama.com/library/wizardlm2:7b
[27] https://news.ycombinator.com/item?id=39451236
[28] https://ollama.com/cas/nous-hermes-2-mistral-7b-dpo
[29] https://github.com/ollama/ollama/issues/6960
[30] https://www.datacamp.com/blog/top-small-language-models
[31] https://github.com/ollama/ollama/issues/7863
[32] https://cheatsheet.md/llm-leaderboard/best-open-source-llm
[33] https://www.restack.io/p/lm-studio-vs-ollama-answer-ai-development-trends
[34] https://www.reddit.com/r/LocalLLaMA/comments/1fmcnpy/olmoe_7b_is_fast_on_lowend_gpu_and_cpu/
[35] https://allenai.org/olmo
[36] https://news.ycombinator.com/item?id=39223467
Get Started!
Join the revolution today. Let (Un)Perplexed Spready free you from manual data crunching and unlock the full potential of AI—right inside your spreadsheet. Whether you're a business analyst, a researcher, or just an enthusiast, our powerful integration will change the way you work with data.
You can find more practical information on how to set up and use the (Un)Perplexed Spready software here: Using (Un)Perplexed Spready
Download
Download the (Un)Perplexed Spready software: Download (Un)Perplexed Spready
Request Free Evaluation Period
When you run the application, you will be presented with the About form, where you will find an automatically generated Machine Code for your computer. Send us an email specifying your machine code and ask for a trial license. We will send you a trial license key that will unlock the premium AI functions for a limited time period.
Contact us at the following email:
Sales Contact
Purchase commercial license
For the price of two beers a month, you can have a faithful co-worker (the AI-whispering spreadsheet software) doing the hard work while you drink your coffee!
You can purchase the commercial license here: Purchase License for (Un)Perplexed Spready
Further Reading
Leveraging AI on Low-Spec Computers: A Guide to Ollama Models for (Un)Perplexed Spready
Download (Un)Perplexed Spready
Purchase License for (Un)Perplexed Spready
Leveraging AI on Low-Spec Computers: A Guide to Ollama Models for (Un)Perplexed Spready
The era of AI-powered productivity has arrived, but not everyone has access to high-performance computing hardware. For those working with standard office laptops or low-spec computers, accessing advanced AI capabilities might seem out of reach. This comprehensive guide explores how you can harness the power of AI locally through Ollama models and integrate them with (Un)Perplexed Spready – an innovative spreadsheet tool designed for AI-assisted data analysis.
The Promise of Local AI on Standard Hardware
Artificial intelligence has transformed from a niche technology to an essential productivity tool. While cloud-based AI services like ChatGPT have garnered widespread attention, running AI models locally offers distinct advantages: enhanced privacy, no subscription costs, offline capability, and complete control over your data. However, resource constraints on typical office computers present challenges that require strategic model selection and optimization.
Understanding Ollama and Its Importance for Local AI
Ollama is an open-source platform that simplifies running large language models (LLMs) locally on personal computers. It serves as a user-friendly bridge between complex AI technology and everyday users, handling the technical aspects of model management and inference. Rather than requiring specialized knowledge, Ollama allows anyone to download, install, and interact with various AI models through simple commands.
For office laptop users, Ollama represents a practical pathway to AI capabilities without expensive hardware upgrades. The platform supports numerous models of varying sizes and specializations, including several specifically optimized for lower-resource environments.
Best Ollama Models for Low-Spec Office Computers
When selecting an AI model for a standard office laptop, balancing capability with resource efficiency becomes crucial. Based on extensive research and real-world testing, these models emerge as top contenders for low-spec hardware:
Mistral 7B: The Balanced Performer
Mistral 7B represents an excellent middle ground between resource efficiency and general-purpose capability. Though larger than DeepScaleR at 7 billion parameters, it's carefully optimized for faster inference on consumer hardware. Mistral models are particularly well-regarded for their text generation quality and versatility across diverse tasks[1].
On typical office laptops, Mistral 7B can achieve usable performance, though response speeds will vary significantly based on hardware specifications. Users report speeds ranging from 3-70 tokens per second depending on hardware configuration[11]. Importantly, Mistral 7B can run on systems with 8GB of RAM, making it accessible to a wide range of office computers.
Olmo2 7B: Another Balanced Performer
OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.
In our tests it showed performance similar to Mistral 7B, with subjectively slightly better results on formula calculations. This model might become our favorite!
DeepScaleR 1.5B: The Lightweight Champion
DeepScaleR 1.5B stands out as a resource-efficient option available through Ollama. With only 1.5 billion parameters, it's specifically designed for efficient computation while maintaining impressive performance. This model achieves 43.1% Pass@1 accuracy on mathematical benchmarks like AIME 2024, surpassing many larger models including some of OpenAI's offerings[8].
What makes DeepScaleR particularly suited for low-spec computers is its optimization for long-context processing coupled with minimal resource requirements. For spreadsheet calculations, which often involve structured data and defined parameters, DeepScaleR delivers remarkable efficiency without overwhelming system resources.
Our tests, however, showed that it is considerably slower in execution than Mistral 7B or Olmo2 7B.
DeepSeek-R1 7B: The Reasoning Specialist
DeepSeek-R1 7B is engineered specifically for reasoning tasks, mathematics, and code operations. Designed as a dense model with 7 billion parameters, it excels in scenarios requiring logical analysis and structured thinking – precisely the kind of tasks often encountered in spreadsheet work[1].
While requiring similar resources to Mistral 7B, DeepSeek-R1 focuses its capabilities on analytical reasoning rather than general-purpose text generation. For office users primarily concerned with data analysis and formula calculations, this specialization can deliver superior results despite resource limitations.
Orca-mini: The Entry-Level Option
For the most severely resource-constrained systems, Orca-mini models provide a viable entry point. Available in 3B, 7B, and 13B parameter sizes, the smallest 3B variant can run on systems with minimal RAM. While performance will be noticeably limited compared to larger models, Orca-mini can handle basic queries and simple analytical tasks[14].
The model's architecture is based on Llama and trained on datasets derived from GPT-4 explanation traces, giving it reasonable capability despite its compact size. For users with older office laptops (4GB RAM), Orca-mini represents perhaps the only viable option for local AI deployment.
Phi3 and Phi3.5 models: Another Entry-Level Option
Phi3 and Phi3.5 can be considered as further entry-level options, but our tests showed poor result quality, so we don't recommend them.
Our Verdict and Recommendation
Our tests on old hardware showed the best results with Mistral 7B and Olmo2 7B. If we had to choose only one model, our pick would be Olmo2 7B: it gave us good result quality paired with fast execution.
More in-depth comparison of the two models: Comparative Analysis of Mistral 7B and OLMo2 7B on the Ollama Platform
Hardware Considerations for Running AI Models Locally
Understanding your hardware capabilities is crucial for setting realistic expectations about AI model performance. Here are the key factors that influence how effectively Ollama models will run on your machine:
Memory Requirements
RAM availability represents the most significant constraint for running local AI models. As a general guideline:
- 3B parameter models typically require 4-6GB of RAM
- 7B models generally need at least 8GB of RAM
- 13B models require approximately 16GB of RAM
- Larger models (70B+) demand 48GB or more[14]
Without sufficient RAM, models either won't load at all or will experience severe performance degradation due to memory swapping.
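These guideline figures can be approximated from first principles: quantized weights plus runtime overhead. A rough sketch, assuming ~4.5 bits per weight and a ~60% overhead factor for the KV cache and runtime (both factors are assumptions, not measured values):

```python
def estimated_ram_gb(n_params_b: float, bits_per_weight: float = 4.5) -> float:
    """Very rough RAM estimate for a quantized model:
    weight storage plus ~60% overhead for KV cache and runtime."""
    weights_gb = n_params_b * bits_per_weight / 8  # billions of params -> GB
    return weights_gb * 1.6

for size in (3, 7, 13):
    print(f"{size}B parameters: ~{estimated_ram_gb(size):.1f} GB RAM")
```

The outputs land in the same ballpark as the guideline above (a few GB for 3B, under 8GB for 7B, over 10GB for 13B), which is why 8GB of system RAM is the practical floor for 7B models.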
Processing Power: CPU vs. GPU
While any modern multi-core CPU can technically run these models, performance varies dramatically. Without a GPU, the CPU handles all computation, significantly impacting response time. One user testing Mistral on a 2017 laptop without a dedicated GPU reported response streaming at approximately "one character every four seconds"[2].
For optimal performance, a dedicated NVIDIA GPU with CUDA support provides dramatic acceleration. However, most office laptops lack discrete graphics cards. In these cases, model choice and optimization become even more critical.
The Reality of CPU-Only Operation
Running Ollama on systems without dedicated GPUs is possible but comes with performance limitations. Tests on a Raspberry Pi 5 achieved approximately 3 tokens per second with DeepSeek-R1 7B[11], while a mid-range laptop with an i5 1240P processor managed about 6.44 tokens per second with Qwen 7B[11].
While these speeds are significantly slower than cloud-based alternatives, they remain useful for non-time-sensitive tasks. The tradeoff between processing speed and privacy/cost considerations is one each user must evaluate based on their specific needs.
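To put these token rates in perspective, a one-line calculation converts them into wall-clock response times; the 150-token answer length is an assumption for a typical short reply:

```python
def response_time_s(n_tokens: int, tokens_per_s: float) -> float:
    """Seconds to stream a complete response at a given generation speed."""
    return n_tokens / tokens_per_s

# A ~150-token answer at the speeds reported above:
for tps in (3, 6.44, 90):
    print(f"{tps} tok/s -> {response_time_s(150, tps):.0f} s")
```

So the same short answer that takes under two seconds on an M1 MacBook takes nearly a minute on a Raspberry Pi 5: usable, but only for non-interactive workloads.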
Introducing (Un)Perplexed Spready: AI-Powered Spreadsheet Revolution
(Un)Perplexed Spready represents a groundbreaking approach to integrating AI capabilities directly within spreadsheet workflows. This innovative tool allows users to leverage language models through custom spreadsheet formulas, transforming how unstructured data can be processed and analyzed[1].
The software's tagline effectively captures its value proposition: "The AI-Powered Spreadsheet Revolution." Its primary appeal lies in addressing a common frustration: the tedious manual work often required to extract meaningful insights from unstructured data in spreadsheets[1]. Rather than processing information row by row, (Un)Perplexed Spready allows AI models to understand context and meaning, automating tasks that previously required human intervention.
Core AI Integration Features
At the heart of (Un)Perplexed Spready are three sets of custom functions that connect spreadsheets directly to AI capabilities:
- PERPLEXITY Functions: These connect to commercial-grade AI through Perplexity.ai's API, offering high-quality responses for market analysis, data synthesis, and more.
- ASK_LOCAL Functions: Perhaps most relevant for office laptop users, these functions allow direct interaction with locally installed Ollama models. This creates a completely self-contained AI workflow without external dependencies or API fees.
- ASK_REMOTE Functions: Currently available in demonstration mode, these functions access remotely hosted AI models, balancing local resource limitations with external processing power[1].
Each function type is available in three variants (1, 2, or 3 inputs), allowing for flexible data processing across different spreadsheet scenarios.
Practical Setup: Connecting (Un)Perplexed Spready with Ollama
Implementing this AI-enhanced spreadsheet workflow involves several straightforward steps:
1. Install Ollama
Begin by downloading and installing Ollama from their official website (ollama.com). The platform is available for Windows, macOS, and Linux, with installation typically requiring just a few clicks or commands.
2. Select and Install an Appropriate Model
Based on your hardware specifications, choose one of the recommended models. For most office laptops, DeepScaleR 1.5B or Mistral 7B represents a good starting point. Install your chosen model with a simple command:
ollama run olmo2:7b
or
ollama run mistral
or
ollama run deepscaler
During first execution, Ollama automatically downloads the model if it isn't already installed.
3. Download and Setup (Un)Perplexed Spready
Obtain (Un)Perplexed Spready from the developer's website (matasoft.hr). While basic spreadsheet functionality is free, utilizing the AI-powered functions requires a commercial license. A free evaluation period is available by contacting the developers[1].
4. Create AI-Powered Spreadsheet Formulas
Once both applications are installed and running, you can begin using the ASK_LOCAL functions within your spreadsheets. These work similarly to standard formulas but include instructions for the AI model. Examples include:
=ASK_LOCAL1(A2, "From product description extract the product measure (mass, volume, size etc.) and express it in S.I. units")
=ASK_LOCAL2(A2, B2, "Are two products having same color?")
=ASK_LOCAL3(C2, D2, E2, "Classify Input1 and Input2 by Input3.")
The AI model processes the cell content according to your instructions and returns results directly in the spreadsheet[1].
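Under the hood, a multi-input function presumably combines the cell values and the instruction into a single prompt for the local model. A purely hypothetical sketch of how an ASK_LOCAL2 call might be assembled; the template and function name are illustrative, not (Un)Perplexed Spready's actual code:

```python
# Hypothetical prompt assembly for a two-input ASK_LOCAL2 call.
# This is an illustration of the concept, not the software's real logic.

def ask_local2_prompt(input1: str, input2: str, instruction: str) -> str:
    """Combine two cell values and an instruction into one model prompt."""
    return (
        f"Input1: {input1}\n"
        f"Input2: {input2}\n"
        f"Task: {instruction}\n"
        "Answer concisely."
    )

print(ask_local2_prompt("red T-shirt", "crimson polo shirt",
                        "Are two products having same color?"))
```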
Optimizing Performance on Low-Spec Hardware
Achieving usable performance on standard office laptops requires strategic optimization. Consider these practical approaches:
Model Selection Strategy
For spreadsheet formula calculations specifically, DeepScaleR 1.5B emerges as the optimal choice for low-spec hardware. Its smaller size and optimization for mathematical reasoning make it particularly well-suited for spreadsheet operations while requiring minimal resources[8]. For systems with at least 8GB of RAM, Mistral 7B offers greater versatility while maintaining reasonable performance.
Resource Management Techniques
Several approaches can help maximize performance on limited hardware:
- Close unnecessary applications when running AI-powered spreadsheet formulas
- Process data in smaller batches rather than entire datasets simultaneously
- Simplify prompts to focus on specific analytical tasks
- Utilize quantized models (identified by "q4" in their names) for improved efficiency
- Schedule resource-intensive AI tasks during periods of lower computer usage
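The batching suggestion above is straightforward to implement: chunk the rows before sending them through the model, so a slow local LLM never sits behind one huge queue. A minimal sketch (the batch size and row data are illustrative):

```python
from typing import Iterator, List

def batched(rows: List[str], batch_size: int) -> Iterator[List[str]]:
    """Yield consecutive slices of `rows`, each at most `batch_size` long."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

rows = [f"row {n}" for n in range(10)]
for batch in batched(rows, 4):
    # each batch would be sent to the local model here
    print(batch)
```

Processing in batches also makes it easy to pause between chunks, inspect intermediate results, and resume after an interruption.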
Performance Expectations by Hardware Category
Based on real-world testing across different hardware configurations, here's what users can expect:
- Entry-level office laptops (4GB RAM, older i3/i5 processors):
- Will struggle with most 7B models
- Can potentially run DeepScaleR 1.5B at usable speeds
- Best suited for simpler, non-time-sensitive analysis tasks
- Consider using ASK_REMOTE as an alternative to local processing
- Mid-range office laptops (8GB RAM, recent i5/i7 processors):
- Can run Mistral 7B and Olmo2 7B at acceptable speeds (approximately 5-10 tokens per second)
- DeepSeek-R1 7B viable for specialized reasoning tasks
- Suitable for regular use with properly optimized prompts
- May experience slowdowns with complex or lengthy operations
- Higher-end office laptops (16GB RAM, current-gen processors):
- Can comfortably run any 7B model at good speeds
- May handle 13B models at slower but usable rates
- Suitable for daily integration into spreadsheet workflows
- Consider multiple models for different specialized tasks
Case Studies: Real-World Performance
While comprehensive benchmarks for every combination of model and hardware aren't available, several real-world examples provide useful reference points:
- Intel i5-7200U CPU @ 2.50GHz, 4GB RAM (2017 Lenovo Yoga):
- Successfully ran TinyLlama but at extremely slow speeds (approximately one character every four seconds)
- Larger models like Mistral became impractical on this hardware[2]
- Raspberry Pi 5 (overclocked to 2800MHz):
- Achieved approximately 3 tokens per second with DeepSeek-R1 7B
- Demonstrates viability even on very modest hardware[11]
- Intel i5 1240P laptop without dedicated GPU:
- Achieved 6.44 tokens per second with Qwen 7B
- Found larger 14B models "excruciatingly slow"[11]
- Intel N100 mini PC:
- Considered minimally viable for running smaller models
- Users report it may "fall short" for continuous LLM usage[7]
These examples illustrate that while performance varies dramatically across hardware configurations, useful functionality remains possible even on modest systems with appropriate model selection.
Future-Proofing: The Evolving Landscape of Local AI
The field of local AI deployment is advancing rapidly. Just two years ago, running any significant AI model locally seemed impossible on consumer hardware. Today, even older laptops can process smaller models, and the trajectory suggests continued improvement[3].
For office laptop users, several developments are particularly promising:
- Increasingly efficient models: Researchers continue to develop more efficient architectures that deliver better performance with fewer resources.
- Improved quantization techniques: Methods for reducing model precision without sacrificing significant capability continue to advance.
- Specialized hardware acceleration: Even integrated graphics chips are increasingly optimized for AI workloads.
- Task-specific models: Rather than general-purpose models, specialized versions focused on specific tasks (like spreadsheet calculations) may offer better performance at smaller sizes.
Conclusion: Practical Recommendations for Office Laptop Users
For those looking to implement AI-powered spreadsheet workflows on standard office hardware, these practical recommendations will help maximize success:
- Start with Mistral 7B or Olmo2 7B as a general-purpose model if your system has at least 8GB of RAM and you need versatility beyond mathematical operations[1].
- Experiment with DeepScaleR 1.5B for spreadsheet formula calculations on low-spec hardware. Its efficiency and mathematical reasoning capabilities make it ideal for this specific use case[8].
- Design clear, specific prompts for your ASK_LOCAL formulas. Well-crafted instructions improve both response quality and processing speed.
- Implement progressive adoption, starting with simple analytical tasks and gradually expanding as you understand your hardware's capabilities.
- Explore hybrid approaches when appropriate, using local models for sensitive data and potentially leveraging cloud services for more intensive operations.
The integration of Ollama with (Un)Perplexed Spready represents a significant advancement in making AI accessible to everyday users with standard hardware. While performance limitations exist, the ability to leverage AI capabilities locally without specialized equipment opens new possibilities for data analysis and productivity enhancement. As both hardware and software continue to evolve, these capabilities will only improve, making this an ideal time to begin exploring the potential of local AI in your spreadsheet workflows.
References
- (Un)Perplexed Spready: The AI-Powered Spreadsheet Revolution[1]
- Running Ollama without a GPU[2]
- I can now run a GPT-4 class model on my laptop[3]
- Best GPU VPS for Ollama: GPUMart's RTX A4000 VPS[4]
- Running LLMs on Ollama with an RTX 3060 Ti GPU Server[6]
- Future-proofing HA with local LLMs: Best compact, low-power hardware[7]
- deepscaler - Ollama[8]
- Laptop for ollama - Reddit[9]
- Which Ollama Model Is Best For YOU? - YouTube[10]
- Running Deepseek-r1 7b distilled model locally in a PC with no GPU[11]
- Minimum spec for ollama with llama 3.2 3B - LowEndTalk[12]
- Run Generative AI models on your Laptop with Ollama[13]
- orca-mini - Ollama[14]
Citations:
[1] https://matasoft.hr/qtrendcontrol/index.php/un-perplexed-spready
[2] https://www.seanmcp.com/articles/running-ollama-without-a-gpu/
[3] https://simonwillison.net/2024/Dec/9/llama-33-70b/
[4] https://www.gpu-mart.com/blog/best-gpu-vps-for-ollama
[5] https://matasoft.hr/qtrendcontrol/index.php/un-perplexed-spready
[6] https://www.databasemart.com/blog/ollama-gpu-benchmark-rtx3060ti
[7] https://community.home-assistant.io/t/future-proofing-ha-with-local-llms-best-compact-low-power-hardware/790393
[8] https://ollama.com/library/deepscaler
[9] https://www.reddit.com/r/ollama/comments/1byuwq6/laptop_for_ollama/
[10] https://www.youtube.com/watch?v=FQTorLqMyMU
[11] https://www.reddit.com/r/ollama/comments/1i9smk3/running_deepseekr1_7b_distilled_model_locally_in/
[12] https://lowendtalk.com/discussion/201172/minimum-spec-for-ollama-with-llama-3-2-3b
[13] https://www.handsonarchitect.com/2024/09/run-generative-ai-models-on-your-laptop.html
[14] https://ollama.com/library/orca-mini
[15] https://www.youtube.com/watch?v=69Bd3TEiPnk
[16] https://ollama.com/library/llama2
[17] https://dev.to/shayy/run-deepseek-locally-on-your-laptop-37hl
[18] https://ollama.com/library/mistral-small
[19] https://ollama.com
[20] https://ollama.com/library/phi3:mini-4k
[21] https://www.reddit.com/r/LocalLLaMA/comments/14q5n5c/any_option_for_a_low_end_pc/
[22] https://www.freecodecamp.org/news/how-to-run-open-source-llms-on-your-own-computer-using-ollama/
[23] https://ollama.com/models
[24] https://github.com/ollama/ollama/issues/2860
[25] https://github.com/ollama/ollama/issues/6008
[26] https://buttondown.com/ainews/archive/ainews-deepseek-v2-beats-mixtral-8x22b/
[27] https://www.gpu-mart.com/blog/run-llms-with-ollama
[28] https://www.youtube.com/watch?v=UiyVf-McEaQ
[29] https://discuss.huggingface.co/t/recommended-hardware-for-running-llms-locally/66029
[30] https://ollama.com/library
[31] https://www.tomsguide.com/ai/ollama-just-made-it-easier-to-use-ai-on-your-laptop-with-no-internet-required
[32] https://www.youtube.com/watch?v=NAoE_cYElCk
Further Reading
Comparative Analysis of Mistral 7B and OLMo2 7B on the Ollama Platform
Download (Un)Perplexed Spready
Purchase License for (Un)Perplexed Spready
(Un)Perplexed Spready – The AI-Powered Spreadsheet Revolution
Enjoy your life and drink your coffee, while the AI is working for you! It was time, after all, don't you think you deserve it?
Why Choose (Un)Perplexed Spready?
- Revolutionary Integration: Get the best of spreadsheets and AI combined, all in one intuitive interface.
- Market-Leading Flexibility: Choose between Perplexity AI, locally installed LLM models, or even our free remote model hosted on our server—whichever suits your needs.
- User-Centric Design: With familiar spreadsheet features seamlessly merged with AI capabilities, your productivity is bound to soar.
- It's powerful: only your imagination and HW specs limit what you can do with the help of AI! If data is the new gold, then (Un)Perplexed Spready is your miner!
- It's fun: sure, it is fun to drink coffee and scroll the news while the AI is doing the hard job. Drink your coffee and let the AI work for you!
Get Started!
Join the revolution today. Let (Un)Perplexed Spready free you from manual data crunching and unlock the full potential of AI—right inside your spreadsheet. Whether you're a business analyst, a researcher, or just an enthusiast, our powerful integration will change the way you work with data.
Download
You can download Windows or Linux executables from here: Download (Un)Perplexed Spready
Request Free Evaluation Period
When you run the application, you will be presented with the About form, where you will find an automatically generated Machine Code for your computer. Send us an email specifying your Machine Code and ask for a trial license. We will send you a trial license key that will unlock the premium AI functions for a limited time period.
Contact us on following email:
Sales Contact
Purchase commercial license
While regular spreadsheet functionalities are completely free, advanced AI-driven functions require activation of commercial license.
For the price of two beers a month, you can have a faithful co-worker, that is, the AI-whispering spreadsheet software, to do the hard work while you drink your coffee!
You can purchase the commercial license here:
a) 1 (One) Month Subscription
1 (One) Month Subscription to (Un)Perplexed Spready
b) 1 (One) Year Subscription
1 (One) Year Subscription to (Un)Perplexed Spready
The commercial license is bound to one physical user working on one physical computer. Therefore, during the license ordering process you will need to provide the "Machine Code" presented in the "About" form, so that we can generate a personalized license key for you.
(Un)Perplexed Spready: The AI-Powered Spreadsheet Revolution
Have you ever found yourself despairing over the tedious job of extracting information from some huge, boring spreadsheet... thinking how nice it would be if you had somebody else to do it for you? Cursing your life while trying to make some sense of unstructured, messy data? Crawling at a snail's pace through the rows of the table, wondering whether you will finish before the end of the universe, or whether you will still be at it inside a black hole beyond the horizon of space-time?
Yes, we know that feeling. Sometimes there is just no spreadsheet function, no formula to do the job automatically, no automated way of extracting useful information from a messy dataset... only a human can do it, manually. Row by row, until it is done. No machine can understand the meaning of unorganized textual data, no spreadsheet formula can help you manage such data... no machine... wait!
Is that really true anymore? I mean, today, in the age when we are witnessing the rise of Artificial Intelligence everywhere? Would you go around the world on foot or by plane? Would you till the land by hand or with a tractor? Would you ride a horse or drive a car to visit your mother far away?
No, my friends, there is no reason to despair anymore, no reason to do the hard, tedious job of manual data extraction and categorization in spreadsheets, today when we have advanced AI (Artificial Intelligence) models available. But how? How can you make that funny, clever AI chat-bot do something useful instead of babbling nonsense?
Well, let us present a solution - the (UN)PERPLEXED SPREADY, a spreadsheet software that whispers to Artificial Intelligence, freeing you from the hardship of manual spreadsheet work!
Enjoy your life and drink your coffee, while the AI is working for you! It was time, after all, don't you think you deserve it?
Harness the power of advanced language models (LLM) directly within your spreadsheets!
Imagine opening your favorite spreadsheet and having the power of cutting-edge AI models at your fingertips—right inside your cells. (Un)Perplexed Spready is not your average spreadsheet software. It’s a next-generation tool that integrates state-of-the-art language models into its custom formulas, letting you perform advanced data analysis, generate insights, and even craft intelligent responses, all while you go about your daily work (...or just drink your coffee).
This isn’t sci-fi. This is (Un)Perplexed Spready—the spreadsheet software that laughs at the limits of Excel, Google Sheets, or anything you’ve used before.
“But How Is This Even Possible?”
Meet Your New AI Co-Pilots:
- PERPLEXITY functions: Tap into commercial-grade AI (Perplexity.ai) for market analysis, real-time data synthesis, or competitive intelligence.
- ASK_LOCAL formulas: Run free, private AI models (via Ollama) directly on your machine. Mistral, DeepSeek, Llama2—your choice. Crunch data offline, no API fees, no delays.
- ASK_REMOTE (demo mode): Test ideas on our remotely hosted model, intended for demo and testing rather than production use.
What Makes (Un)Perplexed Spready Unique?
Direct AI Model Integration:
At the heart of (Un)Perplexed Spready is its innovative support for custom functions such as:
- PERPLEXITY: Call top-tier commercial AI directly (via Perplexity AI’s API)
- ASK_LOCAL: Query locally installed AI models through the Ollama platform
- ASK_REMOTE: Access a remotely hosted AI model for on-the-fly analysis
With these functions, you can simply enter a formula like:
=PERPLEXITY1(A2, "What is the category of this product? Choose between following categories: meat, fruits, vegetables, bakery, dairy, others.")
or
=ASK_LOCAL1(A2, "From product description extract the product measure (mass, volume, size etc.) and express it in S.I. units")
or
=ASK_REMOTE1(A2, "What is aggregate phase of the product (solid, liquid, gas)?")
or
=ASK_LOCAL2(A2, B2, "Are two products having same color?")
or
=ASK_LOCAL2(A2, B2, "In what kind of packaging container is the product packed? Example of containers are: box, bottle, pouch, can, canister, bag, etc.")
or
=PERPLEXITY2(A2, B2, "What are common characteristics of the two products?")
or
=PERPLEXITY3(A2, B2, C2, "How can we achieve Input1 from Input2, by utilizing Input3?")
or
=ASK_LOCAL3(C2, D2, E2, "Classify Input1 and Input2 by Input3.")
— and see powerful AI-generated output appear instantly in your spreadsheet!
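To make the formula pattern concrete, here is a hypothetical sketch (our own illustration, not the actual (Un)Perplexed Spready implementation) of how a multi-input formula such as =ASK_LOCAL3(C2, D2, E2, "Classify Input1 and Input2 by Input3.") could combine its cell values with the instruction into a single prompt for the model. The `build_prompt` helper and the exact "Input1/Input2" labeling are assumptions that mirror the naming convention used in the example formulas above.

```python
def build_prompt(instruction: str, *cell_values: str) -> str:
    """Label each cell value Input1..InputN, then append the instruction.

    Hypothetical helper: sketches how a spreadsheet formula's cell
    arguments might be merged with the user's instruction text.
    """
    lines = [f"Input{i}: {value}" for i, value in enumerate(cell_values, start=1)]
    lines.append(f"Task: {instruction}")
    return "\n".join(lines)

# Example: the two-input variant, with two cells of product data.
prompt = build_prompt(
    "Are the two products the same color?",
    "Red apple, 1 kg bag",
    "Green pear, 0.5 kg bag",
)
print(prompt)
```

The resulting text block is what the locally installed model would ultimately receive as its prompt, with the AI's answer written back into the calling cell.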
Seamless Experience:
(Un)Perplexed Spready is designed with both beginners and advanced users in mind. Its user-friendly interface incorporates familiar spreadsheet features—such as cell formatting, copy/paste, auto-updating row heights, and advanced sorting—as well as innovative AI integration that brings a new level of interactivity. Every calculation, from basic arithmetic to AI-driven natural language queries, is processed swiftly and accurately.
Free and paid features
Basic spreadsheet functionality is completely free, but utilizing the advanced AI-driven functions requires purchasing and activating a commercial license, for a small fee.
Intelligent Licensing for Premium Features:
Only the premium AI-driven functions require a commercial license. While all basic spreadsheet features remain free, access to the advanced AI-driven formulas PERPLEXITY, ASK_LOCAL and ASK_REMOTE requires purchasing a commercial license.
When you launch (Un)Perplexed Spready, your license status is automatically verified—updating the main interface with clear feedback. For example, if your license is active, you’ll see “License: VALID” and a detailed status report; if not, you’ll be notified immediately that the premium features are locked. You will still be able to use the program as a regular free spreadsheet. Once you need the advanced AI formulas, you can purchase a license, for the price of two beers a month.
Once the license is validated, you can start using AI-powered functions in your formulas, whether you’re analyzing large datasets or crafting creative reports.
Value Proposition and Business Use Cases
You can find more in-depth analysis of the potential use cases and benefits of (Un)Perplexed Spready in the following articles:
How It Works
Install or Integrate Advanced Artificial Intelligence (AI) LLM models into your spreadsheet
"(Un)Perplexed Spready" is a peculiar spreadsheet software introducing possibility to utilize advanced LLM AI models directly inside custom formulas. Isn't that cool?Imagine that - powerful AI models doing something useful for change, not only entartaining you in chat! We say: yeah, let AI do the hard work, while you drink your coffee and enjoy your life!
Currently, the following AI platforms are available via API:
1. Perplexity AI (https://www.perplexity.ai/) - a well known major commercial AI provider
2. Remotely hosted free Ollama-based AI model (hosted on our server)
Currently we run Mistral:7b on our own server.
3. Locally installed free AI models (available via the Ollama platform at https://ollama.com/)
Of the three options, the most exciting is the third, which enables you to install any model available on the Ollama platform (https://ollama.com/), run it on your local computer, and immediately use it inside spreadsheets via our (Un)Perplexed Spready software!
Isn't that cool? Oh, yes it is! The only limit is your HW. Fortunately, these days Ollama provides models such as Mistral 7B, DeepScaler 1.5B, DeepSeek-R1 7B and Llama 2, which can run even on low-spec hardware, such as an older office laptop. Choose from the multitude of available models here: https://ollama.com/search
Find more detailed recommendations for choosing appropriate LLM model for low-spec office laptops here: Leveraging AI on Low-Spec Computers: A Guide to Ollama Models for (Un)Perplexed Spready
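Before using the ASK_LOCAL formulas, it helps to confirm that a local Ollama server is actually running and has at least one model installed. The helper below is our own sketch (not part of (Un)Perplexed Spready): it queries Ollama's public REST API, which listens on http://localhost:11434 by default, where GET /api/tags lists the installed models. If nothing is installed yet, pull a model from a terminal first, e.g. `ollama pull mistral`.

```python
import json
import urllib.error
import urllib.request

def list_local_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Return installed Ollama model names, or [] if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        # /api/tags responds with {"models": [{"name": "mistral:latest", ...}, ...]}
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []

models = list_local_models()
if models:
    print("Ollama is running; installed models:", ", ".join(models))
else:
    print("No Ollama server found on localhost:11434 (or no models installed).")
```

Returning an empty list instead of raising keeps the check usable in scripts that merely want to know whether the local-AI option is available.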
Embed Advanced Artificial Intelligence (AI) LLM models into your spreadsheet formulas
(Un)Perplexed Spready provides three corresponding sets of formulas for these three AI options, each in three variants that differ in the number of cell input parameters, i.e. the number of inputs for the AI prompt.
1. Functions working with Perplexity AI
=PERPLEXITY1 (A2, "Some instruction to AI...")
=PERPLEXITY2 (A2, B2, "Some instruction to AI...")
=PERPLEXITY3 (A2, B2, C2, "Some instruction to AI...")
2. Functions working with a locally installed AI model (via the Ollama platform)
=ASK_LOCAL1 (A2, "Some instruction to AI...")
=ASK_LOCAL2 (A2, B2, "Some instruction to AI...")
=ASK_LOCAL3 (A2, B2, C2, "Some instruction to AI...")
3. Functions working with a remotely hosted AI model (hosted on our server)
=ASK_REMOTE1 (A2, "Some instruction to AI...")
=ASK_REMOTE2 (A2, B2, "Some instruction to AI...")
=ASK_REMOTE3 (A2, B2, C2, "Some instruction to AI...")
Notice: Option 3, with the AI model hosted on our server, is at this stage intended only for demo and testing purposes, not for real production use.
The hosting server has low hardware specifications, so calculation is very slow.
We recommend installing your own free AI model from the Ollama platform, according to your HW specs. This option gives you the highest flexibility at the lowest cost. Of course, if you wish to have enterprise-level accuracy, then choose Perplexity AI for superb speed and high-quality AI-driven answers.
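Under the hood, an ASK_LOCAL-style call conceptually boils down to a single POST against the locally running Ollama server. The sketch below is our own illustration of Ollama's public /api/generate endpoint, not (Un)Perplexed Spready's actual internals; the model name "mistral" assumes you have already pulled that model.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for one complete JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(model: str, prompt: str) -> str:
    """POST a prompt to the local Ollama server and return the answer text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        # Ollama returns the generated text in the "response" field.
        return json.load(resp)["response"]

payload = build_payload("mistral", "What is the category of this product: 'Whole milk 1 L'?")
print(payload)
# ask_local("mistral", ...) would return the model's answer as plain text,
# which a spreadsheet formula can then place into the calling cell.
```

Because the request stays on localhost, there are no API fees and no data leaves your machine, which is precisely the appeal of the local option described above.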
Why Choose (Un)Perplexed Spready?
- Revolutionary Integration: Get the best of spreadsheets and AI combined, all in one intuitive interface.
- Market-Leading Flexibility: Choose between Perplexity AI, locally installed LLM models, or even our free remote model hosted on our server—whichever suits your needs.
- User-Centric Design: With familiar spreadsheet features seamlessly merged with AI capabilities, your productivity is bound to soar.
- It's powerful: only your imagination and HW specs limit what you can do with the help of AI! If data is the new gold, then (Un)Perplexed Spready is your miner!
- It's fun: sure, it is fun to drink coffee and scroll the news while the AI is doing the hard job. Drink your coffee and let the AI work for you!
Get Started!
Join the revolution today. Let (Un)Perplexed Spready free you from manual data crunching and unlock the full potential of AI—right inside your spreadsheet. Whether you're a business analyst, a researcher, or just an enthusiast, our powerful integration will change the way you work with data.
You can find more practical information on how to set up and use the (Un)Perplexed Spready software here: Using (Un)Perplexed Spready
Download
Download the (Un)Perplexed Spready software: Download (Un)Perplexed Spready
Request Free Evaluation Period
When you run the application, you will be presented with the About form, where you will find an automatically generated Machine Code for your computer. Send us an email specifying your Machine Code and ask for a trial license. We will send you a trial license key that will unlock the premium AI functions for a limited time period.
Contact us on following email:
Sales Contact
Purchase commercial license
For the price of two beers a month, you can have a faithful co-worker, that is, the AI-whispering spreadsheet software, to do the hard work while you drink your coffee!
You can purchase the commercial license here: Purchase License for (Un)Perplexed Spready
Further Reading
Download (Un)Perplexed Spready
Purchase License for (Un)Perplexed Spready
Leveraging AI on Low-Spec Computers: A Guide to Ollama Models for (Un)Perplexed Spready
Download (Un)Perplexed Spready - your AI-powered work companion!
Get Started!
Join the revolution today. Let (Un)Perplexed Spready free you from manual data crunching and unlock the full potential of AI—right inside your spreadsheet. Whether you're a business analyst, a researcher, or just an enthusiast, our powerful integration will change the way you work with data.
Download
Download for Windows OS
You can download the setup executable from this link: https://matasoft.hr/(Un)PerplexedSpreadySetup.exe
You can download the portable executable from this link: https://matasoft.hr/spready.exe
Download for Linux OS
You can download portable executable from this link: https://matasoft.hr/spready.bin
Request Free Evaluation Period
When you run the application, you will be presented with the About form, where you will find an automatically generated Machine Code for your computer. Send us an email specifying your Machine Code and ask for a trial license. We will send you a trial license key that will unlock the premium AI functions for a limited time period.
Contact us on following email:
Sales Contact
Purchase commercial license
For the price of two beers a month, you can have a faithful co-worker, that is, the AI-whispering spreadsheet software, to do the hard work while you drink your coffee!
You can purchase the commercial license here: Purchase License
Further Reading
Introduction to (Un)Perplexed Spready