
PaperCodex


On-Device AI

MobileVLM: High-Performance Vision-Language AI That Runs Fast and Privately on Mobile Devices

MobileVLM is a purpose-built vision-language model (VLM) engineered from the ground up for on-device deployment on smartphones and edge hardware.…

12/26/2025 · Multimodal Reasoning, On-Device AI, Visual Question Answering
Bitnet.cpp: Run 1.58-Bit LLMs at the Edge with Lossless Speed and Efficiency

Large language models (LLMs) are becoming increasingly central to real-world applications—but their computational demands remain a major barrier for edge…

12/22/2025 · Edge Inference, Low-bit LLMs, On-Device AI
MiniRAG: Enable Small Language Models to Deliver Powerful RAG with Minimal Resources

Retrieval-Augmented Generation (RAG) has become a cornerstone technique for grounding language models in factual knowledge. However, traditional RAG pipelines struggle…

12/15/2025 · Knowledge Graph Reasoning, On-Device AI, Retrieval-Augmented Generation
Copyright © 2026 PaperCodex.
