This post explains how Dropbox improved storage efficiency in Magic Pocket, their immutable exabyte-scale blob store, after a fragmentation incident caused by a new service.
This post explains how Dropbox reduced their server monorepo size from 87GB to 20GB (a 77% reduction), cutting clone times from over an hour to under 15 minutes.
Dropbox engineering shares how they used DSPy to optimize their LLM-based relevance judge for Dash, achieving significant cost and quality improvements.
This post explains how Dropbox Dash trains its search ranking model by combining small-scale human labeling with LLM-generated relevance judgments to produce training data at scale.
This article explores low-bit inference techniques that make large AI models faster and more cost-efficient to serve in production.
Dropbox hosted an executive roundtable on AI and engineering productivity, sharing lessons from their own AI adoption journey and cross-industry leadership perspectives.
Josh Clemm, VP of Engineering at Dropbox, explains how Dash uses knowledge graphs, MCP, and DSPy to build a universal work search and AI assistant.
Dropbox Dash built a custom hybrid feature store to power real-time AI ranking across tens of thousands of work documents.
Dropbox's 2025 intern program, Camp Dropbox, welcomed 43 interns from 27 colleges across multiple countries for a 12-week experience focused on meaningful engineering contributions.
Dropbox Dash evolved from a traditional RAG-based enterprise search into an agentic AI system, requiring a new discipline called context engineering to manage what information models receive.
Dropbox has acquired AI startup Mobius Labs and is integrating Aana, their multimodal AI technology, into Dropbox Dash to enable deeper understanding of rich media content.
This post introduces Half-Quadratic Quantization (HQQ), a calibration-free quantization method for large machine learning models that matches the quality of calibration-based approaches while running at the speed of data-free methods.