ScatterAI
Issue # · March 12, 2026

Also Worth Noting — 2026-03-12

Research

04 [Reasoning] One Shared Model Efficiently Serves Many Different Users A new federated learning system trains a small set of shared models that can be quickly adapted to serve many users with very different data needs, without ever combining their private data. The core challenge is mathematically balancing competing goals across all users at once — something previous approaches handled with rough rules of thumb rather than principled optimization. This could make personalized AI cheaper to deploy at scale, especially in sensitive domains like healthcare or finance where data can’t leave a user’s device. link
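
The balancing act described above can be sketched in a few lines. This is a toy illustration only, assuming a simple loss-weighted scalarization of per-client gradients (the paper's actual optimization method isn't specified here); the synthetic clients and the `loss_grad` helper are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three clients whose local data follow very different linear rules.
# In this sketch the raw (X, y) pairs never leave the client; only
# gradients reach the shared model.
targets = [np.array(w) for w in ([1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, -1.0])]
data = []
for w in targets:
    X = rng.normal(size=(50, 3))
    data.append((X, X @ w + rng.normal(scale=0.1, size=50)))

def loss_grad(theta, X, y):
    r = X @ theta - y
    return float(r @ r) / len(y), 2.0 * (X.T @ r) / len(y)

theta = np.zeros(3)                            # the single shared model
init = [loss_grad(theta, X, y)[0] for X, y in data]
for _ in range(200):
    losses, grads = zip(*(loss_grad(theta, X, y) for X, y in data))
    # Scalarize the competing objectives: weight each client's gradient by
    # its current loss, so the worst-served client pulls hardest. This is a
    # rough stand-in for principled multi-objective balancing.
    weights = np.array(losses) / sum(losses)
    theta -= 0.1 * sum(wi * gi for wi, gi in zip(weights, grads))
final = [loss_grad(theta, X, y)[0] for X, y in data]
print([round(l, 2) for l in final])
```

Because the clients' objectives genuinely conflict, no shared model drives every loss to zero; the point is that a principled weighting reduces the total shortfall instead of letting one client dominate.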

05 [Video Gen] AI That Thinks and Watches Video at the Same Time Most video AI has to finish watching a clip before it can answer questions, but this system processes live video streams and responds simultaneously — no waiting. Separating the “watching” and “thinking” processes so they run in parallel is technically tricky because the model has to maintain a running memory of what it’s seen without losing context across multiple questions. This makes it possible to have natural back-and-forth conversations with an AI about something happening live, like a sports game or security feed, without awkward delays. link

06 [RAG] AI Agent That Reads Gene Activity to Generate Biology Hypotheses ELISA is an AI system that connects raw gene expression data directly to a conversational AI, letting scientists ask plain-language questions about what’s happening inside individual cells. Bridging these two worlds is hard because gene activity data and language models speak completely different dialects — ELISA translates between them without hiding its reasoning. Biologists studying diseases like cancer could use this to turn massive genomics datasets into testable hypotheses in hours instead of months. link
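
One way to picture the "translation" step: a toy sketch (not ELISA's actual interface) that turns a single cell's expression readout into a plain-language question a chat model could consume. The gene names, values, and the `cell_to_prompt` helper are all invented for illustration.

```python
# One cell's expression levels, keyed by gene symbol (made-up values).
expression = {"CD19": 8.2, "MS4A1": 7.9, "GAPDH": 5.1, "INS": 0.0, "ALB": 0.1}

def cell_to_prompt(expr, top_k=3):
    """Summarize the most expressed genes as a natural-language question."""
    top = sorted(expr.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    genes = ", ".join(f"{g} ({v:.1f})" for g, v in top)
    return (f"This cell's most expressed genes are {genes}. "
            "What cell type is this likely to be, and why?")

print(cell_to_prompt(expression))
```

The real system presumably does far more than rank genes, but the sketch shows why the bridge matters: once expression data is rendered as text, the model's answer and its reasoning stay inspectable.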

07 [RAG] New Benchmark Tests AI Legal Assistants on Chinese Law A new benchmark called Legal-DC was built to test how well AI systems can look up and explain Chinese legal documents. Most existing tests only evaluate part of the pipeline — either the search or the answer generation — but not how well they work together on the structured, clause-heavy language that real laws use. Better benchmarks like this push AI legal tools closer to being reliable enough for actual lawyers and citizens navigating China’s legal system. link

08 [RAG] Smarter Decoding Trick Makes AI Summaries Miss Less BLooP is a lightweight method that nudges AI language models to stay closer to the original text when writing summaries, without any extra training. The challenge is that LLMs naturally drift — they invent details or skip important ones — and fixing this usually requires expensive retraining, but BLooP intervenes at the word-generation step itself by rewarding choices that echo the source document’s key phrases. Anyone who relies on AI to summarize reports, articles, or documents could get more faithful, complete results without swapping out or retraining their existing model. link
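
The decode-time intervention can be pictured as logit biasing. A minimal sketch, assuming a constant bonus for tokens that appear in the source (BLooP's actual scoring rule is not specified here); the tiny vocabulary and `bonus` value are invented.

```python
import numpy as np

def biased_decode_step(logits, vocab, source_tokens, bonus=2.0):
    """Nudge the next-token choice toward words present in the source.

    Tokens that appear in the source document get a constant logit bonus
    before selection, so the summary is rewarded for echoing the source's
    own phrasing rather than inventing details.
    """
    source = set(source_tokens)
    adjusted = logits + bonus * np.array([t in source for t in vocab], dtype=float)
    probs = np.exp(adjusted - adjusted.max())
    return vocab[int(np.argmax(adjusted))], probs / probs.sum()

vocab = ["revenue", "profits", "unicorns", "declined", "soared"]
source = "quarterly revenue declined sharply".split()
logits = np.array([1.0, 1.2, 1.5, 0.8, 1.4])   # base model slightly prefers "unicorns"
token, _ = biased_decode_step(logits, vocab, source)
print(token)  # prints "revenue"
```

Note what this buys: no weights change and no retraining happens; the correction lives entirely in the per-step token choice.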

09 [Video Gen] AI That Thinks While Watching Video, Not After A new system called Video Streaming Thinking lets AI models reason about video as it plays, rather than waiting until the clip ends to start thinking. The hard part is that existing “think-before-you-answer” techniques grind everything to a halt — VST solves this by running perception and reasoning in parallel, eliminating the latency penalty. Any application that needs instant responses to live video — security cameras, sports broadcasts, live customer support — gets dramatically more useful AI without the frustrating delay. link
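
The perceive-and-reason-in-parallel idea behind both streaming-video items above, stripped down to a threading toy (not VST's actual architecture): one thread keeps ingesting the stream and updating a running memory, while a second thread answers questions mid-stream instead of waiting for the clip to end.

```python
import queue
import threading
import time

frames = [f"frame-{i}" for i in range(20)]
memory = []                      # running summary of what has been "seen"
lock = threading.Lock()
questions = queue.Queue()
answers = []

def perceive():
    """Ingest the stream continuously, updating the running memory."""
    for f in frames:
        with lock:
            memory.append(f)
        time.sleep(0.001)        # stand-in for per-frame processing cost

def reason():
    """Answer questions from the current memory without pausing ingestion."""
    while True:
        q = questions.get()
        if q is None:            # sentinel: no more questions
            break
        with lock:
            seen = len(memory)
        answers.append(f"{q}: seen {seen} frames so far")

watcher = threading.Thread(target=perceive)
thinker = threading.Thread(target=reason)
watcher.start()
thinker.start()
time.sleep(0.005)
questions.put("what's happening")    # asked while the stream is still playing
watcher.join()
questions.put(None)
thinker.join()
print(answers)
```

The answer is produced from whatever has been seen so far, which is exactly the trade the real systems make: a running memory in exchange for zero end-of-clip latency.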

10 [RAG] AI Agent Builds 3D Scenes From Plain-Text Descriptions SceneAssistant is a system that turns free-form text descriptions into fully realized 3D scenes without needing predefined rules about how objects should relate to each other. Most existing tools are locked to specific domains or require you to spell out spatial relationships manually, making truly open-ended scene creation nearly impossible. This could be a game-changer for game designers, architects, and filmmakers who want to prototype rich 3D environments just by describing them in plain English. link

11 [Evaluation] 360° AI Vision That Recognizes Objects It’s Never Seen A new system lets autonomous robots and vehicles build a full 3D map of their surroundings using all-around cameras, while also recognizing objects that weren’t in the training data. Most existing approaches only look forward and can only label objects from a fixed list — handling open-ended vocabulary and full 360° input simultaneously is a genuinely hard dual constraint to crack. This could make robots and self-driving systems meaningfully safer, since they’d no longer be blind to unexpected objects or to what’s happening behind and beside them.

12 [Evaluation] One AI System to Fix Blur Across All Camera Lenses Current software that sharpens blurry or distorted photos has to be rebuilt from scratch every time it’s used with a new lens, which is slow and expensive. This benchmark tackles that bottleneck by creating a testing framework to measure whether a single correction system can work across many different lenses without retraining. Photographers, phone makers, and camera manufacturers could all benefit from AI that fixes optical flaws universally rather than one lens at a time. link

13 [Image Gen] Hidden Color Code Found Inside AI Image Generator Inside the chaotic math of a popular AI image generator called FLUX.1, scientists found that color is secretly organized into a clean, logical structure matching the familiar hue, saturation, and lightness system humans already use. This is surprising because the model was never explicitly taught to organize color this way — it emerged on its own from training. Understanding this hidden structure means developers can now precisely tune the color of AI-generated images without retraining the entire model. link
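
A generic recipe for exploiting that kind of hidden structure, run on synthetic data (nothing here touches FLUX.1 itself): if an attribute such as hue is linear in some internal representation, least squares recovers the axis from labeled samples, and you can then shift the attribute directly, with no retraining.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the finding: assume hue varies linearly along one
# hidden direction of a 16-dimensional internal representation.
d = 16
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)
latents = rng.normal(size=(200, d))
hues = latents @ true_dir                    # hue is exactly linear here

# Recover the hue axis from (latent, hue) pairs by least squares.
w, *_ = np.linalg.lstsq(latents, hues, rcond=None)
w /= np.linalg.norm(w)

# Edit: nudge one sample 0.5 units along the recovered axis.
edited = latents[0] + 0.5 * w
print(round(float(edited @ true_dir - hues[0]), 3))  # prints 0.5
```

In the synthetic case the recovery is exact, so the edit moves hue by precisely the requested amount; with a real model the direction would be approximate, but the no-retraining workflow is the same.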

14 [Robotics] Robotic Hand Uses Soft Joints and Rigid Links for Better Grip Engineers built a robotic hand that mimics human anatomy by placing flexible material only at the joints while keeping the structural links rigid — matching where impact and load actually occur. Getting this balance right is surprisingly difficult because most robot hands are either fully rigid (fragile on impact) or fully soft (imprecise and hard to control), and this hybrid approach uses rolling-contact joint surfaces to keep movement consistent and repeatable. A robotic hand that can handle real-world bumps and contact without breaking or losing accuracy is a key step toward robots that can reliably work alongside humans in homes, warehouses, and factories.

15 [Evaluation] Why Language Models Lean Toward Truth Without Being Taught To A new theory explains why AI language models tend to prefer accurate statements even when trained on messy, mixed-quality data — it turns out models naturally favor information that compresses more efficiently, and true facts often happen to be more internally consistent. This is subtle because it means “truth-seeking” isn’t a built-in goal but an accidental side effect of how compression works mathematically. For anyone building AI systems where accuracy matters, this reframes the challenge: reliability isn’t guaranteed, and when false information is just as compressible as true information, models won’t reliably prefer the truth. link
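
The compression intuition is easy to demo with an off-the-shelf compressor. This is illustrative only, and a loose analogy rather than the paper's formal argument: three "sources" that agree produce redundancy a compressor can exploit, while three that contradict each other do not, so the consistent corpus encodes in fewer bytes at identical raw length.

```python
import zlib

facts = [f"city {i} has population" for i in range(20)]

# Three "sources" that all agree: each restates the same value per fact.
consistent = " ".join(
    f"{f} {1000 + 37 * i:04d}"
    for _ in range(3) for i, f in enumerate(facts)
)
# Three "sources" that contradict each other: same facts, different values,
# padded to the same width so both corpora have identical raw length.
inconsistent = " ".join(
    f"{f} {(1000 + 37 * i + 211 * s) % 10000:04d}"
    for s in range(3) for i, f in enumerate(facts)
)

c_len = len(zlib.compress(consistent.encode()))
i_len = len(zlib.compress(inconsistent.encode()))
print(c_len, i_len)   # the agreeing corpus compresses smaller
```

Which is also the paper's caveat in miniature: the preference only holds while falsehoods are less compressible; a lie repeated consistently across sources would compress just as well.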