Latest Posts

AI Machine Learning & Data Science Research

DeepMind’s Zipper: Fusing Unimodal Generative Models into Multimodal Powerhouses

In a new paper Zipper: A Multi-Tower Decoder Architecture for Fusing Modalities, a Google DeepMind research team introduces Zipper, a multi-tower decoder architecture that flexibly composes multimodal generative models from independently pre-trained unimodal decoders, allowing the towers to be reused and repurposed in new modality combinations.
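The core fusing idea is to keep each unimodal decoder as a separate tower and let one tower attend over the other's hidden states via cross-attention. Below is a toy, pure-Python sketch of a single residual cross-attention step; the function name, the residual form, and the assumption of equal hidden sizes are illustrative simplifications, not Zipper's exact layer:

```python
import math

def cross_attend(queries, contexts):
    """One fused-decoder step: each query token (e.g. from one tower)
    attends over another tower's hidden states via softmax cross-attention,
    and the result is added residually. Assumes both towers share the same
    hidden dimension for simplicity."""
    fused = []
    for q in queries:
        # dot-product scores against every context vector
        scores = [sum(qa * ca for qa, ca in zip(q, c)) for c in contexts]
        m = max(scores)                       # subtract max for stability
        w = [math.exp(s - m) for s in scores]
        tot = sum(w)
        # weighted average of context vectors
        attn = [sum(wj * c[b] for wj, c in zip(w, contexts)) / tot
                for b in range(len(q))]
        # residual connection: original token plus attended context
        fused.append([qa + ab for qa, ab in zip(q, attn)])
    return fused
```

Because the towers stay separate and only interact through layers like this, a tower can in principle be swapped out and paired with a different modality, which is the reuse property the paper highlights.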

AI Machine Learning & Data Science Research

Meta’s Imagine Flash: Pioneering Ultra-Fast and High-Fidelity Image Generation Within 3 Steps

In a new paper Imagine Flash: Accelerating Emu Diffusion Models with Backward Distillation, a Meta GenAI research team introduces a distillation framework that enables high-fidelity, diverse sample generation in just one to three steps, surpassing existing competitors in both quantitative metrics and human evaluations.

AI Machine Learning & Data Science Research

Revolutionizing Video Understanding: Real-Time Captioning for Any Length with Google’s Streaming Model

In a new paper Streaming Dense Video Captioning, a Google research team proposes a streaming dense video captioning model that can process videos of any length and emit predictions before the entire video has been analyzed, marking a significant advance in the field.

AI Machine Learning & Data Science Research

Huawei & Peking U’s DiJiang: A Transformer Achieving LLaMA2-7B Performance at 1/50th the Training Cost

In a new paper DiJiang: Efficient Large Language Models through Compact Kernelization, a research team from Huawei and Peking University introduces DiJiang, a Frequency Domain Kernelization approach that converts a Transformer into a linear-complexity attention model with minimal training overhead, achieving performance akin to LLaMA2-7B across various benchmarks at just 1/50th of the training cost.
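Linear-complexity attention of the kind DiJiang targets replaces softmax(QKᵀ)V with a kernel feature map φ, so the keys and values can be summarized once in O(n) instead of forming an n×n score matrix. The pure-Python sketch below illustrates only this linear-cost form; DiJiang's actual frequency-domain (DCT-based) kernel is more elaborate, and the simple elu-style feature map here is a stand-in:

```python
import math

def linear_attention(Q, K, V):
    """Kernelized attention in O(n): out_i = phi(q_i)·S / (phi(q_i)·z),
    where S = sum_j phi(k_j) v_j^T and z = sum_j phi(k_j).
    No n-by-n score matrix is ever formed."""
    def phi(vec):
        # elu(x) + 1: a simple positive feature map (stand-in for
        # DiJiang's frequency-domain kernel)
        return [x + 1.0 if x > 0 else math.exp(x) for x in vec]

    d, dv = len(K[0]), len(V[0])
    S = [[0.0] * dv for _ in range(d)]   # running sum of phi(k) v^T
    z = [0.0] * d                        # running sum of phi(k)
    for k, v in zip(K, V):               # one O(n) pass over keys/values
        pk = phi(k)
        for a in range(d):
            z[a] += pk[a]
            for b in range(dv):
                S[a][b] += pk[a] * v[b]
    out = []
    for q in Q:                          # each query reuses the summaries
        pq = phi(q)
        denom = sum(pq[a] * z[a] for a in range(d))
        out.append([sum(pq[a] * S[a][b] for a in range(d)) / denom
                    for b in range(dv)])
    return out
```

Because S and z are computed once and shared by every query, the cost grows linearly with sequence length, which is what makes retrofitting a pre-trained model with such a kernel attractive at long contexts.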

AI Machine Learning & Data Science Research

Stanford’s VideoAgent Achieves New SOTA of Long-Form Video Understanding via Agent-Based System

In a new paper VideoAgent: Long-form Video Understanding with Large Language Model as Agent, a Stanford University research team introduces VideoAgent, an innovative approach that simulates human comprehension of long-form videos through an agent-based system, showcasing superior effectiveness and efficiency compared to current state-of-the-art methods.

AI Machine Learning & Data Science Research

Embracing the Era of 1-Bit LLMs: Microsoft & UCAS’s BitNet b1.58 Redefines Efficiency

In a new paper The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits, a research team from Microsoft Research and the University of Chinese Academy of Sciences introduces BitNet b1.58, a 1-bit LLM variant in which every weight is ternary (-1, 0, 1). It preserves the advantages of the original 1-bit BitNet while ushering in a computational paradigm that is significantly more cost-effective in latency, memory usage, throughput, and energy consumption.
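The "1.58 bits" in the title comes from restricting each weight to one of three values, {-1, 0, 1} (log₂ 3 ≈ 1.58). A minimal sketch of the absmean-style quantization the paper describes, on a flat list of weights (function name and flat-list form are illustrative):

```python
def absmean_quantize(weights, eps=1e-6):
    """Round each weight to the nearest value in {-1, 0, 1} after
    scaling by the mean absolute value (absmean scheme)."""
    gamma = sum(abs(w) for w in weights) / len(weights)  # mean |w|
    # scale, round to nearest integer, then clip into {-1, 0, 1}
    return [max(-1, min(1, round(w / (gamma + eps)))) for w in weights]
```

Since every weight becomes -1, 0, or 1, matrix multiplication reduces to additions and subtractions with no multiplies, which is the source of the latency and energy savings the paper reports.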

AI Machine Learning & Data Science Natural Language Tech Research

Nomic Embed: The Inaugural Open-Source Long Text Embedding Model Outshining OpenAI’s Finest

In a new paper Nomic Embed: Training a Reproducible Long Context Text Embedder, a Nomic AI research team introduces nomic-embed-text-v1, the first fully reproducible, open-source, open-weights, open-data English text embedding model, capable of handling a context length of 8192 tokens.