AQEA: aqea-text-embedding-3-large-29x

OpenAI text-embedding-3-large embeddings compressed 29.3x while preserving 91.9% of pairwise similarity rankings (Spearman ρ)

📊 Performance

Metric                 Value
Compression Ratio      29.3x
Spearman ρ             91.9%
Source Dimension       3072D
Compressed Dimension   105D
Storage Savings        96.6%
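
The headline numbers follow directly from the dimensions, assuming equal per-value precision (e.g. float32) for source and compressed vectors; a quick sanity check:

# Sanity check of the table above (equal per-value precision assumed)
src_dim, dst_dim = 3072, 105
print(f"compression: {src_dim / dst_dim:.1f}x")         # 29.3x
print(f"storage savings: {1 - dst_dim / src_dim:.1%}")  # 96.6%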

🚀 Usage

from openai import OpenAI
from aqea import AQEACompressor

# Load the pre-trained compressor
compressor = AQEACompressor.from_pretrained("nextxag/aqea-text-embedding-3-large-29x")

# Obtain 3072D source embeddings (any client that yields
# text-embedding-3-large vectors works; the OpenAI SDK is shown here)
texts = ["an example query", "an example document"]
client = OpenAI()
response = client.embeddings.create(model="text-embedding-3-large", input=texts)
embeddings = [item.embedding for item in response.data]  # 3072D each

# Compress embeddings
compressed = compressor.compress(embeddings)  # 105D

# Decompress for retrieval
reconstructed = compressor.decompress(compressed)  # 3072D
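
Continuing from the snippet above, one way to verify how well pairwise similarity rankings survive the round trip (scipy is assumed to be available; use a few dozen texts for a meaningful estimate):

import numpy as np
from scipy.stats import spearmanr

emb = np.asarray(embeddings, dtype=np.float32)
rec = np.asarray(reconstructed, dtype=np.float32)

def pairwise_cosine(x):
    # Cosine similarity for every unique pair of rows
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    sims = x @ x.T
    return sims[np.triu_indices(len(x), k=1)]

rho, _ = spearmanr(pairwise_cosine(emb), pairwise_cosine(rec))
print(f"Spearman rho: {rho:.3f}")  # ~0.919 is the figure reported above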

πŸ“ Files

  • weights.aqwt - Binary weights (AQEA native format)
  • config.json - Model configuration
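
To inspect the configuration without loading the full compressor, the standard huggingface_hub download works (the config's field names are not documented here):

import json
from huggingface_hub import hf_hub_download

# Fetch config.json from the model repo and print its contents
path = hf_hub_download(
    repo_id="nextxag/aqea-text-embedding-3-large-29x",
    filename="config.json",
)
with open(path) as f:
    print(json.dumps(json.load(f), indent=2))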

🔬 How It Works

AQEA (Adaptive Quantized Embedding Architecture) uses learned linear projections with a Pre-Quantify rotation to compress embeddings while preserving pairwise similarity rankings (measured by Spearman correlation) as faithfully as possible.
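
The shipped weights implement this pipeline; purely as an illustration of the rotate-project-quantize idea, here is a minimal NumPy sketch with random stand-in parameters (the real AQEA rotation and projection are learned, and its quantization scheme is not documented here):

import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 3072, 105

# Stand-ins for trained parameters (random here, learned in AQEA):
Q, _ = np.linalg.qr(rng.standard_normal((d_in, d_in)))  # orthogonal rotation
P = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)  # linear projection

def compress(x):
    z = (x @ Q) @ P                       # rotate, then project 3072D -> 105D
    scale = max(float(np.abs(z).max()) / 127.0, 1e-12)
    return np.round(z / scale).astype(np.int8), scale  # quantize to int8

def decompress(codes, scale):
    z = codes.astype(np.float32) * scale
    return z @ np.linalg.pinv(P) @ Q.T    # approximate inverse: 105D -> 3072D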

📚 Citation

@software{aqea2024,
  title = {AQEA: Adaptive Quantized Embedding Architecture},
  author = {AQEA Team},
  year = {2024},
  url = {https://huggingface.co/nextxag}
}

📄 License

Apache 2.0
