
ONE PLATFORM.
TEXT. IMAGE. COMPRESSED.

O Series is the AI-to-AI compression platform. OGENTI compresses text protocols. OVISEN compresses image embeddings. Same MARL engine. Two output formats. Pay-per-token pricing.

GET STARTED SEE PRODUCTS
PRODUCTS
10x
TEXT COMPRESS
4x
IMAGE COMPRESS
95%
FIDELITY
.OGT
TEXT FORMAT
.OGE
IMAGE FORMAT

THE O SERIES PRODUCT LINE

Both products share the same MARL training engine, credit system, and AES-256-GCM encrypted adapter format. Choose what you need to compress.

TEXT

OGENTI

AI-to-AI text compression protocol. Agents learn to compress natural language into 3-token protocol messages. 10x compression at 95% semantic fidelity.

  • Multi-model: Llama-3, Qwen-2.5, Mistral, Gemma
  • LoRA R=16 adapter training via MARL + PPO
  • 5-phase progressive curriculum (Warmup → Distill)
  • 512 emergent token vocabulary
  • ~3MB portable adapter per model
  • AES-256-GCM encrypted .OGT output
.OGT
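To see why a rank-16 LoRA adapter stays small, note that each adapted weight matrix adds only rank x (d_in + d_out) parameters. A minimal sketch of the arithmetic; the layer count, hidden size, and matrix count below are illustrative assumptions, not OGENTI's actual configuration:

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    """Parameters added by one LoRA pair: A is (rank x d_in), B is (d_out x rank)."""
    return rank * (d_in + d_out)

# Hypothetical backbone: 12 transformer layers, hidden size 2048,
# rank-16 adapters on two projection matrices per layer.
RANK, HIDDEN, LAYERS, MATRICES_PER_LAYER = 16, 2048, 12, 2

total_params = LAYERS * MATRICES_PER_LAYER * lora_param_count(HIDDEN, HIDDEN, RANK)
size_mb = total_params * 2 / (1024 * 1024)  # fp16 weights: 2 bytes each

print(total_params, round(size_mb, 1))  # 1572864 3.0
```

With those assumed dimensions the adapter lands at roughly 3 MB, which is why it stays portable while the frozen backbone can be gigabytes.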
IMAGE

OVISEN

AI-to-AI image embedding compressor. Converts raw images into high-dimensional compressed embeddings. Downstream AI agents never need the raw pixels.

  • Vision models: CLIP, SigLIP, DINOv2, EVA-02
  • Frozen backbone + learnable CompressionHead
  • Cooperative encoder-decoder MARL training
  • Configurable target dimensions (32D-512D)
  • Fidelity + compression + task reward signals
  • AES-256-GCM encrypted .OGE output
.OGE
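Conceptually, the CompressionHead is a learned projection from the backbone's embedding space down to the configured target dimension. A toy pure-Python sketch of that shape transformation; the dimensions and the random (untrained) weights are stand-ins, not OVISEN's actual head:

```python
import random

def compression_head(embedding, target_dim, seed=0):
    """Toy linear projection standing in for a learnable CompressionHead:
    maps a backbone embedding down to a smaller fixed target dimension."""
    rng = random.Random(seed)
    d_in = len(embedding)
    # A trained head would learn these weights; here they are random.
    weights = [[rng.uniform(-1.0, 1.0) for _ in range(d_in)]
               for _ in range(target_dim)]
    return [sum(w * x for w, x in zip(row, embedding)) for row in weights]

raw = [0.1 * i for i in range(768)]          # stand-in for a CLIP-sized embedding
compressed = compression_head(raw, target_dim=256)
print(len(raw), "->", len(compressed))       # 768 -> 256
```

Only the projection is trained; the frozen backbone keeps producing the same 768-dim embeddings throughout.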

AI AGENTS WASTE RESOURCES COMMUNICATING IN HUMAN FORMATS

Text agents burn tokens on grammar. Vision agents transmit raw megapixel images. Neither is necessary for AI-to-AI communication.

TOKEN WASTE

Text agents burn API tokens with natural language overhead. Every "please" and "sure" is money on fire. OGENTI compresses to 3 tokens.

RAW PIXEL WASTE

Vision agents transmit full images when only embeddings matter. A 5MB photo can be a 256-dim vector. OVISEN makes the conversion.
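The arithmetic behind that claim, assuming the 256-dim vector is stored as float32:

```python
photo_bytes = 5 * 1024 * 1024       # a 5 MB photo
embedding_bytes = 256 * 4           # 256-dim float32 vector, 4 bytes per value

ratio = photo_bytes // embedding_bytes
print(embedding_bytes, ratio)       # 1024 5120
```

A 1 KB embedding in place of a 5 MB image is a roughly 5000x reduction on the wire.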

NO UNIFIED PROTOCOL

There's no TCP/IP for AI. Every framework invents its own format. O Series provides a standard: .OGT for text, .OGE for images.

SEE THE COMPRESSION

OGENTI: Natural language in, compressed protocol out. OVISEN: Raw image in, compressed embedding out.


MARL TRAINING ENGINE — SAME FOR BOTH

Both OGENTI and OVISEN use the same multi-agent reinforcement learning pipeline. Two cooperative agents (encoder + decoder) optimize competing objectives.

STEP 1: SELECT BACKBONE

OGENTI: Llama / Qwen / Gemma  |  OVISEN: CLIP / SigLIP / DINOv2

Choose a pre-trained model as the frozen backbone. The platform freezes all base weights; only the lightweight compression layer (a LoRA adapter for OGENTI, a CompressionHead for OVISEN) is trained.

BACKBONE: FROZEN
STEP 2: UPLOAD DATASET

TEXT: CONVERSATIONS, CODE, DOMAIN  |  IMAGE: IMAGENET, COCO, CUSTOM

Select from built-in datasets or upload your own. The platform handles cleaning, formatting, and batching automatically.

FORMAT: AUTO-DETECT
STEP 3: MARL TRAINING (PPO)

ENCODER + DECODER CO-EVOLVE VIA REWARD SIGNALS

Two agents train cooperatively. The encoder compresses; the decoder reconstructs. Reward = fidelity x compression x task-performance. PPO optimizes both simultaneously.

REWARD: MULTI-OBJECTIVE
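The step above describes the reward as a product of three signals. A minimal sketch of how such a multiplicative multi-objective reward could be combined; the [0, 1] normalization and the example values are illustrative assumptions, not the platform's actual reward function:

```python
def marl_reward(fidelity: float, compression: float, task_score: float) -> float:
    """Multiplicative multi-objective reward: every signal must be non-zero,
    so neither agent can win by sacrificing one objective entirely."""
    for name, value in (("fidelity", fidelity), ("compression", compression),
                        ("task_score", task_score)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {value}")
    return fidelity * compression * task_score

print(round(marl_reward(0.95, 0.9, 0.8), 3))  # 0.684
print(marl_reward(0.95, 0.0, 0.8))            # 0.0, a collapsed objective zeroes the reward
```

The multiplicative form is what keeps the encoder honest: maximum compression with zero fidelity earns nothing.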
STEP 4: EXPORT ADAPTER

OGENTI -> .OGT  |  OVISEN -> .OGE

Trained weights are distilled into a portable adapter file. AES-256-GCM encrypted. Any compatible model can load the adapter and speak the protocol instantly.

ENCRYPTION: AES-256-GCM
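AES-256-GCM ciphertexts travel together with a 12-byte nonce and a 16-byte authentication tag, so an encrypted adapter file needs an envelope that carries all three. A sketch of what such an envelope could look like; the magic bytes and field layout are illustrative assumptions, not the actual .OGT/.OGE specification:

```python
import struct

MAGIC = b"OGT1"  # hypothetical 4-byte format marker

def pack_adapter(nonce: bytes, tag: bytes, ciphertext: bytes) -> bytes:
    """Serialize: magic | 12-byte GCM nonce | 16-byte auth tag | u64 length | ciphertext."""
    assert len(nonce) == 12 and len(tag) == 16
    return MAGIC + nonce + tag + struct.pack(">Q", len(ciphertext)) + ciphertext

def unpack_adapter(blob: bytes):
    """Reverse of pack_adapter: returns (nonce, tag, ciphertext)."""
    assert blob[:4] == MAGIC, "not an adapter envelope"
    nonce, tag = blob[4:16], blob[16:32]
    (length,) = struct.unpack(">Q", blob[32:40])
    return nonce, tag, blob[40:40 + length]

blob = pack_adapter(b"\x00" * 12, b"\x01" * 16, b"adapter-weights")
print(unpack_adapter(blob)[2])  # b'adapter-weights'
```

The actual encryption would be done with a real AES-GCM implementation; the point here is only that nonce and tag must be stored alongside the ciphertext for any compatible model to decrypt and load the adapter.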
STEP 5: DEPLOY &amp; INFER

REST API  |  PAY-PER-TOKEN  |  AUTO-ROUTING

Every adapter gets a scalable REST API endpoint. The router automatically selects the cheapest compatible model. You only pay for compressed tokens generated.

BILLING: PAY-PER-TOKEN
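Under pay-per-token billing, compressing a message from ~30 natural-language tokens down to a 3-token protocol message cuts the bill by the same factor. A back-of-the-envelope sketch; the per-token price and message length are illustrative assumptions, not actual O Series pricing:

```python
PRICE_PER_TOKEN = 0.00002  # hypothetical $/token, for illustration only

def message_cost(tokens_per_message: int, calls: int) -> float:
    """Total spend for a given message size and call volume."""
    return tokens_per_message * calls * PRICE_PER_TOKEN

calls = 1_000_000
natural = message_cost(30, calls)   # ~30-token natural-language message
protocol = message_cost(3, calls)   # 3-token compressed protocol message

print(f"${natural:.2f} vs ${protocol:.2f}")  # $600.00 vs $60.00
```

At a million calls the 10x token reduction translates directly into a 10x smaller inference bill, which is the economic argument for compressing at all.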

THE O SERIES CLOUD PLATFORM

DATASET MANAGER

Upload text logs or image datasets. Auto-clean, auto-format, auto-batch for MARL training.

INGESTION

1-CLICK TRAINING

Select backbone + dataset + episodes. The platform provisions RunPod A100 clusters and runs the full PPO pipeline.

TRAINING

LIVE TELEMETRY

Watch agents learn in real-time. WebSocket streams reward curves, fidelity scores, compression ratios, and PPO loss.

MONITORING

.OGT + .OGE FORMATS

Portable encrypted adapters. .OGT for text protocols, .OGE for image embeddings. AES-256-GCM with unique keys.

EXPORT

API + ROUTER

Every adapter gets a REST API. The built-in Router Agent auto-selects the cheapest compatible model for each request.

DEPLOYMENT

PAY-PER-TOKEN

No GPU lock-in. Buy credits, spend on training and inference. Massive scale = massive savings through compression.

BILLING

WHAT YOU GET

Train on your own data through the dashboard. Get portable adapters that work with any compatible model.

10x
TEXT COMPRESS
OGENTI .OGT
4x
IMAGE COMPRESS
OVISEN .OGE
95%
FIDELITY
semantic accuracy
~3MB
ADAPTER SIZE
portable
4
VISION MODELS
CLIP/SigLIP/DINOv2/EVA-02
256-bit
AES ENCRYPTION
GCM mode

BUILD YOUR OWN COMPRESSION ADAPTERS

Pick a modality — text or image. Choose your backbone. Select a dataset. Buy credits. We handle the GPUs — you get an encrypted adapter.

pip install oseries
oseries.attach(your_model, adapter="my_adapter.ogt")