A benchmarking tool that compares the OCR and data-extraction capabilities of different large multimodal models, such as gpt-4o, evaluating both text and JSON extraction accuracy. The goal of this benchmark is to publish a comprehensive comparison of OCR accuracy across traditional OCR providers and multimodal language models. The evaluation dataset and methodology are fully open source, and we encourage expanding this benchmark to encompass additional providers.

Open Source LLM Benchmark Results (Mar 2025) | Dataset

Benchmark Results (Feb 2025) | Dataset

## Methodology

The primary goal is to evaluate JSON extraction from documents. To evaluate this, the Omni benchmark runs Document ⇒ OCR ⇒ Extraction, measuring how well a model can OCR a page and return that content in a format an LLM can parse.

## Evaluation Metrics

### JSON accuracy

We use a modified json-diff to identify differences between predicted and ground-truth JSON objects. You can review the `evaluation/json.ts` file to see the exact implementation. Accuracy is calculated as:

$$\text{Accuracy} = 1 - \frac{\text{number of different fields}}{\text{total fields}}$$

### Text similarity

While the primary benchmark metric is JSON accuracy, we also include Levenshtein distance as a measure of text similarity between the extracted and ground-truth text. A lower distance indicates higher similarity. Note that this scoring method heavily penalizes accurate text that does not conform to the exact layout of the ground-truth data: an LLM could decode two rearranged blocks of text without any issue, and all the information may be 100% accurate, but slight rearrangements of the header text (address, phone number, etc.) still result in a large edit-distance penalty.

## Running the benchmark

1. Clone the repo and install dependencies: `npm install`
2. Prepare your test data:
   - For local data, add individual files to the `data` folder.
   - To pull from a database, add `DATABASE_URL` to your `.env`.
3. Copy the `models.example.yaml` file to `models.yaml`. Set up API keys in `.env` for the models you want to test.
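The exact implementation lives in `evaluation/json.ts`; as a rough illustration of the formula, here is a minimal sketch that flattens both objects into leaf-field paths and counts the paths whose values differ. The `flatten` and `jsonAccuracy` helpers are hypothetical names for this example, not the benchmark's actual API:

```typescript
// Flatten a JSON value into dot-separated leaf paths, e.g. { items: [{ qty: 2 }] }
// becomes { "items.0.qty": 2 }. Hypothetical helper for illustration only.
function flatten(
  value: unknown,
  prefix = "",
  out: Record<string, unknown> = {}
): Record<string, unknown> {
  if (value !== null && typeof value === "object") {
    for (const [key, child] of Object.entries(value as object)) {
      flatten(child, prefix ? `${prefix}.${key}` : key, out);
    }
  } else {
    out[prefix] = value;
  }
  return out;
}

// Accuracy = 1 - (differing fields / total fields), over the union of leaf paths.
function jsonAccuracy(truth: unknown, predicted: unknown): number {
  const t = flatten(truth);
  const p = flatten(predicted);
  const paths = new Set([...Object.keys(t), ...Object.keys(p)]);
  let diffs = 0;
  for (const path of paths) {
    if (t[path] !== p[path]) diffs++;
  }
  return paths.size === 0 ? 1 : 1 - diffs / paths.size;
}

const truth = { vendor: "Acme", total: 41.5, items: [{ sku: "A1", qty: 2 }] };
const pred = { vendor: "Acme", total: 41.5, items: [{ sku: "A1", qty: 3 }] };
console.log(jsonAccuracy(truth, pred)); // 0.75 — 1 of 4 leaf fields differs
```

The real json-diff handles cases this sketch glosses over (missing vs. null fields, array reordering, partial matches), so treat this only as a mental model of the scoring.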
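To make the layout-sensitivity concrete, here is a standard dynamic-programming Levenshtein distance (not the benchmark's own implementation) applied to two header blocks that contain identical information in a different order:

```typescript
// Levenshtein distance: minimum number of single-character edits
// (insertions, deletions, substitutions) needed to turn a into b.
function levenshtein(a: string, b: string): number {
  const m = a.length;
  const n = b.length;
  // dp[j] holds the distance between a[0..i] and b[0..j] for the current row.
  const dp: number[] = Array.from({ length: n + 1 }, (_, j) => j);
  for (let i = 1; i <= m; i++) {
    let prev = dp[0]; // dp value from the previous row, one column back
    dp[0] = i;
    for (let j = 1; j <= n; j++) {
      const tmp = dp[j];
      dp[j] = Math.min(
        dp[j] + 1, // deletion
        dp[j - 1] + 1, // insertion
        prev + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
      prev = tmp;
    }
  }
  return dp[n];
}

console.log(levenshtein("kitten", "sitting")); // 3

// Same information, reordered header lines: every character still has to be
// "moved", so the edit distance is large despite 100% accurate content.
const groundTruth = "123 Main St\n(555) 010-0000";
const predicted = "(555) 010-0000\n123 Main St";
console.log(levenshtein(groundTruth, predicted));
```

This is why two extractions that an LLM would parse identically can receive very different text-similarity scores.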
Check out the supported models here.
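As a sketch of the setup above, a `.env` might look like the following. `DATABASE_URL` comes from the instructions; the API-key variable names are hypothetical placeholders, so check `models.example.yaml` and the supported-models list for the exact names your providers expect:

```shell
# .env — sketch only; variable names for API keys are assumptions
OPENAI_API_KEY=sk-...        # hypothetical key name for an OpenAI model
ANTHROPIC_API_KEY=sk-ant-... # hypothetical key name for an Anthropic model

# Only needed if you pull test data from a database instead of the data folder
DATABASE_URL=postgres://user:password@localhost:5432/benchmark
```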