
Bruno.

@bdagnino.com

Building Limai: automated data extraction from unstructured sources | Climate Tech | Ex @PachamaInc | Co-founder @MetricaSports | 🇦🇷🇪🇸

27
Followers
20
Following
48
Posts
26.12.2023
Joined

Latest posts by Bruno. @bdagnino.com

Interesting, will check it out! Thanks for the recommendation.

18.12.2024 17:32 👍 0 🔁 0 💬 0 📌 0

What's the best way to monitor LLMs that use the Gemini API?

I used to use Langfuse, but it doesn't seem to work as nicely as it does with OpenAI.

18.12.2024 16:38 👍 0 🔁 0 💬 1 📌 0

In this post you'll learn:

1. How to build a simple benchmark to evaluate the performance of your models
2. How a single in-context example allowed 4o-mini to outperform 4o
3. How to improve model quality and latency at the same time

Check it out!

www.limai.io/blog/example

18.12.2024 11:21 👍 0 🔁 0 💬 0 📌 0
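A benchmark like the one in point 1 of the post above can be as small as a labelled set of documents plus an exact-match accuracy score. A minimal sketch in Python; the toy dataset and the `naive_extract` baseline are hypothetical stand-ins, not the actual Limai code:

```python
import re

# Tiny labelled dataset: each entry pairs a document with the expected value.
test_set = [
    {"doc": "Invoice total: 160.69 EUR", "expected": "160.69"},
    {"doc": "Amount due 42.00 EUR", "expected": "42.00"},
]

def accuracy(extract, dataset) -> float:
    """Run the extractor over every labelled document and count exact matches."""
    hits = sum(1 for ex in dataset if extract(ex["doc"]) == ex["expected"])
    return hits / len(dataset)

# Trivial baseline extractor: grab the first number-looking token.
def naive_extract(doc: str) -> str:
    m = re.search(r"\d+\.\d+", doc)
    return m.group(0) if m else ""

accuracy(naive_extract, test_set)  # 1.0 on this toy set
```

Swapping `naive_extract` for a model call (with or without in-context examples) gives a like-for-like comparison on the same dataset.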

Using few-shot examples to boost LLM data extraction by over 50%?

If you've spent countless hours fine-tuning prompts, testing different parsing libraries, and trying to craft the perfect solution only to get mediocre results, this is for you.

18.12.2024 11:21 👍 0 🔁 0 💬 1 📌 0

My 3 mantras to stay sane as an entrepreneur.

Always visible on my desk.

I should probably have a nicer version framed or something, but hey, who has time for that? 😂

11.12.2024 08:26 👍 1 🔁 0 💬 0 📌 0

Yes, there are so many things going into the "real eval" that make it super hard to properly capture.

05.12.2024 14:15 👍 0 🔁 0 💬 1 📌 0

Ohh nice! Although I think that's a bit too much for my skill level 🤣

05.12.2024 14:15 👍 0 🔁 0 💬 0 📌 0
demos/vision-extraction-validation.ipynb at main · limai-io/demos

Want to dive into the details?

Check out our full notebook for the code, results, and how we caught hallucinated outputs: github.com/limai-io/de...

Or let's chat! DM me or email bruno@limai.io to discuss how we can help build robust pipelines for your business. 🚀

05.12.2024 11:06 👍 0 🔁 0 💬 0 📌 0

The Takeaway

Vision-based models are powerful, but validation frameworks are critical for reliable results.

💡 If you're building data pipelines, combine extraction with validation to ensure accuracy and trust.

05.12.2024 11:06 👍 0 🔁 0 💬 1 📌 0

Key Results

✅ Vision models like Gemini handled layouts flexibly.

✅ Validation caught hallucinations and ensured data accuracy.

✅ Trustworthiness increased for complex documents like utility bills.

05.12.2024 11:06 👍 0 🔁 0 💬 1 📌 0

How It Works

• Extract raw text using a PDF reader.

• Validate each extracted value (e.g., "160.69 €") by searching for it in the raw text.

• Flag values that don't match as potential hallucinations.

05.12.2024 11:06 👍 0 🔁 0 💬 1 📌 0
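The three steps in the post above boil down to a substring cross-check against the raw text. A minimal sketch, with illustrative field names and raw text rather than the actual pipeline:

```python
# Raw text as a PDF reader would return it (illustrative).
raw_text = "Electricity bill ... Contracted power: 2.983 kW ... Total: 160.69 €"

# Values a vision model claims to have extracted (illustrative field names).
extracted = {"total": "160.69 €", "contracted_power": "2.0 kW"}

def flag_hallucinations(values: dict, raw: str) -> list[str]:
    """Return the field names whose value never appears verbatim in the raw text."""
    return [field for field, value in values.items() if value not in raw]

flag_hallucinations(extracted, raw_text)  # ['contracted_power']
```

Here "160.69 €" is found in the raw text and passes, while "2.0 kW" is absent (the bill says "2.983 kW") and gets flagged. Real values often need normalisation (whitespace, thousands separators) before the lookup, which this sketch omits.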

We combined:

1๏ธโƒฃ Vision-based extraction to handle complex layouts.

2๏ธโƒฃ Instructor-powered validation to cross-check extracted values against raw text from PDFs.

This ensured data was grounded in reality, not hallucinated.

05.12.2024 11:06 👍 0 🔁 0 💬 1 📌 0

While vision models excel at "reading" layouts, they sometimes invent data.

E.g., instead of extracting "2.983 kW" for contracted power, the model returned "2.0 kW", a made-up value. 😬

How do we prevent this?

05.12.2024 11:06 👍 0 🔁 0 💬 1 📌 0

Vision-based extraction is becoming the most promising path forward for Document AI.

These models handle complex layouts, tables, and multimodal inputs natively, far beyond what rule-based parsing can achieve. But they also have challenges.

05.12.2024 11:06 👍 0 🔁 0 💬 1 📌 0

🚀 Preventing Hallucinations in Vision-Based Data Extraction

Vision models are emerging as the best way to handle documents with complex layouts. On the flip side, they are more likely to hallucinate results.

How can we address that? With OCR-based data validation. 👇

05.12.2024 11:06 👍 0 🔁 0 💬 1 📌 0

It feels like chess engines are so powerful now that they've become a bit useless for chess commentary. Even GMs can't make sense of the eval bar sometimes. Maybe it would be better to have a more "human" eval bar that actually helps the audience and commentators.

05.12.2024 10:14 👍 2 🔁 0 💬 2 📌 0

I love how chess players assign so much meaning, personality, and purpose to chess pieces throughout games. So much passion and emotion over a board game.

03.12.2024 13:01 👍 2 🔁 0 💬 0 📌 0

Super excited about PydanticAI. Looking forward to taking it out for a spin.

02.12.2024 16:36 👍 1 🔁 1 💬 0 📌 0

That's an interesting question. The dataset I have is not big enough to try that. I suspect that indeed at some point it will start to regress.

02.12.2024 13:26 👍 0 🔁 0 💬 0 📌 0

100%, more so when you have models like Gemini's family in which you can really put A LOT in the context window.

02.12.2024 13:15 👍 1 🔁 0 💬 1 📌 0

If you're curious about how this approach can work for you, let's chat!

We're offering free consulting calls this month to help businesses optimize their AI strategies.

📩 bruno@limai.io or DM me!

02.12.2024 11:46 👍 0 🔁 0 💬 0 📌 0

Check it out here: https://www.limai.io/blog/example

02.12.2024 11:46 👍 1 🔁 0 💬 1 📌 0

In our latest post we break down:
✅ How we built a simple test dataset to evaluate accuracy.
✅ Why adding examples worked so well (and why you should try it).
✅ How this influenced our product's UX/UI strategy.

02.12.2024 11:46 👍 0 🔁 0 💬 1 📌 0

That's when we tried something so simple it felt obvious in hindsight: we added an example. The results were staggering:
• With a small model plus the example, accuracy leaped from 61% to 97%.
• We achieved this without fine-tuning or complex parsing techniques.

02.12.2024 11:46 👍 0 🔁 0 💬 1 📌 0
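"Adding an example" in practice means prepending one worked input/output pair to the conversation sent to the model. A hedged sketch of how those messages might be assembled; the example document, output schema, and system prompt are made up, not the real ones:

```python
# One worked example shown to the model before the real document.
# The pair below is illustrative, not the actual prompt used.
example_doc = "Faktura ... Celkem: 160.69 EUR"
example_output = '{"total_amount": "160.69", "currency": "EUR"}'

def build_messages(document: str) -> list[dict]:
    """Assemble a few-shot chat prompt: example pair first, real task last."""
    return [
        {"role": "system", "content": "Extract the fields as JSON."},
        {"role": "user", "content": example_doc},          # in-context example input
        {"role": "assistant", "content": example_output},  # its expected answer
        {"role": "user", "content": document},             # the real document
    ]

messages = build_messages("Faktura ... Celkem: 42.00 EUR")
```

The same structure extends to several examples, or to dynamically selecting the example most similar to the incoming document.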

Even after a lot of work on prompt engineering and trying out parsing libraries, our results were stuck at 61%-80% accuracy, not enough for reliable use.

02.12.2024 11:46 👍 0 🔁 0 💬 1 📌 0

Czech utility bills. These documents had:
🌎 Non-English text (a hurdle for many LLMs)
🧮 Values that needed to be calculated (e.g., summing multiple rows for Heating or Cooling)
🎲 A mix of other fields like dates, addresses, and contract details

02.12.2024 11:46 👍 1 🔁 0 💬 1 📌 0
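The "values that needed to be calculated" point above means the target number never appears on the bill directly; it has to be summed from several line items. A minimal sketch, with made-up categories and amounts:

```python
# Line items as they might be extracted from a bill (illustrative values).
line_items = [
    {"category": "Heating", "amount": 41.20},
    {"category": "Heating", "amount": 19.49},
    {"category": "Cooling", "amount": 12.00},
]

def total_for(category: str, items: list[dict]) -> float:
    """Sum every row belonging to one category, as the extraction must."""
    return round(sum(i["amount"] for i in items if i["category"] == category), 2)

total_for("Heating", line_items)  # 60.69
```

This is also why such fields are hard to validate with a plain substring check: the computed total may not appear anywhere in the raw text, so validation has to re-derive it from the individual rows.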

While building Limai's data extraction product, we faced a tough challenge for a proof of concept with a potential client: extracting complex data from

02.12.2024 11:46 👍 0 🔁 0 💬 1 📌 0

🚀 [NEW POST] Show, Don't Tell: How Dynamic Examples Boosted Accuracy from 61% to 97%

Ever spent hours fine-tuning prompts or testing document parsing libraries, only to end up with meh results? What if I told you that one simple change could drastically improve your results?

02.12.2024 11:46 👍 0 🔁 0 💬 1 📌 1

https://arxiv.org/abs/2310.11244

29.11.2024 15:25 👍 0 🔁 0 💬 0 📌 0

Interesting paper on Entity Matching using LLMs. I think I'll work on a demo of this soon.

29.11.2024 15:25 👍 0 🔁 0 💬 1 📌 0