In a Training Loop 🔄
Stefan Schweter
stefan-it
3,686 followers · 388 following
https://schweter.bayern
AI & ML interests
Flair Library 💕, NER & PoS Tagging, LM Pretraining (mostly encoder-only & encoder-decoder), Historical Language Models, German Language Models, Bavarian NLP 🥨
Recent Activity
reacted to umarbutler's post with ❤️ (11 minutes ago):
@abdurrahmanbutler and I just dropped Legal RAG Bench, the first benchmark for legal RAG systems to simultaneously evaluate hallucinations, retrieval failures, and reasoning errors. Our key takeaways:

1. Embedding models, not generative models, are the primary driver of RAG accuracy. Switching from a general-purpose embedder like OpenAI's Text Embedding 3 Large to a legal-domain embedder like Isaacus' Kanon 2 Embedder can raise accuracy by ~19 points.
2. Hallucinations are often triggered by retrieval failures. Fix your retrieval stack and, in most cases, you end up fixing hallucinations.
3. Once you have a solid legal retrieval engine like Kanon 2 Embedder, the choice of generative model matters much less; GPT-5.2 and Gemini 3.1 Pro perform similarly, with Gemini 3.1 Pro achieving slightly better accuracy at the cost of more hallucinations.
4. Google's latest LLM, Gemini 3.1 Pro, is actually slightly worse than its predecessor at legal RAG, achieving 79.3% accuracy instead of 80.3%.

These findings confirm what we already knew at Isaacus: information retrieval sets the ceiling on the accuracy of legal RAG systems. It doesn't matter how smart you are; you aren't going to magically know the penalty for speeding in California without access to an up-to-date copy of the California Vehicle Code. Even so, to our knowledge, we're the first to actually show this empirically.

Unfortunately, as we highlight in our write-up, high-quality open legal benchmarks like Legal RAG Bench and our earlier MLEB are few and far between. In the interests of transparency, we have not only detailed exactly how we built Legal RAG Bench, but also released all of our data openly on Hugging Face.

You can read our write-up [here](https://isaacus.com/blog/legal-rag-bench); we'll soon be publishing it as a paper. Kudos to my brother @abdurrahmanbutler for serving as the lead author on this monumental release.
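To make takeaway 1 concrete, here is a minimal sketch of the dense-retrieval step that the post argues sets the RAG accuracy ceiling: embed the corpus and the query, rank passages by cosine similarity, and pass only the top-k to the generator. The embedding model here is a generic stand-in (`all-MiniLM-L6-v2`) and the two passages are illustrative; the post's actual comparison used OpenAI's Text Embedding 3 Large and Isaacus' Kanon 2 Embedder via their provider APIs.

```python
# Sketch of the dense-retrieval step in a RAG pipeline. The embedding model
# and the corpus snippets below are stand-ins for illustration only.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # swap in a legal-domain embedder here

corpus = [
    "California Vehicle Code § 22350: no person shall drive at a speed greater than is reasonable or prudent...",
    "California Vehicle Code § 40000.1: violations of this code are infractions unless otherwise provided...",
]
query = "What is the penalty for speeding in California?"

# With normalized embeddings, the dot product equals cosine similarity.
doc_vecs = embedder.encode(corpus, normalize_embeddings=True)
query_vec = embedder.encode(query, normalize_embeddings=True)
scores = doc_vecs @ query_vec

# The generator only ever sees the top-k passages, so retrieval quality
# bounds end-to-end accuracy no matter how strong the generative model is.
top_k = np.argsort(-scores)[:1]
for i in top_k:
    print(f"{scores[i]:.3f}  {corpus[i]}")
```

Normalizing embeddings up front keeps ranking a cheap dot product, which is why swapping the embedder is such a low-friction, high-impact change in a RAG stack.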
liked a dataset (about 5 hours ago): Eurolingua/HPLT3_DE_0.9_Quantile_Adult_Filtered
reacted to qgallouedec's post with 🔥 (1 day ago):
@CohereLabs just released 🌿 Tiny Aya: a fully open-source 3B-parameter model that speaks 70+ languages 🌍!

But there's a catch: Tiny Aya is just a language model. It doesn't support tool calling, the key capability that turns frontier models into powerful *agents*. So the real question is: how hard is it to turn Tiny Aya into an agent?

Turns out… it's simple, thanks to Hugging Face TRL. We're sharing a hands-on example showing how to fine-tune Tiny Aya into a tool-calling agent using TRL, unlocking what could become the first *massively multilingual open agent*.

Small model. Global reach. Agent capabilities.

👉 https://github.com/huggingface/trl/blob/main/examples/notebooks/sft_tool_calling.ipynb
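As a companion to the linked notebook, here is a minimal sketch of what the TRL training setup could look like: supervised fine-tuning on chat-format dialogues where the assistant answers with a tool call. The model ID `CohereLabs/tiny-aya`, the toy dataset, and the `<tool_call>` tag format are assumptions for illustration; the notebook above is the authoritative reference.

```python
# Sketch of tool-calling SFT with TRL. Model ID, dataset, and tool-call
# formatting are assumptions, not verified artifacts.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Toy chat-format dataset: the assistant responds with a structured tool call.
# Real training data would contain many such multilingual tool-use dialogues.
train_dataset = Dataset.from_list([
    {
        "messages": [
            {"role": "user", "content": "What's the weather in Munich?"},
            {
                "role": "assistant",
                "content": '<tool_call>{"name": "get_weather", "arguments": {"city": "Munich"}}</tool_call>',
            },
        ]
    }
])

trainer = SFTTrainer(
    model="CohereLabs/tiny-aya",  # assumed model ID; check the Hub for the real one
    args=SFTConfig(output_dir="tiny-aya-tool-calling"),
    train_dataset=train_dataset,
)
trainer.train()
```

SFTTrainer applies the model's chat template to each `messages` list before tokenization, so turning this sketch into a real tool-calling agent is mostly a matter of assembling a good tool-use dataset.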
stefan-it's Spaces (2)
German nanochat Demo 🌍 (Running on Zero): Generate German text responses based on user input
hmLeaderboard 🏆 (Sleeping)