We already know prompt repetition is a handy hack to improve a decoder-only LM's performance, as it allows the model to "see" bidirectionally, an ability otherwise suppressed by the causal mask.
But what happens if we increase the number of repetitions? 🧵 @eaclmeeting.bsky.social #EACL2026
02.02.2026 12:04
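For readers unfamiliar with the trick: "prompt repetition" is just concatenating copies of the prompt, so that tokens in a later copy can attend to a complete earlier copy despite the causal mask. A minimal sketch (function name and separator are illustrative, not from the paper):

```python
def repeat_prompt(prompt: str, n: int = 2, sep: str = "\n\n") -> str:
    """Concatenate n copies of the prompt. Tokens in the final copy can
    attend to every token of the earlier copies, giving the causal LM an
    effectively bidirectional view of the prompt."""
    return sep.join([prompt] * n)

repeated = repeat_prompt("Translate to Croatian: cat", n=2)
```

The thread's question is then simply what happens as `n` grows beyond 2.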
So, news becomes more positive as the years go by. Or does it? We trained sentiment classifiers on STONE & 24sata, then analyzed sentiment over 5 periods of the TL Retriever. We find that positivity rises at the expense of neutrality. But negativity in news headlines also increases.
15.07.2025 12:14
We detect sentiment shift by swapping embeddings across periods. Using later-period embeddings in earlier periods results in increased positive sentiment. Using earlier-period embeddings in later periods results in decreased positive sentiment.
15.07.2025 12:14
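A toy numpy sketch of the swapping idea, under assumed shapes: keep a sentiment head fixed and change only which period's embedding table feeds it, so any score difference is attributable to the embeddings alone. The vocabulary, random matrices, and linear head are stand-ins, not the paper's trained models.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"growth": 0, "crisis": 1, "summit": 2}

# Stand-ins for per-period embedding tables (the paper uses aligned
# SGNS embeddings; random matrices here just make the sketch run).
emb_early = rng.normal(size=(3, 8))
emb_late = rng.normal(size=(3, 8))

# A fixed linear sentiment head (in reality: a trained classifier).
w, b = rng.normal(size=8), 0.0

def positive_score(tokens, emb):
    """Mean-pool the token vectors, then apply a sigmoid over the head."""
    vec = np.mean([emb[vocab[t]] for t in tokens], axis=0)
    return 1.0 / (1.0 + np.exp(-(vec @ w + b)))

# Same headline, same head, different period's embeddings:
headline = ["growth", "summit"]
early = positive_score(headline, emb_early)
late = positive_score(headline, emb_late)
```

Comparing `early` and `late` on held-out headlines is the kind of swap the post describes.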
We wondered if the trained embeddings could tell us something about the shift in sentiment. Can we detect changes in positivity and negativity just using the trained embeddings? The answer is yes!
15.07.2025 12:14
We identify words that change the most by their cumulative cosine distance scores within the last 25 years. For these words, we unveil the change in meaning by picking five nearest neighbors per period. We group the words into three major topics: EU, technology, and COVID.
15.07.2025 12:14
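The per-period neighbor lists can be read straight off each period's embedding matrix with cosine similarity. A small numpy sketch (variable names are mine; in practice a gensim `KeyedVectors.most_similar` lookup does the same job):

```python
import numpy as np

def nearest_neighbors(word, vocab, emb, k=5):
    """Top-k cosine neighbors of `word` in one period's embedding matrix.

    vocab: dict mapping word -> row index; emb: (V, d) matrix.
    """
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed[vocab[word]]          # cosine to every word
    order = np.argsort(-sims)                    # most similar first
    inv = {i: w for w, i in vocab.items()}
    return [inv[i] for i in order if inv[i] != word][:k]
```

Running this per period for a high-change word gives the neighbor trajectories the post describes.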
We train embeddings using the skip-gram with negative sampling (SGNS) method from Word2Vec. We align embeddings across periods using Procrustes alignment. We validate embedding quality on two word similarity datasets.
15.07.2025 12:14
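The SGNS training itself is standard (e.g. gensim's `Word2Vec` with `sg=1` and `negative > 0`); the alignment step is orthogonal Procrustes, which has a closed-form SVD solution. A sketch of just that step, assuming both matrices hold vectors for a shared vocabulary in the same row order:

```python
import numpy as np

def procrustes_align(base, other):
    """Rotate `other` onto `base` via orthogonal Procrustes.

    Finds the orthogonal R minimizing ||other @ R - base||_F, using the
    classic SVD solution R = U V^T where U S V^T = other^T @ base.
    Rows are word vectors for a shared vocabulary, same order in both.
    """
    u, _, vt = np.linalg.svd(other.T @ base)
    return other @ (u @ vt)
```

After aligning every period onto a common base, vectors of the same word become directly comparable across periods.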
TakeLab Retriever
We leverage the TakeLab Retriever (retriever.takelab.fer.hr) corpus of 10 million articles from Croatian news outlets, which we split into five equal periods (2000–2024).
Semantic change is measured using the cumulative cosine distance between embeddings in neighboring periods.
15.07.2025 12:14
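The change score described above can be sketched in a few lines; `vectors` is assumed to be one word's aligned vector per period, in chronological order:

```python
import numpy as np

def cumulative_cosine_distance(vectors):
    """Sum of cosine distances between a word's aligned vectors in
    neighboring periods: sum over t of 1 - cos(v_t, v_{t+1})."""
    total = 0.0
    for a, b in zip(vectors, vectors[1:]):
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        total += 1.0 - cos
    return total
```

Ranking the vocabulary by this score surfaces the words whose meaning drifted most over the five periods.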
While traditional diachronic studies use corpora spanning centuries, we find interesting results even when training diachronic embeddings on only 25 years of news data. We detect words from three turbulent topics (EU, technology, and COVID) whose semantics were strongly affected.
15.07.2025 12:14
📣📣 New preprint alert!!
Despite events in the world becoming bleaker, the news is… more positive?
We conduct a diachronic study of word embeddings trained on 10M Croatian news articles spanning 25 years and find some surprising results!
arxiv.org/abs/2506.13569
15.07.2025 12:14