2025
Conference Paper
Title
Exploring Curriculum Learning for Languages: Lessons from Regular Language Tasks
Abstract
Despite its intuitive appeal, the effectiveness of data-level curriculum learning (CL) remains debated, mainly due to the absence of unambiguous notions of sample difficulty in real-world tasks. As a step towards a better understanding of the effective use of different curriculum strategies in natural language learning, we study CL in the context of regular languages, where both ground truth and sample difficulty can be precisely defined using deterministic finite automata. We consider two natural measures of difficulty: a data-driven metric based on input length and a task-specific metric derived from the automaton’s structure. Training RNNs and LSTMs across ten regular language classification tasks, we find that CL is not just beneficial but, in some cases, essential for generalisation. Surprisingly, straightforward data-driven curricula outperform more complex task-specific strategies, with the most successful approaches oversampling the shorter lengths early in training.
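As an illustration of the data-driven, length-based curriculum described above, the following sketch is a hypothetical (not the paper's actual code) example: ground truth comes from a toy DFA over {a, b}, and a curriculum sampler caps sample length by training progress, oversampling shorter strings early on.

```python
import random

def dfa_label(s):
    """Toy DFA: accept strings over {a, b} with an even number of 'a's."""
    state = 0
    for ch in s:
        if ch == 'a':
            state = 1 - state  # parity flip on each 'a'
    return state == 0

def make_pool(max_len, n, rng):
    """Random strings of length 1..max_len, labelled by the DFA."""
    pool = []
    for _ in range(n):
        length = rng.randint(1, max_len)
        s = ''.join(rng.choice('ab') for _ in range(length))
        pool.append((s, dfa_label(s)))
    return pool

def curriculum_batch(pool, progress, batch_size, rng):
    """Length-based curriculum: cap sample length by training progress
    in [0, 1], so early batches contain only short strings."""
    max_len = max(len(s) for s, _ in pool)
    cap = max(1, int(progress * max_len))
    eligible = [ex for ex in pool if len(ex[0]) <= cap]
    return [rng.choice(eligible) for _ in range(batch_size)]

rng = random.Random(0)
pool = make_pool(20, 500, rng)
early = curriculum_batch(pool, 0.2, 32, rng)  # short strings only
late = curriculum_batch(pool, 1.0, 32, rng)   # full length range
```

The `progress` schedule here is a simple linear cap; the paper compares several schedules, but the key idea shown is biasing early training toward short, easily labelled samples while the DFA provides exact ground truth at every length.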