id: string
text: string
embedding: list
url: string
filename: string
page_num: int32
words: string
corrected_text: string
corrections: string
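The rows below follow the schema above. A minimal sketch of one record as a Python type, assuming the field semantics inferred from the data (the class name `PageRecord` and the `word_boxes` helper are my own, not part of the dataset):

```python
import json
from dataclasses import dataclass
from typing import List

# Hypothetical record type for this dump, inferred from the schema above.
@dataclass
class PageRecord:
    id: str                  # e.g. "2512.17915_page_1"
    text: str                # extracted page text
    embedding: List[float]   # dense text embedding (truncated in the dump)
    url: str                 # source PDF URL
    filename: str            # arXiv identifier
    page_num: int            # 1-based page number within the PDF
    words: str               # JSON string of per-word bounding boxes
    corrected_text: str = ""
    corrections: str = "[]"

    def word_boxes(self) -> list:
        """Parse the words field into {text, left, top, width, height} dicts."""
        return json.loads(self.words)

rec = PageRecord(
    id="2512.17915_page_1",
    text="Supplementary Resources and Analysis ...",
    embedding=[-0.0908, -0.1851, 0.0284],
    url="https://arxiv.org/pdf/2512.17915",
    filename="2512.17915",
    page_num=1,
    words='[{"text": "Supplementary", "left": 86.0, "top": 82.6, '
          '"width": 104.1, "height": 14.3}]',
)
print(rec.word_boxes()[0]["text"])  # -> Supplementary
```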
2512.17915_page_1
Supplementary Resources and Analysis for Automatic Speech Recognition Systems Trained on the Loquacious Dataset Nick Rossenbach∗†, Robin Schmitt∗†, Tina Raissi∗, Simon Berger∗†, Larissa Kleppel∗, Ralf Schlüter∗† ∗RWTH Aachen University, †AppTek.ai Aachen, Germany {lastname}@ml.rwth-aachen.de Abstract The recently publi...
[ -0.09084983915090561, -0.18505682051181793, 0.028442008420825005, -0.03862176090478897, 0.01164216734468937, 0.06078643351793289, -0.02839282713830471, -0.001238401047885418, -0.032731156796216965, -0.06897306442260742, 0.03225497156381607, -0.04918546974658966, 0.0016038675094023347, -0.0...
https://arxiv.org/pdf/2512.17915
2512.17915
1
[{"text": "Supplementary", "left": 86.01799774169922, "top": 82.55497741699219, "width": 104.06732940673828, "height": 14.346199035644531}, {"text": "Resources", "left": 194.07357788085938, "top": 82.55497741699219, "width": 73.35214233398438, "height": 14.346199035644531}, {"text": "and", "left": 271.4139709472656, "t...
[]
2512.17915_page_2
1.1. Contributions With this work, we benchmark several ASR architectures with different label topologies using BPE and phoneme label units. We show the effect of different decoding methods, with open or closed vocabulary and optional use of a LM. For this purpose, we extend the Loquacious dataset with a pronunciat...
[ -0.03953583911061287, -0.14064234495162964, 0.057217247784137726, -0.03852261230349541, -0.04181374981999397, -0.014403225854039192, -0.01642632856965065, 0.017858434468507767, 0.005356563720852137, -0.07174739986658096, 0.013009521178901196, -0.07055637985467911, 0.01235112827271223, -0.0...
https://arxiv.org/pdf/2512.17915
2512.17915
2
[{"text": "1.1.", "left": 72.0, "top": 64.3185043334961, "width": 18.345840454101562, "height": 10.998703002929688}, {"text": "Contributions", "left": 101.34454345703125, "top": 64.3185043334961, "width": 72.09645080566406, "height": 10.998703002929688}, {"text": "With", "left": 81.96299743652344, "top": 84.08692932128...
[]
2512.17915_page_3
Table 1: Perplexities and OOV percentage of the different count-based LMs on the respective dev sets. All LMs are restricted to the 216k words vocabulary. Language Model n-gram count Loquacious Commonvoice LibriSpeech VoxPopuli Yodas OOV[%] PPL OOV[%] PPL OOV[%] PPL OOV[%] PPL OOV[%] PPL 3-gram pruned 36M 0.58 222 1.25...
[ -0.024678321555256844, -0.17939873039722443, -0.08653436601161957, -0.016259046271443367, -0.02198953740298748, 0.016747770830988884, 0.0017741707852110267, 0.07699503749608994, 0.06655903905630112, 0.023167412728071213, 0.07796350121498108, -0.030467409640550613, 0.03338629752397537, -0.0...
https://arxiv.org/pdf/2512.17915
2512.17915
3
[{"text": "Table", "left": 71.69100189208984, "top": 72.17292785644531, "width": 24.145538330078125, "height": 9.962600708007812}, {"text": "1:", "left": 98.60091400146484, "top": 72.17292785644531, "width": 8.383590698242188, "height": 9.962600708007812}, {"text": "Perplexities", "left": 110.6837387084961, "top": 72.1...
[]
2512.17915_page_4
Table 2: Recognition results for BPE ASR systems that work without additional LMs or lexicon. Abbreviations for test sets: Commonvoice (CV), LibriSpeech (LS), VoxPopuli (VP), Yodas (YD). Data Architecture Decoding WER [%] Loquacious CV LS VP YD dev test test train.small 250h CTC Greedy 17.2 18.6 30.1 16.7 12.5 21.1 R...
[ -0.06810492277145386, -0.06766447424888611, -0.0348125658929348, -0.04632832109928131, 0.047958362847566605, 0.05099266767501831, -0.04427279159426689, 0.03987102583050728, -0.07576029002666473, -0.048726458102464676, -0.012135426513850689, -0.07095083594322205, 0.008172090165317059, -0.00...
https://arxiv.org/pdf/2512.17915
2512.17915
4
[{"text": "Table", "left": 71.69100189208984, "top": 72.17292785644531, "width": 23.599403381347656, "height": 9.962600708007812}, {"text": "2:", "left": 98.06333923339844, "top": 72.17292785644531, "width": 8.200790405273438, "height": 9.962600708007812}, {"text": "Recognition", "left": 109.97120666503906, "top": 72.1...
[]
2512.17915_page_5
4.2. Count-based Language Model We use the 216k words vocabulary from Section 2.1 and the pruned 4-gram LM from Section 2.3 to further improve on the baseline results as shown in Table 3. As expected, using a lexical tree-based search with a language model yields much better results for models trained on train.small. ...
[ 0.004336571786552668, -0.115930937230587, 0.04757677763700485, 0.030209969729185104, 0.0312812402844429, 0.05556652322411537, -0.03969951719045639, 0.08469244837760925, 0.034532640129327774, -0.025250565260648727, 0.03423786535859108, -0.04807525500655174, 0.01157895103096962, 0.0383448936...
https://arxiv.org/pdf/2512.17915
2512.17915
5
[{"text": "4.2.", "left": 72.0, "top": 64.3185043334961, "width": 18.345840454101562, "height": 10.998703002929688}, {"text": "Count-based", "left": 101.34454345703125, "top": 64.3185043334961, "width": 67.21305847167969, "height": 10.998703002929688}, {"text": "Language", "left": 171.615234375, "top": 64.3185043334961...
[]
2512.17915_page_6
Table 3: Recognition results for different BPE ASR systems to compare the effect of vocabulary restricted search and the addition of the pruned 4-gram LM. Abbreviations for test sets: Commonvoice (CV), LibriSpeech (LS), VoxPopuli (VP), Yodas (YD). Data Architecture Vocab LM WER [%] Loquacious CV LS VP YD dev test test ...
[ -0.003550386056303978, -0.0599285252392292, -0.049036905169487, -0.05048362910747528, 0.04420754685997963, 0.05249686911702156, -0.05863851681351662, 0.07372786849737167, -0.015921100974082947, -0.01772783324122429, 0.06239372864365578, -0.0494658425450325, 0.010500562377274036, -0.0032909...
https://arxiv.org/pdf/2512.17915
2512.17915
6
[{"text": "Table", "left": 71.69100189208984, "top": 72.17292785644531, "width": 23.422271728515625, "height": 9.962600708007812}, {"text": "3:", "left": 97.63221740722656, "top": 72.17292785644531, "width": 8.142631530761719, "height": 9.962600708007812}, {"text": "Recognition", "left": 112.91185760498047, "top": 72.1...
[]
2512.17915_page_7
Table 5: Recognition results for different ASR systems comparing BPE to phoneme performance. Results of FH exclude speed perturbation. All results include decoding with the pruned 4-gram LM. Abbreviations for test sets: Commonvoice (CV), LibriSpeech (LS), VoxPopuli (VP), Yodas (YD). Data Architecture Label WER ...
[ -0.04637373983860016, -0.059191904962062836, -0.04366108775138855, -0.07091904431581497, 0.006375262513756752, 0.023094916716217995, -0.06334196776151657, 0.08433638513088226, -0.03668772801756859, -0.06069326400756836, 0.006718764081597328, -0.059695567935705185, 0.0017539648106321692, -0...
https://arxiv.org/pdf/2512.17915
2512.17915
7
[{"text": "Table", "left": 71.69100189208984, "top": 72.17292785644531, "width": 23.422271728515625, "height": 9.962600708007812}, {"text": "5:", "left": 97.74937438964844, "top": 72.17292785644531, "width": 8.142631530761719, "height": 9.962600708007812}, {"text": "Recognition", "left": 109.52397155761719, "top": 72.1...
[]
2512.17915_page_8
Table 9: Detailed WERs in terms of substitutions, insertions, and deletions on the LibriSpeech and Yodas dev sets for different models trained on train.small. CTC, mRNN-T, and FH results are with 4-gram LM. In the case of phonemes, we use end-of-word augmentation. Architecture Labels WER [%] LibriSpeech Yodas Tokens Numb...
[ -0.06503663957118988, -0.09698916226625443, 0.05206318199634552, -0.08792991936206818, -0.025583166629076004, 0.01454163808375597, 0.011478366330265999, 0.07806073129177094, 0.012806913815438747, -0.012431302107870579, 0.06227429583668709, -0.0695335865020752, -0.015324821695685387, -0.013...
https://arxiv.org/pdf/2512.17915
2512.17915
8
[{"text": "Table", "left": 71.69100189208984, "top": 72.17292785644531, "width": 23.422271728515625, "height": 9.962600708007812}, {"text": "9:", "left": 97.66150665283203, "top": 72.17292785644531, "width": 8.142631530761719, "height": 9.962600708007812}, {"text": "Detailed", "left": 112.9704360961914, "top": 72.17292...
[]
2512.17915_page_9
mance, and for certain research or applications it might be beneficial to also explore such systems. We discussed some aspects that make Loquacious more interesting for research compared to earlier academic datasets or other alternatives. Finally, we presented first results with multiple architectures that can be used ...
[ -0.042436376214027405, -0.022550757974386215, 0.024699769914150238, 0.02385099045932293, 0.012524338439106941, -0.04125654697418213, -0.09810199588537216, 0.09307502955198288, -0.01146545447409153, -0.0787128135561943, -0.09398803859949112, -0.021430617198348045, -0.03861740231513977, -0.0...
https://arxiv.org/pdf/2512.17915
2512.17915
9
[{"text": "mance,", "left": 72.0, "top": 64.80091857910156, "width": 33.12471008300781, "height": 9.962600708007812}, {"text": "and", "left": 107.91287994384766, "top": 64.80091857910156, "width": 16.850257873535156, "height": 9.962600708007812}, {"text": "for", "left": 127.54120635986328, "top": 64.80091857910156, "wi...
[]
2512.17915_page_10
Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187–197, Edinburgh, Scotland. Proc. ACL. François Hernandez, Vincent Nguyen, Sahar Ghannay, Natalia A. Tomashenko, and Yannick Estève. 2018. TED-LIUM 3: Twice as...
[ -0.05252157896757126, -0.12413886934518814, 0.04521544650197029, 0.00046796395326964557, 0.023742327466607094, 0.1458677500486374, -0.019992223009467125, -0.009958178736269474, 0.05910591036081314, -0.08101895451545715, -0.0014520925469696522, -0.02399280108511448, -0.023093707859516144, -...
https://arxiv.org/pdf/2512.17915
2512.17915
10
[{"text": "Kenneth", "left": 72.0, "top": 64.80091857910156, "width": 37.405792236328125, "height": 9.962600708007812}, {"text": "Heafield.", "left": 114.91351318359375, "top": 64.80091857910156, "width": 40.09869384765625, "height": 9.962600708007812}, {"text": "2011.", "left": 160.51992797851562, "top": 64.8009185791...
[]
2512.22106_page_1
Pruning as a Game: Equilibrium-Driven Sparsification of Neural Networks Zubair Shah College of Science and Engineering Hamad Bin Khalifa University Doha, Qatar zshah@hbku.edu.qa Noaman Khan College of Science and Engineering Hamad Bin Khalifa University Doha, Qatar nokh88609@hbku.edu.qa Abstract Neural network pruning ...
[ 0.02021130546927452, 0.021239636465907097, -0.006247793324291706, 0.020280612632632256, 0.013715095818042755, 0.07885098457336426, 0.03449004516005516, -0.018417710438370705, 0.03548691049218178, -0.015263018198311329, -0.018680989742279053, 0.014186140149831772, 0.04264143481850624, 0.020...
https://arxiv.org/pdf/2512.22106
2512.22106
1
[{"text": "Pruning", "left": 156.4429931640625, "top": 77.8488540649414, "width": 60.27113342285156, "height": 17.21540069580078}, {"text": "as", "left": 221.01797485351562, "top": 77.8488540649414, "width": 15.304489135742188, "height": 17.21540069580078}, {"text": "a", "left": 240.62631225585938, "top": 77.8488540649...
[]
2512.22106_page_2
indispensable contributions, while others become increasingly redundant as training progresses. From this perspective, sparsity is not an externally enforced constraint, but an emergent property of competition and dominance among parameters. Motivated by this observation, we propose a game-theoretic formulation of neur...
[ -0.05434573441743851, -0.031066006049513817, -0.0470442958176136, 0.002026922767981887, 0.009740054607391357, 0.10963446646928787, 0.06007782742381096, -0.017383597791194916, 0.015914548188447952, -0.04066009074449539, -0.03845950588583946, 0.04531439393758774, 0.05002082884311676, 0.02468...
https://arxiv.org/pdf/2512.22106
2512.22106
2
[{"text": "indispensable", "left": 108.0, "top": 74.33230590820312, "width": 55.32112121582031, "height": 10.061729431152344}, {"text": "contributions,", "left": 166.806640625, "top": 74.33230590820312, "width": 55.971527099609375, "height": 10.061729431152344}, {"text": "while", "left": 226.51773071289062, "top": 74.3...
[]
2512.22106_page_3
2.5 Game-Theoretic Perspectives in Learning Game-theoretic concepts have been applied to model adversarial learning [17], multi-agent reinforcement learning [18, 19], distributed optimization [20], and federated learning [21]. However, their application to pruning as an equilibrium phenomenon remains unexplored. 2.6 ...
[ -0.036016520112752914, -0.03366038575768471, -0.05570220202207565, -0.030912959948182106, 0.019179780036211014, 0.06415548175573349, 0.011085791513323784, -0.06333967298269272, -0.027646316215395927, -0.008264320902526379, -0.03076357953250408, 0.00202948204241693, 0.06599901616573334, 0.0...
https://arxiv.org/pdf/2512.22106
2512.22106
3
[{"text": "2.5", "left": 108.0, "top": 74.31652069091797, "width": 12.4532470703125, "height": 9.962600708007812}, {"text": "Game-Theoretic", "left": 130.4158477783203, "top": 74.31652069091797, "width": 69.53897094726562, "height": 9.962600708007812}, {"text": "Perspectives", "left": 202.44546508789062, "top": 74.3165...
[]
2512.22106_page_4
3.3 From Optimization to Interaction Traditional pruning methods implicitly evaluate parameter groups in isolation by assigning importance scores derived from magnitude, gradients, or training trajectories. In contrast, our formulation emphasizes that the utility of a parameter group depends on the collective configura...
[ 0.013879363425076008, 0.0007282173028215766, -0.05910981446504593, -0.05576064810156822, 0.0022702247370034456, 0.05197698250412941, 0.03955334424972534, 0.021213704720139503, 0.005809319205582142, 0.007677589543163776, -0.04415026307106018, 0.023888448253273964, 0.059229493141174316, -0.0...
https://arxiv.org/pdf/2512.22106
2512.22106
4
[{"text": "3.3", "left": 108.0, "top": 74.31652069091797, "width": 12.4532470703125, "height": 9.962600708007812}, {"text": "From", "left": 130.4158477783203, "top": 74.31652069091797, "width": 23.611358642578125, "height": 9.962600708007812}, {"text": "Optimization", "left": 156.51785278320312, "top": 74.3165206909179...
[]
2512.22106_page_5
where β, γ, η ≥ 0 are hyperparameters controlling the strength of different cost components. The first term penalizes participation scaled by the ℓ2-norm of the parameter group, discouraging large magnitudes from dominating. The second term imposes an ℓ1-style sparsity cost, promoting exact zeros at equilibrium. The thi...
[ 0.020392993465065956, 0.010114687494933605, -0.011221171356737614, -0.06014055013656616, 0.05221623182296753, 0.09675704687833786, 0.10329578816890717, 0.03478222340345383, -0.00030341927777044475, 0.014414009638130665, -0.013862067833542824, -0.005518466699868441, 0.1388983279466629, 0.00...
https://arxiv.org/pdf/2512.22106
2512.22106
5
[{"text": "where", "left": 107.64099884033203, "top": 74.40748596191406, "width": 24.33863067626953, "height": 9.962600708007812}, {"text": "\u03b2,", "left": 134.47027587890625, "top": 74.17692565917969, "width": 8.936447143554688, "height": 9.962600708007812}, {"text": "\u03b3,", "left": 145.06051635742188, "top": 74...
[]
2512.22106_page_6
This condition has an intuitive interpretation: pruning occurs when the marginal contribution of a parameter group (left-hand side) is outweighed by sparsity costs and competition from other players (right-hand side). For networks with redundancy, many parameter groups will satisfy this condition, leading to sparse equ...
[ 0.025549372658133507, -0.04301818087697029, -0.03181174397468567, 0.003736303886398673, -0.0008567548939026892, 0.07401443272829056, 0.016763916239142418, 0.01697409525513649, 0.03107563778758049, -0.005297492258250713, -0.03199896216392517, 0.0836329385638237, 0.08465909957885742, -0.0065...
https://arxiv.org/pdf/2512.22106
2512.22106
6
[{"text": "This", "left": 122.94400024414062, "top": 74.47579193115234, "width": 17.394668579101562, "height": 9.872528076171875}, {"text": "condition", "left": 142.82362365722656, "top": 74.47579193115234, "width": 36.961212158203125, "height": 9.872528076171875}, {"text": "has", "left": 182.26979064941406, "top": 74....
[]
2512.22106_page_7
Algorithm 1 Equilibrium-Driven Pruning
1: Input: training data, initial parameters θ, initial participation s = 1
2: Output: pruned parameters ˜θ
3: Initialize participation variables si = 1 for all players i
4: for training iterations t = 1, . . . , T do
5:     Update θ using gradient descent on L(θ, s)
6:     Update s using ...
[ -0.02834988199174404, -0.017893454059958458, -0.03270605579018593, 0.00788962934166193, 0.015847094357013702, 0.06452427804470062, 0.017583461478352547, 0.010885252617299557, -0.014692379161715508, 0.002768621314316988, -0.017812039703130722, 0.011960430070757866, 0.07047783583402634, 0.04...
https://arxiv.org/pdf/2512.22106
2512.22106
7
[{"text": "Algorithm", "left": 107.64099884033203, "top": 74.00749969482422, "width": 44.27379608154297, "height": 9.962600708007812}, {"text": "1", "left": 154.4054412841797, "top": 74.00749969482422, "width": 4.981292724609375, "height": 9.962600708007812}, {"text": "Equilibrium-Driven", "left": 161.87738037109375, "...
[]
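The records above describe Algorithm 1: jointly update the parameters θ by gradient descent on L(θ, s) and the participation variables s by an equilibrium-style update, then prune groups whose participation falls below a threshold ε when the marginal contribution is outweighed by the sparsity and competition costs. A minimal toy sketch of that loop, assuming a stand-in quadratic loss and a simple gradient step for s in place of the paper's exact utility functions:

```python
import numpy as np

# Toy sketch of Algorithm 1 (equilibrium-driven pruning). The loss L(theta, s)
# and the participation update are illustrative stand-ins, NOT the paper's
# exact utilities; only the overall loop structure follows the algorithm.
rng = np.random.default_rng(0)

n_groups = 8
theta = rng.normal(size=n_groups)   # one scalar per "parameter group" (player)
s = np.ones(n_groups)               # participation variables, s_i = 1 initially
beta, gamma = 0.1, 0.5              # l2 participation penalty, l1 sparsity cost
eps, lr, steps = 0.01, 0.1, 300     # pruning threshold, step size, iterations

for t in range(steps):
    # step 5: gradient descent on a toy loss L = 0.5 * ||s*theta - 1||^2
    theta -= lr * s * (s * theta - 1.0)
    # step 6: gradient step on each player's utility: marginal contribution
    # minus the l2-scaled participation penalty and the l1 sparsity cost
    marginal = -theta * (s * theta - 1.0)          # -dL/ds_i
    s = np.clip(s + lr * (marginal - beta * theta**2 - gamma), 0.0, 1.0)

# prune groups whose participation fell below the threshold epsilon
mask = s >= eps
pruned_theta = np.where(mask, theta, 0.0)
print(f"active groups: {int(mask.sum())} / {n_groups}")
```

Groups whose marginal contribution never exceeds the combined cost terms see s_i driven toward zero and end up masked out, which mirrors the pruning condition stated on page 6.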
2512.22106_page_8
learned jointly with network parameters, controlling the effective contribution of each neuron during training. 7.2 Training and Pruning Procedure Models are trained for 20 epochs with batch size 128. Network weights are optimized using cross-entropy loss, while participation variables are optimized jointly using equi...
[ 0.0311575997620821, -0.022061511874198914, 0.034471459686756134, 0.02867172472178936, -0.005357325542718172, 0.07358526438474655, -0.026265164837241173, 0.0007288659689947963, 0.013570244424045086, -0.03974559158086777, 0.0010410951217636466, 0.022995086386799812, 0.023225408047437668, 0.0...
https://arxiv.org/pdf/2512.22106
2512.22106
8
[{"text": "learned", "left": 108.0, "top": 74.29180145263672, "width": 29.979202270507812, "height": 9.887596130371094}, {"text": "jointly", "left": 140.47174072265625, "top": 74.29180145263672, "width": 25.082443237304688, "height": 9.887596130371094}, {"text": "with", "left": 168.05653381347656, "top": 74.29180145263...
[]
2512.22106_page_9
Figure 1: Training dynamics of equilibrium-driven pruning under different utility configurations. The four-panel visualization shows the evolution of test accuracy, sparsity, mean participation value, and number of active neurons over training epochs. Configurations with insufficient cost pressure converge to dense equ...
[ 0.08712690323591232, -0.0832505002617836, -0.03849784657359123, -0.018694834783673286, -0.005307493731379509, 0.029745642095804214, 0.04283863306045532, 0.038892265409231186, 0.026050735265016556, -0.018456658348441124, -0.017036864534020424, 0.01819673553109169, 0.01520723756402731, 0.004...
https://arxiv.org/pdf/2512.22106
2512.22106
9
[{"text": "Figure", "left": 108.0, "top": 380.8193054199219, "width": 26.5325927734375, "height": 10.061737060546875}, {"text": "1:", "left": 137.46937561035156, "top": 380.8193054199219, "width": 7.9059295654296875, "height": 10.061737060546875}, {"text": "Training", "left": 149.35874938964844, "top": 380.819305419921...
[]
2512.22106_page_10
Figure 2: Distribution of neuron participation values at convergence. Histograms of final participation values for each configuration, with the pruning threshold ε = 0.01 shown as a red dashed line. Successful pruning configurations exhibit bimodal distributions with mass concentrated near zero and one, indicating near...
[ 0.05060781165957451, -0.04159187525510788, 0.03464743122458458, -0.027702946215867996, 0.021778175607323647, 0.020477376878261566, -0.005188940092921257, 0.012728867121040821, 0.028427017852663994, 0.001148004550486803, -0.008010094054043293, 0.008970965631306171, -0.012674878351390362, 0....
https://arxiv.org/pdf/2512.22106
2512.22106
10
[{"text": "Figure", "left": 108.0, "top": 380.2834167480469, "width": 25.492111206054688, "height": 9.862457275390625}, {"text": "2:", "left": 135.7572021484375, "top": 380.2834167480469, "width": 7.59588623046875, "height": 9.862457275390625}, {"text": "Distribution", "left": 146.33090209960938, "top": 380.28341674804...
[]
2512.22106_page_11
explicit conditioning, this consideration becomes increasingly important for scaling the method to larger networks. 10 Conclusion In this work, we proposed a game-theoretic perspective on neural network pruning, reframing sparsity as an equilibrium outcome of strategic interaction among parameter groups rather than as ...
[ 0.03514381870627403, -0.03785949572920799, 0.0012311068130657077, -0.030556581914424896, -0.020789824426174164, 0.06048249825835228, 0.01252683624625206, 0.016056960448622704, -0.036590591073036194, -0.028894327580928802, -0.04459273815155029, 0.02010858617722988, 0.07241503894329071, 0.00...
https://arxiv.org/pdf/2512.22106
2512.22106
11
[{"text": "explicit", "left": 108.0, "top": 74.34728240966797, "width": 30.214187622070312, "height": 10.041984558105469}, {"text": "conditioning,", "left": 140.7244415283203, "top": 74.34728240966797, "width": 53.70738220214844, "height": 10.041984558105469}, {"text": "this", "left": 196.94207763671875, "top": 74.3472...
[]
2512.22106_page_12
[12] S. Zhang, M. Wang, S. Liu, P.-Y. Chen, and J. Xiong. Why lottery ticket wins? A theoretical perspective of sample complexity on pruned neural networks. In NeurIPS, 2021. [13] T. Chen, Y. Sui, X. Chen, et al. A unified lottery ticket hypothesis for graph neural networks. In ICML, 2021. [14] E. Frantar and D. Al...
[ -0.09865571558475494, 0.01642058789730072, 0.04661652818322182, -0.00001551791501697153, 0.08227498084306717, 0.06127409264445305, -0.02914333902299404, 0.033962443470954895, 0.020161598920822144, 0.0007700207061134279, 0.010264607146382332, 0.056109439581632614, 0.05245498940348625, 0.014...
https://arxiv.org/pdf/2512.22106
2512.22106
12
[{"text": "[12]", "left": 108.0, "top": 74.40748596191406, "width": 16.597686767578125, "height": 9.962600708007812}, {"text": "S.", "left": 129.57899475097656, "top": 74.34353637695312, "width": 8.166366577148438, "height": 10.046920776367188}, {"text": "Zhang,", "left": 140.22769165039062, "top": 74.34353637695312, "...
[]